Resource Hosting Subsystem was terminated, which caused the availability group to fail

The following error occurred in the cluster events, and the availability group failed, which left the databases in a non-synchronizing state.

A component on the server did not respond in a timely fashion. This caused the cluster resource ‘AG’ (resource type ‘SQL Server Availability Group’, DLL ‘hadrres.dll’) to exceed its time-out threshold. As part of cluster health detection, recovery actions will be taken. The cluster will try to automatically recover by terminating and restarting the Resource Hosting Subsystem (RHS) process that is running this resource.

Please help me find the root cause of "A component on the server did not respond in a timely fashion".

Thanks

CSV file grows from 316 KB to ~3 GB during Django app runtime, then shrinks back instantly once the app is terminated

The explanation is a bit long, but most of this is to give some background regarding my problem (the problem is probably not Python related, but having extra information won't hurt):

I am currently working on a Django application. The application window (browser) has two iframes, each taking 50% of the screen. The left-hand side displays Snopes (fact-check website) pages, and the right-hand side displays one of the pages linked in that specific Snopes article.

A form at the bottom of the app lets the user choose and post whether the RHS page is a source of the claim in the Snopes article or not (there are also "invalid input" and "I don't know" options).

Submitting calls a function that tries to get the other links of the current Snopes page; if there are none, it gets pages annotated (in this priority) twice, then once, then the least annotated (so 0, then 3, then 4, …). This is done using a count.csv, which simply stores how many times each page+link combination has been annotated (since Snopes articles can repeat, and so can linked sites); a condensed sketch of this selection logic is shown below the header.

The header of count.csv is:

page source_url count 
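To make the selection order concrete, here is a condensed, simplified sketch of the logic (not the exact app code, which appears further below; done_keys stands for the page+source_url combinations the annotator has already done):

import pandas as pd

def pick_next_page(count_path, done_keys):
    # count.csv columns: page, source_url, count
    counts = pd.read_csv(count_path)
    # drop combinations this annotator has already handled
    todo = counts[~(counts["page"] + counts["source_url"]).isin(done_keys)]
    for target in (2, 1):  # prefer pages annotated exactly twice, then once
        hits = todo[todo["count"] == target]
        if not hits.empty:
            return hits.iloc[0]["page"]
    # otherwise fall back to the least-annotated combination
    return todo.loc[todo["count"].idxmin(), "page"]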

The pages to be displayed on either side are retrieved from a CSV with the following header:

page claim verdict tags date author source_list source_url 

And the user input is stored in a separate CSV for each user inside a results directory, with the header:

page claim verdict tags date author source_list source_url value name 

With value being 1 (yes), 2 (no), 3 (invalid input), or 4 (don't know).
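In Python terms, the encoding is simply (a hypothetical constant mirroring the codes above, not taken from the app):

# hypothetical constant mirroring the value codes described above
VALUE_LABELS = {1: "yes", 2: "no", 3: "invalid input", 4: "don't know"}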

The HTML of all the links in the first CSV (called samples.csv) is retrieved in advance and stored using the article name as the directory name. The page itself is stored as "page.html", and the sources are stored as "some_number.html", where some_number is the index of the source in the source_list.

For example, the HTML of the first link in a Snopes article named "is-water-wet" will be

Annotator/annotator/data/html_snopes/is-water-wet/0.html 

manage.py is in Annotator
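Putting the pieces together, a path like the one above can be derived with a small helper (the function name is hypothetical; snopes_path is the app's base directory for the stored HTML):

import ast, os

def source_html_path(snopes_path, page, source_list, source_url):
    # directory name is the last path segment of the article URL,
    # e.g. ".../is-water-wet/" -> "is-water-wet"
    article_dir = page.strip("/").split("/")[-1]
    # file name is the index of the source within the article's source_list
    idx = ast.literal_eval(source_list).index(source_url)
    return os.path.join(snopes_path, article_dir, f"{idx}.html")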

After getting a row from samples (a pandas DataFrame created from samples.csv), my Django app gets all of the rows with the same page and automatically annotates the rows without a corresponding path as 3 (invalid input), since that means the HTML retrieval failed.

When I ran the app on a virtual machine, I noticed a major issue. When I log in (to the app) as a user and annotate, the corresponding results CSV for some reason grows from 316 KB to ~3 GB and shrinks back once the app is terminated, even though the CSV only has around 248 lines.

I checked the first couple of lines (of the results csv) and they look completely normal.
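To check whether the extra gigabytes are real data or just allocated-but-empty space (e.g., a sparse file or a long run of NUL padding), something like this can be run while the app is up (the path is illustrative; st_blocks is POSIX-only):

import os

path = "results/annotator.csv"  # hypothetical results file path
st = os.stat(path)
print("apparent size:", st.st_size)
# on POSIX, st_blocks * 512 is the space actually allocated on disk;
# a large gap between the two numbers suggests a sparse file
print("allocated size:", st.st_blocks * 512)

# count NUL bytes, which would indicate padding rather than CSV text
nul = 0
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        nul += chunk.count(b"\x00")
print("NUL bytes:", nul)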

Here is the code:

import ast
import codecs
import os

import pandas as pd
from bs4 import BeautifulSoup as bs

# results_path, count_path, samples_path, snopes_path and res_header
# are module-level settings defined elsewhere in the app

def get_done_by_annotator(name):
    # creates a list of pages that have already been annotated by the current annotator
    results_filename = results_path + name + ".csv"
    if os.path.exists(results_filename):
        results = pd.read_csv(results_filename, sep=',', encoding="latin1")
        done_by_annotator = (results["page"] + results["source_url"]).unique()
    else:
        done_by_annotator = []
    return done_by_annotator

def get_count_file(s_p):
    # Creates or reads count file:
    if os.path.exists(count_path):
        count_file = pd.read_csv(count_path, sep=',', encoding="latin1").sample(frac=1)
    else:
        count_file = s_p[['page', 'source_url']].copy()
        count_file['count'] = 0
        count_file.to_csv(count_path, sep=',', index=False)
    return count_file

def increase_page_annotation_count(page, origin):
    count_file = pd.read_csv(count_path, sep=',', encoding="latin1")
    count_file.loc[(count_file['page'] == page) & (count_file['source_url'] == origin), 'count'] += 1
    count_file.to_csv(count_path, sep=',', index=False)

def save_annotation(page, origin, value, name):
    # Read samples file
    print("SAVING ANNOTATION")
    s_p = pd.read_csv(samples_path, sep='\t', encoding="latin1")
    entry = s_p.loc[(s_p["page"] == page) & (s_p["source_url"] == origin)]
    if not entry.empty:
        n_entry = entry.values.tolist()[0]
        n_entry.extend([value, name])
        results_filename = results_path + name + ".csv"
        if os.path.exists(results_filename):
            results = pd.read_csv(results_filename, sep=',', encoding="latin1")
        else:
            results = pd.DataFrame(columns=res_header)
        oldEntry = results.loc[(results["page"] == page) & (results["source_url"] == origin)]
        if oldEntry.empty:
            results.loc[len(results)] = n_entry
        results.to_csv(results_filename, sep=',', index=False)
        # keeps track of how many times page was annotated
        increase_page_annotation_count(page, origin)

def get_least_annotated_page(name, aPage=None):
    done_by_annotator = get_done_by_annotator(name)

    # Print number of annotated pages and total number of pages
    s_p = pd.read_csv(samples_path, sep='\t', encoding="latin1")
    print("done: ", len(done_by_annotator), " | total: ", len(s_p))

    if len(done_by_annotator) == len(s_p):
        return "Last annotation done! Thank you!", None, None, None, None, None, None, None

    # Creates or reads count file:
    count_file = get_count_file(s_p)

    # Get pages not done by current annotator
    not_done_count = count_file.loc[~(count_file['page'] + count_file['source_url']).isin(done_by_annotator)]

    print(">>", aPage)
    if aPage is not None:
        remOrigins = not_done_count.loc[not_done_count['page'] == aPage]
        if len(remOrigins) == 0:
            return get_least_annotated_page(name)
    else:
        twice_annotated = not_done_count.loc[not_done_count['count'] == 2]
        if len(twice_annotated) > 0:
            page = twice_annotated.iloc[0]['page']
        else:
            once_annotated = not_done_count.loc[not_done_count['count'] == 1]
            if len(once_annotated) > 0:
                page = once_annotated.iloc[0]['page']
            else:
                index = not_done_count['count'].idxmin(axis=0, skipna=True)
                page = not_done_count.loc[index]['page']
        remOrigins = not_done_count.loc[not_done_count['page'] == page]

    page = remOrigins.iloc[0].page

    # Automatically annotate broken links of this page as invalid input (op = 3)
    src_lst = s_p.loc[s_p['page'] == page]
    src_lst = ast.literal_eval(src_lst.iloc[0].source_list)
    for idx, e in remOrigins.iterrows():
        src_idx_num = src_lst.index(e.source_url)
        if not os.path.exists(snopes_path + (e.page.strip("/").split("/")[-1] + "/") + str(src_idx_num) + ".html"):
            save_annotation(e.page, e.source_url, "3", name)

    # Update done_by_annotator, count_file, and not_done_count
    done_by_annotator = get_done_by_annotator(name)
    count_file = get_count_file(s_p)
    not_done_count = count_file.loc[~(count_file['page'] + count_file['source_url']).isin(done_by_annotator)]

    remOrigins = not_done_count.loc[not_done_count['page'] == page]
    if len(remOrigins) == 0:
        return get_least_annotated_page(name)

    entry = remOrigins.iloc[0]
    entry = s_p[(s_p.page.isin([entry.page]) & s_p.source_url.isin([entry.source_url]))].iloc[0]
    a_page = entry.page.strip()
    o_page = entry.source_url.strip()
    src_lst = entry.source_list.strip()

    a_page_path = a_page.strip("/").split("/")[-1] + "/"
    src_idx_num = src_lst.index(o_page)
    o_page_path = a_page_path + str(src_idx_num) + ".html"

    f = codecs.open(snopes_path + a_page_path + "page.html", encoding='utf-8')
    a_html = bs(f.read(), "lxml")
    f = codecs.open(snopes_path + o_page_path, encoding='utf-8')
    o_html = bs(f.read(), "lxml")

    return a_page, o_page, str(a_html), str(o_html), src_lst, a_done, a_total, len(done_by_annotator)

Drush terminated abnormally

Drush version 9.7.0, Drush Launcher version 0.6.0, Drupal version 8.7.3, Open Social distribution 5.5.

The error I get at the command line when running drush topic or drush cr or any Drush command:

[warning] Drush command terminated abnormally. Check for an exit() in your Drupal site.

I have tried reinstalling Drush, removing it and reinstalling, and clearing the cache via the GUI, but nothing seems to help.

I don’t know where to check for an “exit() in your Drupal site”.
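To hunt for a stray exit()/die() in custom code, a quick scan like the following can help (the docroot paths are assumptions; adjust them to your site layout):

import os, re

# hypothetical locations for custom code; adjust to your docroot
roots = ["web/modules/custom", "web/themes/custom"]
pattern = re.compile(r"\b(exit|die)\s*\(")

for root in roots:
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            if not fn.endswith((".php", ".module", ".theme", ".inc", ".install")):
                continue
            path = os.path.join(dirpath, fn)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")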

Error: The forked VM terminated without saying properly goodbye

I am trying to run a github repo: https://github.com/jMotif/sax-vsm_classic

However, while trying to build it using Maven, I am getting the following error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project sax-vsm: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked VM terminated without saying properly goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Java\jdk-11.0.2\bin\java" -javaagent:C:\Users\U374235\.m2\repository\org\jacoco\org.jacoco.agent\0.7.9\org.jacoco.agent-0.7.9-runtime.jar=destfile=C:\Users\U374235\Desktop\sax-vsm_classic-master\target\jacoco.exec,excludes=**/SAXVSMClassifier*.:**/GoldsteinPriceFunction.:**/ShubertFunction.* -Xms512m -Xmx1024m -jar C:\Users\U374235\Desktop\sax-vsm_classic-master\target\surefire\surefirebooter14726615967899630127.jar C:\Users\U374235\Desktop\sax-vsm_classic-master\target\surefire\surefire18163570171388548023tmp C:\Users\U374235\Desktop\sax-vsm_classic-master\target\surefire\surefire_018138880120797554179tmp"

While browsing Stack Overflow to debug this error, I saw a post suggesting that I run the command portion of the error output:

cmd.exe /X /C ""C:\Program Files\Java\jdk-....................surefire_018138880120797554179tmp" 

After doing that, I get the output:

Error: Unable to access jarfile C:\Users\U374235\Desktop\sax-vsm_classic-master\target\surefire\surefirebooter18159503811890219171.jar 


The AppFabric Caching Service service terminated unexpectedly

The Event Viewer on the WFE server shows errors regarding the AppFabric Caching Service:

  1. AppFabric Caching service crashed with exception

  2. {6d304b9000000000000000000000000} failed to refresh lookup table, exception: {Microsoft.Fabric.Common.OperationCompletedException: Operation completed with an exception ---> System.TimeoutException: The operation has timed out.

  3. Faulting application name: DistributedCacheService.exe, version

Any chance to survive after “Apple Developer Program membership will be terminated” message?

I got the "your Apple Developer Program account has been flagged for removal" message on the 3rd of May 2019. On the 15th of May 2019, I got the "your Apple Developer Program membership will be terminated" message. Apple does not share specific reasons for the decision, as usual.

However, as of 20th May the account is still active: I can access it and make changes.

Since the 15th of May I’ve removed from sale all apps that could have caused Apple’s anger. I’ve disabled all corresponding In-App purchases. I’ve deleted most of the apps. I can’t delete one of them because there’s an In-App purchase with “in Review” status.

Is there still a chance of canceling the termination of my membership? How long does it usually take to terminate a membership?

Waiting for death is sort of worse than the death itself.

VirtualBox on Ubuntu 18.04: VM terminated unexpectedly

I need to use a virtual machine with Elmer FEM installed, downloaded from the ElmerCSC website. I installed VirtualBox 6.0 and extracted the VM from the .ova file. But when I try to start the VM, I get the following error: 'ElmerCSC_LinuxMint18.3_AMD64' has terminated unexpectedly during startup with exit code 1 (0x1). The errors in the log files are:

00:01:47.771925 ERROR [COM]: aRC=NS_ERROR_UNEXPECTED (0x8000ffff) aIID={c0447716-ff5a-4795-b57a-ecd5fffa18a4} aComponent={SessionWrap} aText={The session is not locked (session state: Unlocked)}, preserve=false aResultDetail=0
00:17:01.444507 ERROR [COM]: aRC=NS_ERROR_UNEXPECTED (0x8000ffff) aIID={c0447716-ff5a-4795-b57a-ecd5fffa18a4} aComponent={SessionWrap} aText={The session is not locked (session state: Unlocked)}, preserve=false aResultDetail=0

and

00:00:00.218181 nspr-2 Support driver version mismatch: DriverVersion=0x290001 ClientVersion=0x290008 rc=VERR_VERSION_MISMATCH

Which driver is wrong? I have already reinstalled VirtualBox a couple of times without success.

Do you have any suggestions on how to solve the problem? Thank you, Claudio

Minimum swaps algorithm terminated due to timeout

I have been trying to solve this question.

Given an unordered array consisting of consecutive integers [1, 2, 3, …, n], find the minimum number of two-element swaps to sort the array.

I was able to solve it, but my solution's complexity is not good enough, so it terminated due to a timeout when run on bigger arrays. This is a typical problem for me: I always solve the problem somehow, but the complexity is not optimal, and the solution does not pass all the test cases. If you have any suggestions for future interview questions like this, I'd appreciate knowing how I should approach them.

function findIndice(arr, i) {
  let iterator = 0
  while (arr[iterator] !== i + 1) {
    iterator++
  }
  return iterator
}

function swap(arr, x, y) {
  let temp = arr[x]
  arr[x] = arr[y]
  arr[y] = temp
}

function minimumSwaps(arr) {
  let i = 0
  let counter = 0
  let size = arr.length

  for (i = 0; i < size - 1; i++) {
    if (arr[i] !== i + 1) {
      let index = findIndice(arr, i)
      swap(arr, index, i)
      counter++
    }
  }
  return counter
}
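For comparison, the usual linear-time approach keeps a value-to-index map so each lookup is O(1) instead of the O(n) scan in findIndice; here is a sketch in Python (names are mine, not from the code above):

def minimum_swaps(arr):
    # position[v] = current index of value v (values are 1..n)
    position = {v: i for i, v in enumerate(arr)}
    swaps = 0
    for i in range(len(arr)):
        want = i + 1
        if arr[i] != want:
            j = position[want]  # O(1) lookup instead of a linear scan
            # update the map before swapping the array elements
            position[arr[i]], position[want] = j, i
            arr[i], arr[j] = arr[j], arr[i]
            swaps += 1
    return swaps

# e.g. minimum_swaps([4, 3, 1, 2]) returns 3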

What expectations are there of support when handing development over to a different contractor after my contract is terminated?

This may be a better fit for a different site, but I couldn’t really find one that served the intersection of software engineering and business norms. My primary concern here is about something I thought was a business norm: a couple of hours of post-handoff cooperation when a new developer is brought in to start work.

I was brought on as a subcontractor on a project. That project continued for several months in active development, and was (very abruptly) canceled without notice — literally just ‘invoice your remaining hours and goodbye’.

The code base was handed over to the super client, as were the working Heroku instances. Since moving on, I've discovered that my client has been getting some support requests — basically an hour or so's phone meeting with the new developer to bring him up to speed on what has been done.

I always thought that a reasonable amount of handover support — an hour or two on the phone to give a quick rundown on the code, how it’s laid out, why it’s laid out that way, current issues, etc etc — was a norm. But my client is refusing to do any such thing. I am worried about how professional it does (or rather, does not) make him look, and how that may reflect on me as a subcontractor.

The situation is further complicated by the fact that, late in the project life cycle, the decision was made to shift me from subcontractor to contractor. My client was basically shifted from a contract to develop software to a contract to manage a team of contractors doing further development. I haven't been contacted directly by the super client, but I now technically have my own relationship with him and want to plan ahead in case he reaches out to me.