ODBC Calling Fill – Unexpected closed connection after 2 hours

I'm using PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit, with the PostgreSQL Unicode(x64) ODBC driver, version 13.00.00.00.

I have a query that is executed through an ODBC connection in a PowerShell 5.1 script. I use the Fill() method to retrieve about 3,500 records daily. When the script works, it takes 2-5 minutes to execute and retrieve the data. The problem is that the script "fails" roughly half of the time. When this occurs, the PowerShell script only stops after 2 hours and 30 seconds.

We double-checked the Postgres logs, and when this occurs we see that the query always completed successfully within 6 minutes. I don't know what to look for. Any ideas?

Below is the error I got:

Executed as user: NT Service\SQLSERVERAGENT. A job step received an error at line 94 in a PowerShell script. The corresponding line is '(New-Object system.Data.odbc.odbcDataAdapter($cmd)).fill($ds) | out-null'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Exception calling "Fill" with "1" argument(s): "The connection has been disabled."'. Process Exit Code -1.

I'm not too familiar with PostgreSQL.

Thanks!
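The symptom described above (the query finishes on the server, but Fill() hangs on the client for about 2.5 hours) often points to the TCP connection being silently dropped by an idle firewall or NAT device while the client waits. Two things worth trying: set a CommandTimeout on the OdbcCommand so a hang fails fast, and enable TCP keepalives on the driver side. Recent psqlODBC drivers accept a pqopt parameter that passes options through to libpq; a hedged sketch of such a connection string (server name, database, and credentials are placeholders, and the keepalive values are examples, not recommendations):

```text
Driver={PostgreSQL Unicode(x64)};Server=myserver;Port=5432;Database=mydb;
Uid=myuser;Pwd=mypassword;
pqopt={keepalives=1 keepalives_idle=30 keepalives_interval=10 keepalives_count=5}
```

With keepalives enabled, a dead connection should surface as an error within a minute or two instead of waiting out the OS-level TCP timeout.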

Autofill with Bitwarden: security question?

[Screenshot of the Bitwarden autofill feature in action, showing "Go to vault".]

Hi guys, I just started using Bitwarden as a password manager and stumbled across a very confusing feature. When I enable the password autofill feature (I know it's called autofill, but still), I don't even have to type in my master password or use my fingerprint to go to my vault and see my passwords in plaintext, or to just log in to a website. Is there a way of keeping autofill activated while still having to authenticate before being able to view a password? This seems kind of counterproductive to me.

What’s the best bucket fill algorithm in terms of efficiency?

I am looking for an algorithm that fills a given region of connected nodes in minimum time. I have tried the flood fill algorithm, but it's too slow and inefficient for large arrays; it checks each pixel more than once. Is there an algorithm that is more efficient than flood fill?

If it helps, I want to implement this algorithm for a paint-filling app.
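A common improvement over naive per-pixel flood fill is scanline fill: instead of pushing every neighbour, it fills an entire horizontal run at once and only seeds new runs in the rows above and below, so most pixels are visited far fewer times. A minimal sketch, assuming the image is a row-major grid of int colour values (the grid type and function name are illustrative, not from any particular library):

```cpp
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

// Scanline flood fill: recolours the 4-connected region containing (x, y).
// Each popped seed is expanded into a full horizontal run, which is filled
// in one pass; new seeds are pushed only where the rows above/below touch
// the run, avoiding most of the redundant revisits of naive flood fill.
void scanlineFill(std::vector<std::vector<int>>& img, int x, int y, int newColor) {
    const int h = static_cast<int>(img.size());
    const int w = static_cast<int>(img[0].size());
    const int oldColor = img[y][x];
    if (oldColor == newColor) return;

    std::stack<std::pair<int, int>> seeds;
    seeds.push({x, y});
    while (!seeds.empty()) {
        auto [sx, sy] = seeds.top();
        seeds.pop();
        if (img[sy][sx] != oldColor) continue;  // already filled via another run

        // Expand the run left and right from the seed, then fill it.
        int left = sx;
        while (left > 0 && img[sy][left - 1] == oldColor) --left;
        int right = sx;
        while (right + 1 < w && img[sy][right + 1] == oldColor) ++right;
        for (int i = left; i <= right; ++i) img[sy][i] = newColor;

        // Seed new runs in the rows directly above and below this run.
        for (int dy : {-1, 1}) {
            int ny = sy + dy;
            if (ny < 0 || ny >= h) continue;
            for (int i = left; i <= right; ++i)
                if (img[ny][i] == oldColor) seeds.push({i, ny});
        }
    }
}
```

The total work is still proportional to the region size (you can't fill n pixels in less than O(n)), but each pixel is written exactly once and the stack holds runs rather than individual pixels, which in practice is much faster and lighter on memory for large regions.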

Need to fill in missing parts (C++)

template <class CType1, class CType2>
void hashT<CType1, CType2>::insert(int hashIndex, const CType2& rec) {

// Fill up this part!

}

template <class CType1, class CType2>
void hashT<CType1, CType2>::search(int& hashIndex, const CType2& rec, bool& found) const {

// Fill up this part!

}

template <class CType1, class CType2>
void hashT<CType1, CType2>::retrieve(int hashIndex, CType1& rec) const {

// Fill up this part!

}

template <class CType1, class CType2>
void hashT<CType1, CType2>::remove(int hashIndex, const CType2& rec) {

//  Fill up this part! 

}

// To print all the BSTs for the entire hash table.
// For each BST you need to print the stateData information stored.
template <class CType1, class CType2>
void hashT<CType1, CType2>::print() const {

//  Fill up this part! 

}

template <class CType1, class CType2>
hashT<CType1, CType2>::hashT(int size) {

//  Fill up this part!! 

}

template <class CType1, class CType2>
hashT<CType1, CType2>::~hashT() {

//  Fill up this part!! 

}
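Since the class definition of hashT isn't shown, here is a minimal self-contained sketch of the usual shape of this kind of assignment: an array of buckets indexed by hashIndex, with collisions resolved by chaining into a BST per bucket. Everything here is an assumption, not the actual assignment's class: the sketch uses one type parameter instead of two, and std::set (itself a balanced BST) stands in for the hand-written BST the assignment expects.

```cpp
#include <cassert>
#include <set>
#include <vector>

// Sketch of a chained hash table (assumed layout, not the assignment's
// actual hashT): `table` is a vector of buckets, `hashIndex` selects one,
// and each bucket is a std::set standing in for the assignment's BST.
template <class KeyT>
class hashT {
public:
    // Constructor: allocate `size` empty buckets.
    explicit hashT(int size) : table(size) {}

    // Insert rec into the bucket's BST.
    void insert(int hashIndex, const KeyT& rec) {
        table[hashIndex].insert(rec);
    }

    // Report whether rec is present in the bucket's BST.
    void search(int& hashIndex, const KeyT& rec, bool& found) const {
        found = table[hashIndex].count(rec) > 0;
    }

    // Remove rec from the bucket's BST, if present.
    void remove(int hashIndex, const KeyT& rec) {
        table[hashIndex].erase(rec);
    }

private:
    std::vector<std::set<KeyT>> table;  // one BST per bucket
};
```

The real assignment's retrieve() and print() would walk the bucket's BST to copy out or display the stored stateData, and the destructor would free each BST; the mechanics depend on the BST class you were given, so they're omitted here.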

Interface for users that need to fill in data for thousands of items

The problem I encounter is that I have thousands of items a day (e.g. transactions of toys that have been purchased in a store). My goal is to provide users with an interface to manually fill in 10+ data fields for each transaction (e.g. toy category, whether it is the main toy or an extra purchase, what age it is for, etc.). Then I would categorize the data by the information the users provided. How would you recommend doing it?

Thanks!