ODBC Calling Fill – Unexpected closed connection after 2 hours

Using PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit. I use the PostgreSQL Unicode (x64) ODBC driver.

I have a query that is executed through an ODBC connection in a PowerShell 5.1 script. I use the Fill() method to retrieve about 3,500 records daily. When the script works, it takes 2-5 minutes to execute and retrieve the data. The problem is that the script "fails" approximately half of the time. When this occurs, the PowerShell script only stops after 2 hours and 30 seconds.

We double-checked the Postgres logs, and when this occurs, we see that the query always completed successfully within 6 minutes. I don’t know what to look for. Any ideas?

Below is the error we got:

Executed as user: NT Service\SQLSERVERAGENT. A job step received an error at line 94 in a PowerShell script. The corresponding line is '(New-Object system.Data.odbc.odbcDataAdapter($cmd)).fill($ds) | out-null'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Exception calling "Fill" with "1" argument(s): "The connection has been disabled."'. Process Exit Code -1.

I’m not too familiar with PostgreSQL.
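In case it is relevant: a hang that lasts hours after the server has already finished the query often points to an idle TCP session being silently dropped by a firewall or NAT device. As a sketch only (the Server/Database/Uid values below are placeholders, and this assumes a psqlODBC version that supports the pqopt passthrough for libpq options), TCP keepalives can be enabled in the ODBC connection string so the client notices a dead connection quickly:

```
Driver={PostgreSQL Unicode(x64)};Server=db.example.com;Port=5432;Database=mydb;Uid=myuser;Pwd=secret;pqopt={keepalives=1 keepalives_idle=30 keepalives_interval=10 keepalives_count=3}
```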


PostgreSQL connection URI fails, but parameters work

Running PostgreSQL 13.2 with SSL on.

I need to connect to a database from a third-party application that requires a connection URI, but it is not working. So I tested connecting with psql using two different formats with the same credentials. One worked and one didn’t.

The following command logs me into mydb as user myuser:

# psql -U myuser -d mydb -h -p 5432 -W
Password:
psql (13.2 (Debian 13.2-1.pgdg100+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=>

However this command fails:

# psql postgresql://myuser:MYPASSWORD@
psql: error: FATAL: password authentication failed for user "myuser"

I am using exactly the same credentials. I’ve verified it more than 10 times. Removing "sslmode=require" does not fix the problem.
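One common URI pitfall worth ruling out (not necessarily the cause here): in a connection URI, reserved characters in the password such as @, :, / or % must be percent-encoded, whereas the psql -W prompt takes the password literally. A minimal Python sketch of the encoding, using a made-up password:

```python
from urllib.parse import quote

def pg_uri(user, password, host, port, dbname):
    # Percent-encode user and password so reserved characters
    # (@ : / % etc.) survive inside the URI.
    return "postgresql://{}:{}@{}:{}/{}".format(
        quote(user, safe=""), quote(password, safe=""), host, port, dbname
    )

# A password containing '@' or ':' would otherwise break URI parsing:
print(pg_uri("myuser", "p@ss:word", "db.example.com", 5432, "mydb"))
# -> postgresql://myuser:p%40ss%3Aword@db.example.com:5432/mydb
```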

My pg_hba.conf file contains:

host   mydb   myuser   password 

I made it the first line in my pg_hba.conf file, so it can’t be getting hung up on any other line.

What am I doing wrong?

Can SSIS packages call other SSIS packages in the connection string?

I am running an SSIS package that transfers data from one (SQL2008) server to another (SQL2000). However after P2V conversion the SQL2008 server cannot execute an SSIS package due to a user authentication error.

Let’s say the package is called "Transfer-Go". Can the connection string of that package call another SSIS package? In the SSIS library there is another package (called Transfer-Now) whose name appears in the string. The string is below:

Data Source=<IP>;User ID=<user>;Initial Catalog=<db_name>;Provider=SQLNCLI10.1;Persist Security Info=True;OLE DB Services=-13;Auto Translate=False;Application Name=SSIS-<Transfer-Now-name of other SSIS package>-{8ABA18EE-637E-424F-A3F7-F7E4EA50DD9D}<IP.db_name.user>; 
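For what it’s worth, an OLE DB connection string is just a semicolon-separated list of key=value pairs, and Application Name is an informational label the client reports to the server rather than an instruction to run anything. A quick sketch (Python, with illustrative values substituted for the placeholders) that splits such a string so you can inspect what it actually contains:

```python
def parse_conn_string(s):
    # Split an OLE DB-style connection string into key/value pairs.
    pairs = {}
    for part in s.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

conn = ("Data Source=10.0.0.1;User ID=someuser;Initial Catalog=somedb;"
        "Provider=SQLNCLI10.1;Persist Security Info=True;OLE DB Services=-13;"
        "Auto Translate=False;"
        "Application Name=SSIS-Transfer-Now-{8ABA18EE-637E-424F-A3F7-F7E4EA50DD9D}")
props = parse_conn_string(conn)
print(props["Application Name"])  # just a label, not a package invocation
```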

So is this SSIS package connection string calling that package?

And if the credentials are wrong in that package could that be why I am unable to authenticate?

Thanks for any input, not a DB/SQL guy at all so I apologize if I sound green here.

Database connection lost whenever SQL query string is too long

I recently switched from running my Rails app on a single VM to running the database (MariaDB 10.3) on a separate (Debian Buster) VM. Now that the database is on a separate server, Rails immediately throws Mysql2::Error::ConnectionError: MySQL server has gone away whenever it tries to make a query where the SQL itself is very long. (The query itself isn’t necessarily one that would put significant load on the system.)

An example query that causes the problem looks like this:

SELECT `articles`.`id` FROM `articles` WHERE `articles`.`namespace` = 0 AND `articles`.`wiki_id` = 1 AND `articles`.`title` IN ('Abortion', 'American_Civil_Liberties_Union', 'Auschwitz_concentration_camp', 'Agent_Orange', 'Ahimsa') 

… except the array of titles is about 5000 items long, and the full query string is ~158kB.
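For scale, the size of such a query is easy to estimate. A Python sketch building a comparable IN-list (with made-up titles) shows it stays far below MariaDB’s default 16 MB max_allowed_packet, which is consistent with that setting not being the culprit:

```python
# Build an IN-list query comparable to the failing one and measure its size.
titles = ["Article_title_number_%d" % i for i in range(5000)]
in_list = ", ".join("'%s'" % t for t in titles)
query = ("SELECT `articles`.`id` FROM `articles` "
         "WHERE `articles`.`namespace` = 0 AND `articles`.`wiki_id` = 1 "
         "AND `articles`.`title` IN (%s)" % in_list)
size_kb = len(query.encode("utf-8")) / 1024
print("query size: ~%.0f kB" % size_kb)  # on the order of 150 kB, well under 16 MB
```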

On the database side, this corresponds to warnings like this:

2021-03-25 15:47:13 10 [Warning] Aborted connection 10 to db: 'dashboard' user: 'outreachdashboard' host: 'programs-and-events-dashboard.globaleducation.eqiad1.wikimed' (Got an error reading communication packets)

The problem seems to be with the network layer, but I can’t get to the bottom of it. I’ve tried adjusting many MariaDB config variables (max_allowed_packet, innodb_log_buffer_size, innodb_log_file_size, innodb_buffer_pool_size) but none of those made a difference. The problem seems to be that the connection is aborted while it is attempting to transmit the long SQL query string from the app server to the database server. (There’s no corresponding problem with receiving large query results from the database.)

I’ve tried adjusting several timeout-related settings as well, although that seems unlikely to be the problem because I can replicate the connection error without any significant wait, just by issuing one of the long-SQL-string queries from a Rails console.

I’ve tried using tcpdump to see what’s coming in, but didn’t pick up any additional clues from that.

jQuery ajax loses connection after minutes

Is there a way to get jquery ajax to remain connected until the external execution of code is done?


$.ajax({
        type: "POST",
        data: "pro=2",
        url: "engine/process.php", // huge data to be processed here
        error: function () {
                // connection lost or request timed out
        },
        success: function (finish) {
                // php code is complete
        }
});

The code works, but after a few minutes it raises the error alert when the connection is lost. Because of that, the PHP code does not finish processing.

Azure SQL Database – dedicated administrator connection (DAC)

If you check the setting in any of your Azure SQL databases, you will see that the value_in_use column value is zero for 'remote admin connections', meaning remote admin connections are not allowed from remote clients. There is no way to change that at the time of writing this question, since sp_configure is not available for Azure SQL Database.

SELECT * FROM sys.configurations WHERE NAME = 'remote admin connections' ORDER BY NAME; 

Does that mean Remote admin connections are not allowed for Azure SQL Databases?

Content-Security-Policy Headers are there and showing the correct settings, but still getting a refused connection

So I’m putting a plugin together that will allow me to connect multiple client sites to an online service.

I can get the service vendor’s snippet to load, but once you interact with it, that’s where things get tricky: it refuses to load what I assume is an iframe. It’s pretty poorly documented.

Refused to load https://www.service-domain.com/ because it does not appear in the frame-ancestors directive of the Content Security Policy.

That’s the console log error I was receiving.

So I jumped back into my plugin and added the following:

function bbti_send_headers() {
    header( "Content-Security-Policy: frame-ancestors https://www.service-domain.com/; frame-src https://www.service-domain.com/;" );
}
add_action( 'send_headers', 'bbti_send_headers' );

Now, when I reload the page I’m still getting the same error Refused to load https://www.service-domain.com/... etc...

However, if I look at the network panel and check the page’s headers this is what I get:

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Security-Policy: frame-ancestors https://www.service-domain.com/; frame-src https://www.service-domain.com/;

So the header is there, but I’m still getting the same error from the script.
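One thing that may help untangle this: frame-ancestors is enforced from the CSP header served by the page being framed (here, the service), while frame-src comes from the embedding page, so it is worth checking which response actually carries the directive the error complains about. A small Python sketch that parses a CSP header into its directives, using the header shown above:

```python
def parse_csp(header):
    # Split a Content-Security-Policy header into {directive: [source list]}.
    policy = {}
    for directive in header.split(";"):
        parts = directive.split()
        if parts:
            policy[parts[0]] = parts[1:]
    return policy

csp = ("frame-ancestors https://www.service-domain.com/; "
       "frame-src https://www.service-domain.com/;")
print(parse_csp(csp))
# {'frame-ancestors': ['https://www.service-domain.com/'],
#  'frame-src': ['https://www.service-domain.com/']}
```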

Anyone know what it is I missed?

An exception occurred while opening a connection to the server

I’ve encountered this from time to time and I can’t seem to find the fix.

Usually when this error occurs, I see an increasing number of connections being used, while the connections are not being disconnected.

Here is a sample of the Mongo logs (screenshot omitted).

I observed that the log shown there is its normal behaviour.

Another thing I noticed before the error occurs is numerous "Unable to gather storage statistics for a slow operation due to lock acquire timeout" messages.

The first time I encountered this issue, we set MaxConnectionPoolSize = 200; then it occurred again. Any idea how to fix this?
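For reference, the same limit can also be expressed as a URI option (maxPoolSize, which corresponds to the driver’s MaxConnectionPoolSize setting); the hostname below is a placeholder:

```
mongodb://db.example.com:27017/?maxPoolSize=200
```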

Understanding Connection To Proxy Ratio

I am watching Loopline’s video on safely scraping Google in 2020 and am having a fundamental misunderstanding of the terminology.

It says that there should be one connection for every 5 proxies, or that the connection ratio can vary.

How can more than 1 IP address make a single connection? When a page loads does it not load from a single IP?

How would it be possible for 50 IP addresses to load one connection?

What does “connection” mean in this case?
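As I understand the terminology (an assumption on my part, since the video defines it loosely), a "connection" here means one concurrent scraping worker, and the ratio caps how many workers run against a given pool of proxies, with each request drawn from the pool in rotation. A sketch with placeholder proxy addresses:

```python
from itertools import cycle

proxies = ["proxy%d.example.com:8080" % i for i in range(1, 51)]  # 50 proxies
ratio = 5  # 1 connection per 5 proxies
connections = len(proxies) // ratio  # number of concurrent workers allowed
rotation = cycle(proxies)  # each request uses the next proxy in turn

print(connections)        # -> 10
print(next(rotation))     # -> proxy1.example.com:8080
```

So it is never 50 IPs serving one connection; it is the inverse, one active worker per every 5 proxies, so each individual proxy is used only a fraction of the time.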