Detect which version of the SQL Server Express engine is available before connecting?

I would like to detect which version of the SQL Server Express LocalDB engine is available, so that I can connect to either the (localdb)\v11.0 instance (SQL Server 2012, per https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/hh510202(v=sql.110)?redirectedfrom=MSDN#Anchor_1) or the (localdb)\MSSQLLocalDB instance (SQL Server 2014 and later, per https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-express-localdb?view=sql-server-ver15&redirectedfrom=MSDN&viewFallbackFrom=sql-server-2014#Anchor_1) when attaching a file via the AttachDBFileName= mechanism in the connection string.

I would most likely want to do it from PowerShell in some manner, but I can use whatever method is reliable. I know that a LocalDB connection can be a bit slow, since it attaches the file and starts up on demand, so in the past I have been pretty lenient with the connect timeout for LocalDB connections compared to real SQL Server connections. I would therefore prefer not to probe by attempting a connection and waiting for the timeout, since the timeout I allow for a normal successful connection is already generous.
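A sketch of the kind of detection I have in mind, assuming SqlLocalDB.exe is on the PATH (the .mdf path is a placeholder):

# Ask the SqlLocalDB utility which instances exist; it prints one name per line.
$instances = @(& sqllocaldb info)
if ($instances -contains 'MSSQLLocalDB') {
    $instanceName = '(localdb)\MSSQLLocalDB'   # SQL Server 2014 and later
} elseif ($instances -contains 'v11.0') {
    $instanceName = '(localdb)\v11.0'          # SQL Server 2012
} else {
    throw 'No supported LocalDB instance found.'
}
# Build the connection string (placeholder file path).
"Server=$instanceName;AttachDBFileName=C:\data\mydb.mdf;Integrated Security=True"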

Is it possible to log a website's outgoing connection activity, and if so, how?

For instance: I have developed a WordPress website and am preparing to deploy it to a client's server, and they have requested that the site first be scanned inside their VPN. The application connects to many external resources, and because the VPN blocks the site from connecting to anything other than its internal addresses, the site is very slow. The slowness is believed to be caused by blocked requests being retried until they give up.

Since the application consists of a lot of plugins, it is not obvious or straightforward to see which connections it is trying to make.

Is it possible to track/log the connections it is trying to make? If so, I could do it at the local server level and at the code level to build a list of the endpoints it tries to reach, and then use that list to discuss whitelisting with the client.

e.g. a normal connection log / user-activity utility can log that user A with IP A visited the website.

But what about when the site itself connects to:

  • wordpress.org <<< can this be logged?
  • website B …
  • website C
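At the code level, something like the following might work: a minimal sketch of a must-use plugin that logs every request made through the WordPress HTTP API (the filename is hypothetical, and requests a plugin makes through raw cURL rather than wp_remote_*() will not be seen by this hook):

<?php
// wp-content/mu-plugins/log-outbound.php (hypothetical location)
// Log every outgoing request routed through the WordPress HTTP API.
add_action( 'http_api_debug', function ( $response, $context, $class, $args, $url ) {
    $status = is_wp_error( $response ) ? $response->get_error_message() : 'ok';
    error_log( "[outbound] $url ($status)" );
}, 10, 5 );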

Any hints, suggestions or insights are highly appreciated. Thanks a lot.

ODBC Calling Fill – Unexpected closed connection after 2 hours

Server: PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit. Driver: PostgreSQL Unicode(x64), version 13.00.00.00.

I have a query that is executed through an ODBC connection in a PowerShell 5.1 script. I use the Fill() method to retrieve about 3,500 records daily. When the script works, it takes 2-5 minutes to execute and retrieve the data. The problem is that the script "fails" roughly half of the time, and when that happens the PowerShell script only stops after 2 hours and 30 seconds.

We double-checked the Postgres logs, and whenever this occurs, the query itself always completed successfully within 6 minutes. I don't know what to look for. Any ideas?

Below is the error I got:

Executed as user: NT Service\SQLSERVERAGENT. A job step received an error at line 94 in a PowerShell script. The corresponding line is '(New-Object system.Data.odbc.odbcDataAdapter($cmd)).fill($ds) | out-null'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Exception calling "Fill" with "1" argument(s): "The connection has been disabled."'. Process Exit Code -1.

I'm not too familiar with PostgreSQL.
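For reference, the failing call in the script looks roughly like this (the DSN and query text are placeholders). If something on the network is silently dropping the connection, an explicit CommandTimeout on the command would at least bound the hang instead of letting Fill() wait for hours:

# Placeholder DSN and query; the shape matches the failing line 94.
$conn = New-Object System.Data.Odbc.OdbcConnection('DSN=MyPostgresDsn;')
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText    = 'SELECT ...'   # the daily ~3500-row query (placeholder)
$cmd.CommandTimeout = 600            # seconds; fail fast instead of hanging

$ds = New-Object System.Data.DataSet
(New-Object System.Data.Odbc.OdbcDataAdapter($cmd)).Fill($ds) | Out-Null
$conn.Close()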

Thanks!

PostgreSQL connection URI fails, but parameters work

Running PostgreSQL 13.2 with SSL on.

I need to connect to a database from a third-party application that requires a connection URI, but it is not working. So I tested connecting with psql using two different formats and the same credentials. One worked and one didn't.

The following command logs me into mydb as user myuser:

# psql -U myuser -d mydb -h 127.0.0.1 -p 5432 -W
Password:
psql (13.2 (Debian 13.2-1.pgdg100+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=>

However this command fails:

# psql postgresql://myuser:MYPASSWORD@127.0.0.1:5432/mydb?sslmode=require
psql: error: FATAL: password authentication failed for user "myuser"

I am using exactly the same credentials; I've verified this more than 10 times. Removing "sslmode=require" does not fix the problem.

My pg_hba.conf file contains:

host   mydb   myuser   127.0.0.1/32   password 

I made it the first line in my pg_hba.conf file, so it can't be matching any other line.
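One thing worth ruling out before blaming pg_hba.conf: in a URI, any characters in the password that are special in URIs must be percent-encoded, and the URI should be quoted so the shell does not interpret the ? and &. For example (placeholder password containing an @):

# '@' must be written as '%40' inside the URI
psql "postgresql://myuser:MYP%40SSWORD@127.0.0.1:5432/mydb?sslmode=require"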

What am I doing wrong?

Can SSIS packages call other SSIS packages in the connection string?

I am running an SSIS package that transfers data from one server (SQL 2008) to another (SQL 2000). However, after a P2V conversion, the SQL 2008 server cannot execute the SSIS package due to a user authentication error.

Let's say the package is called "Transfer-Go". Can the connection string of that package call another SSIS package? In the SSIS library there is another package (called "Transfer-Now") whose name appears in the string. The string is below:

Data Source=<IP>;User ID=<user>;Initial Catalog=<db_name>;Provider=SQLNCLI10.1;Persist Security Info=True;OLE DB Services=-13;Auto Translate=False;Application Name=SSIS-<Transfer-Now-name of other SSIS package>-{8ABA18EE-637E-424F-A3F7-F7E4EA50DD9D}<IP.db_name.user>; 

So, is this SSIS package's connection string calling that other package?

And if the credentials in that package are wrong, could that be why I am unable to authenticate?

Thanks for any input; I'm not a DB/SQL guy at all, so I apologize if I sound green here.

Database connection lost whenever SQL query string is too long

I recently switched from running my Rails app on a single VM to running the database (MariaDB 10.3) on a separate (Debian Buster) VM. Now that the database is on a separate server, Rails immediately throws Mysql2::Error::ConnectionError: MySQL server has gone away whenever it tries to make a query where the SQL string itself is very long. (The query itself isn't necessarily one that would put significant load on the system.)

An example query that causes the problem looks like this:

SELECT `articles`.`id` FROM `articles`
WHERE `articles`.`namespace` = 0
  AND `articles`.`wiki_id` = 1
  AND `articles`.`title` IN ('Abortion', 'American_Civil_Liberties_Union', 'Auschwitz_concentration_camp', 'Agent_Orange', 'Ahimsa')

… except the array of titles is about 5,000 items long, and the full query string is ~158 kB.

On the database side, this corresponds to warnings like this:

2021-03-25 15:47:13 10 [Warning] Aborted connection 10 to db: 'dashboard' user: 'outreachdashboard' host: 'programs-and-events-dashboard.globaleducation.eqiad1.wikimed' (Got an error reading communication packets)

The problem seems to be at the network layer, but I can't get to the bottom of it. I've tried adjusting many MariaDB config variables (max_allowed_packet, innodb_log_buffer_size, innodb_log_file_size, innodb_buffer_pool_size), but none of them made a difference. The connection appears to be aborted while the long SQL query string is being transmitted from the app server to the database server. (There is no corresponding problem with receiving large query results from the database.)

I've tried adjusting several timeout-related settings as well, although a timeout seems unlikely to be the problem, because I can reproduce the connection error without any significant wait, just by issuing one of the long-SQL-string queries from a Rails console.

I’ve tried using tcpdump to see what’s coming in, but didn’t pick up any additional clues from that.
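In the meantime, a workaround (not a root-cause fix) would be to slice the IN list so that no single SQL string comes near the problematic size. A sketch, assuming an Article model over the articles table and a titles array:

# Look up ids in slices of 500 titles instead of one ~158 kB statement.
article_ids = titles.each_slice(500).flat_map do |batch|
  Article.where(namespace: 0, wiki_id: 1, title: batch).pluck(:id)
end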

jQuery ajax loses connection after a few minutes

Is there a way to get a jQuery ajax request to stay connected until the external code has finished executing?

Code:

$.ajax({
    type: "POST",
    data: "pro=2",
    url: "engine/process.php", // huge amount of data is processed here
    error: function () {
        alert('error');
    },
    success: function (finish) {
        // PHP code is complete
    }
});


The code works, but after a few minutes it fires the error alert because the connection is lost. As a result, the PHP code never finishes processing.
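One pattern that can avoid the dropped request entirely (a sketch only; it assumes process.php can be modified to start the work in the background and answer a hypothetical status parameter) is to kick the job off and then poll for completion instead of holding one request open for minutes:

// Kick off the long-running job, then poll instead of keeping one
// HTTP request open for minutes.
$.post("engine/process.php", { pro: 2, background: 1 });

var poll = setInterval(function () {
    $.getJSON("engine/process.php", { status: 1 }, function (res) {
        if (res.done) {          // hypothetical JSON response: { "done": true }
            clearInterval(poll);
            // PHP code is complete
        }
    });
}, 5000); // check every 5 seconds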

Azure SQL Database – dedicated administrator connection (DAC)

If you check the 'remote admin connections' setting in any of your Azure SQL databases (query below), you will see that the value_in_use column is zero, meaning remote admin connections are not allowed from remote clients. There is no way to change this at the time of writing: sp_configure is not available for Azure SQL Database.

SELECT * FROM sys.configurations WHERE NAME = 'remote admin connections' ORDER BY NAME; 

Does that mean remote admin connections are simply not allowed for Azure SQL Database?

Content-Security-Policy headers are present and show the correct settings, but I'm still getting a refused connection

So I'm putting together a plugin that will allow me to connect multiple client sites to an online service.

I can get the service vendor's snippet to load, but once you interact with it, that's where things get tricky: it refuses to load an (I guess) iframe… and it's pretty poorly documented.

Refused to load https://www.service-domain.com/ because it does not appear in the frame-ancestors directive of the Content Security Policy.

That’s the console log error I was receiving.

So I jumped back into my plugin and added the following:

function bbti_send_headers() {
    header( "Content-Security-Policy: frame-ancestors https://www.service-domain.com/; frame-src https://www.service-domain.com/;" );
}
add_action( 'send_headers', 'bbti_send_headers' );

Now, when I reload the page, I'm still getting the same error: Refused to load https://www.service-domain.com/... etc.

However, if I look at the network panel and check the page's response headers, this is what I get:

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Security-Policy: frame-ancestors https://www.service-domain.com/; frame-src https://www.service-domain.com/;

So the header is there, but I'm still getting the same error from the script.

Does anyone know what I've missed?

An exception occurred while opening a connection to the server

I've encountered this from time to time and I can't seem to find the fix.

Usually when this error occurs, I see an increasing number of connections being used, and those connections are not being disconnected.

Here is a sample of the mongo logs. [screenshot not reproduced]

But I have observed that the log below is its normal behaviour: [screenshot not reproduced]

Another thing I noticed before the error occurs is that there are numerous "Unable to gather storage statistics for a slow operation due to lock aquire timeout" messages: [screenshot not reproduced]

The first time I encountered this issue, we set MaxConnectionPoolSize = 200; then it occurred again. Any idea how to fix this?
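For context, the client is configured roughly like this (a sketch with a placeholder connection string; the .NET driver is assumed from the MaxConnectionPoolSize syntax). Note that the pool only helps if a single MongoClient is created once and reused, rather than constructed per request:

using MongoDB.Driver;

// Build settings from a placeholder connection string and raise the pool cap.
var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");
settings.MaxConnectionPoolSize = 200;

// Reuse this single client across the whole application; creating a new
// MongoClient per request leaks connections and exhausts the pool.
var client = new MongoClient(settings);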