SQL Server Logon Trigger Error after server restart

I have a logon trigger that executes as another user to log connection details. It works fine, except immediately after a server reboot. At that point I have to connect to the server via the DAC and disable the trigger; once I enable it again, everything is fine.

The error I see in the server logs says:

The connection has been dropped because the principal that opened it subsequently assumed a new security context, and then tried to reset the connection under its impersonated security context. This scenario is not supported.

I’m not sure why it would only run into this issue right after a reboot. Any idea what the issue might be?
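For reference, this is roughly what I have to run over the DAC after each reboot (the trigger name here is illustrative, not my real one):

```sql
-- Connect through the Dedicated Admin Connection first, e.g.:
--   sqlcmd -S ADMIN:MyServer -E
-- Then disable and later re-enable the logon trigger (name is illustrative):
DISABLE TRIGGER [trg_logon_audit] ON ALL SERVER;
GO
ENABLE TRIGGER [trg_logon_audit] ON ALL SERVER;
GO
```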

Does DolphinDB have a development plan to support an embedded list as a column in a table?

Is there any development plan at DolphinDB to support an embedded list as a column in a table? We need something like the following:

1. Quickly calculate the cumulative ask/bid quantity from the order book and save it to a column.
2. Given a quantity, quickly find the corresponding price position on the order book and the distance from that price to the mid.

A simple example is shown below. In this example there are only 5 levels of ask volume; in practice we have 64-128 price/volume levels.

av1  av2  av3  av4  av5  cav
1    2    3    4    5    [1,3,6,10,15]

DBFiddle: Custom type and function (ORA-24344)

I’m attempting to create a custom version of Oracle’s GetVertices() function. The solution involves creating a user-defined type and a function.

Here is the code (working):

CREATE TYPE vertex_type_cust AS OBJECT (
  x  NUMBER,
  y  NUMBER,
  z  NUMBER,
  w  NUMBER,
  id NUMBER
);

CREATE TYPE vertex_set_type_cust AS TABLE OF vertex_type_cust;

CREATE OR REPLACE FUNCTION getvertices_cust (geometry mdsys.sdo_geometry)
  RETURN vertex_set_type_cust
IS
  i       NUMBER;
  dims    NUMBER;
  coords  NUMBER;
  result  vertex_set_type_cust;
  dim     mdsys.sdo_dim_array;
  is_zero BOOLEAN;
  etype   NUMBER;
BEGIN
  result := vertex_set_type_cust();

  -- handle the POINT case here
  IF (geometry.sdo_ordinates IS NULL) THEN
    result.extend;
    result(1) := vertex_type_cust(geometry.sdo_point.x, geometry.sdo_point.y,
                                  geometry.sdo_point.z, NULL, 1);
    RETURN result;
  END IF;

  -- all other cases here
  coords := geometry.sdo_ordinates.count;
  dims   := geometry.get_dims;
  IF (dims = 0) THEN
    RETURN result;
  END IF;
  coords := coords / dims;

  FOR i IN 0 .. coords - 1 LOOP
    result.extend;
    IF (dims = 2) THEN
      result(i+1) := vertex_type_cust(geometry.sdo_ordinates(2*i+1),
                                      geometry.sdo_ordinates(2*i+2),
                                      NULL, NULL, i+1);
    ELSIF (dims = 3) THEN
      result(i+1) := vertex_type_cust(geometry.sdo_ordinates(3*i+1),
                                      geometry.sdo_ordinates(3*i+2),
                                      geometry.sdo_ordinates(3*i+3),
                                      NULL, i+1);
    ELSIF (dims = 4) THEN
      result(i+1) := vertex_type_cust(geometry.sdo_ordinates(4*i+1),
                                      geometry.sdo_ordinates(4*i+2),
                                      geometry.sdo_ordinates(4*i+3),
                                      geometry.sdo_ordinates(4*i+4),
                                      i+1);
    END IF;
  END LOOP;

  RETURN result;
END;

Test data:

create table a_sdo_geometry_tbl (line_id integer, shape mdsys.sdo_geometry);

insert into a_sdo_geometry_tbl (line_id, shape)
values (1, sdo_geometry(2002, null, null, sdo_elem_info_array(1,2,1),
    sdo_ordinate_array(671539.6852734378,4863324.181436138,
                       671595.0500703361,4863343.166556185,
                       671614.013553706,4863350.343483042,
                       671622.2044153381,4863353.525396131)));

insert into a_sdo_geometry_tbl (line_id, shape)
values (2, sdo_geometry(2002, null, null, sdo_elem_info_array(1,2,1),
    sdo_ordinate_array(71534.5567096211,4863119.991809748,
                       671640.7384688659,4863157.132745253,
                       671684.8621150404,4863172.022995591)));

insert into a_sdo_geometry_tbl (line_id, shape)
values (3, sdo_geometry(2002, null, null, sdo_elem_info_array(1,2,1),
    sdo_ordinate_array(671622.2044153381,4863353.525396131,
                       671633.3267164109,4863357.846229106,
                       671904.0614077691,4863451.286166754)));

insert into a_sdo_geometry_tbl (line_id, shape)
values (4, sdo_geometry(2002, null, null, sdo_elem_info_array(1,2,1),
    sdo_ordinate_array(671684.8620521119,4863172.022995591,
                       671892.1496144319,4863244.141440067,
                       671951.2156571196,4863264.824310392,
                       671957.4471461186,4863266.847617676,
                       671966.8243856924,4863269.146632658)));

select
    a.line_id,
    b.id as vertex_id,
    b.x,
    b.y
from
    a_sdo_geometry_tbl a
cross join
    table(getvertices_cust(a.shape)) b            --<<-- the query uses the custom function
order by
    a.line_id, b.id;


(I’m testing this solution with online Oracle environments because I don’t have CREATE TYPE privileges in my company’s Oracle db.)

I’m able to get the above code working when I run it in Oracle’s free testing environment (19c):

  • https://livesql.oracle.com/
  • Create function: screenshot
  • Query using the function: screenshot

However, when I try to run the code with db<>fiddle, I get an error when I create the function:

  • db<>fiddle session (18c)
  • ORA-24344: success with compilation error


How can I avoid getting that error when creating the custom function with db<>fiddle?

I know there is a version difference between Oracle Live SQL and db&lt;&gt;fiddle (19c vs. 18c), but I would be surprised if that were the cause of the problem. This doesn't seem like a version issue to me.
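In case it helps with diagnosis: as I understand it, ORA-24344 only says that the object compiled with errors, and the actual error text should be recorded in the USER_ERRORS dictionary view. I assume running something like this in the same db&lt;&gt;fiddle session, right after the CREATE, would show the real message:

```sql
-- ORA-24344 just signals "compiled with errors";
-- the details live in USER_ERRORS.
SELECT line, position, text
FROM   user_errors
WHERE  name = 'GETVERTICES_CUST'
ORDER  BY sequence;
```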

Why do databases need logging?

Most databases today have write-ahead logging (WAL, with undo/redo records), but why do databases need it?

Take a simple RocksDB deployment as a case. I would imagine a storage/database cluster has some redundancy scheme (erasure coding or replication) for fault tolerance, so I don't see why the application level also needs its own fault tolerance. For example, when one server fails temporarily (or permanently), I would expect recovery to use the other servers' data, because new data will have been written while the server was down, so you cannot rely on the log alone to recover that server. Do I have a misunderstanding somewhere?

Not sure this is the right place for this question, if not, happy to ask at a different place. Thank you!

Express Backend deployed on Heroku returns the Result of an Update on a Mongodb Atlas DB as Undefined

I make a call from my front end to my Express backend, which contains the following code. In my development environment everything works perfectly and a nonce is returned to the front end. However, once my app is deployed to Heroku (without modification) and my MongoDB database to the MongoDB Atlas platform, result comes back undefined, although I have confirmed that the database is being updated properly with a nonce. Why is this?

app.post('/api/login', (req, res) => {
  ...
  db.collection("Users").updateOne(query, updatevalue, (err, result) => {
    if (result) { // is undefined after deployment
      res.send(`${nonce}`)
    } else {
      res.send(null)
    }
  })
})

How do you JOIN two tables as a combination of LEFT and RIGHT JOINs?

This is probably a stupid question. Given the two tables

CREATE TABLE t1 (
  id   int(11) unsigned NOT NULL,
  col1 int(11) unsigned NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE t2 (
  id   int(11) unsigned NOT NULL,
  col2 int(11) unsigned NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO t1 (id, col1) VALUES (1,2), (3,5), (11,3);
INSERT INTO t2 (id, col2) VALUES (1,1), (3,4), (10,2);

how do you JOIN the tables so that the result includes the missing ids too?

id  col1  col2
1   2     1
3   5     4
10  NULL  2
11  3     NULL
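My current thinking (please correct me if there is a better way) is that, since MySQL has no FULL OUTER JOIN, the result has to be emulated as a UNION of a LEFT and a RIGHT join, something like:

```sql
-- MySQL has no FULL OUTER JOIN; emulate it by combining a LEFT and a RIGHT join.
SELECT t1.id, t1.col1, t2.col2
FROM t1
LEFT JOIN t2 ON t2.id = t1.id
UNION
SELECT t2.id, t1.col1, t2.col2
FROM t1
RIGHT JOIN t2 ON t2.id = t1.id
ORDER BY id;
```

UNION (rather than UNION ALL) removes the duplicate rows that appear in both branches, i.e. the ids present in both tables.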

Grafana – general notification for whole dashboard

Is it possible to define a general notification for a whole dashboard in Grafana? Let's say we have many graphs on a dashboard, each with defined thresholds. Is it possible to send a general notification in case any threshold is crossed?

To make it clear: it's a dashboard with graphs for servers of the same type. Each server has several licenses for some functionalities, and each license is depicted in a separate graph. I don't want to multiply notification definitions for each license, so it would be great to define the notification once, pointing at the dashboard, OR to define a notification template and reuse it in every graph.

regards, Mike

PostgreSQL current_timestamp and ODBC connection pooling

current_timestamp returns the time of the start of the current transaction, which is effectively the time the previous transaction on that connection ended.

But what happens with connection pooling? The commit may have happened long ago, at the end of some previous and unrelated transaction.

(And is connection pooling really necessary in PostgreSQL anyway? It is a nasty hack to overcome slow connection-open times, which should be very fast if the TLS connection is reused.)
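To illustrate what I mean about transaction-start time, here is a quick sketch comparing PostgreSQL's timestamp functions inside one transaction:

```sql
BEGIN;
SELECT current_timestamp;     -- frozen at transaction start
SELECT pg_sleep(2);
SELECT current_timestamp,     -- still the transaction-start time
       statement_timestamp(), -- start of the current statement
       clock_timestamp();     -- actual wall-clock time, advances within a statement
COMMIT;
```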

SSIS and SQL Agent – running jobs using proxy vs using MSA

We have multiple servers, each with its own SQL Server instance.

We have external companies that develop applications for us. These companies deploy SSIS packages and run them via SQL Agent.

I’m wondering what our setup should look like security-wise.

Scenario 1

  • SQL Agent Service account – Local Virtual Account
  • SSIS login – personal domain login for each employee
  • Job owner – personal domain account of whoever created it
  • Step run as – domain proxy account

It seems like quite a "normal" scenario, but I'm not sure it is still relevant. Microsoft changed some things concerning security and introduced Virtual and Managed Service Accounts a couple of years back, yet many resources found online still do not mention them and treat the default service account the old way. So maybe it is now acceptable to use the default SQL Agent service account to run the jobs, as in scenario 2?

Scenario 2

  • SQL Agent Service account – Local Virtual Account
  • SSIS login – personal domain login for each employee
  • Job owner – personal domain account of whoever created it
  • Step run as – SQL Agent Service account

This uses a Virtual Account to run the jobs, but the Microsoft documentation mentions that when we need to access remote resources we should use an MSA instead. And that is the case for us:

"When resources external to the SQL Server computer are needed, Microsoft recommends using a Managed Service Account (MSA), configured with the minimum privileges necessary. When installed on a Domain Controller, a virtual account as the service account isn't supported."

Scenario 3

  • SQL Agent Service account – Managed Service Account (MSA)
  • SSIS login – personal domain login for each employee
  • Job owner – personal domain account of whoever created it
  • Step run as – SQL Agent Service account (MSA)

There is also a question of who should own the jobs. One possibility is creating a generic SSIS login that everyone would use to deploy packages, and this login would own all the jobs and their steps.

Scenario 4

  • SQL Agent Service account – Managed Service Account (MSA)
  • SSIS login – generic domain login for working with SSIS
  • Job owner – SSIS domain login
  • Step run as – SQL Agent Service account (MSA)

Which scenario would you recommend? Or maybe there is a different preferable scenario?
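For context, my understanding is that the proxy in scenario 1 would be created roughly like this (all names, the domain, and the password are placeholders, not our real accounts):

```sql
-- Create a credential holding the domain proxy account (names are placeholders).
USE master;
CREATE CREDENTIAL SSISProxyCredential
    WITH IDENTITY = N'DOMAIN\ssis_proxy',
         SECRET   = N'<strong password>';

-- Create an Agent proxy from the credential and allow it for SSIS job steps.
USE msdb;
EXEC dbo.sp_add_proxy
    @proxy_name      = N'SSISProxy',
    @credential_name = N'SSISProxyCredential',
    @enabled         = 1;

EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name     = N'SSISProxy',
    @subsystem_name = N'SSIS';

-- Let a specific login use the proxy in its job steps.
EXEC dbo.sp_grant_login_to_proxy
    @proxy_name = N'SSISProxy',
    @login_name = N'DOMAIN\developer';
```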

How to optimise this IN query?

Let's say I have this schema: Resource(resource_mapping_id: uuid, resource_id: uuid, node_id: varchar, date: date, available: boolean, resource_start: varchar, resource_end: varchar)

There is a composite key on (resource_mapping_id, resource_id, node_id, date, resource_start, resource_end).

Note: node_id is also a UUID, stored as text. Now I have these 2 queries:

update resource set available = :value where resource_id=:somevalue and date=:somedate and resource_start=:sometime and resource_end=:sometime


select * from resource where resource_id in (:resourceidlist) and date in (:dates) and node_id in (:nodeIds)

This table contains a huge number of records, around 500 million or so.

Whenever I run these queries via my Java application through JPA, they spike the CPU utilisation of the database up to 100%.

So after doing some analysis, I created an index:

Index(resource_id, node_id, date)

This fixed the issue with the update query; even when it runs in parallel threads, the CPU no longer spikes at all.

But coming to the select statement, I still have issues when the number of parameters grows. So I batched them: each batch processes x node ids, resource ids, and dates. Yet even with 100 parameters of each kind (all three lists are the same size, so 100 of each means 300 in total), the CPU spikes and the other threads go into a waiting state!

How can I resolve this issue? Should I change the query, or make some other change, or create a further index just for this case? Please help.

I am using Postgres v13.
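One rewrite I am considering, assuming the caller actually wants specific (resource_id, node_id, date) combinations rather than the full cross-product of the three IN lists (parameter names are my placeholders):

```sql
-- Pass three parallel arrays and join on exact (resource_id, node_id, date)
-- tuples, so the (resource_id, node_id, date) index can be probed once per
-- tuple instead of expanding three independent IN lists.
select r.*
from unnest(:resourceIdList::uuid[],
            :nodeIds::text[],
            :dates::date[]) as p(resource_id, node_id, date)
join resource r
  on  r.resource_id = p.resource_id
  and r.node_id     = p.node_id
  and r.date        = p.date;
```

Would this kind of rewrite help here, or is a different index the better fix?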