Parallelism not engaging on specific database

I have a few databases with identical schemas; let's call them:

DB_A

DB_B

I have a query that takes 11 seconds and returns 11,000 records on DB_A, but when the exact same query is run on the identical DB_B it takes 40 seconds and returns 7,000 records.

The schema is identical and so is the query, but when I run it on DB_B it runs with a degree of parallelism of 1, while on DB_A it runs at 16.

I tried setting the cost threshold for parallelism to 0 to force parallelism, but got the same result.
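In case it matters, these are the settings I would compare between the two databases and the instance (a minimal sketch; sys.database_scoped_configurations assumes SQL Server 2016 or later):

-- per-database MAXDOP override; a value of 1 here disables parallelism for that database only
USE DB_B;
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = N'MAXDOP';

-- different compatibility levels can lead to different plans
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN (N'DB_A', N'DB_B');

-- instance-wide caps
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';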

Why is that? How can the same query running on cloned databases behave so differently?

Any ideas are welcome.

I'm using SQL Server 2017 Standard.

How to find the SSL root cert used for a connection to the database in PostgreSQL?

When we connect to PostgreSQL with sslmode=verify-full, how can I make sure that the certificate I passed was actually used while making the connection?

ssl_is_used() shows only true or false. Is there any other extension or pg_catalog view that shows the root cert used in making the connection to the DB?
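For reference, this is what I have found queryable on the server side so far (a sketch; both describe the session and any client certificate the server saw, not which root cert the client used for verification):

-- per-backend SSL details (pg_stat_ssl exists since PostgreSQL 9.5;
-- newer versions add client_dn / issuer_dn columns describing the client certificate)
SELECT *
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();

-- the contrib extension sslinfo exposes similar information as functions
CREATE EXTENSION IF NOT EXISTS sslinfo;
SELECT ssl_is_used(), ssl_version(), ssl_cipher(), ssl_client_cert_present();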

Store sensitive decryptable data in database

I’m building a web application which configures and interconnects other applications through web services.

Among all the data I have to save for each application, some of it is quite sensitive: the credentials.

Unfortunately, most of these applications do not provide dedicated API keys or tokens for this kind of usage, which means I have to store the logins and passwords of technical users, and in some cases client authentication certificates with their passwords.

This data must be accessible to users (as long as they have the permissions) in “clear text” because it can be updated. It also has to be available in “clear text” to the server, because we can reconfigure every application at any time with the data we want.

What are the best options we have to protect this data? Obviously storing it in “clear text” in the database doesn’t seem like a secure option, despite being convenient.
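One option I am considering, sketched here purely for illustration (it assumes PostgreSQL with the pgcrypto extension; the table, column and key names are made up), is symmetric encryption with a key that lives outside the database:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- hypothetical table; the encryption key itself is never stored in the database
CREATE TABLE app_credentials (
    app_id     integer PRIMARY KEY,
    login      text NOT NULL,
    secret_enc bytea NOT NULL          -- encrypted password or certificate passphrase
);

-- encrypt on write; :'enc_key' is a psql variable supplied by the application or a vault
INSERT INTO app_credentials (app_id, login, secret_enc)
VALUES (1, 'tech_user', pgp_sym_encrypt('s3cret', :'enc_key'));

-- decrypt on read, only when the server actually needs the clear text
SELECT login, pgp_sym_decrypt(secret_enc, :'enc_key') AS secret
FROM app_credentials
WHERE app_id = 1;

This only moves the problem to protecting the key, but at least a copy of the database alone no longer exposes the credentials.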

How is the fill-factor percentage related to the size of the database?

I have rebuilt all indexes in the database with a fill factor of 95 (5% free space) using a maintenance plan, but after the reindex the database has almost doubled in size – the reported free space is 42%.

How is the configured fill factor related to the database size?

Or is something maybe wrong with the reindex that caused so much growth in size?

Some database info after reindex:

Size (MB):              164 983.625
Data Space Used (KB):    82 907 896
Index Space Used (KB):   14 073 320
Space Available (KB):    71 879 024

Generated T-SQL for maintenance plan for one table:

ALTER INDEX [Table1_Index1] ON [dbo].[Table1] REBUILD PARTITION = ALL
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95)
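As a sanity check I could also look at how full the rebuilt leaf pages actually are (a sketch against the one table above; avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode):

SELECT i.name,
       ps.index_level,
       ps.avg_page_space_used_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Table1'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id = ps.index_id;

If those pages really are about 95% full, the fill factor itself is not what consumed the extra space.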

SQL Database ‘Project_Content’ on SQL Server instance ‘XXX’ not found – after removing project server

I keep getting this error in Event Viewer. I reckon Project Server was installed on the farm before and has since been removed, but it seems SharePoint is still looking for the content database; see the error below:

SQL Database 'Project_Content' on SQL Server instance 'XXX' not found.  Additional error information from SQL Server is included below.  Cannot open database "Project_Content" requested by the login. The login failed. Login failed for user XXX\xx.farm'. 

How can I stop this, and where can I tell the SharePoint farm that this database doesn’t exist anymore? Thanks in advance.

What are the security implications of allowing the API consumer to decide the primary key stored in the database?


Story

We are developing an API that allows the consumer to create or modify (i.e. upsert) objects stored in the database via an endpoint using HTTP PUT.

The primary key of an object stored this way is a GUID instead of an auto-increment number, to prevent potential conflicts in the future, and it was decided that the GUID should be provided by the API consumer in both scenarios: object creation and object modification.

We have been told that the advantage of this approach is that it lets us focus on the intention of storing objects without differentiating between create and modify.
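In other words, the handler boils down to a single upsert keyed on the client-supplied GUID, roughly like this (a sketch only; the table and columns are hypothetical and the syntax assumes PostgreSQL-style ON CONFLICT):

-- hypothetical table keyed on the identifier chosen by the API consumer
CREATE TABLE objects (
    id      uuid PRIMARY KEY,
    payload jsonb NOT NULL
);

-- PUT /objects/{id} maps to one statement, with no create-vs-modify branching
INSERT INTO objects (id, payload)
VALUES ($1, $2)
ON CONFLICT (id) DO UPDATE
SET payload = EXCLUDED.payload;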

Question

In this case, we expect the API consumer to pass a GUID as the object identifier. What can go wrong security-wise if we allow someone else to decide the primary key of the stored object?

I understand I could treat the provided GUID as a candidate key and generate another unique identifier internally, but that seems redundant, and I wonder whether it’s a plausible approach.

Does encrypted content in a database need to be signed?

If a user is logged into a website and they submit sensitive info over HTTPS, which is encrypted and stored in a database, does it matter if the info is not also signed?

Given that signing requires a private key, if a hacker has access to the server and wants to tamper with the data, couldn’t they also re-sign the data with the same key?

User Research Insights Database

At our company, we are struggling to document all the insights that we gain through user research and make them accessible and easy to find for everyone inside the company. The perfect solution for us would be:

  • option to enter tags
  • search function
  • option to include media (images, prototypes or videos)
  • having a tool in which we can list all the observations made during user tests (oftentimes some of the observations that we normally write on Post-its are not digitized, because they are not relevant at that moment)

How are you solving this issue in your company? Any best practices? Do you maybe know a good tool that could serve as a solution here?

Thanks in advance.

Schema changes on sharded database

I have performance issues with one particularly large table: 500+ million rows, 300 GB of data, Postgres 10.5. It is already partitioned. I am working on optimising it here and there, but that is not trivial and only provides small improvements. The table is constantly growing, and we expect our user base to increase significantly, so I need a way to scale up.

I want to use a multi-tenant sharding approach: X tenants per shard, with shard resolution at the application layer. Most tenants have relatively small datasets, but a few are huge, and I want to be able to place them on separate shards. To do that I need lookup tables. Cross-shard queries are not a concern at all; naturally, almost all of our queries are per tenant, so all the data for a tenant will sit in the same shard.

I will be using logical sharding: 4 physical shards × 32 logical shards (that is twice as many shards as I currently have partitions). Each logical shard is a separate database. In most tutorials/talks people seem to use schemas instead of databases. Why? Databases are more isolated, and when moving a single tenant or virtual shard to another location there does not seem to be any difference. So a database looks like the better candidate to me.

The drawbacks look acceptable: the existing code has to be updated (significantly), and the app has to be shard-aware.

The question is: how do I handle migrations (schema changes)?

As a first step I will have to create 128 databases, ensuring all of them have all tables, indexes, etc. I also want each of the databases to have its own sequences, so that ids are unique across all shards. Not trivial to me.
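One way I can think of to keep ids unique without any cross-shard coordination (a sketch only; the 48-bit split and all names are arbitrary) is to fold the shard number into every generated id:

-- run in every logical shard database, with that shard's own number baked in
CREATE SEQUENCE object_id_seq;

CREATE OR REPLACE FUNCTION next_object_id(shard_id int) RETURNS bigint AS $$
    -- high bits identify the shard, low bits come from the local sequence
    SELECT (shard_id::bigint << 48) | nextval('object_id_seq');
$$ LANGUAGE sql;

CREATE TABLE objects (
    id        bigint PRIMARY KEY DEFAULT next_object_id(42),  -- 42 = this shard's number
    tenant_id integer NOT NULL,
    payload   jsonb
);

An alternative with no function at all would be to give every shard's sequence the same large INCREMENT BY and a distinct START value.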

But further changes are a problem as well. Do I just iterate over all connections and apply the changes? Is there a better (maybe async) way? What do I do if at some point the schema in one shard is different from another?
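The only approach I can come up with for detecting drift is to give every shard database an identical migrations table and have the deploy job assert the same version everywhere before and after rolling a change across all 128 connections (the table name is just illustrative):

-- present in every logical shard database
CREATE TABLE IF NOT EXISTS schema_migrations (
    version    text PRIMARY KEY,
    applied_at timestamptz NOT NULL DEFAULT now()
);

-- each migration runs in one transaction and records itself as its last statement:
-- BEGIN; ALTER TABLE ...; INSERT INTO schema_migrations (version) VALUES ('2019_05_add_index'); COMMIT;

-- the deploy job then compares this value across all shards
SELECT max(version) FROM schema_migrations;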