How to use MariaDB server_audit plugin on MySQL

I need to log connections on MySQL on Windows.

When I googled it, I saw some advice to use the MariaDB server_audit plugin on MySQL (for example: here).

So, I gave it a try, but I failed.

I copied the plugin to the “lib/plugin” directory and tried to load it from the “my.ini” file.
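Something along these lines is what I put in "my.ini" (quoting from memory, so the exact option and variable names here are my best guess, not verified):

    [mysqld]
    # assumed syntax - plugin-load and the server_audit_* variables may need adjusting
    plugin-load=server_audit=server_audit.dll
    server_audit_logging=ON
    server_audit_events=CONNECT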

In the err file I can see the following:

[ERROR] Unknown variable type code 0x8200 in plugin 'SERVER_AUDIT'.
[ERROR] Bad options for plugin 'SERVER_AUDIT'.

When I tried to load it manually using the command `INSTALL PLUGIN server_audit SONAME 'server_audit.dll';` I got:

Can't initialize function 'server_audit'; Plugin is disabled

I’ve tried on MySQL versions 5.5 and 5.7.

Does anyone know how to do it right?

Doubt about Database design (newbie)

I am learning about database design and I have a doubt about some database design concepts.

I want to know which of the following two designs is better.

Design 1

There are two tables: Object and Field.

Object

| ID | NAME |

ID is PK

Field

| ID | OBJECT_ID | NAME |

ID and OBJECT_ID together are the PK, and OBJECT_ID is an FK.


Design 2

There are three tables: Object, ObjectField and Field.

Object

| ID | NAME |

ID is primary key

Field

| ID | NAME |

ID is primary key

ObjectField

| OBJECT_ID | FIELD_ID |

OBJECT_ID and FIELD_ID together are the PK, and each of them is also an FK.

The relation is stored in a separate table.
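Expressed as simplified DDL, this is what I mean (the column types are just examples I made up):

    -- Design 1: each Field row belongs directly to an Object
    CREATE TABLE Object (
        ID   INT PRIMARY KEY,
        NAME VARCHAR(100)
    );

    CREATE TABLE Field (
        ID        INT,
        OBJECT_ID INT REFERENCES Object (ID),
        NAME      VARCHAR(100),
        PRIMARY KEY (ID, OBJECT_ID)
    );

    -- Design 2: Field is independent and linked to Object via a junction table
    -- (here named FieldAlt only so both designs fit in one script; it would
    --  simply be called Field and have no OBJECT_ID column)
    CREATE TABLE FieldAlt (
        ID   INT PRIMARY KEY,
        NAME VARCHAR(100)
    );

    CREATE TABLE ObjectField (
        OBJECT_ID INT REFERENCES Object (ID),
        FIELD_ID  INT REFERENCES FieldAlt (ID),
        PRIMARY KEY (OBJECT_ID, FIELD_ID)
    );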

How do I join and get one row from the left table, no matter how many matches I get from the right table?

I have two tables – one is a data table and the other is a mapping table. I want to join them together, but only preserve the data from the data table (the left table). However, it is possible that the mapping table contains multiple records that match a single record in the data table. I cannot use DISTINCT because there may be identical rows in the data table, and I want to preserve that same number of rows from the data table in the result set.

Here is a sample of the data I am working with:

       DataTable                           MappingTable
+-----+-----+-----+-----+           +------+------+------+------+
| ID1 | ID2 | ID3 | ID4 |           | ID1  | ID2  | ID3  | ID4  |
+-----+-----+-----+-----+           +------+------+------+------+
|  1  |  1  |  1  |  1  |           |  1   | NULL | NULL | NULL |
|  1  |  1  |  1  |  1  |           | NULL | NULL | NULL |  1   |
|  2  |  1  |  1  |  1  |           |  3   |  3   | NULL | NULL |
|  3  |  1  |  1  |  3  |           +------+------+------+------+
|  4  |  1  |  1  |  4  |
|  2  |  2  |  1  |  1  |
|  3  |  2  |  1  |  3  |
|  3  |  3  |  1  |  3  |
|  2  |  1  |  0  |  1  |
|  2  |  1  |  0  |  1  |
|  4  |  3  |  2  |  3  |
+-----+-----+-----+-----+

Below is the join I am using. I wrote a custom function to handle the NULL-matching behavior, which I am including here as well.

SELECT *
FROM DataTable P
JOIN MappingTable M
    ON dbo.fNullMatchCheckIntS(P.ID1, M.ID1, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID2, M.ID2, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID3, M.ID3, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID4, M.ID4, 0, 1) = 1

CREATE FUNCTION dbo.fNullMatchCheckIntS
(
    @Value1 INT
    ,@Value2 INT
    ,@AutoMatchIfValue1IsNull BIT
    ,@AutoMatchIfValue2IsNull BIT
)
RETURNS BIT
AS
BEGIN
    DECLARE @Result BIT = 0

    SELECT
        @AutoMatchIfValue1IsNull = ISNULL(@AutoMatchIfValue1IsNull, 0)
        ,@AutoMatchIfValue2IsNull = ISNULL(@AutoMatchIfValue2IsNull, 0)

    IF
        (@AutoMatchIfValue1IsNull = 1 AND @Value1 IS NULL)
        OR (@AutoMatchIfValue2IsNull = 1 AND @Value2 IS NULL)
        OR @Value1 = @Value2
        OR (@Value1 IS NULL AND @Value2 IS NULL)
    BEGIN
        SET @Result = 1
    END

    RETURN @Result
END

The problem with the way the join works is that the first two rows in the DataTable match on the first two rows in the MappingTable, giving me four identical records in the result, but I only want 2. I know that I could add an identity column to the DataTable and then use DISTINCT or PARTITION to get the result I am looking for, but I would like to avoid that route if possible.
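For reference, the identity-column route I am talking about would look roughly like this (RowID being the hypothetical new IDENTITY column on DataTable, which makes the duplicate data rows distinguishable so DISTINCT only removes the extra mapping matches):

    SELECT DISTINCT P.RowID, P.ID1, P.ID2, P.ID3, P.ID4
    FROM DataTable P
    JOIN MappingTable M
        ON dbo.fNullMatchCheckIntS(P.ID1, M.ID1, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(P.ID2, M.ID2, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(P.ID3, M.ID3, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(P.ID4, M.ID4, 0, 1) = 1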

EDIT: I figured out a way to do this using EXISTS, but it looks a little ugly in my opinion. Still interested in other answers if anyone has an idea. Thanks!

SELECT *
FROM DataTable D
WHERE EXISTS
(
    SELECT D.ID1, D.ID2, D.ID3, D.ID4
    FROM MappingTable M
    WHERE dbo.fNullMatchCheckIntS(D.ID1, M.ID1, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID2, M.ID2, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID3, M.ID3, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID4, M.ID4, 0, 1) = 1
)

SSRS 2017 permissions required to deploy

I'm having issues with one of our developers deploying reports to SSRS: they seem to be able to deploy datasets, but not the reports themselves. As a temporary measure, we've granted all privileges (e.g. Content Manager, Publisher, etc.) and also sysadmin on SQL Server, yet they are still not able to deploy reports. However, a developer with what appears to be the same permissions (sysadmin and all privileges in SSRS) can deploy without issue. The error message is below:

"The permissions granted to user 'Domain\Username' are insufficient for performing this operation." 

Postgres 11+: are covering indices (INCLUDE) useful for join/where conditions?

I’d like to better understand when covering indices can be useful to make index-only scans possible in Postgres 11+. As the documentation says, given the covering index

CREATE INDEX tab_x_y ON tab(x) INCLUDE (y); 

queries like this can use it for index-only scans:

SELECT y FROM tab WHERE x = 'key'; 

Now I am wondering if such a covering index could also allow index-only scans when the INCLUDE columns appear in WHERE or join conditions. For instance, assume a covering index:

CREATE INDEX tab_x_y_z ON tab(x) INCLUDE (y, z); 

Would this allow for index-only scans for the following queries?

SELECT z FROM tab WHERE x = 'key' AND y = 1;

SELECT x, y, z
FROM (VALUES ('key1'), ('key2'), ('key3')) sub(id)
JOIN tab ON tab.x = sub.id
WHERE y = 1;
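I assume I can simply check each individual query with EXPLAIN, along these lines, but I would like to understand the general rule rather than testing case by case:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT z FROM tab WHERE x = 'key' AND y = 1;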

Can’t help the engine to choose the correct execution plan

The setup is too complex to share the original code (a lot of routines, a lot of tables), so I will try to summarize.

Environment:

  • SQL Server 2016
  • standard edition

Objects:

  • wide table with the following columns:

    ID BIGINT PK IDENTITY
    Filter01
    Filter02
    Filter03
    .. and many more columns
  • a stored procedure returning the visible IDs from that table depending on filter parameters

  • the table has the following indexes:

    PK on ID
    NCI on Filter01 INCLUDE (Filter02, Filter03)
    NCI on Filter02 INCLUDE (Filter01, Filter03)

Basically, in the routine I am creating three temporary tables – each holding the current filtering values – and then joining them with the main table. In some cases, Filter02 values are not specified (so the join with that table is skipped) – the other tables are always joined. So, I have something like this:

SELECT *
FROM maintable
INNER JOIN #Filter01Values -- always exists
INNER JOIN #Filter02Values -- sometimes skipped
INNER JOIN #Filter03Values -- always exists

As for how the IDs are distributed – in 99% of the cases it is best to filter by the Filter02 values, and I guess because of this the engine is using the NCI on Filter02 INCLUDE(Filter01, Filter03) index.

The issue is that in the remaining 1% the query performs very badly:

(screenshot of the actual execution plan)

In green is the Filter02 values table, and you can see that filtering on it does not reduce the rows read at all. Then, when the filtering by Filter01 is done (in red), about 100 rows are returned.

This is happening only when the stored procedure is executed. If I execute its code directly with the same parameters, I get a nice execution plan:

(screenshot of the good execution plan)

In that case, the engine filters by Filter01 first and by Filter02 third.

I am building and executing a dynamic T-SQL statement and I add OPTION (RECOMPILE) at the end, but it does not change anything. If I add WITH RECOMPILE at the stored procedure level, everything is fine.

Note, the values in the temporary tables used for filtering are not embedded in the dynamic T-SQL statement. The tables are defined and populated first, and then the statement is built.
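Schematically, the pattern is something like this (the column and variable names are placeholders I made up; the real statement is much bigger):

    DECLARE @sql NVARCHAR(MAX) = N'
    SELECT M.ID
    FROM maintable M
    INNER JOIN #Filter01Values f1 ON f1.Value = M.Filter01
    INNER JOIN #Filter02Values f2 ON f2.Value = M.Filter02  -- sometimes omitted
    INNER JOIN #Filter03Values f3 ON f3.Value = M.Filter03
    OPTION (RECOMPILE);';

    -- the temp tables created earlier in the procedure are visible to the dynamic statement
    EXEC sys.sp_executesql @sql;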

So, my questions are:

  • is the engine building a new plan for my dynamic statement, since I have OPTION (RECOMPILE) – and if yes, why is it wrong?
  • is the engine using the values populated in my Filter02 temporary table to build the initial plan – maybe yes, and that's why it is choosing the wrong plan?
  • using RECOMPILE at the procedure level feels like a very heavy/lazy fix – do you have any ideas how I can assist the engine further and avoid this option – new indexes, for example (I have tried a lot)?

Postgres: handy way/function to compute bearing change from coordinates

I am going to compute point-based acceleration and bearing relative to true/magnetic north from a sequence of GPS points, each with a timestamp and instantaneous speed.

Here is what the dataset looks like:

postgres=> SELECT * FROM location_track LIMIT 10;
 user_id | timestamp  |                        geo                         |    lat     |    lon     | speed
---------+------------+----------------------------------------------------+------------+------------+-------
      49 | 1458203631 | 0101000020E6100000F74BD5873F9C21C07530AC2C1BDF4340 | 39.7430168 | -8.8051722 |     9
      49 | 1458203632 | 0101000020E6100000F74BD5873F9C21C07530AC2C1BDF4340 | 39.7430168 | -8.8051722 |     9
      49 | 1458203633 | 0101000020E610000005888219539C21C0F8CC48731ADF4340 | 39.7429947 | -8.8053215 |     9
      49 | 1458203634 | 0101000020E610000005888219539C21C0F8CC48731ADF4340 | 39.7429947 | -8.8053215 |     9
      49 | 1458203635 | 0101000020E610000007D7378B729C21C05452CCE616DF4340 | 39.7428864 | -8.8055614 |  7.75
      49 | 1458203636 | 0101000020E610000007D7378B729C21C05452CCE616DF4340 | 39.7428864 | -8.8055614 |  7.75
      49 | 1458203637 | 0101000020E610000027B220DE849C21C04E4DDDF016DF4340 | 39.7428876 | -8.8057012 |     8
      49 | 1458203638 | 0101000020E610000027B220DE849C21C04E4DDDF016DF4340 | 39.7428876 | -8.8057012 |     8
      49 | 1458203639 | 0101000020E610000028EFE3688E9C21C03505D78118DF4340 | 39.7429354 |  -8.805774 |  8.25
      49 | 1458203640 | 0101000020E610000028EFE3688E9C21C03505D78118DF4340 | 39.7429354 |  -8.805774 |  8.25
(10 rows)

postgres=> \d location_track
Table "postgres.location_track"
   Column   |       Type       |           Modifiers
------------+------------------+--------------------------------
 user_id    | integer          | not null
 timestamp  | integer          |
 geo        | geography        |
 lat        | double precision | not null
 lon        | double precision | not null
 speed      | real             |

Then I want to modify the location_track table by adding acceleration and bearing columns. I know acceleration can be derived from the change in speed over the change in timestamp, but I am wondering if there is a handy way or function in Postgres to derive the bearing.

The GPS points are received at an interval of 1 second, so the bearing of the point in row 1 would be computed relative to the point in row 2 (i.e. the next consecutive point), and so on until the last point. Additionally, lat and lon are in degrees, so they would need to be converted to radians.
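To illustrate the shape of what I am after, something like this is what I have in mind (assuming PostGIS is available, since geo is a geography column; I am not sure this is the right or most handy way):

    SELECT
        user_id,
        "timestamp",
        -- ST_Azimuth returns radians clockwise from north; degrees() converts
        degrees(ST_Azimuth(geo, lead(geo) OVER w)) AS bearing_deg,
        -- change in speed over change in time between consecutive points
        (lead(speed) OVER w - speed)
            / NULLIF(lead("timestamp") OVER w - "timestamp", 0) AS acceleration
    FROM location_track
    WINDOW w AS (PARTITION BY user_id ORDER BY "timestamp");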

Any handy way of doing this in Postgres?

Using Alias in Calculations

I know this problem has been addressed before, but my SQL knowledge is not good enough to figure it out. Maybe you can help me.

SELECT ProductionLine, Product, POId, BatchNumber,
       Sum(Prime) AS Prime,
       Sum(Offspec) AS OffSpec,
       Sum(InQ) AS InQ,
       Sum(Prime) + Sum(Offspec) + Sum(InQ) AS Total
FROM @t2
GROUP BY ProductionLine, BatchNumber, POId, Product;

I now need a column PercentagePrime where the column Prime is divided by the Total alias.
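In other words, this is what I am trying to write (the reference to the Total alias in the last column is exactly what I cannot get to work):

    SELECT ProductionLine, Product, POId, BatchNumber,
           Sum(Prime) AS Prime,
           Sum(Offspec) AS OffSpec,
           Sum(InQ) AS InQ,
           Sum(Prime) + Sum(Offspec) + Sum(InQ) AS Total,
           Sum(Prime) / Total AS PercentagePrime   -- referencing the alias like this fails
    FROM @t2
    GROUP BY ProductionLine, BatchNumber, POId, Product;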

I hope you can help me with this

Impacts of a change to the Page Verify option

There's no doubt about the importance of doing regular integrity checks on production databases.
I'm testing the impact of changing the maintenance plans and changing how databases are backed up.
First of all, it is necessary to activate the "checksum" page verify mode. I think this change impacts performance only gradually, because the checksum is only written when a page is read into memory, changed, and written back to disk.
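For reference, the change itself is just this (taking MyDB as the example database):

    ALTER DATABASE [MyDB] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;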

I have started to test this change with backups. I prepared a backup T-SQL script that saves the duration of each backup command (sketched below), and I ran each of the following commands 100 times:

  • BACKUP DATABASE [MyDB] TO DISK = N'nul' (ran 100 times)
  • BACKUP DATABASE [MyDB] TO DISK = N'nul' WITH CHECKSUM (ran 100 times)
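The core of the timing script looks roughly like this (dbo.BackupTimings is a placeholder name for my logging table, not the real script):

    DECLARE @start DATETIME2 = SYSDATETIME();

    BACKUP DATABASE [MyDB] TO DISK = N'nul' WITH CHECKSUM;

    -- log how long the command took
    INSERT INTO dbo.BackupTimings (Command, DurationMs)
    VALUES (N'WITH CHECKSUM', DATEDIFF(MILLISECOND, @start, SYSDATETIME()));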

MyDB is a 50 GB database with PAGE_VERIFY set to CHECKSUM. The result is:

  • Backup with checksum is 20% slower

I started some tests with a larger database (400 GB) and noticed that the first backups (with an empty cache) are generally slower. After the first backups have completed, the duration tends to stabilize.

My questions are:

  • Does the backup process buffer data in the cache? If yes, could this cause a variation in the PLE trend or memory pressure?
  • If backups with checksum are 20% slower, what happens with application queries? I don't think my tests are the absolute truth, but..
  • Is there any kind of waiting time linked to the checksum process?
  • Do you know if there are people online who have tested the impacts of this change? Any additional material would be useful.