MariaDB docker “Can’t init tc log” on startup when using mounted storage

I am initializing a new MariaDB database. Running the container with a volume mounted from my home directory lets MariaDB start up just fine:

docker run -it --rm --name mymaria \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -e MYSQL_PASSWORD=p@$$w0rd \
  -e MYSQL_DATABASE=myapp \
  -e MYSQL_USER=myapp \
  -v /home/myuser/mysql:/var/lib/mysql \
  mariadb:10.2

However, running the mariadb container with the volume on a separately mounted directory, like so:

docker run -it --rm --name mymaria \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -e MYSQL_PASSWORD=p@$$w0rd \
  -e MYSQL_DATABASE=myapp \
  -e MYSQL_USER=myapp \
  -v /mnt/storage/mysql:/var/lib/mysql \
  mariadb:10.2

fails, and this configuration produces the following in the docker logs output:

Initializing database
2019-09-23  5:12:13 139724696503616 [ERROR] Can't init tc log
2019-09-23  5:12:13 139724696503616 [ERROR] Aborting

Installation of system tables failed!
...

Simply removing tc.log, as some folks have suggested, does not work; restarting MariaDB just writes tc.log back into the volume at /var/lib/mysql.

Perhaps this is a permissions issue? I feel like I’ve tried every combination of chown on each directory. I do notice that docker appears to change the mounted volume /mnt/storage/mysql to drwxr-xr-x 2 systemd-coredump root 176 Sep 22 23:11 mysql, which I find odd.
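For what it’s worth, the chown attempts have been along these lines (UID 999 is only my assumption about the mysql user inside the mariadb image; the real value may differ):

# see who owns the data directory on the host after a failed start
ls -ld /mnt/storage/mysql

# check which UID/GID the image's mysql user actually has
docker run --rm mariadb:10.2 id mysql

# hand the mount point to that UID/GID before starting the container
sudo chown -R 999:999 /mnt/storage/mysql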

I encounter this issue only with the 10.2 tag and not with latest. However, the orchestration I’m working from calls for mariadb:10.2.

Using Transactions To Rollback SSIS package

I’m at a loss with an SSIS package I inherited. It contains:

1 Script Task
3 Execute SQL Tasks
5 Data Flow Tasks (each contains a number of merges, lookups, data inserts and other transformations)
1 File System Task

All of these are encapsulated in a Foreach loop container. I’ve been tasked with modifying the package so that if any of the steps within the control/data flow fails, the entire thing is rolled back. Now I’ve tried two different approaches to accomplish this:

I. Using Distributed Transactions.

I ensured that:

MSDTC was running on the target server and the executing client (screenshot enclosed)
msdtc.exe was added as an exception to the server and client firewalls
Inbound and outbound rules were set for both server and client to allow DTC connections
Foreach Loop Container TransactionLevel: Required
All other tasks TransactionLevel: Supported
My OLEDB Connection has RetainSameConnection set to TRUE, and I’m using SQL Server Authentication with Save Password checked

When I execute the package, it fails right after the Script Task (the first step). After spending an entire week trying to figure out a workaround, I decided to try to accomplish my goal with 3 Execute SQL Tasks (sketched below):

BEGIN TRAN before the Foreach Loop Container
COMMIT TRAN after the Foreach Loop Container, with a Success constraint
ROLLBACK TRAN after the Foreach Loop Container, with a Failure constraint
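The statements behind those three tasks are nothing more elaborate than this (a sketch of what each task contains):

-- Execute SQL Task placed before the Foreach Loop Container
BEGIN TRANSACTION;

-- Execute SQL Task after the loop, reached via the Success precedence constraint
COMMIT TRANSACTION;

-- Execute SQL Task after the loop, reached via the Failure precedence constraint
ROLLBACK TRANSACTION;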

In this case, the Foreach Loop Container and all other tasks have the TransactionLevel property set to Supported. The problem now is that the package executes up to the fourth Data Flow Task and hangs there forever. After logging into SQL Server and checking the running sessions, I noticed sys.sp_describe_first_result_set;1 as the head blocker session.
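(For reference, a blocking check along these lines is how the head blocker shows up; this is a sketch rather than the exact query I ran.)

-- list blocked requests, who is blocking them, and the statement text involved
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_resource,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;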

Doing some research, I found it could be related to a few TRUNCATE statements in some of my Data Flow Tasks, which could cause a schema lock. I went ahead and changed the ValidateExternalMetaData property to False for all tasks within my data flows and changed my TRUNCATE statements to DELETE statements instead. I re-ran the package and it still hangs in the same spot with the same head blocker.

As an alternative, I tried creating a second OLEDB connection to the same database, assigned that new OLEDB connection to my BEGIN, ROLLBACK and COMMIT SQL tasks with RetainSameConnection set to TRUE, and changed RetainSameConnection to FALSE (and also tried TRUE) on the original OLEDB connection (the one used by the data flow tasks). This worked in the sense that the package appeared to execute (it ran and COMMIT TRAN executed fine). I then ran it again with a forced error to make it fail, and the ROLLBACK TRAN task executed successfully. However, when I queried the affected tables, the transaction hadn’t rolled back: all new records were inserted and old ones were updated (the BEGIN TRAN was clearly started on a different connection and hence didn’t affect the package’s workflow).

I’m not sure what else to try at this point. Any help would be truly appreciated, I’m about to go nuts with this!

P.S. Additionally, DelayValidation is set to True on everything, and the SQL Server version is 2012.

How to connect to a database in SQL Server Management Studio (SSMS) using Microsoft OLE DB Provider for SQL Server

After asking this question, I realized that the real slowness comes from the SQL Server engine itself, and I have tried a ton of things with no success so far.

However, I created a connection.udl file on the desktop and tried to connect to my localhost via different providers and different connection string combinations.

I realized that when I use localhost instead of . and use Microsoft OLE DB Provider for SQL Server, the connection is almost instantaneous and truly fast.
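For reference, the connection string the .udl test ends up with for that provider looks roughly like this (Initial Catalog=master is just a placeholder):

Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=localhost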

Now I want to be able to connect to SQL Server via SSMS using that provider, but I don’t know how. Could you please help?

How can I log in with SSMS using ‘Microsoft OLE DB Provider for SQL Server’?

ERROR: check constraint “dates_check” is violated by some row

ALTER TABLE myTable ADD CONSTRAINT dates_check CHECK (start_date::date < end_date::date);

Gives..

ERROR: check constraint “dates_check” is violated by some row

But..

select start_date, end_date from myTable where start_date::date > end_date::date;

Returns..

 start_date | end_date
------------+----------
(0 rows)
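For completeness, a broader diagnostic would be the following; it also catches rows where the two dates are equal, which violate the constraint (it requires strictly less-than) but are missed by the > query above:

select start_date, end_date from myTable where start_date::date >= end_date::date;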

Postgres – how to investigate a dependency before DROPing a field

I want to drop a field from a partitioned table, but Postgres claims there are dependencies and proposes using CASCADE. How can I check that nothing important will be deleted before risking DROP with CASCADE?

My table is defined as (edited for brevity) –

CREATE TABLE public.positionfix (
    fixid3 integer,
    fixid bigint NOT NULL DEFAULT nextval('positionfix_fixid_seq'::regclass),
    messageid bigint NOT NULL,
    fixtime timestamp without time zone NOT NULL
) PARTITION BY RANGE (fixtime)
WITH (
    OIDS = FALSE
)
TABLESPACE fastspace;

I’m trying to remove the field ‘fixId3’ using the command –

ALTER TABLE positionfix  DROP COLUMN fixId3; 

The error messages I get are –

ERROR:  cannot drop table positionfix column fixid3 because other objects depend on it
DETAIL:  default for table positionfix column fixid depends on sequence positionfix_fixid_seq
default for table positionfix_201909 column fixid depends on sequence positionfix_fixid_seq
default for table positionfix_20190926 column fixid depends on sequence positionfix_fixid_seq
default for table positionfix_20190927 column fixid depends on sequence positionfix_fixid_seq
default for table positionfix_20190928 column fixid depends on sequence positionfix_fixid_seq
HINT:  Use DROP ... CASCADE to drop the dependent objects too.
SQL state: 2BP01

All the tables mentioned in the errors are partitions of the parent table. The issue is that the error message mentions other objects dependent on “fixId3”, but the detail messages refer to a different field, “fixId”.

There should be nothing that’s dependent on “fixId3”.

My questions are:

1) Based on the messages above, which additional objects will Postgres delete if I attempt to drop “fixId3” with the CASCADE option? The field “fixId”? The sequence? Both? Something else?

2) Why is Postgres listing a dependency between “fixId” and the sequence “positionfix_fixid_seq” when the field being deleted is “fixId3”?

3) How do I remove “fixId3” without risking accidentally removing “fixId” instead? (I am reluctant to try CASCADE before I understand the consequences.)
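In case it matters, this is the kind of catalog query I was hoping would list up front what CASCADE would remove (just a sketch against pg_depend; I am not sure it is the right way to enumerate the dependencies):

SELECT pg_describe_object(d.classid, d.objid, d.objsubid) AS dependent_object,
       d.deptype
FROM pg_depend AS d
WHERE d.refclassid = 'pg_class'::regclass
  AND d.refobjid = 'public.positionfix'::regclass
  AND d.refobjsubid = (SELECT attnum
                       FROM pg_attribute
                       WHERE attrelid = 'public.positionfix'::regclass
                         AND attname = 'fixid3');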

Permissions issue in Docker SQL Server 2017 while restoring certificate

Docker SQL Server 2017 container @latest. Using master database.

The error I am facing is the following:

[S00019][15208] The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

The closest thing I have found to this exact question is this issue on Stack Overflow. However, the answer doesn’t work for me. This question has a similar answer.

I have also tried the instructions here, and here.

So going through the parts of the error:

  1. I have recreated the files twice, so I don’t think it’s the “invalid” part. And it’s obviously not the “does not exist” part (if I put in the wrong password, it tells me it’s the wrong password).
  2. I have backed up and restored the SMK and Master Key without issue, so I don’t think it’s the permissions issue. The files have the exact same permissions.

I can’t get the certificate to restore no matter what I try. I have searched the GitHub issues to no avail so I don’t think it’s a bug. I must be doing something wrong.

Relevant code:

--on Prod
BACKUP CERTIFICATE sqlserver_backup_cert
TO FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.cer'
    WITH PRIVATE KEY (
        FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.key',
        ENCRYPTION BY PASSWORD = 'foobar')
GO

--on Test
CREATE CERTIFICATE sqlserver_backup_cert
FROM FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.crt'
    WITH PRIVATE KEY (
        FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.key',
        DECRYPTION BY PASSWORD = 'foobar'
    )
GO

It’s noteworthy that /var/opt/mssql/certs is a Docker volume. However I have also tried creating my own directory inside the container and using docker cp. No change.
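For completeness, the permission checks I keep running boil down to something like this (the container name mssql is a placeholder, and the mssql account in the last line is an assumption about the image):

# ownership/permissions of the certificate files as seen inside the container
docker exec -it mssql ls -l /var/opt/mssql/certs

# the account the container (and hence sqlservr) runs as by default
docker exec -it mssql id

# if that account cannot read the files, ownership would need adjusting, e.g.:
# docker exec -u root -it mssql chown -R mssql /var/opt/mssql/certs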

ORA-01031: insufficient privileges to a newly created user

I have installed 18c Enterprise Edition. I just created a new user with a c## prefix, and when I tried to log in as the newly created user it says…

ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause:  An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges
Vendor code 1031

Please guide me to solve this issue.
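For reference, my understanding of the *Action line is that it points at a grant along these lines, run by a privileged user (c##myuser is a placeholder for the actual user name):

-- a new user needs at least CREATE SESSION before it can log in
GRANT CREATE SESSION TO c##myuser;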

How to log/view remote calls/queries to DB in Toad Data Point?

I am running an Apache Sqoop job from a local computer against an Oracle DB that we connect to through a virtual tunnel, and I find that for certain large Sqoop jobs the process hangs after a certain X% completed. I would like to see exactly which queries are being sent to and read by the DB, from Toad Data Point (the UI we normally use to run queries manually against the remote DB). Does anyone know how to do this?
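To be concrete, the information I am after is roughly what the Oracle session views expose, for example (the username filter is a placeholder; what I would like to know is whether Toad Data Point offers a friendlier way to see this):

-- show the statement each session for that user is currently executing
SELECT s.sid, s.username, s.program, s.status, q.sql_text
FROM v$session s
JOIN v$sql q
  ON q.sql_id = s.sql_id
 AND q.child_number = s.sql_child_number
WHERE s.username = 'SQOOP_USER';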

Designing database with filter tables

I am tasked with the redesign of an old database that is not performing very efficiently. Now, I am nowhere near an experienced database designer, so I am hoping you guys can help me figure out some things.

First of all, the application has the user answer a few questions and performs some calculations based on the answers. This is the core of the application, and the database should handle it with the best possible performance. So we have products that are shown or hidden based on the answers to the questions. But the questions themselves can also be "filtered" based on the answer(s) to previous question(s), on the user who is logged in, and on whether the user is using the application from within the platform rather than from a web module. In the current structure, each entity that can be "filtered" has its own filter table: products have a product_filter table and questions have a question_filter table.

To explain the current structure further here is a diagram:

product
-------------
product_id

product_filter
-------------------
product_filter_id
product_id // FK
filter_label
operator // i.e. equals, not equals, greater than etc.
filter_value

question
-------------
question_id

question_product
----------------------
question_product_id
question_id // FK
product_id // FK

question_product_filter
-------------------
question_product_filter_id
question_product_id // FK
depenend_on_question_product_id
filter_label
operator // i.e. equals, not equals, greater than etc.
filter_value
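My rough understanding of how these filter tables end up being queried is sketched below; the answers table and its columns are placeholders I have invented purely for this example, and only the 'equals' operator is shown:

-- hypothetical: answers(filter_label, answer_value) holds the current user's answers
SELECT p.product_id
FROM product p
WHERE NOT EXISTS (
    SELECT 1
    FROM product_filter f
    JOIN answers a ON a.filter_label = f.filter_label
    WHERE f.product_id = p.product_id
      AND f.operator = 'equals'
      AND a.answer_value <> f.filter_value
);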

For context, the application currently runs on Classic ASP, and the database redesign is wanted because the system will be migrated to an ASP.NET application using Entity Framework. So the senior dev has asked me to take a code-first approach to redesigning the database.

Copy SQL Server DB into another instance without SA access

I have two SQL Server databases, and both of them are in the cloud.

  1. First server is source
  2. Second server is destination

I only have non-SA access, and I’m using DBForge Studio to manage the database. I’m trying to copy the database from the first server to the second server through DBForge → Database Sync → Copy Database and then selecting Copy (I’m following this article).

Note: The database already exists in the destination with the same name, but has no tables yet.

I got an error message when trying to copy:

The EXECUTE permission was denied on the object ‘xp_fileexist’…..

Then I tried Copy & Override, but I got the same error message.

So I tried another way: export & import. I exported the database from the source server into a .sql file, but it only contains INSERT queries; it doesn’t include the CREATE TABLE statements for the schema. I also tried exporting to other formats like .csv and .xsl, but I got the same result: only data, without the schema.

Is there any better way to copy the database without SA access?