Stop UUID injection in MySQL database

I have a Cordova app that logs users in based on their device's model+platform+UUID, for example: Pixel 2Android39798721218. Here is how this works when a user uses a new device:

  1. User opens the app.
  2. The app sends the UUID to a checking page, e.g. login-uuid?id=(uuid_here).
  3. If the UUID does not exist in the database, the user is directed to a login page with the URL login?uuid=(uuid_here).
  4. The user logs in, and the UUID is sent to the login backend, where it is stored in the database.
  5. When the user opens the app again, they are logged in because their UUID is in the database.

My question is basically: if someone knows a user's login details, they can navigate to login?uuid=foo, and then even if the user changes their password, the attacker can still log in by navigating to login-uuid?id=foo. Is there any way to mitigate this, or will simply removing all logged-in devices when a user resets their password be enough?
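One way to mitigate this is to treat the stored device identifier as a revocable credential, so a password reset wipes every remembered device for that account. A minimal sketch, assuming a hypothetical user_devices table (the table and column names are illustrative, not from the post):

-- Hypothetical table holding one row per device a user has logged in from.
CREATE TABLE user_devices (
    user_id     INT UNSIGNED NOT NULL,
    device_uuid VARCHAR(191) NOT NULL,  -- the model+platform+uuid string from the app
    created_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id, device_uuid)
);

-- On every password reset, revoke all remembered devices so a UUID captured
-- by an attacker stops working:
DELETE FROM user_devices WHERE user_id = 42;  -- 42 = the resetting user's id

Beyond that, since model+platform+UUID is guessable rather than secret, pairing it with a long random token generated by the server on each login would make login-uuid?id=... much harder to forge.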

Why is disk IO higher on Debian 10 (MariaDB 10.3) with MySQL replication?

I have a MySQL/MariaDB master-master replication setup that has been working well for several years; the database and tables are not very large (under 200MB for 18 tables). These were on 2 servers running Debian 9 and MariaDB 10.1.44. Now I've spun up 2 new servers running Debian 10, and I'm in the process of moving things over to them, but I stopped half-way because I'm seeing much higher disk IO usage on the new servers (about 6x more).

So currently, one of the Debian 9 servers and one of the Debian 10 servers are in master-master relationship, with one Debian 9 still being a slave of the master Debian 9 server, and same on the Debian 10 side of things.

I didn’t notice the increased disk IO until after all read/write operations were moved to the Debian 10 master. I was trying to browse tables and saw how slow it was outputting the query results, and it felt like I was on a dial-up connection watching the rows scroll across. It turned out there was some disk contention with the virtual host that was partly responsible, and that problem is now mostly gone.

Now, as you can imagine, none of this is crashing the server with such a "small" set of tables, but as things continue to grow, I'm concerned that there is some underlying misconfiguration that will rear its ugly head at an inopportune time. On the Debian 9 servers, iotop shows steady write IO at around 300-600 KB/s, but on Debian 10 it spikes as high as 6 MB/s and averages around 3 MB/s.

Here is the standard config on all 4 servers; everything else is default Debian (or MariaDB, as the case may be) settings. The full config for Debian 10 is at https://pastebin.com/Lk2FR4e3:

max_connections         = 1000
query_cache_limit       = 4M
query_cache_size        = 0
query_cache_type        = 0
server-id               = 1 # different for each server
log_bin                 = /var/log/mysql/mysql-bin.log
binlog_do_db            = optimizer
replicate-do-db         = optimizer
report-host             = xyz.example.com # changed obviously
log-slave-updates       = true
innodb_log_file_size    = 32M
innodb_buffer_pool_size = 256M

Here are some other settings I’ve tried that don’t seem to make any difference (checked each one by one):

binlog_annotate_row_events    = OFF
binlog_checksum               = NONE
binlog_format                 = STATEMENT
innodb_flush_method           = O_DIRECT_NO_FSYNC
innodb_log_checksums          = OFF
log_slow_slave_statements     = OFF
replicate_annotate_row_events = OFF

I’ve gone through all the settings here that have changed from MariaDB 10.1 to 10.3, and can’t seem to find any that make a difference: https://mariadb.com/kb/en/replication-and-binary-log-system-variables/

I also did a full listing of the server variables and compared the 10.1 configuration to the 10.3 configuration, and didn't find anything obvious. But either I'm missing something, or the problem lies with Debian 10 itself.

Results of SHOW ENGINE INNODB STATUS are here: https://pastebin.com/mJdLQv8k

Now, how about that disk IO: what is it actually doing? I include 3 screenshots here to show what I mean by increased disk IO.

Resource graphs on the Debian 10 master

That is from the Debian 10 master, and you can see where I moved operations back to the Debian 9 server (more on that in a second). Notice the disk IO does go down slightly at that point, but not to the levels that we’ll see on the Debian 9 master. Also note that the public bandwidth chart is pretty much only replication traffic, and that the disk IO far outstrips the replication traffic. The private traffic is all the reads/writes from our application servers.

Resource graphs on Debian 9 master

This is the Debian 9 master server, and you can see where I moved all operations back to this server: the private traffic shoots up, but the write IO hovers around 500 KB/s. I didn't have resource graphs being recorded on the old servers, hence the missing bits on the left.

Debian 10 slave server resource graphs

And lastly, for reference, here is the Debian 10 slave server (which will eventually be half of the master<->master replication). There are no direct reads/writes on this server; all disk IO is from replication.

Just to see what would happen (as I alluded to above), I reverted all direct read/write operations to the Debian 9 master server. While disk IO did fall somewhat on the Debian 10 server, it did not grow on the Debian 9 server to any noticeable extent.

Also, on the Debian 10 slave server, I did STOP SLAVE once to see what happened, and the disk IO went to almost nothing. Doing the same on the Debian 10 master server did not have the same drastic effect, though it's possible there WAS some change that wasn't obvious; the disk IO numbers in iostat fluctuate much more wildly on the Debian 10 servers than they do on the Debian 9 servers.

So, what is going on here? How can I figure out why MariaDB is apparently writing so much data to disk, and/or how can I stop it?
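Since iotop only shows process-level totals, one way to break the writes down further is to ask the server itself. A sketch, assuming performance_schema is enabled (note it is OFF by default in MariaDB, so it may need to be turned on in the config and the server restarted):

-- Quick check of InnoDB write counters (always available):
SHOW GLOBAL STATUS LIKE 'Innodb_%written';

-- Top files by bytes written since startup (needs performance_schema = ON):
SELECT file_name,
       sum_number_of_bytes_write AS bytes_written
FROM performance_schema.file_summary_by_instance
ORDER BY sum_number_of_bytes_write DESC
LIMIT 10;

Comparing the same counters on a Debian 9 and a Debian 10 server over the same interval should show whether the extra writes are going to the binlog, the InnoDB redo log, or the data files.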

Thanks in advance!

MySQL querying millions of rows regularly

[Schema diagram]

I am developing an eLearning application using MySQL, with the schema shown in the diagram above. My issue is handling millions of rows in tables like attempted_questions and question_choices.
Say I have 1,000 users and 100,000 questions. The number of rows in question_choices will then be 400,000 (4 choices x 100,000 questions).
If each user attempts every question twice, the number of rows in attempted_questions = 2 attempts x 1,000 users x 100,000 questions, which is 200,000,000 rows. In the attempted_questions table I need to track all attempts by every user.

Some of the most regularly used queries are:

SELECT COUNT(DISTINCT question_id) FROM attempted_questions WHERE user_id = 1;

SELECT COUNT(DISTINCT question_id) FROM attempted_questions WHERE user_id = 1 AND subject_id = 1;
SELECT s.id, s.name, qc.qn_count, aq.attended_cnt
FROM subjects s
LEFT JOIN (SELECT COUNT(q.id) AS qn_count, q.subject_id
           FROM questions q
           GROUP BY q.subject_id) qc ON qc.subject_id = s.id
LEFT JOIN (SELECT COUNT(DISTINCT question_id) AS attended_cnt, subject_id
           FROM attempted_questions
           WHERE user_id = 1
           GROUP BY subject_id) aq ON aq.subject_id = s.id;

How can I optimize the database for this much data?
What issues with my DB design may arise as the application grows?
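For the two COUNT(DISTINCT question_id) queries above, a composite index that starts with the filter columns and ends with question_id lets InnoDB answer them from the index alone, without reading the wide table rows. A sketch (the index name is illustrative):

-- Covering index for the per-user and per-user-per-subject counts:
-- the WHERE columns come first, then the column being counted, so both
-- queries become index-only scans.
ALTER TABLE attempted_questions
    ADD INDEX idx_user_subject_question (user_id, subject_id, question_id);

For the dashboard-style join, a common complementary approach is to maintain a small per-user, per-subject summary table updated on each attempt, so the hot queries never scan hundreds of millions of rows.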

MySQL source command resumes after the process restarts

I am doing a large import (~300GB, MyISAM) into a MySQL server version 5.6.48 within a FreeBSD jail. Before starting, I did:

SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 1024;
SET GLOBAL net_buffer_length = 1000000;
SET GLOBAL max_allowed_packet = 1000000000;
SET foreign_key_checks = 0;

And then:

source file.sql 

The dump started to load, but after a while the server ran out of memory (RAM and swap full) and the mysqld process exited.

What I noticed, and what surprised me, is that when the MySQL server came back up, the source command continued to load the dump, effectively "resuming" the process.

I haven't found much in the docs about the source command, only this usage message from the MySQL client:

mysql> source
ERROR:
Usage: \. <filename> | source <filename>

I'm therefore wondering what is keeping the state or track of the inserted data, and how I could check or monitor the progress of source. I'd also like to understand whether this "resume" behavior is expected and whether it can be configured.
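There is no built-in progress indicator for source, but one rough way to watch the import from a second client session is to poll table sizes as rows arrive. A sketch (the schema name mydb is a placeholder):

-- Approximate progress check from another session.
-- table_rows is an estimate for InnoDB but exact for MyISAM tables,
-- which is what this import uses.
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY table_rows DESC;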

MySQL InnoDB Weird Query Performance

I designed two tables, both using InnoDB. The first has columns Offset, Size, and ColumnToBeIndexed, with BIGINT Offset as the primary key; the second has columns OffsetAndSize and ColumnToBeIndexed, with BIGINT OffsetAndSize as the primary key. Each table holds 15 million rows, and I added a secondary index on ColumnToBeIndexed to both tables.

The two queries are:

SELECT Offset, Size FROM someTable WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY Offset ASC;

SELECT OffsetAndSize FROM someTable WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY OffsetAndSize ASC;

Because the second query can be served from the secondary index alone and does not need to look up the clustered index to find the Size information, I expected the second query on the second table to perform better than the first query on the first table. However, in my tests the first query performs better every single time. Does anybody out there know what the problem might be?
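A first step in diagnosing this is to confirm what the optimizer is actually doing in each case. A sketch (table1 and table2 are placeholder names for the two tables described above):

-- Compare the plans: the key column shows which index is chosen, and the
-- Extra column is the interesting part ("Using index" means an index-only
-- scan; "Using filesort" means an extra sort pass that could explain the
-- slower query).
EXPLAIN SELECT Offset, Size
FROM table1
WHERE ColumnToBeIndexed BETWEEN 20 AND 10000
ORDER BY Offset ASC;

EXPLAIN SELECT OffsetAndSize
FROM table2
WHERE ColumnToBeIndexed BETWEEN 20 AND 10000
ORDER BY OffsetAndSize ASC;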

Two-person rule on MySQL databases for “manual fixes”

In order to “harden” our compliance, we want to enforce a two-person rule on the MySQL production database for “manual fixes”. Such “manual fixes” frequently arise due to:

  • Bugs in the application (we are a fast company :D)
  • Various customer requests that do not have an application feature implemented yet, such as GDPR update requests, special discounts, etc.

We want a process that does not require the two people to be physically side by side. One person is on-call, rather junior, and is responsible for translating customer service requests into SQL. They might need a GUI (such as MySQL Workbench) to navigate the complex data model and figure out the exact SQL script to produce. The SQL script should include SELECTs showing the data before and after the change, run in an uncommitted transaction (e.g., AUTOCOMMIT OFF and no COMMIT at the end).
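A minimal sketch of what such a script could look like (the table, columns, and values are illustrative only):

-- Proposed manual fix: correct a customer's discount rate.
SET autocommit = 0;
START TRANSACTION;

-- Before: show the rows the change will touch.
SELECT id, customer_id, discount_rate FROM subscriptions WHERE customer_id = 42;

-- The actual fix.
UPDATE subscriptions SET discount_rate = 0.15 WHERE customer_id = 42;

-- After: show the same rows with the change applied (still uncommitted).
SELECT id, customer_id, discount_rate FROM subscriptions WHERE customer_id = 42;

-- Deliberately no COMMIT: this dry run ends in a rollback, and the script is
-- only re-run with a COMMIT once the reviewer has approved the output.
ROLLBACK;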

The second person is not on-call, rather senior, and fairly familiar with the application's data model. They should be able to look at the SQL script and the output of the uncommitted transaction, and approve or reject it via a mobile app during the evening.

We cannot be the first to have this or a similar requirement.

Does anyone know good documentation or tooling to implement such a process?

Here are some similar questions on the topic, but not quite as specific as the present one:

  • What can a company do against insiders going rogue and negatively affecting essential infrastructure?
  • How can two-man control be implemented efficiently?

Restore MySQL database from ibdata1 and .frm files

I am trying to restore some database data from my tables' .frm files. I am running a MariaDB database. The data structure works fine and I can see the tables etc., but as soon as I add ibdata1 and the log files I run into trouble and get the errors below. I've tried to follow the recommendations in other similar posts, but nothing seems to work. Any ideas? BR Lukas

My my.cnf config file:

[client-server]

#
# This group is read by the server
#
[mysqld]
# Disabling symbolic-links is recommended to prevent assorted security risks
innodb_log_file_size = 170M
innodb_force_recovery = 1
symbolic-links = 0

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

Journalctl -xe error:

jun 09 01:29:04 ipx.eu-central-1.compute.internal sudo[15322]: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
jun 09 01:29:04 ipx.eu-central-1.compute.internal systemd[1]: Starting MariaDB 10.2 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has begun starting up.
jun 09 01:29:04 ipx.eu-central-1.compute.internal mysql-prepare-db-dir[15360]: Database MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
jun 09 01:29:04 ipx.eu-central-1.compute.internal mysql-prepare-db-dir[15360]: If this is not the case, make sure the /var/lib/mysql is empty before running mysql-prepare-db-dir.
jun 09 01:29:05 ipx.eu-central-1.compute.internal mysqld[15399]: 2020-06-09  1:29:05 140284526120768 [Note] /usr/libexec/mysqld (mysqld 10.2.10-MariaDB) starting as process 15399 ...
jun 09 01:29:05 ipx.eu-central-1.compute.internal mysqld[15399]: 2020-06-09  1:29:05 140284526120768 [Warning] Changed limits: max_open_files: 1024  max_connections: 151  table_cache: 431
jun 09 01:29:05 ipx.eu-central-1.compute.internal systemd[1]: Started MariaDB 10.2 database server.
-- Subject: Unit mariadb.service has finished start-up
-- Defined-By: systemd
--
-- Unit mariadb.service has finished starting up.
--
-- The start-up result is done.
jun 09 01:29:05 ipx.eu-central-1.compute.internal sudo[15322]: pam_unix(sudo:session): session closed for user root

mariadb.log file:

2020-06-09  1:17:36 140094574546752 [ERROR] InnoDB: Page [page id: space=0, page number=308] log sequence number 41316604 is in the future! Current system log sequence number 1620080.
2020-06-09  1:17:36 140094574546752 [ERROR] InnoDB: Your database may be corrupt or you may have copied the InnoDB tablespace but not the InnoDB log files. Please refer to http://dev.mysql.com/doc/refman$
2020-06-09  1:17:36 140094574546752 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2020-06-09  1:17:36 140094574546752 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2020-06-09 01:17:36 0x7f6a4f59cf40  InnoDB: Assertion failure in file /builddir/build/BUILD/mariadb-10.2.10/storage/innobase/fil/fil0fil.cc line 752
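Note that "Operating system error number 13" is a permissions problem (permission denied), separate from the LSN mismatch: the copied files are likely not readable by the user mysqld runs as. Once the server does start under innodb_force_recovery, one way to check whether the data is readable before dumping it out is (mydb.mytable is a placeholder):

-- With the server up under innodb_force_recovery, verify each table is
-- readable before attempting a dump; errors here point at corrupt tables.
CHECK TABLE mydb.mytable;
SELECT COUNT(*) FROM mydb.mytable;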

What settings to use to export MySQL table rows that I'll be importing into another table with data using phpMyAdmin

If I'm using phpMyAdmin to manage a MySQL database, what settings should I use to export only two rows of data that I'll then import into another table with the same name that already contains many other rows?

I used the quick export option, but I'm getting an error that the table already exists, and nothing gets imported.

If I use the custom option with its default settings, I get the same error message.

So, what are the best settings to use for an export that I'm going to import into another table?
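The "table already exists" error suggests the export file includes the CREATE TABLE statement. What you want is a data-only dump, so the file contains nothing but INSERTs; in phpMyAdmin's custom export that means choosing to dump data only, not structure (the exact wording of the option varies by version). The resulting dump of two rows would then look something like this (table and columns are illustrative):

-- Data-only export: no DROP TABLE / CREATE TABLE statements, so importing
-- into an existing table simply appends the rows.
INSERT INTO mytable (id, name) VALUES
(101, 'first exported row'),
(102, 'second exported row');

If the exported rows' primary keys could collide with rows already in the target table, the import will still fail; in that case the INSERTs need to be edited (e.g., to INSERT IGNORE, or with new key values) before importing.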