How to safely change InnoDB variables

I’m using MariaDB version 10.2.22, where one database column uses a FULLTEXT index for broad document searching. However, I’ve run into a "Table handler out of memory" error on some searches, even though the table itself is only 4.4 GB. I’ve read on Stack Overflow that changing some of the InnoDB variables, such as:

• innodb_buffer_pool_size

• innodb_ft_result_cache_limit

from their default values to, say, 4 GB could potentially solve the memory issue. My question has three parts, I suppose.

  1. Are there any other variables I should consider changing, for example:

• innodb_buffer_pool_instances
• innodb_ft_cache_size
• innodb_ft_total_cache_size

  2. Because this DB is fairly critical, after running the statements to change the variables, would I need to stop and start the MariaDB service for the changes to take effect?

  3. If restarting MariaDB is needed, can anyone point me to a guide on how to safely change these variables?
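As a rough illustration only, here is a minimal sketch of how these two variables could be inspected and changed at runtime, assuming both are dynamic on MariaDB 10.2; the values are placeholders, not recommendations:

-- Check the current values first
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_ft%';

-- Both variables are dynamic, so SET GLOBAL takes effect without a restart;
-- the buffer pool is resized online in the background.
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;   -- 4 GB, illustrative
SET GLOBAL innodb_ft_result_cache_limit = 4294967295;          -- ~4 GB, the documented maximum

-- SET GLOBAL changes are lost on restart, so the same values should also be written
-- to the [mysqld] section of the server configuration file. Variables such as
-- innodb_ft_cache_size and innodb_ft_total_cache_size are, as far as I know,
-- not dynamic and do require a restart.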

Is there a performance loss from rows inserted out of sequence? (MySQL InnoDB)

I am trying to migrate from a larger MySQL AWS RDS instance to a smaller one, and data migration is the only option. There are four tables in the 330-450 GB range, and a single-threaded mysqldump piped directly into the target RDS instance is estimated by pv to take about 24 hours (copying at 5 mbps).

I wrote a bash script that launches multiple mysqldump processes in the background (with ‘ & ’), each with a calculated --where parameter, to simulate multithreading. This works and currently takes less than an hour with 28 threads.

However, I am concerned about a potential loss of query performance in the future, since I will not be inserting rows in the order of the auto_increment id columns.

Can someone confirm whether this would be the case, or whether I am being paranoid for no reason?
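If the answer turns out to be that out-of-order inserts do leave the table fragmented, my understanding is that a one-off rebuild afterwards would put the clustered index back into primary-key order. A minimal sketch, with mydb.big_table as a placeholder name:

-- On InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild (ALTER TABLE ... FORCE),
-- so allow for the disk space and time a 400 GB table needs.
OPTIMIZE TABLE big_table;

-- Equivalent explicit rebuild:
ALTER TABLE big_table ENGINE=InnoDB;

-- Rough fragmentation indicator before and after the rebuild:
SELECT DATA_LENGTH, INDEX_LENGTH, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'big_table';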

What solution did you use for a single table that is in the hundreds of GBs? For a particular reason I want to avoid AWS DMS, and I definitely don’t want to use tools that haven’t been maintained in a while.

MySQL InnoDB Weird Query Performance

I designed two tables, both using InnoDB. The first has columns Offset, Size, and ColumnToBeIndexed, with the BIGINT Offset as the primary key; the second has columns OffsetAndSize and ColumnToBeIndexed, with the BIGINT OffsetAndSize as the primary key. Each table holds 15 million rows, and I added a secondary index on ColumnToBeIndexed to both. My two queries are:

SELECT Offset, Size FROM someTable WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY Offset ASC

and

SELECT OffsetAndSize FROM someTable WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY OffsetAndSize ASC

Because the second query can be answered from the secondary index alone and does not need to look up the clustered index to find the Size value, I expected the second query on the second table to perform better than the first query on the first table. However, in my tests the first query performs better every single time. Does anybody know what the problem might be?
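If it helps, a minimal sketch of checks that should show where the time actually goes, with table1 and table2 as placeholder names for the two tables:

-- Compare the chosen index and the Extra column for each query;
-- "Using index" means the query is served from the secondary index alone.
EXPLAIN SELECT Offset, Size FROM table1
WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY Offset ASC;

EXPLAIN SELECT OffsetAndSize FROM table2
WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY OffsetAndSize ASC;

-- Count how many rows a query actually touches:
FLUSH STATUS;
SELECT OffsetAndSize FROM table2
WHERE ColumnToBeIndexed BETWEEN 20 AND 10000 ORDER BY OffsetAndSize ASC;
SHOW SESSION STATUS LIKE 'Handler_read%';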

MySQL InnoDB cluster auto rejoin failed

This is a 3-node cluster with a single primary, and heavy read/write traffic was hitting the master node. I restarted the master node, and node 3 then became the master. After the restart, the old master stayed in the recovery state:

"recoveryStatusText": "Distributed recovery in progress",                  "role": "HA",                  "status": "RECOVERING"    select * from gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | NO               | YES       |                   0 |                 8401 | +------------------+-----------+---------------------+----------------------+ 

This transactions_to_cert count never decreased, even after 15 minutes, so I then tried rebooting node 2.

Node 2 also went into recovery mode.

Finally, I restarted node 3 as well.

Now the cluster is saying there is no eligible primary, and I am not able to recover it.

ERROR LOG:

2020-06-03T15:24:19.735261Z 2 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-03T15:24:19.735271Z 2 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-03T15:24:19.735285Z 2 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1; member_uuid: "41add3fb-9abc-11ea-a59d-42010a00040b"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-03T15:24:19.748017Z 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:19.846752Z 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_applier.000002' position: 4
2020-06-03T15:24:19.846765Z 2 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-03T15:24:19.868161Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-03T15:24:19.868183Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 33061'
2020-06-03T15:24:21.722047Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address dev-mysql-03:3306.'
2020-06-03T15:24:21.722179Z 0 [ERROR] Plugin group_replication reported: 'Group contains 3 members which is greater than auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'
2020-06-03T15:24:21.722427Z 24 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2020-06-03T15:24:21.722550Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306, dev-mysql-02:3306 on view 15910200188085516:19.'
2020-06-03T15:24:21.803914Z 24 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dev-mysql-02', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:21.855802Z 24 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor bd472ec4-9abc-11ea-976d-42010a00040c at dev-mysql-02 port: 3306.'
2020-06-03T15:24:21.856155Z 26 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2020-06-03T15:24:21.862169Z 26 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'mysql_innodb_cluster_1@dev-mysql-02:3306',replication started in log 'FIRST' at position 4
2020-06-03T15:24:21.918855Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_recovery.000001' position: 4
2020-06-03T15:24:42.718769Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200603 15:24:42
2020-06-03T15:24:55.206155Z 41 [Note] Got packets out of order
2020-06-03T15:29:29.682585Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: dev-mysql-02:3306'
2020-06-03T15:29:29.682635Z 0 [Note] Plugin group_replication reported: 'The member with address dev-mysql-02:3306 has unexpectedly disappeared, killing the current group replication recovery connection'
2020-06-03T15:29:29.682729Z 27 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2020-06-03T15:29:29.682759Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306 on view 15910200188085516:20.'
2020-06-03T15:29:29.683116Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000009' at position 846668856
2020-06-03T15:29:29.689073Z 26 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2020-06-03T15:29:29.689089Z 26 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000009', position 846668856
2020-06-03T15:29:29.700329Z 24 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
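For reference, a minimal sketch of the standard MySQL 5.7 group-replication views that can be queried on each node to see membership and recovery progress; nothing here is specific to my setup:

-- Group membership and member states as seen from this node:
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members;

-- Progress of the distributed-recovery channel on the recovering node:
SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G

-- Certification and applier queue statistics:
SELECT * FROM performance_schema.replication_group_member_stats\G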

How to improve performance for a read-only InnoDB MySQL database

I have a particular task, which is to maximize concurrent performance. There is only one type of query, which is:

select * from table where col1 between ? and ? and col2 between ? and ? 

I have created a composite index on (col1, col2). The table is about 20 GB in size and has 100 million rows.

However, even under peak concurrent load, MySQL's CPU utilization is only 30%. I have tried various techniques, such as increasing max_connections and innodb_buffer_pool_instances, but none of them have helped.

How can I tune the configuration so that this read-only query workload is pushed to its limit?
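A minimal sketch of first checks, assuming the goal is to hold the whole 20 GB working set in memory; the values and the table name mytable are placeholders:

-- Is the working set actually in RAM? Compare the pool size with the data size
-- and watch the miss counters under load: Innodb_buffer_pool_reads (disk reads)
-- should stay tiny relative to Innodb_buffer_pool_read_requests.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- If the table does not fit, grow the pool (dynamic in MySQL 5.7+):
SET GLOBAL innodb_buffer_pool_size = 24 * 1024 * 1024 * 1024;   -- illustrative value

-- Confirm the composite index is actually chosen for the range predicates:
EXPLAIN SELECT * FROM mytable
WHERE col1 BETWEEN 1 AND 100 AND col2 BETWEEN 1 AND 100;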

MySQL 5.6 row format changes when changing storage engine from MyISAM to InnoDB

I am testing an upgrade of all existing MySQL 5.6 tables from MyISAM to InnoDB. I first converted all the tables' row formats to DYNAMIC so they would all be on the Barracuda file format, then ran ALTER TABLE ... ENGINE = InnoDB for 16 tables. 12 of the 16 tables changed row formats as well, without any ALTER TABLE command specifying one. I am at a loss to understand this. I think it may be related to the .frm files, but I’m not sure how. I’ve checked the server variables:

innodb_file_format is showing Barracuda

innodb_file_format_check is ON

A couple of the tables: articletranslations is showing as COMPRESSED, while pubmedabstractauthors and pubmedtranslated are showing as COMPACT. Below are the CREATE TABLE statements from tables that I had changed to the DYNAMIC row format before changing the storage engine to InnoDB.

Table: articletranslations

Create Table: CREATE TABLE `articletranslations` (
  `TranslationID` int(11) NOT NULL AUTO_INCREMENT,
  `ArticleID` int(11) NOT NULL,
  `language` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
  `TextContent` longtext COLLATE utf8_unicode_ci,
  `Name` text COLLATE utf8_unicode_ci,
  `Tags` varchar(1000) COLLATE utf8_unicode_ci DEFAULT NULL,
  `Detail_Abstract` longtext COLLATE utf8_unicode_ci,
  `Disclosures` varchar(2000) COLLATE utf8_unicode_ci DEFAULT NULL,
  `Discussion` longtext COLLATE utf8_unicode_ci,
  `Acknowledgements` longtext COLLATE utf8_unicode_ci,
  `D` longtext COLLATE utf8_unicode_ci,
  `Materials` text COLLATE utf8_unicode_ci,
  `HTMLTopContent` text COLLATE utf8_unicode_ci,
  `Rep_Results` longtext COLLATE utf8_unicode_ci,
  `Introduction` text COLLATE utf8_unicode_ci,
  `IsMachine` tinyint(1) NOT NULL DEFAULT '1',
  `DateTranslated` datetime DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`TranslationID`),
  KEY `ArticleTranslations_Language_ArticleID` (`language`,`ArticleID`),
  KEY `ArticleTranslations_ArticleID` (`ArticleID`)
) ENGINE=InnoDB AUTO_INCREMENT=177437 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPRESSED

Table: pubmedabstractauthors

Create Table: CREATE TABLE `pubmedabstractauthors` (
  `AuthorID` int(11) NOT NULL AUTO_INCREMENT,
  `ForeName` varchar(255) NOT NULL,
  `LastName` varchar(255) NOT NULL,
  `Initials` varchar(255) NOT NULL,
  PRIMARY KEY (`AuthorID`),
  KEY `names` (`ForeName`,`LastName`,`Initials`)
) ENGINE=InnoDB AUTO_INCREMENT=712515 DEFAULT CHARSET=latin1

Table: pubmedtranslated

Create Table: CREATE TABLE `pubmedtranslated` (
  `PMID` int(11) NOT NULL,
  `ArticleTitle` text COLLATE utf8_unicode_ci NOT NULL,
  `ArticleAbstract` text COLLATE utf8_unicode_ci NOT NULL,
  `LanguageID` smallint(6) NOT NULL,
  PRIMARY KEY (`PMID`,`LanguageID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
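In case it narrows things down, a minimal sketch of how the recorded row formats can be listed and how a row format can be stated explicitly during the engine change; the schema name mydb is a placeholder:

-- What the data dictionary currently reports for each table:
SELECT TABLE_NAME, ENGINE, ROW_FORMAT, CREATE_OPTIONS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';

-- Convert while naming the row format explicitly, instead of relying on whatever
-- ROW_FORMAT option is already stored with the table:
ALTER TABLE articletranslations ENGINE=InnoDB ROW_FORMAT=DYNAMIC;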

MySQL import – all rows missing except those in final INSERT of InnoDB tables

This is a very strange bug. I’m using this on a Mac:

mysql Ver 15.1 Distrib 10.4.6-MariaDB, for osx10.13 (x86_64)

If I import an SQL dump of a WordPress DB (so not very complicated structure, but some long lines) from the command line, the table structure for all tables appears normal, but lots of content is missing from InnoDB tables.

Specifically, where the INSERTS have been ‘chunked’ into groups of records, only the final group is there. If I use --skip-extended-insert to write a statement for every record, only one record is ever in the InnoDB table.

MyISAM data seems fine.

I’ve tried dropping and recreating the database; could there be some corruption elsewhere?
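One thing I can still check, sketched here on the assumption that the long extended-INSERT lines are relevant (wp_posts is just an example WordPress table name):

-- A multi-row INSERT larger than this value fails during import:
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';

-- Raise it for the duration of the import if needed (value is illustrative):
SET GLOBAL max_allowed_packet = 256 * 1024 * 1024;

-- Spot-check a table's row count after re-importing:
SELECT COUNT(*) FROM wp_posts;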

MySQL 5.7.25 crash on first query to a corrupted InnoDB table

Setup:

Ubuntu 18.04.1 LTS 4.15.0-34-generic #37-Ubuntu
Docker: 18.06.1-ce build e68fc7a
mysql:5@sha256:e47e309f72c831cf880cc0e1990b9c5ac427016acdc71346a36c83806ca79bb4
mysql --version = Ver 14.14 Distrib 5.7.25, for Linux (x86_64) using EditLine wrapper

Log:

07:17:39 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=6
max_threads=151
thread_count=5
connection_count=5
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 68196 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f9814012240
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f98580a4e80 thread_stack 0x40000
mysqld(my_print_stacktrace+0x2c)[0x560da4deb81c]
mysqld(handle_fatal_signal+0x479)[0x560da4716879]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7f987679b0c0]
mysqld(_Z32page_cur_search_with_match_bytesPK11buf_block_tPK12dict_index_tPK8dtuple_t15page_cur_mode_tPmS9_S9_S9_P10page_cur_t+0x167)[0x560da4eb2be7]
mysqld(_Z27btr_cur_search_to_nth_levelP12dict_index_tmPK8dtuple_t15page_cur_mode_tmP9btr_cur_tmPKcmP5mtr_t+0x989)[0x560da4fd81a9]
mysqld(+0xe682d9)[0x560da4f2c2d9]
mysqld(_Z15row_search_mvccPh15page_cur_mode_tP14row_prebuilt_tmm+0x1b9e)[0x560da4f3307e]
mysqld(_ZN11ha_innobase10index_readEPhPKhj16ha_rkey_function+0x244)[0x560da4e2ee24]
mysqld(_ZN7handler17ha_index_read_mapEPhPKhm16ha_rkey_function+0x30f)[0x560da476830f]
mysqld(+0xac68f4)[0x560da4b8a8f4]
mysqld(_Z10sub_selectP4JOINP7QEP_TABb+0x105)[0x560da4b8dbb5]
mysqld(_ZN4JOIN4execEv+0x370)[0x560da4b86a30]
mysqld(_Z12handle_queryP3THDP3LEXP12Query_resultyy+0x233)[0x560da4bf5eb3]
mysqld(+0x61cf32)[0x560da46e0f32]
mysqld(_Z21mysql_execute_commandP3THDb+0x460d)[0x560da4bb972d]
mysqld(_Z11mysql_parseP3THDP12Parser_state+0x395)[0x560da4bbbff5]
mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0xfc4)[0x560da4bbd0b4]
mysqld(_Z10do_commandP3THD+0x197)[0x560da4bbe487]
mysqld(handle_connection+0x278)[0x560da4c7af38]
mysqld(pfs_spawn_thread+0x1b4)[0x560da5153164]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7f9876791494]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f9874fddacf]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f98141c3520): SELECT `id`, `state`, `param`, `cursor_row`, `cursor_column`, `can_send`, `createdAt`, `updatedAt`, `userId`, `botId` FROM `states` AS `state` WHERE `state`.`userId` = 131913 AND `state`.`botId` = 3202 LIMIT 1
Connection ID (thread ID): 863486
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

Probably unrelated, but repeated a lot:

2019-06-12T14:52:32.090130Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 10295ms. The settings might not be optimal. (flushed=9 and evicted=0, during the time.)
2019-06-12T14:53:56.177026Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 8427ms. The settings might not be optimal. (flushed=200 and evicted=0, during the time.)
2019-06-12T15:03:05.055209Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6080ms. The settings might not be optimal. (flushed=3 and evicted=0, during the time.)
2019-06-12T15:03:52.398646Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5042ms. The settings might not be optimal. (flushed=168 and evicted=0, during the time.)

more logs and settings


The default settings from the Docker image have not been changed. The server runs fine, and queries against all unaffected tables work fine, but any query against the affected table, even CHECK TABLE, crashes the server instantly.

I haven’t attempted data recovery; I simply dropped the table and restored it from backups. However, this is the second time in two months that this has happened (to different tables, but both had more writes than the rest).

How can I diagnose the cause of corruption or make it less likely?

What I’ve checked so far:

  • The included log is the first sign of anything going wrong; the previous log entry is from 30 minutes earlier and unrelated. While the crash on the corrupted table is reproducible, I have no way of reproducing the corruption itself, and higher log verbosity is not ideal on a production server under load.
  • dmesg | egrep -i 'killed process' is empty, and the box has 4 GB of RAM with swap enabled, so I don’t think the tables got corrupted by an OOM kill in the middle of a write.
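For the "make it less likely" part, a minimal sketch of low-impact checks that can run against the Docker defaults; mydb.states is a placeholder for any table you want to sweep:

-- Confirm page checksums and the doublewrite buffer have not been disabled
-- (both are enabled by default in 5.7, but worth verifying when corruption recurs):
SHOW GLOBAL VARIABLES LIKE 'innodb_checksum_algorithm';
SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';

-- Sweep tables periodically (e.g. from cron, one table at a time) so that
-- corruption is noticed before a production query trips over it:
CHECK TABLE mydb.states;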

innodb_file_per_table

We are running with a shared tablespace on a MySQL database server in our environment and are looking to enable innodb_file_per_table on the database. We want to know:

  • What would be the ideal plan for enabling file-per-table on this server? (A rough sketch follows the statistics below.)
  • What would be the consequences of doing so?
  • Are there precautionary steps to be performed?
  • If any issue occurs, how can we revert to the shared tablespace?

Database Statistics:

Logical DB size: 860 GB
Physical DB size: 1450 GB
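As a starting point for the first bullet, a minimal sketch of the general approach, assuming a MySQL version where innodb_file_per_table is dynamic; mydb.big_table is a placeholder:

-- Check and enable the setting; it only affects tables created or rebuilt from now on,
-- and it should also be persisted in my.cnf so it survives restarts:
SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';
SET GLOBAL innodb_file_per_table = ON;

-- Rebuild each table to move it out of ibdata1 into its own .ibd file:
ALTER TABLE mydb.big_table ENGINE=InnoDB;

-- Note: ibdata1 itself never shrinks; reclaiming the gap between the 1450 GB
-- physical and 860 GB logical size generally means a dump, a re-initialised
-- datadir, and a restore.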

InnoDB Non-Locking SELECT Statement Using db_query in Drupal 6

From a thread on Stack Overflow, I found a way to run a SELECT query on an InnoDB table without locking.
Ref: https://stackoverflow.com/questions/917640/any-way-to-select-without-causing-locking-in-mysql/918092

The example statement given there was this:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
COMMIT

Now, if I want to implement a similar statement using db_query(), can I do so by issuing three consecutive calls like this?

db_query("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED") db_query("SELECT * FROM TABLE_NAME") db_query("COMMIT") 

Note: db_query does not support multi-line statements in Drupal 6.
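One detail worth noting, as a sketch rather than a tested Drupal answer: SET TRANSACTION without the SESSION keyword only applies to the next transaction, so if each db_query() call ends up in its own transaction, a session-scoped setting may be closer to the intent, provided all the calls share the same database connection:

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM TABLE_NAME;

-- Optionally restore the server default (REPEATABLE READ for InnoDB) afterwards:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;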