https://www.percona.com/blog/2013/02/01/implications-of-metadata-locking-changes-in-mysql-5-5/
Implications of Metadata Locking Changes in MySQL 5.5
While most of the recent talk has been about the new changes in MySQL 5.6 (and understandably so), I have lately had some very interesting cases to deal with relating to the Metadata Locking changes that were introduced in MySQL 5.5.3. The implications of Metadata Locking do not appear to have been covered well, and since there are still a large number of MySQL 5.0 and 5.1 installations that will upgrade or are in the process of upgrading to MySQL 5.5, I thought it necessary to discuss exactly what these implications are.
To read about what Metadata Locking exactly is, please see the Metadata Locking section in the MySQL manual.
Let’s start off by having a look at the Metadata Locking behavior prior to MySQL 5.5.3.
Metadata Locking behavior prior to MySQL 5.5.3
Prior to MySQL 5.5.3, a statement that opened a table held metadata locks only until the end of the statement, not until the end of the transaction. This meant that a transaction was not really isolated, because the same query could return different results if it was executed twice and a DDL statement ran between the two invocations. Let me give you an example via a simple test case where I add a new column to the table while a transaction in REPEATABLE-READ isolation mode is active.
session1 > select @@tx_isolation;
+-----------------+
| @@tx_isolation |
+-----------------+
| REPEATABLE-READ |
+-----------------+
1 row in set (0.00 sec)
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from test where id=1;
+----+------+
| id | x |
+----+------+
| 1 | foo |
+----+------+
1 row in set (0.00 sec)
session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (0.57 sec)
Records: 2 Duplicates: 0 Warnings: 0
session1 > select * from test where id=1;
Empty set (0.00 sec)
session1 > rollback;
Query OK, 0 rows affected (0.00 sec)
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from test where id=1;
+----+------+------------+
| id | x | c |
+----+------+------------+
| 1 | foo | dummy_text |
+----+------+------------+
1 row in set (0.00 sec)
And you can see how isolation is broken: the SELECT was not repeatable even though the REPEATABLE-READ transaction isolation level was used. This behavior in versions prior to 5.5.3 also means that statements could be written to the binary log in a different order, breaking locking semantics and running contrary to serialization concepts. For example, take a look at the following excerpt from the binary log for a case where an UPDATE transaction is interleaved with an ALTER:
# at 536
#130201 11:21:03 server id 1 end_log_pos 658 Query thread_id=8 exec_time=0 error_code=0
SET TIMESTAMP=1359714063/*!*/;
ALTER TABLE test add column id_2 int(11) default 0 after id
/*!*/;
# at 658
#130201 11:21:39 server id 1 end_log_pos 726 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1359714099/*!*/;
BEGIN
/*!*/;
# at 726
# at 773
#130201 11:21:35 server id 1 end_log_pos 773 Table_map: `test`.`test` mapped to number 17
#130201 11:21:35 server id 1 end_log_pos 829 Update_rows: table id 17 flags: STMT_END_F
BINLOG '
L5cLURMBAAAALwAAAAUDAAAAABEAAAAAAAEABHRlc3QABHRlc3QAAwMD/gL+CQY=
L5cLURgBAAAAOAAAAD0DAAAAABEAAAAAAAEAA///+AIAAAAAAAAAA2JhcvgCAAAAAAAAAANob3A=
'/*!*/;
### UPDATE test.test
### WHERE
### @1=2 /* INT meta=0 nullable=0 is_null=0 */
### @2=0 /* INT meta=0 nullable=1 is_null=0 */
### @3='bar' /* STRING(9) meta=65033 nullable=1 is_null=0 */
### SET
### @1=2 /* INT meta=0 nullable=0 is_null=0 */
### @2=0 /* INT meta=0 nullable=1 is_null=0 */
### @3='hop' /* STRING(9) meta=65033 nullable=1 is_null=0 */
# at 829
#130201 11:21:39 server id 1 end_log_pos 856 Xid = 85
COMMIT/*!*/;
Note how the ALTER is logged before the UPDATE, because the ALTER did not block waiting for the transaction to commit.
For the reasons described above, the implementation of Metadata Locking was changed starting with MySQL 5.5.3. Let’s see how this works now.
Metadata Locking behavior starting with MySQL 5.5.3
Starting with 5.5.3, DDL statements that modify table metadata are executed in an isolated fashion consistent with transactional behavior. This means that any open transaction will hold metadata locks on the tables it has accessed for as long as the transaction remains open. Since an open transaction retains metadata locks on all the tables it has opened, a DDL operation cannot commence until every transaction that accessed the table has finished. Let’s see this in effect via a simple test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from test order by id;
+----+------+
| id | x |
+----+------+
| 1 | foo |
| 2 | bar |
+----+------+
2 rows in set (0.00 sec)
session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
| 1 | msandbox | localhost | test | Sleep | 140 | | NULL |
| 2 | msandbox | localhost | test | Query | 3 | Waiting for table metadata lock | ALTER TABLE test add column c char(32) default 'dummy_text' |
| 3 | msandbox | localhost | test | Query | 0 | NULL | show processlist |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
3 rows in set (0.00 sec)
You can see how the ALTER blocks because the transaction in session1 is still open; once that transaction is closed, the ALTER proceeds successfully:
session1 > rollback;
Query OK, 0 rows affected (0.00 sec)
session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (46.77 sec)
Records: 2 Duplicates: 0 Warnings: 0
Let’s see where the ALTER spent most of its time:
session2 > show profiles;
+----------+-------------+-------------------------------------------------------------+
| Query_ID | Duration | Query |
+----------+-------------+-------------------------------------------------------------+
| 1 | 46.78110075 | ALTER TABLE test add column c char(32) default 'dummy_text' |
+----------+-------------+-------------------------------------------------------------+
1 row in set (0.00 sec)
session2 > show profile for query 1;
+------------------------------+-----------+
| Status | Duration |
+------------------------------+-----------+
| starting | 0.000060 |
| checking permissions | 0.000003 |
| checking permissions | 0.000003 |
| init | 0.000005 |
| Opening tables | 0.000045 |
| System lock | 0.000006 |
| setup | 0.000016 |
| creating table | 0.168283 |
| After create | 0.000061 |
| copy to tmp table | 0.165808 |
| rename result table | 46.446738 |
| end | 0.000035 |
| Waiting for query cache lock | 0.000003 |
| end | 0.000006 |
| query end | 0.000003 |
| closing tables | 0.000008 |
| freeing items | 0.000016 |
| cleaning up | 0.000004 |
+------------------------------+-----------+
18 rows in set (0.00 sec)
So the ALTER waited on the metadata locks just after the table with the new structure had been created and populated with data, but before the old table was swapped with the new one. Note that ALTER is a multi-step process: the old table is locked in shared mode, a new table with the new structure is created, the rows are copied with something like INSERT INTO new_table SELECT * FROM old_table, then the old table is renamed to a temporary name, the new table is renamed to the old table's name, and finally the temporary table is dropped.
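To make that sequence concrete, here is a rough sketch of those steps written out as plain SQL. This is illustrative only, not what the server literally executes internally, and the column types are my assumptions for the example table used above:

-- Illustrative sketch of a pre-5.6 copying ALTER (column types are assumptions)
CREATE TABLE test_new (
  id INT NOT NULL,
  x  CHAR(16),
  c  CHAR(32) DEFAULT 'dummy_text'                      -- the column being added
);
INSERT INTO test_new (id, x) SELECT id, x FROM test;    -- copy phase: slow, but readers are not blocked
RENAME TABLE test TO test_tmp, test_new TO test;        -- swap phase: needs the exclusive metadata lock
DROP TABLE test_tmp;

The copy phase can run for as long as it needs to without blocking readers; it is only the final swap step that has to wait for, and then briefly hold, the exclusive metadata lock.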
Let’s see another example, this time trying a RENAME:
session2 > RENAME TABLE test to test_2;
session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| 1 | msandbox | localhost | test | Sleep | 49 | | NULL |
| 2 | msandbox | localhost | test | Query | 35 | Waiting for table metadata lock | RENAME TABLE test to test_2 |
| 3 | msandbox | localhost | test | Query | 0 | NULL | show processlist |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
3 rows in set (0.00 sec)
And you can see that the RENAME is also blocked, because a transaction that accessed the table “test” is still open.
So we come to an interesting conclusion here: the ALTER waits only during its last stage, when it is making changes to the table metadata. An ALTER on a big table can keep executing without any hindrance, copying rows from the table with the old structure to the table with the new structure, and will only wait at the last step, when it is about to change the table metadata.
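If you run into this on a live 5.5 server, the blocker is an open transaction rather than a row lock, so the processlist may show nothing more than a sleeping connection next to the waiting ALTER. As a minimal sketch of how you might track down the culprit (assuming InnoDB and the PROCESS privilege; this query is mine, not part of the original test cases):

-- List open InnoDB transactions together with their connections,
-- oldest first; the oldest ones are the likely metadata-lock holders.
SELECT trx.trx_mysql_thread_id AS connection_id,
       trx.trx_started, trx.trx_query,
       pl.USER, pl.HOST, pl.COMMAND, pl.TIME, pl.STATE
FROM INFORMATION_SCHEMA.INNODB_TRX trx
JOIN INFORMATION_SCHEMA.PROCESSLIST pl ON pl.ID = trx.trx_mysql_thread_id
ORDER BY trx.trx_started;

Note that in 5.5 there is no direct view of the metadata locks themselves (performance_schema only gained that instrumentation in later versions), so this kind of indirect detective work is usually the best you can do.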
Let’s see another interesting side-effect of metadata locking.
When can ALTER render the table inaccessible?
There is another interesting side-effect: once the ALTER reaches the stage where it needs to wait for metadata locks, it blocks every type of query against the table. We know that writes would be blocked anyway for the entire duration of the ALTER, but reads are blocked as well while the ALTER is waiting for the metadata lock. Let’s see this in action via another test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from test_2 order by id;
+----+------+------------+
| id | x | c |
+----+------+------------+
| 1 | foo | dummy_text |
| 2 | bar | dummy_text |
+----+------+------------+
2 rows in set (0.00 sec)
session6 > ALTER TABLE test_2 DROP COLUMN c;
session7 > select * from test_2 order by id;
session8 > select * from test_2 order by id;
session9 > select * from test_2 order by id;
session10 > select * from test_2 order by id;
session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
| 1 | msandbox | localhost | test | Sleep | 403 | | NULL |
| 3 | msandbox | localhost | test | Query | 0 | NULL | show processlist |
| 6 | msandbox | localhost | test | Query | 229 | Waiting for table metadata lock | ALTER TABLE test_2 DROP COLUMN c |
| 7 | msandbox | localhost | test | Query | 195 | Waiting for table metadata lock | select * from test_2 order by id |
| 8 | msandbox | localhost | test | Query | 180 | Waiting for table metadata lock | select * from test_2 order by id |
| 9 | msandbox | localhost | test | Query | 169 | Waiting for table metadata lock | select * from test_2 order by id |
| 10 | msandbox | localhost | test | Query | 55 | Waiting for table metadata lock | select * from test_2 order by id |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
7 rows in set (0.00 sec)
And you can see that the table is blocked for any kind of operation. Let’s see the profiling information for one of the queries that was blocked to see where the query spent most of its time:
session10 > show profile for query 1;
+----------------------+------------+
| Status | Duration |
+----------------------+------------+
| starting | 0.000058 |
| checking permissions | 0.000006 |
| Opening tables | 213.028481 |
| System lock | 0.000009 |
| init | 0.000014 |
| optimizing | 0.000002 |
| statistics | 0.000005 |
| preparing | 0.000006 |
| executing | 0.000001 |
| Sorting result | 0.000002 |
| Sending data | 0.000040 |
| end | 0.000003 |
| query end | 0.000002 |
| closing tables | 0.000003 |
| freeing items | 0.000007 |
| logging slow query | 0.000002 |
| cleaning up | 0.000002 |
+----------------------+------------+
17 rows in set (0.00 sec)
And you can see how the query spent nearly all of its time waiting in the “Opening tables” state. This behavior of ALTER making the table inaccessible in some cases is not really documented, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67647
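If you find yourself in this situation on a production server, one escape hatch (besides committing or rolling back the blocking transaction) is to kill the waiting ALTER itself; once its pending lock request is gone, the SELECTs queued up behind it can proceed. A minimal sketch, using the connection Id 6 of the ALTER from the processlist output above:

-- Cancel the ALTER that is stuck in "Waiting for table metadata lock";
-- the queued readers can then proceed, and the ALTER can be retried later.
KILL QUERY 6;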
Metadata locking behaves differently for queries that are served from the Query Cache; let’s see what happens in that case.
Metadata Locking and Query Cache
How does metadata locking behave with the Query Cache? That is an important question. If the Query Cache is enabled and the SELECT can be served from it, then the SELECT will not block on the ALTER even though the ALTER is waiting for metadata locks. Why? Because in such a case no table open operation has to be performed. Let’s see this scenario via a test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from test_2 order by id;
+----+------+
| id | x |
+----+------+
| 1 | foo |
| 2 | bar |
+----+------+
2 rows in set (0.00 sec)
session6 > RENAME TABLE test_2 to test;
session10 > select * from test_2 order by id;
+----+------+
| id | x |
+----+------+
| 1 | foo |
| 2 | bar |
+----+------+
2 rows in set (0.00 sec)
session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| 1 | msandbox | localhost | test | Sleep | 22 | | NULL |
| 3 | msandbox | localhost | test | Query | 0 | NULL | show processlist |
| 6 | msandbox | localhost | test | Query | 3 | Waiting for table metadata lock | RENAME TABLE test_2 to test |
| 10 | msandbox | localhost | test | Sleep | 37 | | NULL |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
4 rows in set (0.00 sec)
The query proceeds without being blocked on anything while the RENAME is still waiting for metadata locks. Let’s see the profiling information for this query:
session10 > show profile for query 11;
+--------------------------------+----------+
| Status | Duration |
+--------------------------------+----------+
| starting | 0.000018 |
| Waiting for query cache lock | 0.000003 |
| checking query cache for query | 0.000007 |
| checking privileges on cached | 0.000003 |
| checking permissions | 0.000005 |
| sending cached result to clien | 0.000011 |
| logging slow query | 0.000002 |
| cleaning up | 0.000002 |
+--------------------------------+----------+
8 rows in set (0.00 sec)
You can see that no table open operation was performed and hence no wait.
Does the fact that the table has already been opened and its table object is in the Table Cache change anything with respect to metadata locks?
Metadata Locking and Table Cache
Even if a connection accesses a table that is already in the Table Cache, any query against a table that has a DDL operation waiting will block. Why? Because MySQL knows that the old entries in the Table Cache have to be invalidated: any query that accesses the table will have to reopen the modified table, and new entries will be created in the Table Cache. Let’s see this phenomenon in action:
session6 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (59.80 sec)
session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables | 30 |
| Opened_tables | 0 |
+---------------+-------+
2 rows in set (0.00 sec)
session10 > select * from test order by id;
+----+------+------------+
| id | x | c |
+----+------+------------+
| 1 | foo | dummy_text |
| 2 | bar | dummy_text |
+----+------+------------+
2 rows in set (53.78 sec)
session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables | 30 |
| Opened_tables | 1 |
+---------------+-------+
2 rows in set (0.00 sec)
session10 > show profile for query 18;
+----------------------+-----------+
| Status | Duration |
+----------------------+-----------+
| starting | 0.000059 |
| checking permissions | 0.000010 |
| Opening tables | 53.786685 |
| System lock | 0.000009 |
| init | 0.000012 |
| optimizing | 0.000003 |
| statistics | 0.000007 |
| preparing | 0.000006 |
| executing | 0.000001 |
| Sorting result | 0.000004 |
| Sending data | 0.000033 |
| end | 0.000003 |
| query end | 0.000002 |
| closing tables | 0.000004 |
| freeing items | 0.000009 |
| logging slow query | 0.000002 |
| cleaning up | 0.000002 |
+----------------------+-----------+
17 rows in set (0.00 sec)
As you can see, the SELECT query still blocks, and the status counter Opened_tables is incremented once the query finishes.
So much for the explanation; now let’s take a look at the consequences.
Consequences
The consequence of these changes in metadata locking is that if you have some really hot tables (for example, in web applications it is typical to see a “sessions” table that is accessed on every request), then care should be taken when you ALTER the table; otherwise it can easily cause a stall, as many threads pile up waiting for the table metadata lock, bringing down the MySQL server or depleting all the available connections.
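One defensive measure (my suggestion here, not something shown in the test cases above) is to cap how long the DDL itself is allowed to wait, so that a forgotten open transaction makes the ALTER fail fast instead of queueing every incoming query behind it. A sketch, with a hypothetical “sessions” table and column:

-- Limit how long this session's DDL will wait for metadata locks.
-- If the lock cannot be acquired within 5 seconds the ALTER fails with a
-- lock wait timeout error and can simply be retried, instead of stalling all readers.
SET SESSION lock_wait_timeout = 5;
ALTER TABLE sessions ADD COLUMN last_seen DATETIME NULL;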
There are some other interesting consequences as well for applications that were developed against MySQL versions prior to 5.5:
- I remember a customer case with a reporting slave that runs a long transaction daily; this transaction tends to run for hours. Every day one of the tables was renamed and swapped, and that table was the one read by the long-running transaction. When the slave tried to execute the RENAME it would simply block, waiting for the long-running transaction to finish. This would cause the slave to lag for hours, because the slave SQL thread is single-threaded and cannot apply any other event in the meantime. This was never an issue when the application was using a MySQL version < 5.5 as the datastore.
- There was another interesting case, this time with how ActiveMQ uses MySQL in HA mode. In HA mode there are two ActiveMQ servers, and both try to do something similar to the following sequence of events:
session1 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
Query OK, 0 rows affected (0.09 sec)
session1 > insert into t1 values(null);
Query OK, 1 row affected (0.21 sec)
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)
session1 > select * from t1 where i=1 for update;
+---+
| i |
+---+
| 1 |
+---+
1 row in set (0.00 sec)
session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
When using MySQL 5.1 the second CREATE would fail immediately with the error “ERROR 1050 (42S01): Table ‘t1’ already exists”, but because of how metadata locking works in 5.5 this is no longer the case: the second CREATE simply blocks, with unintended consequences. A workaround here is to set the lock_wait_timeout variable to a very low value and then execute the CREATE TABLE; this makes sure that the CREATE fails immediately (although due to a different reason):
session2 > set session lock_wait_timeout=1;
session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
However, I feel that the CREATE TABLE should fail in such a case, when the table already exists and there is no other DDL (such as a DROP TABLE) waiting to run on the same table, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67873