41.RESET SLAVE
For a server where GTIDs are in use (gtid_mode is ON), issuing RESET SLAVE has no effect on the GTID execution history. The statement does not change the values of gtid_executed or gtid_purged, or the mysql.gtid_executed table. If you need to reset the GTID execution history, use RESET MASTER, even if the GTID-enabled server is a replica where binary logging is disabled.
To use RESET SLAVE, the replication threads must be stopped, so on a running replica use STOP SLAVE before issuing RESET SLAVE. To use RESET SLAVE on a Group Replication group member, the member status must be OFFLINE, meaning that the plugin is loaded but the member does not currently belong to any group. A group member can be taken offline by using a STOP GROUP_REPLICATION statement.
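For example, a minimal sketch of the sequence on a running replica and on a Group Replication member (adapt to your own topology):
STOP SLAVE;
RESET SLAVE;        -- forgets the replication position; connection parameters are kept
-- RESET SLAVE ALL; -- variant that also removes the connection parameters set by CHANGE MASTER TO

-- On a Group Replication member, take the member OFFLINE first:
STOP GROUP_REPLICATION;
RESET SLAVE;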
42.SHOW SLAVE STATUS
Master_Log_File
The name of the source binary log file from which the I/O thread is currently reading.
Read_Master_Log_Pos
The position in the current source binary log file up to which the I/O thread has read.
Relay_Log_File
The name of the relay log file from which the SQL thread is currently reading and executing.
Relay_Log_Pos
The position in the current relay log file up to which the SQL thread has read and executed.
Relay_Master_Log_File
The name of the source binary log file containing the most recent event executed by the SQL thread.
Exec_Master_Log_Pos
The position in the current source binary log file up to which the SQL thread has read and executed, marking the start of the next transaction or event to be processed. You can use this value with the CHANGE MASTER TO statement's MASTER_LOG_POS option when starting a new replica from an existing replica, so that the new replica reads from this point (see the example after this list of fields). The coordinates given by (Relay_Master_Log_File, Exec_Master_Log_Pos) in the source's binary log correspond to the coordinates given by (Relay_Log_File, Relay_Log_Pos) in the relay log.
Inconsistencies in the sequence of transactions from the relay log which have been executed can cause this value to be a “low-water mark”. In other words, transactions appearing before the position are guaranteed to have committed, but transactions after the position may have committed or not.
Retrieved_Gtid_Set
The set of global transaction IDs corresponding to all transactions received by this replica. Empty if GTIDs are not in use. This is the set of all GTIDs that exist or have existed in the relay logs. Each GTID is added as soon as the Gtid_log_event is received. This can cause partially transmitted transactions to have their GTIDs included in the set. When all relay logs are lost due to executing RESET SLAVE or CHANGE MASTER TO, or due to the effects of the --relay-log-recovery option, the set is cleared. When relay_log_purge = 1, the newest relay log is always kept, and the set is not cleared.
Executed_Gtid_Set
The set of global transaction IDs written in the binary log. This is the same as the value for the global gtid_executed system variable on this server, as well as the value for Executed_Gtid_Set in the output of SHOW MASTER STATUS on this server. Empty if GTIDs are not in use.
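As noted under Exec_Master_Log_Pos, the (Relay_Master_Log_File, Exec_Master_Log_Pos) pair can be used to start a new replica from an existing one. A sketch, with placeholder values standing in for whatever the existing replica actually reports:
-- On the existing replica:
SHOW SLAVE STATUS\G
-- Suppose it reports, for example:
--   Relay_Master_Log_File: binlog.000042   (placeholder)
--   Exec_Master_Log_Pos:   154             (placeholder)

-- On the new replica, point it at the same source from those coordinates
-- (host is a placeholder; add MASTER_USER/MASTER_PASSWORD as needed):
CHANGE MASTER TO
  MASTER_HOST     = 'source_host',
  MASTER_LOG_FILE = 'binlog.000042',
  MASTER_LOG_POS  = 154;
START SLAVE;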
43.FLUSH Statement
The FLUSH statement has several variant forms that clear or reload various internal caches, flush tables, or acquire locks. To execute FLUSH, you must have the RELOAD privilege.
FLUSH HOSTS
Empties the host cache and the Performance Schema host_cache table that exposes the cache contents, and unblocks any blocked hosts.
Flush the host cache if some of your hosts change IP address or if the error message Host 'host_name' is blocked occurs for connections from legitimate hosts. (See Section B.4.2.5, “Host 'host_name' is blocked”.) When more than max_connect_errors errors occur successively for a given host while connecting to the MySQL server, MySQL assumes that something is wrong and blocks the host from further connection requests. Flushing the host cache enables further connection attempts from the host. The default value of max_connect_errors is 100. To avoid this error message, start the server with max_connect_errors set to a large value.
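For example, to unblock hosts after they hit the max_connect_errors limit and to raise the limit going forward (10000 is just an illustrative value):
FLUSH HOSTS;                                  -- empties the host cache and unblocks blocked hosts
SET GLOBAL max_connect_errors = 10000;        -- illustrative; choose a value appropriate for your environment
SELECT * FROM performance_schema.host_cache;  -- inspect the (now empty) cache contents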
FLUSH LOGS
Closes and reopens any log file to which the server is writing. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file. If relay logging is enabled, the sequence number of the relay log file is incremented by one relative to the previous file. FLUSH LOGS has no effect on tables used for the general query log or for the slow query log.
FLUSH SLOW LOGS
Closes and reopens any slow query log file to which the server is writing.
FLUSH TABLES
Closes all open tables, forces all tables in use to be closed, and flushes the query cache and prepared statement cache. FLUSH TABLES also removes all query results from the query cache, like the RESET QUERY CACHE statement. FLUSH TABLES is not permitted when there is an active LOCK TABLES ... READ. To flush and lock tables, use FLUSH TABLES tbl_name ... WITH READ LOCK instead.
FLUSH TABLES tbl_name [, tbl_name] ...
With a list of one or more comma-separated table names, this statement is like FLUSH TABLES with no names except that the server flushes only the named tables. If a named table does not exist, no error occurs.
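A short sketch (table names t1 and t2 are hypothetical):
FLUSH TABLES t1, t2;                    -- flush only the named tables
FLUSH TABLES t1, t2 WITH READ LOCK;     -- flush the named tables and lock them with a read lock
-- ... copy or back up t1 and t2 here while they cannot be modified ...
UNLOCK TABLES;                          -- release the locks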
44.Add PRIMARY KEY
If you create a table without a primary key, InnoDB chooses one for you, which can be the first UNIQUE key defined on NOT NULL columns, or a system-generated key. To avoid uncertainty and the potential space requirement for an extra hidden column, specify the PRIMARY KEY clause as part of the CREATE TABLE statement. The online performance enhancements that apply to operations on secondary indexes do not apply to the primary key index.
When an operation on the primary key uses ALGORITHM=INPLACE, even though the data is still copied, it is more efficient than using ALGORITHM=COPY because:
• No undo logging or associated redo logging is required for ALGORITHM=INPLACE. These operations add overhead to DDL statements that use ALGORITHM=COPY.
• The secondary index entries are pre-sorted, and so can be loaded in order.
• The change buffer is not used, because there are no random-access inserts into the secondary indexes.
Only ALGORITHM=COPY supports dropping a primary key without adding a new one in the same ALTER TABLE statement.
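A sketch of the preferred and the after-the-fact approach (table and column names are hypothetical). Adding the primary key later still copies the data, but ALGORITHM=INPLACE avoids the undo/redo logging and change buffering described above:
-- Preferred: define the primary key in CREATE TABLE
CREATE TABLE t1 (
  id  INT UNSIGNED NOT NULL,
  val VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- If t1 had been created without one, add it afterwards
-- (in-place addition assumes the key column is already NOT NULL):
ALTER TABLE t1 ADD PRIMARY KEY (id), ALGORITHM=INPLACE, LOCK=NONE;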
45.Creating a new clustered index
MySQL creates a new clustered index by copying the existing data from the original table to a temporary table that has the desired index structure. Once the data is completely copied to the temporary table, the original table is renamed with a different temporary table name. The temporary table comprising the new clustered index is renamed with the name of the original table, and the original table is dropped from the database.
46.online DDL support for column operations
Operation | In Place | Rebuilds Table | Permits Concurrent DML | Only Modifies Metadata | Remark |
Adding a column | Y | Y | Y* | N | Concurrent DML is not permitted when adding an auto-increment column. |
Dropping a column | Y | Y | Y | N | |
Renaming a column | Y | N | Y* | Y | To permit concurrent DML, keep the same data type and [NOT] NULL attribute and change only the column name. Not supported for renaming a generated column. |
Reordering columns | Y | Y | Y | N | To reorder columns, use FIRST or AFTER in CHANGE or MODIFY operations. Data is reorganized substantially, making it an expensive operation. |
Setting a column default value | Y | N | Y | Y | Only modifies table metadata. Default column values are stored in the .frm file for the table, not the InnoDB data dictionary. |
Changing the column data type | N | Y | N | N | Changing the column data type is only supported with ALGORITHM=COPY. |
Extending VARCHAR column size | Y | N | Y | Y | The number of length bytes required by a VARCHAR column must remain the same. |
Dropping the column default value | Y | N | Y | Y | |
Changing the auto increment value | Y | N | Y | N* | Modifies a value stored in memory, not the data file. |
Making a column NULL | Y | Y* | Y | N | Rebuilds the table in place. |
Making a column NOT NULL | Y* | Y* | Y | N | Rebuilds the table in place. STRICT_ALL_TABLES or STRICT_TRANS_TABLES SQL_MODE is required for the operation to succeed. The operation fails if the column contains NULL values. |
Modifying the definition of an ENUM or SET column | Y | N | Y | Y | |
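A few illustrative statements matching rows of the table above (table t1 and its columns are hypothetical); specifying ALGORITHM and LOCK makes the server return an error instead of silently choosing a more expensive method:
ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INPLACE, LOCK=NONE;              -- add a column: rebuilt in place, DML allowed
ALTER TABLE t1 ALTER COLUMN c2 SET DEFAULT 0, ALGORITHM=INPLACE, LOCK=NONE;  -- set a default: metadata-only change
ALTER TABLE t1 CHANGE c2 c2_renamed INT, ALGORITHM=INPLACE, LOCK=NONE;       -- rename only (same type and NULL attribute): metadata-only
ALTER TABLE t1 MODIFY c2_renamed BIGINT, ALGORITHM=COPY;                     -- change the data type: requires a table copy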
47.online DDL support for index operations
Operation | In Place | Rebuilds Table | Permits Concurrent DML | Only Modifies Metadata | Remark |
Creating or adding a secondary index | Y | N | Y | N | The table remains available for read and write operations while the index is being created. |
Dropping an index | Y | N | Y | Y | The table remains available for read and write operations while the index is being dropped. |
Renaming an index | Y | N | Y | Y | |
Adding a FULLTEXT index | Y* | N* | N | N | Adding the first FULLTEXT index rebuilds the table if there is no user-defined FTS_DOC_ID column. Additional FULLTEXT indexes may be added without rebuilding the table. |
Adding a SPATIAL index | Y | N | N | N | |
Changing the index type | Y | N | Y | Y | |
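For example (index names are hypothetical, continuing with the same hypothetical t1):
ALTER TABLE t1 ADD INDEX idx_val (val), ALGORITHM=INPLACE, LOCK=NONE;            -- create a secondary index; reads and writes continue
ALTER TABLE t1 RENAME INDEX idx_val TO idx_value, ALGORITHM=INPLACE, LOCK=NONE;  -- rename an index: metadata-only
ALTER TABLE t1 DROP INDEX idx_value, ALGORITHM=INPLACE, LOCK=NONE;               -- drop an index: metadata-only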
48.Online DDL Performance
Online DDL improves several aspects of MySQL operation:
• Applications that access the table are more responsive because queries and DML operations on the table can proceed while the DDL operation is in progress. Reduced locking and waiting for MySQL server resources leads to greater scalability, even for operations that do not involve the table being altered.
• In-place operations avoid the disk I/O and CPU cycles associated with the table-copy method, which minimizes overall load on the database. Minimizing load helps maintain good performance and high throughput during the DDL operation.
• In-place operations read less data into the buffer pool than the table-copy operations, which reduces purging of frequently accessed data from memory. Purging of frequently accessed data can cause a temporary performance dip after a DDL operation.
49.The LOCK clause during a DDL operation
LOCK clauses are described below, in order of least to most restrictive:
• LOCK=NONE:
Permits concurrent queries and DML. For example, use this clause for tables involving customer signups or purchases, to avoid making the tables unavailable during lengthy DDL operations.
• LOCK=SHARED:
Permits concurrent queries but blocks DML. For example, use this clause on data warehouse tables, where you can delay data load operations until the DDL operation is finished, but queries cannot be delayed for long periods.
• LOCK=DEFAULT:
Permits as much concurrency as possible (concurrent queries, DML, or both). Omitting the LOCK clause is the same as specifying LOCK=DEFAULT. Use this clause when you know that the default locking level of the DDL statement will not cause availability problems for the table.
• LOCK=EXCLUSIVE:
Blocks concurrent queries and DML. Use this clause if the primary concern is finishing the DDL operation in the shortest amount of time possible, and concurrent query and DML access is not necessary. You might also use this clause if the server is supposed to be idle, to avoid unexpected table accesses.
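For example, to insist that an index build keeps the table fully available under LOCK=NONE, and to fail immediately if that is not possible (table and index names are hypothetical):
ALTER TABLE orders ADD INDEX idx_created (created_at), ALGORITHM=INPLACE, LOCK=NONE;
-- If LOCK=NONE cannot be honored for this particular operation, the statement returns an
-- error instead of silently blocking concurrent queries or DML.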
50.Online modification
Note 1. Extending the length of a VARCHAR column online.
Explanation: In-place ALTER TABLE does not support increasing the size of a VARCHAR column from less than 256 bytes to a size equal to or greater than 256 bytes. In this case, the number of required length bytes changes from 1 to 2, which is only supported by a table copy (ALGORITHM=COPY).
Note 2. Shrinking a VARCHAR column online.
Explanation: Decreasing VARCHAR size using in-place ALTER TABLE is not supported. Decreasing VARCHAR size requires a table copy (ALGORITHM=COPY). See the sketch after Note 3.
Note 3. Maximum row length, that is, the maximum column length allowed is limited by the maximum row length.
The maximum row length is slightly less than half a database page for 4KB, 8KB, 16KB, and 32KB innodb_page_size settings. For example, the maximum row length is slightly less than 8KB for the default 16KB InnoDB page size.
LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
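Illustrating Notes 1 and 2 (table and column names are hypothetical; whether an extension can stay in place depends on the column's maximum byte length, not the declared character count):
-- Maximum byte length stays below 256, so the length-byte count stays at 1: in-place is allowed
ALTER TABLE t1 MODIFY c3 VARCHAR(50), ALGORITHM=INPLACE, LOCK=NONE;

-- Maximum byte length reaches 256 or more, so length bytes go from 1 to 2: a table copy is required
ALTER TABLE t1 MODIFY c3 VARCHAR(500), ALGORITHM=COPY;

-- Shrinking the column also requires a table copy
ALTER TABLE t1 MODIFY c3 VARCHAR(100), ALGORITHM=COPY;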
----Personal study notes; they may be a bit rough, thanks for your understanding.