Tuning MySQL Based on Its Architecture
The desired outcome of parameter tuning:
1) SQL statements execute fast enough
2) Business throughput is high enough: TPS, QPS
3) System load stays controlled and reasonable: CPU and I/O load
When adjusting parameters, you should be familiar with the MySQL architecture, so you can tune parameters module by module along that architecture.
Main areas: memory, threads, disk files, and the various caches
Memory
1. Improving performance by configuring the buffer pool
buffer_pool configuration
innodb_buffer_pool_size
- Default: 128M; usually set to 50%-80% of physical memory
- Meaning: size of the InnoDB buffer pool
- Memory layout: the InnoDB buffer pool plus per-connection (user thread) memory (when sizing user thread memory, consider whether the workload runs complex SQL; this is discussed below)
- How to tune: look at the hit ratio (on its own it is not conclusive)
mysql> show global status like 'Innodb_buffer_pool_read%s';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| Innodb_buffer_pool_read_requests | 52231 |  total logical read requests
| Innodb_buffer_pool_reads         | 477   |  physical reads: requests the buffer pool could not satisfy and that had to go to disk
+----------------------------------+-------+
2 rows in set (0.00 sec)
Hit ratio = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests); the larger read_requests and the smaller reads, the better.
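The hit-ratio formula above can be sketched as a small helper. This is an illustrative snippet with hard-coded sample values from the output above; the function name is my own, and in practice you would fetch the counters from SHOW GLOBAL STATUS with a MySQL client.

```python
def buffer_pool_hit_ratio(read_requests: int, physical_reads: int) -> float:
    """Hit ratio = 1 - physical reads / logical read requests."""
    if read_requests == 0:
        return 1.0  # no traffic yet; treat as fully cached
    return 1.0 - physical_reads / read_requests

# Using the sample numbers from the status output above:
ratio = buffer_pool_hit_ratio(52231, 477)
print(f"hit ratio: {ratio:.2%}")  # ~99.09%, just above the 99% rule of thumb
```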
mysql> show global status like 'Innodb_buffer_pool_wait%';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| Innodb_buffer_pool_wait_free | 0     |
+------------------------------+-------+
Normally, writes to the InnoDB buffer pool happen in the background. When InnoDB needs to read or create a page and no clean pages are available, it first flushes some dirty pages and waits for that operation to finish. If innodb_buffer_pool_size has been set properly, this value should stay small.
Q: When should this value be increased?
Check the hit ratio; if it is below 99%, consider increasing innodb_buffer_pool_size.
Q: When should this value be decreased?
Observe it via show engine innodb status\G
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 1117782016
Dictionary memory allocated 234666
Internal hash tables (constant factor + variable factor)
    Adaptive hash index 17740608        (17706944 + 33664)
    Page hash           139112 (buffer pool 0 only)
    Dictionary cache    4661402         (4426736 + 234666)
    File system         854280          (812272 + 42008)
    Lock system         2659448         (2657176 + 2272)
    Recovery system     0       (0 + 0)
Buffer pool size   65528
Buffer pool size, bytes 1073610752
Free buffers       63676
If Free buffers stays large over a long period, consider reducing the buffer pool somewhat.
Q: Why can't the hit ratio alone be trusted?
If the system has slow SQL, the hit ratio can drop as well; in that case the problem is not just parameter configuration.
Monitoring buffer pool resize progress
SHOW STATUS WHERE Variable_name='InnoDB_buffer_pool_resize_status';
The progress can also be observed in the error log.
innodb_buffer_pool_instances
Range: 1-64
Splitting the pool into multiple instances shortens the LRU list, free list, and flush list, reducing contention on the cache. When a lot of data is loaded into memory, multiple buffer pool instances reduce cache contention. This option only takes effect when innodb_buffer_pool_size is set to 1GB or more. When innodb_buffer_pool_size exceeds 1GB, innodb_buffer_pool_instances defaults to 8. With a larger buffer pool, give each instance at least 1GB on average.
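The sizing rule above (no effect below 1GB, default of 8 above 1GB, at least ~1GB per instance, at most 64) can be sketched as a small helper. This is a hypothetical illustration; `suggest_instances` is my own name, not a MySQL option.

```python
GB = 1024 ** 3

def suggest_instances(buffer_pool_size: int, default: int = 8) -> int:
    """Suggest innodb_buffer_pool_instances for a pool size given in bytes."""
    if buffer_pool_size < GB:
        return 1  # multiple instances have no effect below 1 GB
    # cap so each instance keeps at least ~1 GB, and stay within 1..64
    max_by_size = buffer_pool_size // GB
    return max(1, min(default, max_by_size, 64))

print(suggest_instances(512 * 1024 ** 2))  # 1: pool below 1 GB
print(suggest_instances(4 * GB))           # 4: one instance per GB
print(suggest_instances(16 * GB))          # 8: the default cap applies
```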
innodb_buffer_pool_chunk_size
- Default: 128M
- Details: https://www.cnblogs.com/asea123/p/10089910.html
- Meaning: the chunk size used when the buffer pool is resized dynamically. To avoid copying every buffer pool page during a resize operation, the operation works in "chunks". By default innodb_buffer_pool_chunk_size is 128MB (134217728 bytes). The number of pages per chunk depends on innodb_page_size. innodb_buffer_pool_chunk_size can be increased or decreased in units of 1MB (1048576 bytes).
Note: the number of chunks (innodb_buffer_pool_size / innodb_buffer_pool_chunk_size) should not exceed 1000. Change innodb_buffer_pool_chunk_size with care, because changing it can automatically increase the buffer pool size. Before changing innodb_buffer_pool_chunk_size, calculate its effect on innodb_buffer_pool_size to make sure the resulting buffer pool size is acceptable.
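The chunk arithmetic above can be made concrete with a small sketch. This is an illustrative calculation assuming the documented rule that the effective pool size is rounded up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances; the function names are my own.

```python
import math

MB = 1024 ** 2

def effective_pool_size(requested: int, chunk_size: int = 128 * MB,
                        instances: int = 1) -> int:
    """Round the requested pool size up to a multiple of chunk_size * instances."""
    unit = chunk_size * instances
    return math.ceil(requested / unit) * unit

def chunk_count(pool_size: int, chunk_size: int = 128 * MB) -> int:
    """Number of chunks; per the note above, this should not exceed 1000."""
    return pool_size // chunk_size

# Requesting 1000 MB with the 128 MB default rounds up to 1024 MB (8 chunks):
size = effective_pool_size(1000 * MB)
print(size // MB, "MB,", chunk_count(size), "chunks")  # 1024 MB, 8 chunks
```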
Buffer Pool Scan Resistance
innodb_old_blocks_pct
Default: 37 (%)
innodb_old_blocks_time
Default: 1000 (ms)
How to tune:
Related monitoring:
Via show engine innodb status\G
----------------------
BUFFER POOL AND MEMORY
----------------------
.
.
.
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 476, created 38, written 347
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 514, unzip_LRU len: 0    the LRU list includes the unzip_LRU entries
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
----------------------
Two indicators matter most:
youngs/s: the number of times pages were made young (not the number of pages). If it is too high:
- innodb_old_blocks_pct is too large, so pages are not easily evicted
- innodb_old_blocks_time is too small
non-youngs/s: the number of times pages were kept old (not the number of pages). If it is too high:
- there may be heavy full table scans
- innodb_old_blocks_pct may be set too small
- innodb_old_blocks_time may be set too large
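The diagnostic rules above can be sketched as a small classifier. This is a hypothetical illustration: the function name and the threshold are my own choices, not MySQL defaults, and real tuning should compare rates against your own baseline.

```python
def diagnose_lru(youngs_per_s: float, non_youngs_per_s: float,
                 threshold: float = 100.0) -> list[str]:
    """Map high youngs/s and non-youngs/s rates to the likely causes above."""
    hints = []
    if youngs_per_s > threshold:
        hints.append("youngs/s high: innodb_old_blocks_pct may be too large "
                     "or innodb_old_blocks_time too small")
    if non_youngs_per_s > threshold:
        hints.append("non-youngs/s high: possible full table scans, or "
                     "innodb_old_blocks_pct too small / innodb_old_blocks_time too large")
    return hints

print(diagnose_lru(0.0, 250.0))  # flags only the non-youngs/s condition
```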
User thread tuning
IO tuning
Tuning principle:
The principle of IO tuning: cache as much as possible to reduce random read IO hitting the database, and at the same time reduce the occurrence of random write IO by using the various buffers to absorb writes.
innodb_flush_method
Suggested value: O_DIRECT
Writes bypass the filesystem cache and go directly to the data files.
Parameter details: https://www.cnblogs.com/asea123/p/10089763.html
This avoids the OS-level VFS cache consuming too much memory and conflicting with InnoDB's own buffer, and it also takes some pressure off the operating system.
innodb_io_capacity
This parameter controls InnoDB's IO capacity at checkpoint time. As a rule of thumb, count about 200 per 15000-RPM SAS disk: a RAID10 array of 6 SAS disks can be set to about 600. An ordinary SATA disk only counts as about 100.
Reference: IO tuning on SSDs
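The rule of thumb above can be written down as a tiny calculator. This is an illustrative sketch under the assumptions in the text (~200 IOPS per 15k SAS disk, ~100 per SATA disk, half the spindles effective under RAID10 mirroring); the names are my own.

```python
IOPS_PER_DISK = {"sas15k": 200, "sata": 100}

def suggest_io_capacity(disk_type: str, disks: int, raid10: bool = False) -> int:
    """Rule-of-thumb innodb_io_capacity for a spindle-based array."""
    effective = disks // 2 if raid10 else disks  # RAID10 halves write spindles
    return IOPS_PER_DISK[disk_type] * max(effective, 1)

print(suggest_io_capacity("sas15k", 6, raid10=True))  # 600, as in the text
print(suggest_io_capacity("sata", 1))                 # 100
```

SSDs need a different baseline entirely, which is why the text points to a separate SSD reference.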
innodb_max_dirty_pages_pct
Meaning: the maximum percentage of dirty pages. When the share of dirty pages in the system exceeds this value, InnoDB starts writing to flush updated page data to the disk files.
innodb_lru_scan_depth
This parameter is related to the checkpoint mechanism. It influences the flush algorithm for the buffer pool: it specifies how deep the page_cleaner scans the LRU list of each buffer pool instance. The page_cleaner scans the LRU list to find dirty pages to flush.
Q: How can physical writes be reduced?
Decrease innodb_lru_scan_depth
Decrease innodb_io_capacity
Increase innodb_max_dirty_pages_pct
Stop tuning once the ratio Innodb_buffer_pool_pages_dirty / Innodb_buffer_pool_pages_data approaches 90%.
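The 90% guard above can be sketched as a small check. This is an illustrative helper with a made-up name; the counter values would come from the Innodb_buffer_pool_pages_* status variables shown in this section.

```python
def can_reduce_flushing(pages_dirty: int, pages_data: int,
                        limit: float = 0.90) -> bool:
    """True while dirty pages are safely below the 90%-of-data-pages guard."""
    if pages_data == 0:
        return True  # empty pool: nothing dirty yet
    return pages_dirty / pages_data < limit

print(can_reduce_flushing(0, 1852))     # True: room to reduce flush pressure
print(can_reduce_flushing(1700, 1852))  # False: ~92% dirty, back off
```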
Related status variables
mysql> show global status like 'Innodb_buffer_pool_page%';
+-----------------------------------------+-------+
| Variable_name                           | Value |
+-----------------------------------------+-------+
| Innodb_buffer_pool_pages_data           | 1852  |  pages containing data, both dirty and clean
| Innodb_buffer_pool_pages_dirty          | 0     |  current number of dirty pages
| Innodb_buffer_pool_pages_flushed        | 4362  |
| Innodb_buffer_pool_pages_free           | 63676 |
| Innodb_buffer_pool_pages_LRU_flushed    | 0     |
| Innodb_buffer_pool_pages_made_not_young | 0     |
| Innodb_buffer_pool_pages_made_young     | 0     |
| Innodb_buffer_pool_pages_misc           | 0     |
| Innodb_buffer_pool_pages_old            | 0     |
| Innodb_buffer_pool_pages_total          | 65528 |
+-----------------------------------------+-------+
10 rows in set (0.00 sec)
What to watch while tuning:
Innodb_buffer_pool_pages_flushed: is it going down?
Innodb_buffer_pool_pages_dirty: how many dirty pages have accumulated?
Stop tuning once Innodb_buffer_pool_pages_dirty / Innodb_buffer_pool_pages_data approaches 90%.
Also watch Innodb_buffer_pool_wait_free per second; any waits are a bad sign.
innodb_change_buffer_max_size
Default: 25
innodb_change_buffering
Default: all
innodb_max_changed_pages
Default: 10000
Read-ahead tuning (also affects IO)
innodb_read_ahead_threshold
Default: 56
Reference: http://www.ywnds.com/?p=9895
innodb_random_read_ahead
Random read-ahead is off by default. It is a fairly volatile parameter; if you want to use it, be sure to test it thoroughly first. It can be worth considering for passport-style applications.
What to watch:
Via show engine innodb status
---BUFFER POOL 7
.
.
.
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
purge / read / write threads
Related parameters
mysql> show variables like 'innodb%threads';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| innodb_purge_threads    | 4     |
| innodb_read_io_threads  | 8     |
| innodb_write_io_threads | 8     |
+-------------------------+-------+
3 rows in set (0.00 sec)
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (read thread)
I/O thread 7 state: waiting for completed aio requests (read thread)
I/O thread 8 state: waiting for completed aio requests (read thread)
I/O thread 9 state: waiting for completed aio requests (read thread)
I/O thread 10 state: waiting for completed aio requests (write thread)
I/O thread 11 state: waiting for completed aio requests (write thread)
I/O thread 12 state: waiting for completed aio requests (write thread)
I/O thread 13 state: waiting for completed aio requests (write thread)
I/O thread 14 state: waiting for completed aio requests (write thread)
I/O thread 15 state: waiting for completed aio requests (write thread)
I/O thread 16 state: waiting for completed aio requests (write thread)
I/O thread 17 state: waiting for completed aio requests (write thread)
Pending normal aio reads: [0, 0, 0, 0, 0, 0, 0, 0] , aio writes: [0, 0, 0, 0, 0, 0, 0, 0] ,
 ibuf aio reads:, log i/o's:, sync i/o's:
Pending flushes (fsync) log: 0; buffer pool: 0
521 OS file reads, 4847 OS file writes, 608 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
- If you find that the eight read threads or write threads are constantly doing IO (i.e. not in the "waiting for" state), the thread count is insufficient and you should increase it.
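The busy-thread check above can be automated with a small parser over the FILE I/O section. This is a hypothetical sketch: the function name is mine, and the regex assumes the exact state-line format shown in the sample output above.

```python
import re

def busy_io_threads(status_text: str) -> int:
    """Count read/write I/O threads NOT waiting for completed aio requests."""
    busy = 0
    for line in status_text.splitlines():
        m = re.match(r"I/O thread \d+ state: (.+) \((read|write) thread\)$", line)
        if m and "waiting for completed aio requests" not in m.group(1):
            busy += 1
    return busy

sample = """I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: doing file i/o (read thread)
I/O thread 10 state: complete io for buf page (write thread)"""
print(busy_io_threads(sample))  # 2: one busy read thread, one busy write thread
```

If this count stays close to the configured thread totals across repeated samples, raising innodb_read_io_threads / innodb_write_io_threads is worth trying.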