http://www.eygle.com/archives/2004/10/statspack14-logfilesync.html
When a user commits or rolls back, the session's redo information must be written out to the redo log file.
The user process posts LGWR to perform the write, and LGWR notifies the user process once the write has completed.
The log file sync wait event is the time the user process spends waiting for that write-complete notification from LGWR.
For a rollback, the event records the time from the moment the user issues the rollback command until the rollback completes.
Excessive waits on this event may indicate that LGWR writes are inefficient, or that the system commits too frequently.
To investigate this problem, look at:
the log file parallel write wait event
the user commits and user rollbacks statistics, which show how often the system commits or rolls back (a query sketch for both follows)
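On a live system the same figures can be read directly from v$system_event and v$sysstat. This is a minimal sketch using the standard event and statistic names; note that on this 8.1.7 instance the wait times are reported in centiseconds, as in the report below:

-- session-side commit waits vs. LGWR's own write time
SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');

-- commit/rollback frequency since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'user rollbacks');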
Solutions:
1. Improve LGWR performance:
use fast disks, and do not place the redo log files on RAID 5 storage
2. Commit in batches rather than row by row (a sketch follows this list)
3. Use the NOLOGGING/UNRECOVERABLE options where appropriate (also sketched below)
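As an illustration of point 2, here is a minimal PL/SQL sketch contrasting row-by-row commits with a single batched commit; the tables t and t_source are hypothetical stand-ins:

-- Anti-pattern: every row pays one log file sync wait
BEGIN
  FOR r IN (SELECT id, val FROM t_source) LOOP
    INSERT INTO t VALUES (r.id, r.val);
    COMMIT;  -- LGWR is posted and must complete a write for each row
  END LOOP;
END;
/

-- Better: many rows share a single redo write
BEGIN
  INSERT INTO t SELECT id, val FROM t_source;
  COMMIT;   -- one log file sync for the whole batch
END;
/

For point 3, a direct-path load into a NOLOGGING table generates minimal redo; the loaded data cannot be recovered from the redo stream, so take a backup afterwards:

ALTER TABLE t NOLOGGING;
INSERT /*+ APPEND */ INTO t SELECT id, val FROM t_source;
COMMIT;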
The average redo write size can be computed with the following formula:
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
If the system generates a great deal of redo yet each write is small, LGWR is probably being activated too frequently.
This can cause heavy contention on the redo-related latches, and Oracle may be unable to make effective use of piggyback (group) commits.
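On a running instance the same average can be computed directly from v$sysstat; this sketch assumes the 512-byte redo block size used in the report (some platforms use a larger redo block):

SELECT b.value / w.value * 512 AS avg_redo_write_bytes
  FROM v$sysstat b, v$sysstat w
 WHERE b.name = 'redo blocks written'
   AND w.name = 'redo writes';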
Let us extract some data from a Statspack report to examine this problem.
1. Key information
DB Name      DB Id       Instance     Inst Num Release     OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
DB           1222010599  oracle              1 8.1.7.4.5   NO  sun
              Snap Id    Snap Time           Sessions
              -------    ------------------  --------
Begin Snap:      3473    13-Oct-04 13:43:00       540
  End Snap:      3475    13-Oct-04 14:07:28       540
   Elapsed:              24.47 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 102400    log_buffer:       20971520
db_block_size:      8192    shared_pool_size: 600M
Load Profile
~~~~~~~~~~~~                  Per Second   Per Transaction
                              ----------   ---------------
Redo size:                     28,458.11           2,852.0
2. Wait events
                                                                   Avg
Event                               Waits   Timeouts   Time (cs)  (ms)   /txn
---------------------------- ------------ ---------- ----------- ------ ------
log file sync                      14,466          2       4,150      3    1.0
db file sequential read            17,202          0       2,869      2    1.2
latch free                         24,841     13,489       2,072      1    1.7
direct path write                     121          0       1,455    120    0.0
db file parallel write              1,314          0       1,383     11    0.1
log file sequential read            1,540          0          63      0    0.1
....
log file switch completion              1          0           3     30    0.0
refresh controlfile command            23          0           1      0    0.0
LGWR wait for redo copy                46          0           0      0    0.0
....
log file single write                   4          0           0      0    0.0
We can see that log file sync and db file parallel write, both write-related events, appear together in the Top 5 events, which may indicate contention on write I/O.
Disk I/O was clearly a bottleneck here: in fact the user's redo logs and data files were stored on the same RAID disks, which caused the performance problem and needed to be adjusted. A quick way to check the layout follows.
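To see where the redo members and data files actually reside, this sketch queries v$logfile and v$datafile (the paths returned are of course site-specific):

-- redo log member locations
SELECT group#, member FROM v$logfile ORDER BY group#;

-- data file locations, to check for co-location with the redo logs
SELECT name FROM v$datafile;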
3. Statistics
Statistic                                      Total   per Second    per Trans
--------------------------------- ---------------- ------------ ------------
....
redo blocks written                           93,853         63.9          6.4
redo buffer allocation retries                     1          0.0          0.0
redo entries                                 135,837         92.5          9.3
redo log space requests                            1          0.0          0.0
redo log space wait time                           3          0.0          0.0
redo ordering marks                                0          0.0          0.0
redo size                                 41,776,508     28,458.1      2,852.0
redo synch time                                4,174          2.8          0.3
redo synch writes                             14,198          9.7          1.0
redo wastage                               4,769,200      3,248.8        325.6
redo write time                                3,698          2.5          0.3
redo writer latching time                          0          0.0          0.0
redo writes                                   14,572          9.9          1.0
....
sorts (disk)                                       4          0.0          0.0
sorts (memory)                               179,856        122.5         12.3
sorts (rows)                               2,750,980      1,874.0        187.8
....
transaction rollbacks                             36          0.0          0.0
transaction tables consistent rea                  0          0.0          0.0
transaction tables consistent rea                  0          0.0          0.0
user calls                                 1,390,718        947.4         94.9
user commits                                  14,136          9.6          1.0
user rollbacks                                   512          0.4          0.0
write clones created in backgroun                  0          0.0          0.0
write clones created in foregroun                 11          0.0          0.0
-------------------------------------------------------------
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
                     = (93,853 / 14,572) * 512
                     ≈ 3.2 KB
This average is far too small, which shows that the system commits too frequently: redo writes (9.9 per second) track user commits (9.6 per second) almost one for one.
Latch Sleep breakdown for DB: DPSHDB Instance: dpshdb Snaps: 3473-3475
-> ordered by misses desc
                                      Get                            Spin &
Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
row cache objects              12,257,850     113,299          64 113235/64/0/0/0
shared pool                     3,690,715      60,279      15,857 52484/588/6546/661/0
library cache                   4,912,465      29,454       8,876 23823/2682/2733/216/0
cache buffers chains           10,314,526       2,856          33 2823/33/0/0/0
redo writing                       76,550         937           1 936/1/0/0/0
session idle bit                2,871,949         225           1 224/1/0/0/0
messages                          107,950         159           2 157/2/0/0/0
session allocation                184,386          44           6 38/6/0/0/0
checkpoint queue latch             96,583           1           1 0/1/0/0/0
-------------------------------------------------------------
Because of the excessively frequent commits, LGWR is activated far too often, and as a result we see contention on the redo writing latch.
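On a live system the same latch statistics can be read from v$latch; the names below are the standard redo-related latches:

SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE name IN ('redo writing', 'redo copy', 'redo allocation')
 ORDER BY sleeps DESC;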
A detailed discussion of redo writing latch contention can be found on Steve Adams' site:
http://www.ixora.com.au/notes/lgwr_latching.htm