SQL> select event#,name,parameter1,parameter2,parameter3 from v$event_name where name = 'db file parallel read';

    EVENT# NAME                           PARAMETER1      PARAMETER2      PARAMETER3
---------- ------------------------------ --------------- --------------- ---------------
       120 db file parallel read          files           blocks          requests
1. db file parallel read
Contrary to what the name suggests, the db file parallel read event is not related to any parallel
operation, neither parallel DML nor parallel query. This event occurs during database recovery,
when blocks that need to be changed as part of recovery are read in parallel from the datafiles.
It also occurs when a process reads multiple noncontiguous single blocks from one or more
datafiles.
Wait Parameters
Wait parameters for db file parallel read are described here:
- P1: Number of files to read from
- P2: Total number of blocks to read
- P3: Total number of I/O requests (the same as P2, since multiblock read is not used)
Wait Time
No timeouts. The session waits until all of the I/Os are completed.
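The current P1/P2/P3 values for sessions waiting on this event can be read from v$session_wait, for example with a query like this minimal sketch (the view and its event/p1/p2/p3 columns are standard; only the aliases are interpretation):

-- Sessions currently waiting on db file parallel read, with the wait
-- parameters decoded per the definitions above
SELECT sid,
       p1 AS files,     -- number of files to read from
       p2 AS blocks,    -- total number of blocks to read
       p3 AS requests   -- total number of I/O requests (equals P2)
  FROM v$session_wait
 WHERE event = 'db file parallel read';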
Case 1
http://www.itpub.net/thread-1586802-1-1.html
Although the execution plan was correct, the index's very high clustering factor caused the prefetch feature to read a large amount of data ahead of time.
A statement whose execution plan looked fine took only 0.8 seconds when extracted and run on its own, but inside a stored procedure it was very slow, taking around 10 seconds or more.
Execution Plan
------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name                | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                     |      |       | 1839  (100)|          |
|  1 |  SORT AGGREGATE                |                     |    1 |    37 |            |          |
|  2 |   FILTER                       |                     |      |       |            |          |
|  3 |    TABLE ACCESS BY INDEX ROWID | V_RPT_PLYEDR_INSRNC |    1 |    37 | 1839    (1)| 00:00:23 |
|  4 |     INDEX RANGE SCAN           | ACIF_INDEX_001      | 2147 |       |   18    (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
Yet when the statement was run standalone with the values substituted in, and with an identical execution plan, it took only about 1 second. This should not be a bind-variable plan problem, since the plan shown is clearly optimal, so I looked at the wait events:
       SID EVENT                                    TOTAL_WAITS
---------- ---------------------------------------- -----------
       491 db file parallel read                           2041
       491 db file sequential read                          526
       491 db file scattered read                            23
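These counts look like the output of a per-session query such as the following sketch (v$session_event is a standard view; session 491 is the one from the case):

-- Cumulative wait counts for the suspect session
SELECT sid, event, total_waits
  FROM v$session_event
 WHERE sid = 491
   AND event LIKE 'db file%'
 ORDER BY total_waits DESC;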
SQL> select distinct(file_id) from dba_extents where segment_name='V_RPT_PLYEDR_INSRNC';
   FILE_ID
----------
        25
        22

SQL> select name from v$datafile where file# in ('22','25');

NAME
--------------------------------------------------------------------------------
/repdata/ora9i/CIRCI.dbf
/repdata/ora9i/circi01.dbf
Top 5 Timed Events

Event                   |   Waits | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class
------------------------|---------|---------|--------------|-------------------|-----------
db file parallel read   | 351,880 |   2,891 |            8 |              68.3 | User I/O
db file sequential read | 463,984 |   1,216 |            3 |              28.7 | User I/O
CPU time                |         |     184 |              |               4.4 |
log file parallel write |   1,346 |       3 |            2 |                .1 | System I/O
db file parallel write  |     512 |       3 |            6 |                .1 | System I/O
File IO Stats
- ordered by Tablespace, File

Tablespace | Filename                    |     Reads | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
-----------|-----------------------------|-----------|------------|-----------|------------|--------|-------------|--------------|--------------
CIRCI      | /repdata/ora9i/CIRCI.dbf    | 2,847,514 |        787 |     12.59 |       1.02 |  1,258 |           0 |            3 |         20.00
CIRCI      | /repdata/ora9i/circi01.dbf  |   915,158 |        253 |      8.63 |       1.00 |     13 |           0 |            0 |          0.00
REPORT     | /repdata/ora9i/REPORT01.dbf |   257,679 |         71 |      0.75 |      15.15 |      0 |           0 |      186,811 |          0.45
REPORT     | /repdata/ora9i/REPORT02.dbf |   255,701 |         71 |      0.71 |      15.21 |      0 |           0 |      187,164 |          0.43
REPORT     | /repdata/ora9i/REPORT03.dbf |   135,105 |         37 |      0.72 |      15.35 |      0 |           0 |      125,856 |          0.39
Av Rd(ms) is too high. Having ruled out a problem with the storage array itself, could this be related to an oversized clustering factor causing large jumps when fetching table rows by ROWID?
The check confirms the clustering factor is enormous: the table holds over 30 million rows and the clustering factor reaches about 26 million, yet the index has only 846 distinct values; the table is only ever loaded by batch INSERTs.
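A quick way to run that check is a query like the following sketch (the columns are standard in dba_indexes and dba_tables; the index name is the one from the plan above). A clustering factor close to the table's row count means adjacent index entries point at widely scattered table blocks, which is exactly what triggers the prefetch reads:

-- Compare the clustering factor against table and index statistics
SELECT i.index_name, i.clustering_factor, i.distinct_keys,
       t.num_rows, t.blocks
  FROM dba_indexes i
  JOIN dba_tables  t ON t.table_name = i.table_name
 WHERE i.index_name = 'ACIF_INDEX_001';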
So the following approaches could be considered:
1. Create a BITMAP index to replace the existing B-tree index.
2. Or create a large composite index so the query can be satisfied directly by an INDEX FAST FULL SCAN.
3. Also create a new tablespace to hold the indexes, and preferably MOVE the table into the new tablespace as well.
I created the composite index and the effect is excellent: the query no longer does TABLE ACCESS BY INDEX ROWID at all; scanning the index alone is enough.
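For reference, the fix amounts to DDL along these lines (a hypothetical sketch: the case only reveals the resulting index name TEST01, so the column list and tablespace name here are assumptions):

-- Hypothetical reconstruction of the fix: a composite index covering every
-- column the query references, so it can be answered from the index alone.
-- Column names and the tablespace name are assumed, not from the case.
CREATE INDEX test01
    ON v_rpt_plyedr_insrnc (plcy_no, stat_date, prem_amt)
    TABLESPACE rpt_idx_new;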
The new execution plan is as follows:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2161530321

-----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     1 |    37 |   156   (1)| 00:00:02 |
|   1 |  SORT AGGREGATE   |        |     1 |    37 |            |          |
|*  2 |   INDEX RANGE SCAN| TEST01 |     2 |    74 |   156   (1)| 00:00:02 |
-----------------------------------------------------------------------------
The mechanism:
high clustering factor + index range scan -----> very many ROWID lookups back to the table across scattered blocks -----> buffer prefetching (11g) -----> db file parallel read...
When the database shows excessive db file parallel read waits, go tune the SQL.
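A convenient way to find the SQL responsible is ASH, for example (a sketch; v$active_session_history requires the Diagnostics Pack license):

-- Which SQL statements account for the db file parallel read samples?
SELECT sql_id, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE event = 'db file parallel read'
 GROUP BY sql_id
 ORDER BY samples DESC;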
Case 2:
http://yangtingkun.net/?p=695 Eliminating db file parallel read on 11.2
A customer running a stress test on 11.2.0.3 observed a large number of db file parallel read wait events.
Seeing this wait outside of recovery is new in 11g: before 11g, the event generally occurred only during datafile recovery, whereas 11g added the prefetch feature, which can also trigger it.
While the stress test was running, the session wait events looked like this:
SQL> SELECT event, COUNT(*) FROM v$session WHERE username = USER GROUP BY event ORDER BY 2;

EVENT                                                              COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message from client                                               1
SQL*Net message to client                                                 1
db file sequential read                                                  24
db file scattered read                                                   33
db file parallel read                                                    42
The user processes were clearly suffering fairly severe I/O waits, and in this case db file parallel read brought no performance benefit.
The prefetch feature can be switched off with hidden parameters, which prevents the db file parallel read wait event from arising:
_db_block_prefetch_limit=0
_db_block_prefetch_quota=0
_db_file_noncontig_mblock_read_count=0
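For example (a sketch: underscore parameters must be double-quoted, and hidden parameters should only be changed after testing or under Oracle Support guidance; if a parameter is not dynamically modifiable in your version, set it with SCOPE=SPFILE and restart):

-- Disable block prefetching (hidden parameters; change with caution)
ALTER SYSTEM SET "_db_block_prefetch_limit" = 0;
ALTER SYSTEM SET "_db_block_prefetch_quota" = 0;
ALTER SYSTEM SET "_db_file_noncontig_mblock_read_count" = 0;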
After these three hidden parameters were changed, the db file parallel read wait event disappeared:
SQL> SELECT event, COUNT(*) FROM v$session WHERE username = USER GROUP BY event ORDER BY 2;

EVENT                                                              COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message to client                                                 1
db file scattered read                                                   30
db file sequential read                                                  70