In 10g, serial full table scans for "large" tables always went through the buffer cache by default. A small table was placed at the most recently used (MRU) end of the buffer cache, while a large table was placed at the least recently used (LRU) end.
In 11g, full table scans do not always go through the buffer cache. The decision to read via direct path or through the cache is based on the size of the table, the size of the buffer cache, and various other statistics. A table is considered small or large based on the value of the _small_table_threshold hidden parameter, which is specified in blocks and defaults to 2% of the buffer cache size. This means any table smaller than 2% of the buffer cache is read through the buffer cache rather than via direct path read, while tables larger than 2% of the buffer cache are read via direct path read and bypass the buffer cache. With AMM (Automatic Memory Management) or ASMM (Automatic Shared Memory Management), the buffer cache can shrink when memory is needed for the shared pool. In that case, after an instance restart, _small_table_threshold becomes even lower because of the decreased buffer cache.
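The threshold and a candidate table's size can be compared directly. A minimal sketch, run as SYS (hidden parameters are exposed through the x$ksppi and x$ksppcv fixed tables; the owner and table name in the second query are placeholders, not from this note):

-- Current value of _small_table_threshold (expressed in blocks)
SELECT p.ksppinm  AS parameter,
       v.ksppstvl AS value
FROM   x$ksppi p, x$ksppcv v
WHERE  p.indx = v.indx
AND    p.ksppinm = '_small_table_threshold';

-- Size in blocks of a table you expect to be full-scanned, for comparison
SELECT blocks
FROM   dba_tables
WHERE  owner = 'SCOTT'            -- placeholder schema
AND    table_name = 'BIG_TABLE';  -- placeholder table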
High 'direct path read' waits in 11g [ID 793845.1]
Modified: 31-MAY-2012   Type: PROBLEM   Status: PUBLISHED
In this Document
Symptoms
Changes
Cause
Solution
References
Applies to:
Oracle Server - Enterprise Edition - Version 11.1.0.6 to 11.2.0.2 [Release 11.1 to 11.2]
Information in this document applies to any platform.
Symptoms
After migrating an 11g database from a standalone instance to a 4-node RAC, a noticeable increase in 'direct path read' waits was observed at times.
Here are the Cache sizes and Top 5 events.
Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
Buffer Cache:                    3,232M     3,616M  Std Block Size:        8K
Shared Pool Size:                6,736M     6,400M  Log Buffer:        8,824K

Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                          Avg
                                                         wait   % DB
Event                                Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                            13,916         42.1
direct path read                 1,637,344        13,359      8  40.4 User I/O
db file sequential read             47,132         1,111     24   3.4 User I/O
DFS lock handle                    301,278         1,028      3   3.1 Other
db file parallel read               14,724           554     38   1.7 User I/O
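Outside of an AWR snapshot, a quick way to confirm that 'direct path read' is the dominant I/O wait is to look at the cumulative instance-wide counters. A minimal sketch against v$system_event:

SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6) AS time_waited_sec
FROM   v$system_event
WHERE  event IN ('direct path read',
                 'db file scattered read',
                 'db file sequential read')
ORDER  BY time_waited_micro DESC;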
Changes
Migrated from a standalone database to a 4-node RAC.
Moved from Unix file system storage to ASM.
Using Automatic Shared Memory Management (ASMM).
The setting of db_cache_size in spfile/pfile is low compared to normal workload requirements.
Cause
There have been changes in 11g in the heuristics used to choose between direct path reads and reads through the buffer cache for serial table scans.
In 10g, serial table scans for "large" tables used to go through cache (by default) which is not the case anymore. In 11g, this decision to read via direct path or through cache is based on the size of the table, buffer cache size and various other stats.
Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.
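Whether a particular scan went through the buffer cache or used direct path can be verified from session-level statistics: run the scan, then compare the counters for your own session before and after. A minimal sketch, using statistic names as they appear in v$statname in 11g:

SELECT sn.name, ms.value
FROM   v$mystat   ms,
       v$statname sn
WHERE  sn.statistic# = ms.statistic#
AND    sn.name IN ('physical reads direct',
                   'physical reads cache',
                   'table scans (long tables)');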
Solution
When using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set well below normal workload requirements, and usually just after startup, 11g might choose serial direct path read scans for large tables that do not fit in the SGA. Once ASMM grows the buffer cache in response to demand, 11g may stop doing serial direct path read scans for those same tables. To avoid this, note the buffer cache and shared pool requirements of a normal workload and set the low limits of the buffer cache and shared pool in the spfile/pfile close to those values:
db_cache_size
shared_pool_size
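Under ASMM (sga_target set), explicitly setting these parameters establishes minimum sizes below which the automatic tuner will not shrink the components. A sketch with illustrative values only; the actual sizes should come from the buffer cache and shared pool observed during a normal workload:

ALTER SYSTEM SET db_cache_size    = 3600M SCOPE=BOTH;  -- illustrative minimum, near normal-workload buffer cache
ALTER SYSTEM SET shared_pool_size = 6400M SCOPE=BOTH;  -- illustrative minimum, near normal-workload shared pool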