Oracle Database - Enterprise Edition - Version 11.2 [Release 11.2]
Oracle Exadata Storage Server Software
Information in this document applies to any platform.
This note lists the recommended patches for the Direct NFS client for Oracle RDBMS Enterprise Edition 11.2.
Direct NFS is an NFS client embedded in the Oracle database kernel and provides simplified administration and improved performance for NFS.
The following is a list of known bug fixes for the Direct NFS client in Oracle RDBMS Enterprise Edition 11.2. These fixes are recommended for customers using the Direct NFS client in the configurations described in each entry below. Some of these bugs may also apply to older releases.
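Before applying any of the fixes below, it can help to confirm that Direct NFS is actually in use by the instance. A hedged diagnostic sketch (the alert log path is a placeholder for your diagnostic destination; the v$dnfs_servers view is available in 11g and later):

```shell
# Check the alert log for the Direct NFS banner written at instance startup,
# e.g. "Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0"
grep -i "Direct NFS" $ORACLE_BASE/diag/rdbms/*/*/trace/alert_*.log

# List the NFS servers the instance is accessing through Direct NFS
sqlplus -s / as sysdba <<'EOF'
SELECT svrname, dirname, mntport, nfsport FROM v$dnfs_servers;
EOF
```

If the query returns no rows while NFS storage is in use, the database is going through the operating system NFS client instead, and the fixes in this note do not apply.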
|12391034||ORA-7445 [KGNFSGETMNTHDL()+3418] when using Direct NFS|
This fix addresses an issue where 64-byte NFS file handles were not managed correctly by the Direct NFS client. The problem has been observed while running NFS backups to a Data Domain NFS server but could potentially occur with any NFS server.
|13043012||This fix addresses an issue that impacts Direct NFS multipath failover. Without it, Direct NFS may attempt a failover even when there is only a single path to the NFS server. Although no noticeable performance regressions have been observed, applying this fix is advisable.||Generic|
|13599864||Improve Exadata RMAN backup performance to a ZFSSA.|
Direct NFS is used to back up Exadata databases to a ZFS Storage Appliance. Without this fix, Direct NFS uses 1 MB send and receive buffers to issue 1 MB I/Os, which degrades backup throughput. The fix increases the buffer sizes to 4 MB to improve backup performance from an Exadata to a ZFSSA.
|Exadata backup to ZFSSA|
|12755502||Intermittent slow I/O if process is idle for over 5 minutes.|
Each Oracle process creates its own connection to the NFS server. If a process is idle for more than 5 minutes, i.e., it issues no I/O during that time, the NFS server cleans up the connection, and the Oracle process must reconnect when the next I/O is issued. This fix addresses an issue that caused the reconnect to take longer than necessary, increasing I/O latency.
|12821418||Intermittent performance degradation with Direct NFS client.|
Direct NFS retries I/Os with a very short timeout, and under heavy I/O load the number of retries can rise significantly, leading to wasteful retransmissions and performance degradation. This fix increases the retry interval to eliminate the degradation. The issue can affect all configurations and may be accompanied by a PING timeout error in certain occurrences.
|14128555||Intermittent performance degradation with PING timeout messages and database hang symptoms.|
Direct NFS: channel id 0 path **** to filer **** PING timeout.
This issue is caused by a spin in Direct NFS when network and I/O load are very high and I/O latencies in excess of 5 seconds are seen. It occurs only in multipath environments.
If the NFS storage is NetApp, the following signature is observed in the messages file on the server.
Event 'nfsd.tcp.close.idle.notify' suppressed * times. Shutting down idle connection to client (***.***.***.***) where receive side flow control has been enabled. There are *** bytes in the receive buffer. This socket is being closed from the deferred queue.
|14054411||This is a Linux kernel bug that affects Direct NFS configurations that do not use oranfstab. It affects the 2.6.32 and 2.6.39 UEK kernels and is fixed in 2.6.32-300 and 2.6.39-200. Direct NFS users hitting this issue will see a database hang: Direct NFS spins in the kgnfs_connect() -> skgnfs_bind() loop because the bind() call fails.|
Workaround: A simple workaround for this issue is to configure oranfstab. Specifying the 'local' field in oranfstab allows the bind() call to succeed.
|Linux environment running 2.6.32 or 2.6.39 UEK kernel|
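The workaround above amounts to adding a 'local' entry to oranfstab. A minimal sketch of such a file, where the server name, IP addresses, export, and mount point are placeholders for your environment:

```
server: nfs_filer1
local: 192.168.1.10
path: 192.168.1.20
export: /vol/oradata mount: /u02/oradata
```

Here 'local' is the IP address of the network interface on the database host, and 'path' is the address of the NFS server; with 'local' present, Direct NFS binds to that interface explicitly instead of relying on the failing bind() path.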
|16038929||There is a small performance degradation for multiblock reads when using Direct NFS, because Direct NFS fragments the reads into multiple single-block reads. The reads are not fragmented when the OS NFS client is used.||Generic|
|15987992||Hang observed in Direct NFS when running read-intensive workloads. The hang call stack will contain one of the following functions.|