Oracle on AIX: CIO and DIO



Where possible, Oracle recommends enabling Concurrent I/O or Direct I/O on file systems containing Oracle data files. The following table lists file systems available on AIX and the recommended setting.
Recommended mount options by file system:

JFS (option: dio). Concurrent I/O is not available on JFS. Direct I/O is available, but performance is degraded compared to JFS2 with Concurrent I/O.

JFS large file (option: none). Oracle does not recommend using JFS large file for Oracle Database, because its 128 KB alignment constraint prevents you from using Direct I/O.

JFS2 (option: cio). Concurrent I/O is a better setting than Direct I/O on JFS2, because it provides support for multiple concurrent readers and writers on the same file. However, due to AIX restrictions on JFS2 with CIO, Concurrent I/O is intended to be used only with Oracle data files, control files, and log files, and it should be applied only to file systems that are dedicated to such a purpose. For the same reason, the Oracle home directory is not supported on a JFS2 file system mounted with the cio option. For example, if during installation you inadvertently specify that the Oracle home directory is on a JFS2 file system mounted with the cio option, then while trying to relink Oracle you may encounter the following error: "ld: 0711-866 INTERNAL ERROR: Output symbol table size miscalculated"

GPFS (option: none needed). Oracle Database silently enables Direct I/O on GPFS for optimum performance. GPFS Direct I/O already supports multiple readers and writers on multiple nodes; therefore, Direct I/O and Concurrent I/O are equivalent on GPFS.
Considerations for JFS and JFS2
If you are placing Oracle Database logs on a JFS2 file system, then the optimal configuration is to create the file system using the agblksize=512 option and to mount it with the cio option. This delivers logging performance within a few percentage points of the performance of a raw device.
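As a sketch, creating and mounting such a log file system on AIX might look like the following (the volume group, size, and mount point are illustrative, not taken from the text):

```shell
# Create a JFS2 file system with a 512-byte block size for the redo logs
# (datavg, 4G, and /u02/oralog are hypothetical values; adjust for your system):
crfs -v jfs2 -g datavg -m /u02/oralog -a size=4G -a agblksize=512

# Mount it with Concurrent I/O enabled:
mount -o cio /u02/oralog
```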
Before Oracle Database 10g, Direct I/O and Concurrent I/O could not be enabled at the file level on JFS/JFS2. Therefore, the Oracle home directory and data files had to be placed in separate file systems for optimal performance. The Oracle home directory was placed on a file system mounted with default options, with the data files and logs on file systems mounted using the dio or cio options.
With Oracle Database 10g, you can enable Direct I/O and Concurrent I/O on JFS/JFS2 at the file level. You can do this by setting the FILESYSTEMIO_OPTIONS parameter in the server parameter file to setall or directIO. This enables Concurrent I/O on JFS2 and Direct I/O on JFS for all data file I/O. Because the directIO setting disables asynchronous I/O, it should normally not be used. As a result of this 10g feature, you can place data files on the same JFS/JFS2 file system as the Oracle home directory and still use Direct I/O or Concurrent I/O for improved performance. As mentioned earlier, you should still place Oracle Database logs on a separate JFS2 file system for optimal performance.
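A minimal sketch of setting this parameter, assuming SYSDBA access via sqlplus (the restart is included because the parameter is only read at instance startup):

```shell
sqlplus / as sysdba <<'EOF'
-- Enable Concurrent I/O on JFS2 / Direct I/O on JFS, plus asynchronous I/O,
-- for all data file I/O; takes effect after an instance restart.
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF
```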

First of all, since CIO bypasses the OS write lock, it is the
preferred option for JFS2. DIO is typically used only on JFS.

Files not controlled by an application, such as Oracle binaries,
should never use CIO or DIO. As you correctly point out, inode
locking is very important where the application is not handling
this function.

CIO tends to improve I/O throughput significantly in high-write,
random-read environments, but where I/O is primarily sequential
reads it is unlikely to be beneficial, because sequential read
activity typically benefits from the VMM prefetching pages into
the filesystem buffer cache. Because of this, you may need to
increase db_cache_size and db_file_multiblock_read_count
when implementing CIO in an Oracle environment.
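As an illustration only, assuming SYSDBA access and manual SGA management, the compensating parameter changes might be sketched as follows (the values are placeholders, not sizing recommendations):

```shell
sqlplus / as sysdba <<'EOF'
-- Compensate for the loss of filesystem buffer caching under CIO
-- (illustrative values; size these for your own workload):
ALTER SYSTEM SET db_cache_size = 2G SCOPE = BOTH;
ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE = BOTH;
EOF
```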

In 9i, CIO can only be used at the filesystem level: it is a
mount option. In 10g, Oracle will open JFS2 files using the
O_CIO flag (and JFS files using the O_DIO flag) if
filesystemio_options is set to SETALL or DIRECTIO.
filesystemio_options has four possible values: NONE, ASYNCH,
DIRECTIO, and SETALL. In 9i, setting filesystemio_options=SETALL
has exactly the same behavior as setting
filesystemio_options=ASYNCH. So beware if you have
filesystemio_options=SETALL and you upgrade to 10g; your I/O
behavior will change.

CIO and DIO code also requires I/O to be a multiple of the
filesystem block size, which defaults to 4k on AIX. If the I/O
is not a multiple of the filesystem block size, filesystem cache
pages will be created in the VMM anyway for the purpose of
manipulating the I/O into a multiple of the filesystem block
size. We call this "demoted I/O".
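The alignment rule above can be sketched with a little shell arithmetic, checking which common I/O sizes are multiples of a 4k agblksize (the list of sizes is illustrative):

```shell
#!/bin/sh
# For a given filesystem block size (agblksize), report which I/O sizes
# would be "demoted" (i.e., are not a multiple of the fs block size).
agblksize=4096
for io_size in 512 2048 4096 8192 16384; do
  if [ $((io_size % agblksize)) -eq 0 ]; then
    echo "$io_size: direct (no demotion)"
  else
    echo "$io_size: demoted"
  fi
done
# Output:
#   512: demoted
#   2048: demoted
#   4096: direct (no demotion)
#   8192: direct (no demotion)
#   16384: direct (no demotion)
```

This is why a redo log write of 512 bytes on a default 4k JFS2 file system is always demoted, while db-block-sized data file I/O is not.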

What does this mean in an Oracle environment? It means that we
must look at the I/O being performed to know where CIO or DIO is
appropriate.

With any Oracle database, we can break down the I/O types into
groups of files as follows:

data files (including undo, data, index, etc.)
redo log files
archive logs
control files

For data files, the I/O size is always a multiple of the db block
size, which is almost always 4k, 8k, 16k, or 32k. These are
multiples of the default AIX block size, so no special
considerations are needed.
Redo log files and control files, however, are written in
multiples of 512 bytes. Therefore, it is vital that the block
size of a filesystem containing redo logs and control files
be set to 512 bytes to avoid demoted I/O.
For archive log files, the primary consideration is recovery
time, when the archive logs are read sequentially and the
filesystem buffer cache helps speed up I/O. Therefore, the best
approach is to avoid CIO/DIO for archive logs and,
to reduce strain on lrud, instead dump pages out of the VMM
immediately after use by using the 'rbrw' mount option.
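On AIX this is just a mount option; a sketch, with an illustrative mount point:

```shell
# Mount the archive log file system with release-behind for both
# reads and writes, so cached pages are freed as soon as they are used
# (/u03/oraarch is a hypothetical path):
mount -o rbrw /u03/oraarch
```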
