Purpose
The goal of this document is to present the best practices for the deployment of data warehousing applications on the Sun Oracle Database Machine. The goal of these best practices is to enable a small number of users to run large table scans, sorts, and aggregations in an optimal manner on the database machine.
Scope and Application
This article is intended for product management, system architects, and system administrators involved in deploying and configuring the Sun Oracle Database Machine. It will also be useful to field engineers and consulting organizations facilitating the installation and configuration of data warehouse applications on the Sun Oracle Database Machine. The best practices defined here assume you are using the production hardware and software for the Sun Oracle Database Machine. It is also assumed that you are familiar with generic data warehousing best practices.
Best practices for data warehousing on the Oracle Database Machine
Tablespace creation
You should use locally managed, auto-allocate tablespaces. With auto-allocate, Oracle automatically grows the size of the extents based on segment size, available free space in the tablespace, and other factors. The extent size of a segment starts at 64KB, grows to 1MB when the segment grows past 1MB, and reaches 8MB once the segment size exceeds 64MB. So for a large table, the extent size automatically grows to be large. The use of uniform extents is strongly discouraged for two reasons: space wastage and the impact that wasted space has on scan performance. Setting CELL_PARTITION_LARGE_EXTENTS (discussed below) will achieve large extent sizes without the space wastage.
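As a minimal sketch (the tablespace name, ASM disk group, and sizes are illustrative, not prescriptive), a locally managed, auto-allocate bigfile tablespace could be created as follows:
CREATE BIGFILE TABLESPACE ts_data
DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;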
How many tablespaces should you use?
For large partitioned objects you should use multiple bigfile tablespaces to avoid file header block contention during parallel load operations. File header block contention appears as the gc buffer busy wait event in an AWR report; checking the buffer wait statistics will indicate whether it is the file header block that is being contended for. To evenly distribute a partitioned table among multiple bigfile tablespaces, use the STORE IN clause. For example, the DDL for a partitioned table with 128 subpartitions would look as follows:
CREATE TABLE sales_composite
(salesman_id NUMBER(5),
salesman_name VARCHAR2(30),
sales_amount NUMBER(10),
sales_date DATE)
PARTITION BY RANGE(sales_date)
SUBPARTITION BY HASH(salesman_id)
subpartitions 128
store in (ts1, ts2, ts3, ts4)
(PARTITION sales_jan2000 VALUES LESS THAN(TO_DATE('02/01/2000','MM/DD/YYYY')),
PARTITION sales_feb2000 VALUES LESS THAN(TO_DATE('03/01/2000','MM/DD/YYYY')),
PARTITION sales_mar2000 VALUES LESS THAN(TO_DATE('04/01/2000','MM/DD/YYYY')),
PARTITION sales_apr2000 VALUES LESS THAN(TO_DATE('05/01/2000','MM/DD/YYYY')),
PARTITION sales_may2000 VALUES LESS THAN(TO_DATE('06/01/2000','MM/DD/YYYY')));
Setting CELL_PARTITION_LARGE_EXTENTS to TRUE when creating partitioned tables
It is good practice to ensure that large segments use large extents. In Oracle Database 11g a new init.ora parameter was introduced to ensure that large partitioned objects are created with a larger extent size straight away. When CELL_PARTITION_LARGE_EXTENTS is set to TRUE, all partitioned objects are created with an initial extent size of 8MB. By starting with a large extent size we can ensure that large partitioned objects have as few extents as possible. If you are doing a temp segment merge load (when SQL*Loader or the MERGE statement is used) then the final extents will be trimmed. The extent is trimmed to the nearest multiple of 64KB. For direct path loads or loads that use high watermark brokering no trimming is done.
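As a sketch, the parameter could be set in the spfile as follows (an instance restart may be needed, and the parameter may not be exposed in all releases):
ALTER SYSTEM SET cell_partition_large_extents = TRUE SCOPE=SPFILE SID='*';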
Setting extent size using INITIAL and NEXT in the storage clause of a non-partitioned table create statement
CELL_PARTITION_LARGE_EXTENTS does not help in the case of non-partitioned tables. If you have large non-partitioned tables you still need to ensure that large extents are used. To do this, set the INITIAL clause in the storage clause of the CREATE TABLE statement to 8MB. The database will make a best effort to size every extent in the segment to at least the INITIAL extent size. When doing a parallel insert append on non-partitioned tables, using a parallel SQL*Loader job, or using the MERGE statement, data is loaded into new segments and then the new segments are merged into the table being loaded. The new segments by default start with 64KB extents and grow the extent size as the new segment grows in size (just like any other new segment). If you know that a lot of data will be loaded, then you can skip creating all these small extents by setting NEXT in the table's storage clause. Starting with 11.1.0.7, NEXT controls the starting extent size for the new segments created by these load operations. So if you know, for example, that you will be loading 500MB of data, you can set NEXT to 8M to ensure that the extent sizes used by the new segments will be large.
Create Table sales(.....) parallel compress storage (INITIAL 8M NEXT 8M) (........);
Grant user unlimited tablespace quota
Prior to 11gR2 you must grant an UNLIMITED tablespace quota to the database user that will be doing the data loads, to avoid row cache lock contention. Without the quota you may also see a high number of recursive SQL statements updating TSQ$.
alter user oracle QUOTA UNLIMITED on ts_data;
Data Types
It is strongly recommended that you use the VARCHAR2 datatype for storing character data rather than CHAR, even if the data is known to be fixed width. The CHAR datatype is required to have a fixed-width representation, so CHAR columns in Oracle are space padded when the data inserted into the column does not match the fixed width, to meet the SQL standard. Because of this, CHAR columns take more space on disk and are less efficient to process and to evaluate predicates against compared to VARCHAR2 columns. For these reasons we strongly recommend that VARCHAR2 be used for new applications to store character data. If your customer has existing tables with CHAR columns in them, consider using VARCHAR2 instead on the database machine if possible.
Data Loading
In order to achieve the best load performance possible the system should be CPU bound. If you are not CPU bound during the load you either have contention (waiting on something) or there is skew in the data. You can confirm that the system is CPU bound by checking CPU utilization during the load.
Flat file staging on DBFS
It is strongly recommended that you stage all flat files (raw data files) for loading on an Oracle Database File System (DBFS). DBFS is similar to NFS in that it provides a shared network file system that looks like a local file system. More details on how to configure DBFS can be found in the Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2).
External tables
We recommend that you load only using external tables, because when SQL*Loader is used to load data in parallel the data is loaded into temporary extents, and only when the transaction is committed are the temporary extents merged into the actual table. Any existing space in partially full extents in the table will be skipped (wasted). For highly partitioned tables this could potentially lead to a lot of wasted space.
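As a rough sketch (the directory, file names, and column list are illustrative), an external table over pipe-delimited flat files staged on DBFS might look as follows:
CREATE TABLE ext_tab_for_sales_data
(salesman_id NUMBER(5),
sales_amount NUMBER(10),
sales_date DATE)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY dbfs_stage_dir
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL)
LOCATION ('sales_1.dat','sales_2.dat'))
REJECT LIMIT UNLIMITED
PARALLEL;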
Parallel direct path loading
The key to good load performance is to use direct path load, as the data is written directly to the database storage (multiple blocks per I/O request, using asynchronous writes if the host platform supports asynchronous I/O), bypassing the database buffer cache. A CTAS will always use direct path load but an ‘Insert As Select’ (IAS) statement will not. In order to achieve direct path load with an IAS you must add the APPEND hint to the command.
Insert /*+ APPEND */ into Sales partition(p2) Select * From ext_tab_for_sales_data;
Direct path loads should also be run in parallel. You can set the parallel degree for a direct path load either by adding the PARALLEL hint to the CTAS or IAS statement or by setting the PARALLEL clause on both the external table and the table into which the data will be loaded. Once the parallel degree has been set, a CTAS will automatically do direct path load in parallel, but an IAS will not. In order to enable an IAS to do direct path load in parallel you must alter the session to enable parallel DML.
ALTER SESSION ENABLE PARALLEL DML;
Insert /*+ APPEND */ into Sales partition(p2) Select * from ext_tab_for_sales_data;
Partition exchange load
It is strongly recommended that the larger tables or fact tables in a data warehouse be partitioned. One of the great features of partitioning is the ability to load data quickly and easily, with minimal impact on the business users, by using the exchange partition command. The exchange partition command allows you to swap the data in a non-partitioned table into a particular partition in your partitioned table. The command does not physically move data; it simply updates the data dictionary to reset a pointer from the partition to the table and vice versa. Because there is no physical movement of data, it is important that the non-partitioned table used for the data load has the same storage layout as the partitioned fact table. That means the non-partitioned table should reside in the correct tablespace and should have the same extent sizes as the rest of the partitioned table. You can ensure this by setting the INITIAL and NEXT parameters in the storage clause of the table definition to 8MB.
Create Table tmp_sales2(.....) parallel storage (INITIAL 8M NEXT 8M) tablespace main_fact_tbs ...........
Alter table Sales exchange partition p2 with table tmp_sales2 including indexes without validation;
For more information on partition exchange loads see chapter 3 of the Oracle Database VLDB and Partitioning Guide.
Preprocessors
If the files you need to load are not in an Oracle-readable format (for example, they are compressed or zipped files) then you can use the PREPROCESSOR clause in the external table definition to specify a user-supplied preprocessor program that will execute for every data file. Note that the program specification must be enclosed in a shell script if it uses arguments. For example, if the data files that need to be loaded have been compressed using gzip, you could specify the following external table definition, which would execute gunzip on each of the data files specified in the external table.
CREATE TABLE sales_external (…)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir1
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
PREPROCESSOR exec_dir:'gunzip'
FIELDS TERMINATED BY '|')
LOCATION (…))
Here, exec_dir is an Oracle directory that points to the OS directory that contains the gunzip binary.
However, you should note that the parallel degree for an external table that uses a preprocessor is determined by the number of data files specified in the external table, regardless of the parallel degree specified on the external table or via hints in the statement. This means that if only one raw data file is specified in an external table definition that has a PREPROCESSOR clause, all accesses to the data in that external table will be done serially.
Populate column usage statistics before data load
Prior to loading any data it is advisable to run all queries against the empty tables to populate or seed the column usage statistics. Column usage statistics are used during optimizer statistics gathering to automatically determine which columns require histograms and the number of buckets that will be used. A column is a candidate for a histogram if it has been seen in a WHERE clause predicate (for example an equality, range, or LIKE predicate) and if there is data skew in that column.
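A minimal sketch of the idea (the query, table, and columns are illustrative): run representative application queries against the still-empty tables so that the recorded column usage can later drive histogram creation when statistics are gathered.
SELECT salesman_id, SUM(sales_amount)
FROM sales_composite
WHERE sales_date >= TO_DATE('01/01/2000','MM/DD/YYYY')
GROUP BY salesman_id;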
Deferred Segment Creation
Beginning in Oracle Database 11g Release 2, when creating a non-partitioned table in a locally managed tablespace, the table segment creation is deferred until the first row is inserted. There are currently two known issues with deferred segment creation, which are outlined below. Due to these issues it is recommended that deferred segment creation be switched off in 11.2.0.1. You can use the following command to turn it off:
Alter system set deferred_segment_creation = FALSE scope=both;
Due to bug 9078678 the plan output for a parallel data load statement, from both EXPLAIN PLAN and V$SQL_PLAN, will wrongly be shown as a serial DML plan despite the fact that it will run in parallel.
Due to bug 9329566, Insert As Select operations into an empty non-partitioned table will be slower in 11.2 than in previous releases. You will also see two execution plans shown for the single DML statement. When deferred segment creation is switched on (the default) and a parallel Insert As Select statement is issued against an empty non-partitioned table, the select or query part of the statement is executed twice.
When the statement is issued initially, the execution plan generated has the select or query part of the statement happening in parallel but the insert or load part happening serially, because the segment for the table being inserted into does not yet exist. Once the first block is written to the table the segment is created and Oracle realizes the load should have happened in parallel. This triggers the statement to be parsed again, producing a second plan, this one showing both the query and the load sections happening in parallel. The query part of the statement is then rerun and the data is loaded in parallel. If the query part of the statement is complex or uses any sorting or grouping operation, this behavior could result in a serious impact on performance. The workaround for this issue is to turn off deferred segment creation.
Data Compression
Using table compression obviously reduces disk and memory usage, often resulting in better scale-up performance for read-only operations. Table compression can also speed up query execution by minimizing the number of round trips required to retrieve data from the disks. Compressing data, however, imposes a performance penalty on the speed of data loading and any subsequent DML operations. Oracle offers three types of compression on the Sun Oracle Database Machine: basic compression, OLTP compression (a component of the Advanced Compression option), and Exadata Hybrid Columnar Compression (EHCC). With basic compression Oracle compresses data by eliminating duplicate values in a database block. Basic compression only works for direct path operations (CTAS or IAS, as discussed earlier). If the data is modified using any kind of conventional DML operation (for example updates), the data within that database block is uncompressed to make the modifications and is written back to disk uncompressed. This can often cause row chaining, which results in an increase in ‘gc buffer busy’ waits. It is strongly recommended NOT to use basic compression for data that will be actively changed via DML statements. Basic compression should only be applied to data that is loaded once and is read-only, or after all DML operations have been completed.
With OLTP compression, just like basic compression, Oracle compresses data by eliminating duplicate values in a database block. But unlike basic compression, OLTP compression allows data to remain compressed during all types of data manipulation operations, including conventional DML such as INSERT and UPDATE. More information on the OLTP table compression feature can be found in Chapter 18 of the Oracle Database Administrator's Guide 11g.
Exadata Hybrid Columnar Compression (EHCC) achieves its compression using a different technique. A logical construct called the compression unit is used to store a set of Exadata Hybrid Columnar-compressed rows. When data is loaded, a set of rows is pivoted into a columnar representation and compressed. After the column data for a set of rows has been compressed, it is stored in a compression unit. If conventional DML is issued against a table with EHCC, the necessary data is uncompressed in order to do the modification and then written back to disk using a block-level compression algorithm. If your data set is frequently modified using conventional DML, EHCC is not recommended; use OLTP compression instead.
EHCC provides different levels of compression, focusing on query performance or compression ratio respectively. With EHCC optimized for query, lighter compression algorithms are applied to the data to achieve good compression with little to no performance impact. Compression for archive, on the other hand, tries to optimize the compression on disk, irrespective of its potential impact on query performance.
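For reference, a hedged sketch of the corresponding CREATE TABLE syntax in 11.2 (table names and column lists are illustrative):
CREATE TABLE sales_basic (.....) COMPRESS BASIC;             -- basic compression
CREATE TABLE sales_oltp (.....) COMPRESS FOR OLTP;           -- OLTP compression
CREATE TABLE sales_query (.....) COMPRESS FOR QUERY HIGH;    -- EHCC, optimized for query
CREATE TABLE sales_archive (.....) COMPRESS FOR ARCHIVE LOW; -- EHCC, optimized for archiving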
CHAR column usage with EHCC
A CHAR datatype is required to have a fixed-width representation (SQL standard). Due to this requirement, CHAR columns in Oracle are space padded when the data inserted into the column does not match the fixed width.
Pros for using the CHAR datatype
Using the CHAR datatype ensures there will not be any fragmentation or chaining of objects over time, because the column values are fixed width. This eliminates the need to reorganize the object in the future. This approach is very common in companies that use a lot of DB2; effectively they trade space for reduced future administration.
Cons for using the CHAR datatype
CHAR columns take more space on disk due to the space padding and are less efficient to process and to evaluate predicates against compared to VARCHAR2 columns.
Recommendations
For new applications it is strongly recommended that you use the VARCHAR2 datatype for storing character data rather than CHAR, even if the data is known to be fixed width. If your customer has existing tables with CHAR columns in them, consider using VARCHAR2 instead on the database machine if possible. But you should bear in mind the cost of changing legacy applications that rely on CHAR semantics; changing these applications could require a lot of testing.
Required patches
In cases where you have to use CHAR columns in 11.2.0.1, please make sure you have the fixes for bug 9502734 and bug 9479565. The bug fixes avoid extra decompression that can happen with EHCC when CHAR columns are used. Please note that even with the bug fixes, CHAR columns are still more expensive than VARCHAR2 columns.
Partitioning
The larger tables or fact tables should be partitioned using composite partitioning (range-hash or range-list). There are three reasons for this:
1. It allows for partition exchange loads.
2. It enables partition pruning during queries.
3. It enables efficient and performant partition-wise joins.
How to pick range partitions
The goal of range partitioning is to ensure that only the data necessary to answer a query will be scanned. The column to use for range partitioning and the size of the range partitions should be decided based on the query workload. If the majority of the queries only access a single day's worth of data then daily partitions would be a good choice. However, if all of the queries do weekly or monthly rollup-style queries then it would be better to use weekly range partitions.
How to pick the number of hash partitions
Oracle uses a linear hashing algorithm to create subpartitions. In order to ensure that the data gets evenly distributed among the hash partitions, the number of hash partitions should be a power of 2. On the database machine you should use 128 hash subpartitions (2 x the number of CPUs). However, each hash partition should be at least 16MB in size; any smaller and they will not have efficient scan rates with parallel query. If 128 would make the subpartitions too small, consider using 64.
Partition-wise joins
The main performance benefit of hash partitioning is partition-wise joins. Partition-wise joins reduce query response times by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. For the optimizer to choose the full partition-wise join method, both tables must be equi-partitioned on their join keys; that is, they have to be partitioned on the same column with the same partitioning method and the same number of partitions. For example, a simple query that joins the Sales and Customers tables, where both tables have the same degree of parallelism and are hash partitioned on the cust_id column (which is also the join column), can execute as a full partition-wise join.
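As an illustrative sketch (column lists abbreviated, names assumed), the tables and query might look as follows; in the parallel execution plan a full partition-wise join typically shows the hash join under a single PX PARTITION HASH ALL iterator:
CREATE TABLE customers (cust_id NUMBER, .....)
PARTITION BY HASH (cust_id) PARTITIONS 128 PARALLEL;
CREATE TABLE sales (cust_id NUMBER, amount NUMBER, .....)
PARTITION BY HASH (cust_id) PARTITIONS 128 PARALLEL;
SELECT c.cust_id, SUM(s.amount)
FROM sales s, customers c
WHERE s.cust_id = c.cust_id
GROUP BY c.cust_id;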
Parallel Query
When to use it
Parallel execution should be leveraged for all resource-intensive operations, including complex queries that access large amounts of data, building indexes on large tables, gathering optimizer statistics, and loading or manipulating large volumes of data. The only time you should not use parallel execution is if the tables being accessed are smaller than 64MB or if the number of concurrent users is in the hundreds.
Parallel Degree
You should begin by setting the parallel attribute on all of the large tables in the schema. You can do this either during table creation or using a simple ALTER TABLE statement.
ALTER TABLE sales PARALLEL;
Controlling parallel execution in RAC environments
By default, the parallel server processes selected to execute a SQL statement can operate on any or all Oracle RAC nodes in the cluster. By setting PARALLEL_FORCE_LOCAL to TRUE, the parallel server processes are restricted to just one node, the node where the query coordinator resides (the node on which the SQL statement was executed). However, in 11.2.0.1, when this parameter is set to TRUE the parallel degree calculations are not adjusted correctly to consider only the CPU_COUNT of a single node; the parallel degree is calculated based on the RAC-wide CPU_COUNT and not the single-node CPU_COUNT. Due to this bug (9671271) it is not recommended that you set PARALLEL_FORCE_LOCAL to TRUE in 11.2.0.1. Instead, you should set up a RAC service to limit where parallel statements can execute.
Controlling parallel execution with RAC services
By default the Oracle database enables inter-node parallel execution (parallel execution of a single statement involving all RAC nodes). If you want to limit the number of nodes an application or workload operates on, you should use RAC services. A service can be created using the srvctl command line tool or using Oracle Enterprise Manager. The example below creates two services: one called ETL that limits parallel server processes to nodes 1 and 2 in the cluster, and another called ADHOC that limits parallel server processes to nodes 3 and 4.
srvctl add service -d database_name -s ETL -r sid1,sid2
srvctl add service -d database_name -s ADHOC -r sid3,sid4
Managing parallel execution with resource manager
Oracle Database Resource Manager (DBRM) enables you to group users based on their characteristics and to restrict parallel execution for some of those groups. DBRM has the final say in determining the maximum degree of parallelism: no user in a consumer group (under a given resource plan) will ever run with a higher DOP than the group's maximum. For example, if your resource plan has a policy of using a maximum DOP of 16 and you request a DOP of 128 via a hint, or because the table has been decorated with the parallel attribute, your SQL will run with a DOP of 16. Furthermore, DBRM can control the maximum number of active sessions for a given resource group. When this limit is reached, DBRM queues all subsequent requests and runs them only after existing active sessions complete. For more information on DBRM see the Oracle Database Administrator's Guide.
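A minimal sketch of such a plan, assuming a hypothetical plan DW_PLAN and consumer group ADHOC_GROUP (mapping users or sessions to the group is a separate step, for example via DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DW_PLAN',
    comment => 'Limit DOP and active sessions for ad-hoc users');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'ADHOC_GROUP',
    comment        => 'Ad-hoc query users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DW_PLAN',
    group_or_subplan         => 'ADHOC_GROUP',
    comment                  => 'Cap DOP at 16, queue beyond 4 active sessions',
    parallel_degree_limit_p1 => 16,
    active_sess_pool_p1      => 4);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DW_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'All other sessions');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/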
Optimizer statistics
Since the default value for the init.ora parameter optimizer_mode is ALL_ROWS on the database machine, the Cost Based Optimizer (CBO) will be used for every statement. Therefore all tables in the database need to have statistics, including all of the dictionary tables (tables owned by ‘sys’ and residing in the system tablespace). Using dynamic sampling is not an adequate solution.
When to gather dictionary statistics
Statistics on the dictionary tables will be maintained by the automatic statistics gathering job run during the nightly maintenance window. If you choose to switch off the automatic statistics gathering job for your main application schema, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET to ORACLE instead of AUTO using the procedure DBMS_STATS.SET_GLOBAL_PREFS.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTOSTATS_TARGET','ORACLE');
If you choose to switch off the automatic job completely, you will have to maintain the dictionary statistics manually using the GATHER_DICTIONARY_STATS procedure.
When to gather fixed object statistics
Fixed object statistics (statistics on the dynamic performance tables: the X$ tables and V$ views) are not maintained by the automatic statistics gathering job. You need to gather statistics on fixed objects only once, when the database has been fully built and a representative workload has been run on the system. You can collect statistics on fixed objects using DBMS_STATS.GATHER_FIXED_OBJECTS_STATS.
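A minimal sketch, run once after a representative workload:
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;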
When to gather table statistics
Optimizer statistics should be gathered after the data has been loaded but before any indexes are created. Oracle will automatically gather statistics for indexes as they are being created.
How to gather table statistics
When gathering statistics for partitioned tables you should use the incremental statistics feature, which enables global or table-level statistics to be automatically derived from partition-level statistics. This feature greatly reduces the amount of time necessary to gather and maintain statistics on large partitioned tables by eliminating the need to execute multiple full table scans. Use the following command to enable the incremental feature for a table:
EXEC DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');
After loading data, gather table statistics using GATHER_TABLE_STATS; there is no need to specify many parameters, just the schema name and the table name.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH','SALES');
The command will collect statistics for the partitions and update the global statistics based on the partition-level statistics and synopses. More information about how this feature works can be found in the Oracle documentation on managing optimizer statistics.
Please note you should apply patch 8411968 for base bug 8318020 in 11.1.0.7 before using incremental statistics.
You should also note that due to bug 8584129 it may be necessary to set the required degree of parallelism (DOP) for the statistics gathering rather than letting it inherit the DOP from the objects. A degree of 128 is recommended.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH','SALES',degree=>128);
Setting statistics gathering preferences
It may be useful to alter the statistics gathering preferences for all tables at the system level, meaning all subsequent statistics gathering operations will use these preferences. For instance, you may want to set INCREMENTAL to TRUE for all tables, or you may want to set a different degree of parallelism than what is specified on each table. You can do this using the DBMS_STATS.SET_GLOBAL_PREFS procedure.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('INCREMENTAL','TRUE');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('DEGREE','128');
Gathering system statistics
It is currently not necessary to gather system statistics on the database machine.
Bloom filter
In Oracle Database 10g we introduced the concept of bloom filtering. When two tables are joined via a hash join, the first table (typically the smaller table) is scanned and the rows that satisfy the ‘where’ clause predicates (for that table) are used to create a hash table. During the hash table creation a bit vector, or bloom filter, is also created based on the join column. The bit vector is then sent as an additional predicate to the second table scan. After the ‘where’ clause predicates have been applied to the second table scan, the resulting rows have their join column hashed and compared to values in the bit vector. If a match is found in the bit vector that row is sent to the hash join; if no match is found the row is discarded. On Exadata the bloom filter or bit vector is passed as an additional predicate, so it will be offloaded to the storage cells, making bloom filtering very efficient.
Restrictions on Bloom Filters
However, in 11.1.0.7 there are some restrictions on when bloom filtering will kick in: the plan must be a fully parallel plan and not a partition-wise join. So it may become necessary to put a low degree of parallelism on small lookup tables and indexes to take advantage of bloom filtering. One of the most common cases where bloom filtering is disabled occurs when there is an index on the smaller table. If the optimizer selects an index range scan of that index then it will be done serially and not in parallel, thus preventing a bloom filter from being created. Potential workarounds in 11.1.0.7 are to mark the index invisible or to drop the index in question. Bug 7199035 was filed for this, and the restriction of having a parallel left-hand side of the hash join has been lifted in 11.2.
Identifying a Bloom Filter in an Execution plan
You can identify a bloom filter in a plan when you see :BF0000 in the Name column of the execution plan.
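As a hedged sketch (tables and columns from the SH sample schema are used purely for illustration), you could inspect a plan as follows; the bloom filter appears on the PX JOIN FILTER CREATE and PX JOIN FILTER USE lines with :BF0000 in the Name column:
EXPLAIN PLAN FOR
SELECT /*+ parallel(s) parallel(c) */ c.cust_city, SUM(s.amount_sold)
FROM sh.customers c, sh.sales s
WHERE c.cust_id = s.cust_id
AND c.cust_year_of_birth = 1970
GROUP BY c.cust_city;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);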
Initialization Parameter Settings
We strongly recommend you leave the majority of the initialization parameters at their default values. There is just a small subset of parameters that you need to change on the database beyond the obvious ones (db_name, control_files, undo_tablespace). Below is the list of parameters you should consider changing and some guidelines on what to set them to.
- compatible=11.2.0.1
- In order to take full advantage of the latest Exadata features (storage indexes, etc.) you must set the compatible parameter to 11.2. If for some reason you want to lower the compatible parameter value, remember that the Exadata software only became available in 11.1.0.7, so you must set the compatible parameter for both the ASM and database instances to 11.1.0.7 or higher in order to communicate correctly with the Exadata software on the storage.
- db_block_size= 8KB or 16KB
- 8KB is the default block_size and is the block_size used during all of Oracle's testing. Typically this is good enough for a data warehouse. By doubling the default block size you can increase the chances of getting a good compression rate as Oracle applies data compression at the block level. The more rows in the block the greater the chance Oracle will find duplicate values within a block.
- parallel_adaptive_multi_user=FALSE
- Parallel_adaptive_multi_user automatically reduces the requested degree of parallelism for a query based on the system load at query startup time. Because the DOP is dependent on the system load, the elapsed time for a query can vary, which is not acceptable during a POC or a benchmark. To ensure consistent elapsed times set this parameter to FALSE.
- parallel_execution_message_size=16384
- Parallel servers communicate among themselves and with the query coordinator by passing messages via memory buffers. If you execute a lot of large operations in parallel, it’s advisable to reduce the messaging latency by increasing parallel_execution_message_size (the size of the buffers). By default the message size is 2KB; ideally you should increase it to 16KB (16384). However, a larger parallel_execution_message_size increases the memory requirement for the shared pool, so if you increase it from 2KB to 16KB your parallel server memory requirement will be 8x higher.
- parallel_max_servers=128
- This parameter determines the maximum number of parallel servers that may be started for a database instance, should there be demand for them. The default value in Oracle Database 11g is 10 * cpu_count * parallel_threads_per_cpu, and typically this is adequate. However, you may want to adjust this parameter depending on the number of concurrent parallel query statements you will be executing and the DOP required for those statements. Bear in mind you do not want to flood the system with parallel server processes.
- parallel_min_servers=32
- This parameter determines the number of parallel servers that will be started during database startup. By default the value is 0. It is recommended that you set parallel_min_servers to 32. This will ensure that there are ample parallel server processes available for the majority of the queries executed on the system, and queries will not suffer the additional overhead of having to spawn extra parallel servers. However, if extra parallel servers are required for additional queries above your average workload, they can be spawned “on the fly” up to the value of parallel_max_servers.
- pga_aggregate_target=16384MB
- Oracle automatically sizes the working areas used by memory-intensive SQL operators (such as sort, group-by, hash-join, bitmap merge, and bitmap create) to ensure the total work area (private memory) used is below the target value specified. When you increase the value of this parameter, you indirectly increase the memory allotted to each of the work areas. Consequently, more memory-intensive operations are able to run fully in memory and less will spill into the temp tablespace on disk.
- processes=1024
- Increase the process count to allow for all background processes such as locks, job queue processes, and parallel execution processes.
- sga_target=8192MB
- sga_target specifies the total size of all SGA components (Buffer cache, shared pool, large pool, java pool, and streams pool). You need to set the sga_target high enough to ensure you have enough space in the buffer cache for all of the meta-data for the objects in the database and to have a large enough shared pool to accommodate all of the cursors for your application and the memory buffers used by parallel query.
- The space-related metadata (segment headers, extent map blocks, etc.) for partitioned objects can be quite large. The query coordinator in a parallel query needs to read the space metadata of a table before starting a table scan. If each partition of a table is smaller than a hundred megabytes, and there are many partitions, then the overhead of reading the space metadata into the buffer cache can be a significant fraction of the overall elapsed time for the table scan. Therefore, if you have tables with thousands of partitions, you should make sure to size your buffer cache to be large enough to hold the space metadata blocks for tables that are frequently scanned. The following formula may be used to determine the number of metadata blocks (segment header and extent map blocks) in an ASSM segment.
- db_file_multiblock_read_count=1048576/db_block_size
- The documentation says "the default value of this parameter is a value that corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1MB for most platforms". However, by changing some of the other parameters listed above, such as processes, you can affect the formula used to automatically derive db_file_multiblock_read_count. To ensure the maximum I/O size is 1MB, set the parameter to 1MB divided by db_block_size (for example, 128 with an 8KB block size).
- pre_page_sga=FALSE
- When pre_page_sga is set to TRUE, Oracle reads the entire SGA into memory at instance startup, and operating system page table entries are then prebuilt for each page of the SGA. However, in 11.1.0.7 and 11.2.0.1 it is strongly recommended that you do not use pre_page_sga when you are using Automatic Memory Management (AMM), because there is a bug with AMM where PRE_PAGE_SGA causes every process to touch every page of the entire AMM space, not just the SGA. This has a significant impact on page table consumption and session connect time.
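To pull the recommendations above together, here is a sketch of the corresponding spfile/init.ora entries, assuming an 8KB block size (memory sizes and process counts should be adjusted to your workload):
compatible='11.2.0.1'
db_block_size=8192
parallel_adaptive_multi_user=FALSE
parallel_execution_message_size=16384
parallel_max_servers=128
parallel_min_servers=32
pga_aggregate_target=16384M
processes=1024
sga_target=8192M
db_file_multiblock_read_count=128
pre_page_sga=FALSE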
Exadata Smart Scan
One of the most powerful features of the Database Machine is that it offloads the data search and retrieval processing to the storage cells. Exadata cells evaluate query predicates at the storage level to optimize the performance of certain classes of bulk data processing. For example, the performance of queries that require full table or index scans to evaluate selective predicates can be improved by pushing the database expression evaluations to the storage cells. These expressions include simple SQL command predicates, such as amount > 200, and column projections, such as SELECT customer_name. For example:
SQL> SELECT customer_name FROM calls WHERE amount > 200;
In the preceding example, only rows satisfying the predicate, and only the column specified, are returned to the database server, eliminating unproductive I/O operations.
What operations benefit from Smart Scan
Full scan of a heap table.
Fast full scan of a B-Tree or bitmap index.
What operations do not benefit from Smart Scan
Scans of IOTs or clustered tables.
Index range scans.
Access to a compressed index.
Access to a reverse key index.
Secure Enterprise Search.