"Civilizația a rămas undeva la granițele de vest, speriindu-se probabil de vameșii noștri care i-au cerut șpagă la intrarea în țară…"


AUD: Audit Commit Delay exceeded

AUD: Audit Commit Delay Exceeded, Written A Copy To OS Audit Trail [ID 788530.1]


how to solve ORA-01555 when you have a loop with frequent updates


You really can find a ton of good stuff on Oracle's support web site, Metalink. I found a terrific trick in NOTE:45895.1, titled "ORA-01555 'Snapshot too old' in Very Large Databases". Look at the section numbered Solution 4c, which covers getting snapshot too old errors when you have a "fetch across commit" situation.

Essentially, you can eliminate an ORA-01555 snapshot too old error in code where you are fetching from a select statement in a loop and, within the loop, updating tables and periodically committing. The trick is to add an ORDER BY to your select statement so that a temporary segment is created to sort the results. The ORDER BY has to be on a column that isn't indexed, or else a temp segment won't be created.
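Here's a minimal sketch of the pattern, assuming a hypothetical table BIG_TAB with a non-indexed column PADDING; all names are made up for illustration:

DECLARE
    CURSOR c IS
        SELECT id
          FROM big_tab
         ORDER BY padding;   -- non-indexed column, so Oracle sorts into a temp segment
    v_rows PLS_INTEGER := 0;
BEGIN
    FOR r IN c LOOP
        UPDATE big_tab
           SET status = 'DONE'
         WHERE id = r.id;
        v_rows := v_rows + 1;
        IF MOD(v_rows, 1000) = 0 THEN
            COMMIT;          -- fetch across commit: the cursor now reads the
        END IF;              -- sorted temp copy, not the rollback segments
    END LOOP;
    COMMIT;
END;
/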

In our case we weren't getting ORA-01555 errors; we were just spending hours reading from the rollback segments while executing the outer loop. I added an ORDER BY on a random non-indexed column and voila: all our rollback segment read waits went away. Our select went from running 14 hours to 19 minutes.

The key point here is that selects from temporary segments don't use rollback segments. I haven't tried it, but I assume temporary tables would have the same effect: load a temporary table with the data you want to loop over, and you won't get any rollback segment waits or snapshot errors while processing your updates and commits within the loop.
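For that untried temporary-table variant, a sketch along these lines (same hypothetical BIG_TAB) should behave the same way:

CREATE GLOBAL TEMPORARY TABLE big_tab_work
ON COMMIT PRESERVE ROWS      -- rows must survive the commits inside the loop
AS SELECT id FROM big_tab;

-- loop over BIG_TAB_WORK instead of BIG_TAB: its blocks live in temp space,
-- so fetching them never touches the rollback segments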

sql tuning


speed up exadata


1. Exadata Smart Scans bypass the block buffer cache
2. Exadata Smart Scans happen instead of FULL scans
3. Make indexes invisible or increase optimizer_index_cost_adj to encourage FULL scans (see the sketch after this list)
4. Exadata Smart Scans tend to be part of a HASH JOIN
5. HASH JOINs can be sped up by adding PGA memory – pga_aggregate_target
6. You can free memory by reducing the size of the block buffer cache – sga_max_size
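A minimal sketch of what items 3, 5 and 6 look like in practice; the index name and memory sizes below are made up for illustration:

ALTER INDEX sales_ix INVISIBLE;                     -- 3: hide an index so the optimizer chooses a FULL scan
ALTER SYSTEM SET optimizer_index_cost_adj = 400;    -- 3: or make index access paths look more expensive
ALTER SYSTEM SET pga_aggregate_target = 16G;        -- 5: more PGA memory for HASH JOINs
ALTER SYSTEM SET sga_max_size = 8G SCOPE = SPFILE;  -- 6: smaller SGA frees memory (takes effect after a restart)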


index clustering factor


-- TABLE_CACHED_BLOCKS tells DBMS_STATS how many table blocks to assume are
-- already cached when it calculates an index clustering factor on BOWIE
exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'BOWIE', pname=>'TABLE_CACHED_BLOCKS', pvalue=>42);
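The preference only takes effect when statistics are next gathered, for example (assuming a hypothetical index BOWIE_IDX on the table):

exec dbms_stats.gather_index_stats(ownname=>user, indname=>'BOWIE_IDX');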


exceptions table for enable validate of constraints

Reporting Constraint Exceptions

If exceptions exist when a constraint is validated, an error is returned and the integrity constraint remains novalidated. When a statement is not successfully executed because integrity constraint exceptions exist, the statement is rolled back. If exceptions exist, you cannot validate the constraint until all exceptions to the constraint are either updated or deleted.
To determine which rows violate the integrity constraint, issue the ALTER TABLE statement with the EXCEPTIONS option in the ENABLE clause. The EXCEPTIONS option places the rowid, table owner, table name, and constraint name of all exception rows into a specified table.
You must create an appropriate exceptions report table to accept information from the EXCEPTIONS option of the ENABLE clause before enabling the constraint. You can create an exception table by executing the UTLEXCPT.SQL script or the UTLEXPT1.SQL script.

Both of these scripts create a table named EXCEPTIONS. You can create additional exceptions tables with different names by modifying and resubmitting the script.
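For example, the ROWID version is typically run from SQL*Plus like this; the column list shown is what the script creates (column widths vary by release):

SQL> @?/rdbms/admin/utlexcpt.sql

-- creates: EXCEPTIONS(row_id ROWID,
--                     owner VARCHAR2,
--                     table_name VARCHAR2,
--                     constraint VARCHAR2)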
The following statement attempts to validate the PRIMARY KEY of the dept table, and if exceptions exist, information is inserted into a table named EXCEPTIONS:
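ALTER TABLE dept
    ENABLE VALIDATE PRIMARY KEY
    EXCEPTIONS INTO EXCEPTIONS;   -- EXCEPTIONS INTO names the report table created above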
If duplicate primary key values exist in the dept table and the name of the PRIMARY KEY constraint on dept is sys_c00610, then the following query will display those exceptions:
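SELECT * FROM exceptions;

-- or join back to the base table to see the duplicate rows themselves:
SELECT d.*
  FROM dept d, exceptions e
 WHERE e.constraint = 'SYS_C00610'
   AND d.rowid = e.row_id;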