An upper- or mixed-case hostname is being used; this is being investigated in bug 17580744



Don't you see their cars, as big as your studio flat? On one trip to the seaside they burn through as much as your pension or your salary. Don't you see that they robbed you in broad daylight? Don't you see that, in a quarter of a century, they built fortunes that others take whole generations to make? Don't you see that you have come to forgive the thieves, because they told you, haven't they, that everyone steals? Did you steal with them? Or do you think that, because they stole from the state, they are some kind of heroes, of outlaws? Think about it: that money would not have ended up in their villas, their cars, their watches and their suits, but in better hospitals for you and in schools for your grandchildren. In highways on which your children, gone abroad, would have come home to you faster. Have you ever asked your children why they left, why they are disgusted, why they don't want to come back? Or are you perhaps afraid that you too might bear a share of the blame for this tormented and plundered country?


Message 3511 not found; No message file for product=network, facility=TNS
Message 3512 not found; No message file for product=network, facility=TNS




ACFS-9109[oracleoks.ko driver failed to load] Error during GI Install When Running Root.sh (Doc ID 1590701.1)


disadvantages of ndmp

Many people prefer to avoid NDMP altogether. I may write a separate blog post on this, but here are some reasons:
  • NDMP is not storage agnostic. In general you cannot back up data and restore it to an array from another vendor, or sometimes even to another OS version.
  • NDMP requires admin privileges. That is no problem for backups of large systems, but it is inconvenient for restores, especially if a user wants to restore a single file.
  • The majority of backup software solutions do not index the files inside NDMP backups. In TSM, for example, you can store a Table of Contents (TOC) with the backup, but if you want to restore a single file you have to load the TOC into a temporary table to work with it, which can be very time consuming.
  • NDMP does not really support an incremental-forever strategy. That means you have to do a full backup periodically, which is a no-go for large file systems at petabyte scale that contain billions of files.

zfs write performance


1. Monitor disk space and memory resources.
Keep 20% free space in your Oracle Solaris ZFS storage pools.
The following command gives the current size, in bytes, of the memory used as the Oracle Solaris ZFS cache:
# kstat zfs::arcstats:size
Monitor the Oracle Solaris ZFS cache size with the above command and readjust the zfs_arc_max parameter when needed. If the vmstat command consistently shows a large amount of free memory, you can also increase the value of zfs_arc_max.
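The ARC check above can be wrapped in a small helper so the raw byte count is easier to compare with zfs_arc_max. This is a minimal sketch, assuming a Solaris host where kstat(1M) is available; the function name and the byte-to-megabyte conversion are illustrative, not part of the original tuning guide:

```shell
# Convert an ARC size in bytes (as reported by kstat) to megabytes.
arc_size_mb() {
  echo $(( $1 / 1024 / 1024 ))
}

# On a live Solaris system (illustrative; requires kstat(1M)):
#   bytes=$(kstat -p zfs::arcstats:size | awk '{print $2}')
#   echo "ARC size: $(arc_size_mb "$bytes") MB"
```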
2. Use Oracle Solaris ZFS quotas and reservations to keep free space in storage pools.
Oracle Solaris ZFS writing strategies change when the volume of data used goes over 80% of the storage pool capacity. This change can impact the performance of rewriting data files, which is Oracle's main write activity. Keeping more than 20% free space is suggested for an OLTP database. Consider setting quotas on the main pool's file systems to guarantee that 20% free space is available at all times.
For a data warehouse database, keep 20% free space in the storage pool as a general rule. Periodically copying data files reorganizes the file layout on disk and gives better full-scan response times. For a large data warehouse database, a specific rule can apply to read-only tablespaces: when the data loading phase has ended, the tablespace is set to read only. The data files of the tablespace can then be copied to a storage pool dedicated to read-only tablespaces, and for this type of usage more than 80% of the storage pool's capacity can be used.
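The 20%-free rule above can be enforced with a quota. A minimal sketch, assuming a hypothetical pool named dbpool with an oradata file system (both names are assumptions); the helper computes 80% of a given pool size so the remaining 20% stays free:

```shell
# Compute 80% of a pool size (in GB) to use as a quota, leaving 20% free.
quota_gb() {
  echo $(( $1 * 80 / 100 ))
}

# On a live system (illustrative; run as root, zfs(1M) required):
#   zfs set quota=$(quota_gb 1000)G dbpool/oradata
```

For a 1000 GB pool this sets an 800 GB quota on dbpool/oradata, so writes stop before the pool crosses the 80% threshold.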


Capacity-on-demand for exadata

  • Capacity-on-demand may only be used to decrease the number of active processor cores during initial installation. After initial configuration, the processor core count can only increase on a system, up to the maximum. It is the customer's responsibility to acquire the additional software licenses.
  • Reducing the number of active cores lowers the initial software licensing cost. It does not change the hardware cost.
  • The minimum number of processor cores that must be enabled is half of the physical cores on each processor. For Oracle Exadata Database Machine X4-2 systems, the minimum is 6 per processor (12 per database server). For Oracle Exadata Database Machine X4-8 Full Rack, the minimum is 8 per processor (64 per database server).
  • Additional cores are enabled in 2-core increments per server on Oracle Exadata Database Machine X4-2, and in 8-core increments on Oracle Exadata Database Machine X4-8 Full Rack. Database servers in the same system can enable different numbers of cores.
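The X4-2 rules above (minimum 12 active cores per database server, 2-core increments) can be sketched as a validity check. The 24-core maximum is not stated explicitly; it is inferred from "half of the physical cores" being 6 of 12 per processor, with two processors per server, so treat that bound as an assumption:

```shell
# Return success (0) if the requested active-core count per X4-2
# database server is valid: min 12, max 24 (assumed), steps of 2.
valid_x42_cores() {
  c=$1
  [ "$c" -ge 12 ] && [ "$c" -le 24 ] && [ $(( c % 2 )) -eq 0 ]
}
```

For example, valid_x42_cores 14 succeeds, while 13 (odd) and 10 (below the minimum) fail.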




"The new silicon will sit inside Oracle's just-announced Exadata Database Machine X4-8, which has been built for Oracle's "in-memory" database refresh.
This server is "specifically optimized for a new generation of workloads: database as a service (DBaaS) and database in-memory. With up to 12 terabytes (TB) of DRAM memory, the Exadata Database Machine X4-8 can consolidate hundreds of databases and can run massive databases entirely in-memory," Oracle says.
The machine can pack in 12TB of memory per rack, 672TB of disk storage and up to 44TB of PCIe-linked flash as well."