how to use HCC with ZFS


when to change batteries on exadata


choose a 4k blocksize for a database on a flash drive


oracle how to check if index is in unusable state
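One quick way to check, assuming SYSDBA access on the database server ( a minimal sketch; narrow the WHERE clause to your schema as needed ):

```shell
# List indexes and index partitions marked UNUSABLE
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 200 PAGESIZE 100
SELECT owner, index_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE';
SELECT index_owner, index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';
EOF
```

An unusable index can then be fixed with ALTER INDEX ... REBUILD.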




how to disable IPv6 in Linux
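A minimal sketch for a RHEL-style system ( these sysctl keys are standard on 2.6-era and later kernels; persist them in /etc/sysctl.conf to survive reboots ):

```shell
# Disable IPv6 on all current and future interfaces at runtime
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
```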


price for purestorage

Straightforward $/GB usable pricing

which filesystem to use with HP IO accelerator



exadata - how it's made



Bug high 'direct path read'

Bug 12530276 - high 'direct path read' waits when buffer pools are not setup [ID 12530276.8]

KEEP BUFFER POOL Does Not Work for Large Objects on 11g

KEEP BUFFER POOL Does Not Work for Large Objects on 11g [ID 1081553.1]

what is an iDB protocol

Source : http://www.dbaleet.org 

top related events

a great collection of scripts : http://gavinsoorma.com/2012/11/ash-and-awr-performance-tuning-scripts/

the impact of jumbo frames in RAC interconnect

A very good article by Steven Lee:
 http://www.dbaleet.org/about_rac-interconnectjumbo-frame  ( use google translate ):

The real IOPS for exadata

From the graph you can see that the HC and HP IOPS figures are calculated separately. To illustrate, here is the same data as a simple table:

Exadata Rack   Disk Type   Disk Count      Disk Model          IOPS
FULL (1/1)     HP          14 * 12 = 168   15000rpm SAS 600G   50000
FULL (1/1)     HC          168             7200rpm SAS 3T      28000
HALF (1/2)     HP          7 * 12 = 84     15000rpm SAS 600G   25000
HALF (1/2)     HC          84              7200rpm SAS 3T      14000
QUAR (1/4)     HP          3 * 12 = 36     15000rpm SAS 600G   10800
QUAR (1/4)     HC          36              7200rpm SAS 3T      6000

A quick look reveals the pattern: the total IOPS quoted in the datasheet is simply the per-disk figure multiplied up across the rack.
A quarter rack with HC ( high capacity ) disks delivers only 6000 IOPS

Exadata's hard disk suppliers and models are as follows; the reader can google the detailed parameters ( note: the 2T disk has been discontinued ):
600G 15000rpm HP disk: Seagate ST3600057SS, Hitachi HUS156060VLS600
3T 7200rpm HC disk: Seagate ST33000650SS
2T 7200rpm HC disk: Seagate ST32000444SS, Hitachi HUS723020ALS640

Also, exadata's disk performance is affected by temperature (http://www.dbaleet.org/exadata_how_to_caculate_iops) :
Disk I/O performance is actually tested during Exadata installation, in Step 9 - INFO: Step 9 RunCalibrate. This step measures the IOPS and MBPS of the Exadata cell disks; if a disk's IOPS falls below the required threshold, the installation raises an error. A very common case: at room temperatures below 20 degrees Celsius, the IOPS of Seagate disks becomes poor - see Bug 9476044: CALIBRATE IOPS SUBSTANDARD. This problem is a "feature" of the Seagate SAS disks; later Exadata systems using Hitachi disks did not show it. Since Exadata has only these two disk suppliers, in general it is not recommended to point the server-room air conditioning directly at the Exadata rack.

exadata storage server

The server behind an exadata storage cell is a Sun Fire X4270 M2 server.



the register of births ( Romanian: "mercurialul nasterilor" )


direct reads problem



the new Logan ( German: "der neue logan" )


how to enable DB Flash Cache in RHEL/Centos

DB Flash Cache Feature

- Supported on Solaris and OEL ( Oracle Enterprise Linux )
- Tip for Red Hat testing: you actually just need the 'enterprise-release' package from OEL to replace the 'redhat-release' file ( in /etc )
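A hedged sketch of the tip above; the release string below is an assumption for OEL 5 and may need to match exactly whatever string your Oracle version checks for:

```shell
# Back up the original file, then masquerade as Oracle Enterprise Linux
cp /etc/redhat-release /etc/redhat-release.bak
echo "Enterprise Linux Enterprise Linux Server release 5 (Carthage)" > /etc/redhat-release
```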

From Testing Storage for Oracle RAC 11g with NAS, ASM and DB Smart Flash Cache

how to monitor OS usage in RAC

A query the authors find useful for monitoring OS resource usage in oracle, and especially in RAC, via v$ views ( from storage testing rac 11g ukoug lc dw )
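That query isn't reproduced in these notes; a minimal sketch in the same spirit, reading per-instance OS statistics from gv$osstat:

```shell
sqlplus -s / as sysdba <<'EOF'
-- CPU count, run-queue load and physical memory per RAC instance
SELECT inst_id, stat_name, value
  FROM gv$osstat
 WHERE stat_name IN ('NUM_CPUS', 'LOAD', 'PHYSICAL_MEMORY_BYTES')
 ORDER BY inst_id, stat_name;
EOF
```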


Don’t use RAC unless you need it


the fastest ram-flash device

If you want to make your redo logs faster, you can put them on this device.
The speed is about 4GB/s, with latency around 0.5 microseconds.

An ArxCis-NV DRAM module with 2 GB capacity can be purchased for $300.


performance with ODA - oracle database appliance



To summarize, you can expect ODA to easily deliver 3000 IOPS, read or write, with an average IO response time of up to 10ms - and almost double that if you can afford the average random IO response time to rise to 20 ms.
We can also conclude that write activity has minimal impact on throughput, as you would expect from a non-RAID5 system.

ODA is configured to present disks as JBOD, so that ASM is in charge of mirroring and striping.

why ODA use SSD for redo-logs

a very clever explanation http://www.pythian.com/news/33245/insiders-guide-to-oda-performance/

Another question during the webinar was how ODA storage differs from Exadata storage and why ODA can’t use storage cache.
If you watched the webinar, you already know that ODA's storage is simple and elegant: 20 SAS disks and 4 SSDs, each with two ports, connected to the server nodes by two HBAs and two extenders per node. This is about as direct as shared storage can be, which accounts in part for the performance we measured. No more misconfigured SAN switches. The catch is that because nothing is shared except the disks themselves, there is no place to locate a shared cache. Due to RAC, unshared cache (for example on the HBAs) can cause corruptions and cannot be used. This means that the storage system can easily get saturated, causing severe performance issues, especially for writes to redo logs. This is part of the reason the redo logs are located on SSD. We suggested additional methods to avoid saturating the disks in the webinar.

a filesystem based on a flash disk

In case you have a flash-based PCI Express device in your system, it's probably better to create a filesystem with a 4k block size ( instead of the default 512 bytes for disk systems )

Use parted post-installation to make the partition, and then run the following ( if your partition is called, for instance, /dev/sda2 ):
mkfs -t ext3 -b 4096 /dev/sda2
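If the device has no partition table yet, parted can create one first ( /dev/fioa is a hypothetical device name; substitute your own ):

```shell
# Label the device, create one partition spanning it, format with 4k blocks
parted /dev/fioa --script mklabel gpt mkpart primary 0% 100%
mkfs -t ext3 -b 4096 /dev/fioa1
```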


io accelerator

HP 640GB IO Accelerator BK836A

 This storage device is targeted for markets & applications requiring high transaction rates and real-time data access that will benefit from application performance enhancement. The HP IO Accelerator brings high random I/O performance and low latency access to storage, with the reliability of solid state technology and its low power and cooling requirements. This product, based on NAND flash technology is available in a mezzanine card form factor for HP BladeSystem c-Class. 
As an I/O card, the IO Accelerator is not a typical SSD; rather it is attached directly to the server's PCI Express fabric to offer extremely low latency and high bandwidth. The card is also designed to offer high IOPs (I/O Operations Per Second) and nearly symmetric read/write performance. The IO Accelerator uses a dedicated PCI Express x4 link with nearly 1.3GB/s of usable bandwidth
The HP IO Accelerator's driver and firmware provide a block-storage interface to the operating system that can easily be used in the place of legacy disk storage. 

HP IO Accelerator Generation 1 devices include:
• AJ876A
• AJ877A
• AJ878A
• AJ878B
• BK836A
HP IO Accelerator Generation 2 devices include:
• QK761A
• QK762A
• QK763A

The Remote Power Cut Module provides a higher level of protection in the event of a catastrophic power loss (for example, a user accidentally pulls the wrong server blade out of the slot). The Remote Power Cut Module ensures in-flight  writes are completed to NAND flash in these catastrophic scenarios. Write performance will degrade without the remote power cut module. HP recommends attaching the remote power cut module for the AJ878B and BK836A SKUs.

IO and Read/Write Performance
HP IO Accelerator for BladeSystem c-Class offers superior IO performance (up to 530,000 IOPs), and high read (up to 1.5 GB/s) and write (up to 1.3 GB/s) performance with MLC models.
For AJ878B and BK836A Models
                            AJ878B                   BK836A
NAND Type                   MLC (Multi Level Cell)   MLC (Multi Level Cell)
Read Bandwidth (64kB)       735 MB/s                 750 MB/s
Write Bandwidth (64kB)      510 MB/s                 550 MB/s
Read IOPS (512 Byte)        100,000                  93,000
Write IOPS (512 Byte)       141,000                  145,000
Mixed IOPS (75/25 r/w)      67,000                   74,000
Access Latency (512 Byte)   30 µs                    30 µs
Bus Interface               PCI-Express x4           PCI-Express x4
For QK761A, QK762A and QK763A Models
                              365GB                    785GB                    1.2 TB
NAND Type                     MLC (Multi Level Cell)   MLC (Multi Level Cell)   MLC (Multi Level Cell)
Read Bandwidth (1MB)          900 MB/s                 1.5 GB/s                 1.5 GB/s
Write Bandwidth (1MB)         575 MB/s                 1.1 GB/s                 1.3 GB/s
Read IOPS (Seq. 512 Byte)     415,000                  443,000                  443,000
Write IOPS (Seq. 512 Byte)    530,000                  530,000                  530,000
Read IOPS (Rand. 512 Byte)    136,000                  141,000                  143,000
Write IOPS (Rand. 512 Byte)   475,000                  475,000                  475,000
Read Access Latency           68 µs                    68 µs                    68 µs
Write Access Latency          15 µs                    15 µs                    15 µs
Bus Interface                 PCI-Express Gen2 x4      PCI-Express Gen2 x4      PCI-Express Gen2 x4

RAM Requirements
The HP IO Accelerator drivers use RAM for fast access to the storage metadata. The amount of RAM required is a fraction of the actual storage in use, so it is important to ensure that the driver has free RAM available as storage usage increases. The amount of free RAM required by the driver is directly related to the size of the blocks used when writing to the drive: when smaller blocks are used, RAM usage increases. As a guideline, based on the capacity of the IO Accelerator and the average write block size:

average block size 512 bytes ---> minimum system RAM requirement for the 640GB Mezz IO Accelerator: 23 GB
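The 23 GB figure can be sanity-checked with quick arithmetic ( decimal gigabytes assumed; the per-block metadata cost is derived here, not taken from HP's documentation ):

```shell
# 640 GB of storage divided into 512-byte blocks
capacity_bytes=$((640 * 1000 * 1000 * 1000))
block_bytes=512
blocks=$((capacity_bytes / block_bytes))          # 1,250,000,000 blocks
# RAM the driver needs at that capacity, per the guideline above
ram_bytes=$((23 * 1000 * 1000 * 1000))
per_block=$((ram_bytes / blocks))                 # implied bytes of metadata per block
echo "$blocks blocks, about $per_block bytes of metadata per block"
```

So the guideline works out to roughly 18 bytes of driver metadata per 512-byte block, which is why larger average write sizes need far less RAM.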

Hugepages are Not used by Database Buffer Cache if you have use_indirect_data_buffers=true

Hugepages are Not used by Database Buffer Cache [ID 829850.1]

why you should implement hugepages on linux

From the Metalink note below:

4.  Implement HugePages on Linux Environments

Applicable to Platforms:  ALL LINUX 64-Bit PLATFORMS

Why?:  Implementing HugePages greatly improves the performance of the kernel on Linux environments. This is especially true for systems with more memory. Generally speaking any system with more than 12GB of RAM is a good candidate for hugepages. The more RAM there is in the system, the more your system will benefit by having hugepages enabled. This is because the amount of work the kernel must do to map and maintain the page tables for this amount of memory increases with more memory in the system. Enabling hugepages greatly reduces the # of pages the kernel must manage, and makes the system much more efficient. If hugepages is NOT enabled, experience has shown that it is very common for the kernel to preempt the critical Oracle Clusterware or Real Application Clusters daemons, leading to instance evictions or node evictions.
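The advice above can be checked and sized with a couple of commands ( assumes the common 2 MB huge page size on x86_64; verify yours in /proc/meminfo ):

```shell
# Current HugePages configuration and usage
grep -i hugepages /proc/meminfo || true

# Rough vm.nr_hugepages for a 10 GB SGA with 2 MB pages
sga_mb=10240
hugepage_mb=2
echo "vm.nr_hugepages = $((sga_mb / hugepage_mb))"
# apply with: sysctl -w vm.nr_hugepages=5120  ( and persist in /etc/sysctl.conf )
```

Oracle also ships a hugepages_settings.sh script via My Oracle Support that sizes this from the actual shared memory segments rather than the nominal SGA size.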


how to install oracle on violin memory array

a very good article http://flashdba.com/install-cookbooks/ol5u7-11-2-0-3-single-instance/

very well explained