
NetApp, Exadata & Chain Guns

Posted by gerren in Neil's Blog on Jan 23, 2012 12:58:08 PM

Hello friends!  I'm overdue for a blog post and thought I'd share something that's been on my mind the last few weeks!


Oracle is increasingly positioning Exadata for OLTP environments including their own suite of business applications. 


Let’s consider for a moment what Exadata was designed to do.  In a nutshell, it was designed to scan large database tables in their entirety as efficiently as technologically possible.  A lot of creative engineering went into the product to make it competitive with similar solutions such as Teradata or Netezza.  The data path from disk to network was purpose-built.


Is this the kind of I/O your typical OLTP-oriented MRP or ERP system performs?  I’ve analyzed hundreds of AWR workload reports across every major industry.  To say that huge sequential reads do not meet this profile is an understatement.  These business environments tend to exhibit I/O profiles of ninety percent random read activity.  In fact, it’s not unusual to see peaks of 99% random reads!  No, this is not something Exadata was designed to do.  I guess you could always bring a 25mm chain gun with depleted uranium rounds to a paintball tournament to ensure superiority, but it might not be the right tool for the job! { insert stereotypical smile here }
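To make that profile concrete, here is a minimal sketch (plain Python, not any NetApp or Oracle tool) of the arithmetic behind the claim: the random-read percentage computed from AWR-style physical read counters.  The counter values below are invented for illustration.

```python
# Rough sketch: classify a workload's read mix from AWR-style counters.
# Single-block reads ("db file sequential read" style) are random I/O;
# multi-block reads ("db file scattered read" style) are sequential scans.
# The sample numbers are made up to illustrate an OLTP-leaning ERP system.

def random_read_pct(single_block_reads: int, multi_block_reads: int) -> float:
    """Percent of read requests that are single-block (random) reads."""
    total = single_block_reads + multi_block_reads
    return 100.0 * single_block_reads / total if total else 0.0

pct = random_read_pct(single_block_reads=9_450_000, multi_block_reads=550_000)
print(f"random reads: {pct:.1f}%")  # prints: random reads: 94.5%
```

A workload like this spends almost all of its I/O on small random reads, which is exactly the case where Exadata's full-table-scan offload has little to offer.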


A recent customer experience confirmed this for me.  Their aging MRP environment took hours for a critical task to run to completion.  The same workload was tested on updated NetApp storage as well as Exadata.  You’d expect the difference to be significant, wouldn’t you?  The difference was fifteen minutes!  The cost differential was more than one million USD.  Additionally, the hardware used in the NetApp test benchmarked roughly 30% slower than current-generation equipment.  Now, for the interesting part: the disks used in the NetApp test were Serial ATA devices!  I sincerely look forward to measuring the results with SAS and SSD disks!  { insert another stereotypical smile here }


Why is NetApp a better fit?


1. Backup and Recovery Service Levels

NetApp is built on technology that enables backup and recovery in minutes.  Most customer recovery scenarios with five-terabyte databases complete in less than ten minutes.  Backups can be verified by creating a thin clone from the backup, which means a backup can be tested in ten minutes.  No other vendor provides this capability.


2. Storage built on open standards: NFS, FC, ALUA


Oracle developed iDB, a proprietary protocol that is incompatible with current storage standards.  NetApp was the first to bring FCoE to market, and continues to lead in industry-standard protocols and multipath solutions.


3. Data Protection with the lowest spindle count possible


Our competitors rely on mirroring your data sets, doubling or tripling spindle count, power, floor space, and cooling requirements.  NetApp provides uninterrupted data availability even through a double disk failure.  No other vendor can do that with as few disks as NetApp.


4. Multi-Purpose Unified Storage


Other vendors try to push you into purpose-built solutions designed for high-end workloads.  NetApp provides one platform that supports all your storage needs, eliminating the need for training on limited-life, proprietary technologies.


5. Storage Efficiency with Cloning

No other approach provides an application-aware infrastructure that generates thin clones of your databases with as small a disk footprint as NetApp's, and none can clone as quickly.  Both SAP and Oracle use NetApp to accelerate their development lifecycles and reduce infrastructure cost!


6. Performance at Lower Cost

NetApp’s development of FlashCache was rapidly successful because our system architecture enabled onboard extended caching.  Other vendors reacted by incorporating solid-state disk drives into their solutions to keep pace.  They also promoted “Information Lifecycle Management” because they needed the ability to move data across storage tiers.  Oracle added "Smart Flash Cache," which extends the SGA buffer cache onto SSD.  This costs additional CPU cycles and can increase your Oracle licensing fees.  With NetApp FlashCache, there is no need to add all this complexity to make up for vendor product holes.
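For reference, here is the shape of Oracle's side of this feature: a minimal, hypothetical init.ora fragment enabling Database Smart Flash Cache in 11g Release 2.  The parameter names are the standard ones; the device path and size are illustrative assumptions, not recommendations.

```
# Hypothetical init.ora fragment: Oracle 11gR2 Database Smart Flash Cache.
# The device path and size below are invented for illustration.
db_flash_cache_file = '/dev/flash_device'   # SSD device or file backing the flash cache
db_flash_cache_size = 64G                   # capacity of the flash cache tier
```

Note that managing this second cache tier is work the database server does on its own CPUs, which is the extra cost referred to above.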



So then, what's the point?  If you have trillions of rows of data to sort through, look at solutions that do that well.  Oh, I almost forgot: NetApp has a high-profile European customer doing exactly that today.


Seriously though.  If you need the greatest I/O efficiency for your large warehouse, and you expect integration from the disk all the way to the desktop, then have a look at Exadata from my friends at Oracle.  For everything else...






As a follow-up to this post, I need to update you on a couple things. 


First, the difference between the Exadata and NetApp results was actually seven minutes!  Do you think the business owners would care that the process took seven minutes longer, especially considering they were sleeping while the job was running?  Would they pay $1M USD for those seven additional minutes?  We introduced solid state disks into the process, and the time was reduced by four minutes.  So now, the difference between NetApp and Exadata is four minutes!  The process is CPU intensive, virtually burying the database server.  For this reason, we were not able to take full advantage of the reduced disk latency of the faster spindles.  Now, imagine what will happen when the environment is brought up to production-class practices: turning on Jumbo Frames, load balancing across a second NIC, and enabling Direct NFS!  Again, stay tuned!  I truly believe that given the opportunity, we will equal or surpass the performance of the Exadata system that was built in a sterile lab under ideal conditions!
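As a sketch of one of those production steps, a minimal /etc/oranfstab entry enabling Oracle Direct NFS over two network paths might look like the following.  The server name, IP addresses, and export paths are invented for illustration; only the keyword syntax follows Oracle's Direct NFS client configuration.

```
# Hypothetical /etc/oranfstab entry for the Direct NFS client.
# Two local/path pairs let dNFS load-balance across a second NIC.
server: netapp-filer1
local: 192.0.2.10
path: 192.0.2.20
local: 192.0.2.11
path: 192.0.2.21
export: /vol/oradata mount: /u02/oradata
```

Jumbo Frames would be enabled separately on the host and switch interfaces (an MTU of 9000 on both ends) so that large NFS transfers need fewer packets per I/O.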





