There’s no doubt that we’re in the midst of a fundamental storage technology transition from disk to flash. But when people talk about flash, the conversation often entails a dizzying assortment of topics, ranging from all-flash arrays to hybrid arrays to server-side flash. In many ways, it’s similar to the dialects of the English language: everyone can understand everyone else, but each dialect sounds vastly different around the world. So let’s step back and look at the broader flash conversation.
Oftentimes, the conversation starts with workloads: What applications are customers trying to service? Every application has two predominant characteristics: capacity and performance. These two metrics define how best to design the underlying storage to meet the application service level at the lowest cost possible.
There’s also an ever-widening gap between CPU performance and disk performance. The rule of thumb for CPUs follows “Moore’s Law”: every 18 months or so, we get a big boost in CPU performance for roughly what the previous generation cost. That’s a pretty good deal if you ask me!
However, this rapid pace of improvements isn’t the same when it comes to traditional spinning disk. In the storage world, one of the key measurements of rotating disk performance (IOPS per spindle) has seen limited gains over time.
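To see why IOPS per spindle is so stubborn, it helps to do the back-of-the-envelope math: a random I/O on a rotating disk pays a seek plus, on average, half a revolution of rotational latency. The sketch below illustrates this; the 7,200 RPM speed and ~8 ms average seek time are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope estimate of random IOPS per spindle.
# Drive parameters here are illustrative assumptions only.

def hdd_random_iops(rpm: float, avg_seek_ms: float) -> float:
    """Estimate random IOPS from average seek time plus rotational latency."""
    # Average rotational latency: half a revolution, in milliseconds.
    rotational_latency_ms = (60_000 / rpm) / 2
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1_000 / service_time_ms

# A typical 7,200 RPM drive with ~8 ms average seek comes out
# in the low hundreds of IOPS at best:
print(f"~{hdd_random_iops(rpm=7_200, avg_seek_ms=8.0):.0f} random IOPS per spindle")
```

Because both seek time and spindle speed are limited by mechanics, this number has barely moved in years, while flash, with no moving parts, delivers orders of magnitude more random IOPS per device.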
If you think about it, hard drives deliver capacity quite cheaply, but they’re stingy when it comes to performance. Flash, on the other hand, delivers performance very economically, but at a rather expensive price point for capacity.
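The economics become obvious once you express each device in both currencies: dollars per gigabyte and dollars per IOPS. A minimal sketch, using made-up prices and performance figures purely for illustration:

```python
# Compare a disk and a flash device on $/GB and $/IOPS.
# All prices and performance numbers below are hypothetical.

def cost_metrics(price_usd: float, capacity_gb: float, iops: float) -> dict:
    """Return the two cost metrics that drive storage design decisions."""
    return {
        "usd_per_gb": price_usd / capacity_gb,
        "usd_per_iops": price_usd / iops,
    }

hdd = cost_metrics(price_usd=150, capacity_gb=4_000, iops=150)    # hypothetical HDD
ssd = cost_metrics(price_usd=400, capacity_gb=1_000, iops=50_000) # hypothetical SSD

# HDD wins on $/GB; SSD wins on $/IOPS by orders of magnitude.
print(hdd)  # {'usd_per_gb': 0.0375, 'usd_per_iops': 1.0}
print(ssd)  # {'usd_per_gb': 0.4, 'usd_per_iops': 0.008}
```

Whichever metric dominates an application's service level tends to dictate which medium should hold its data.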
So the question becomes: How do you combine flash performance with hard drive capacity in places where both performance and capacity are required? But here’s the catch: not all applications are created equal.
If it’s a capacity-oriented application, such as a multi-petabyte content repository where performance is not the primary driver, hard drive-based systems will remain the most cost-effective design approach.
Then there are workloads that benefit from combining flash with spinning disks. This is typically referred to as a “hybrid” storage system. Using flash to temporarily store hot data still makes sense for the vast majority of workloads, whether you do this as a tier or as cache (write-behind or write-through cache). The economics of flash and disk today also make this one of the best ways of applying solid state storage to business problems -- especially in shared storage environments where IT efficiency and reliability are the primary concerns. NetApp has been accelerating hard disks since 2009 with Flash Cache, and more recently, enabling automated storage tiering with Flash Pools. In fact, the attach rate is upwards of 60% for these hybrid array technologies.
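The tier-versus-cache distinction above comes down to when a write reaches the capacity tier. A minimal sketch of the two write policies, assuming a dict-backed "flash" cache in front of a dict-backed "disk" store (the class and its names are hypothetical, for illustration only):

```python
# Sketch of a hybrid store: flash in front of disk, with either
# write-through (persist immediately) or write-behind (persist later).

class HybridStore:
    def __init__(self, write_through: bool):
        self.cache = {}          # fast tier (flash)
        self.disk = {}           # capacity tier (spinning disk)
        self.write_through = write_through
        self.dirty = set()       # blocks not yet flushed (write-behind only)

    def write(self, key, value):
        self.cache[key] = value
        if self.write_through:
            self.disk[key] = value   # synchronously persisted to disk
        else:
            self.dirty.add(key)      # deferred; destaged by flush()

    def flush(self):
        """Destage dirty blocks from flash to disk (write-behind)."""
        for key in self.dirty:
            self.disk[key] = self.cache[key]
        self.dirty.clear()

    def read(self, key):
        if key in self.cache:        # cache hit: served from flash
            return self.cache[key]
        value = self.disk[key]       # miss: fetch from disk...
        self.cache[key] = value      # ...and promote hot data to flash
        return value
```

Write-through keeps disk and cache consistent at the cost of disk latency on every write; write-behind acknowledges writes at flash speed but must protect the dirty data until it is destaged, which is why real arrays pair it with battery-backed or mirrored cache.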
Finally, the most recent debate in the industry is the role of all-flash arrays. Many high-end workloads need extreme performance, whether it’s IOPS or latency, with capacity being an afterthought. The economics here typically mean these applications are best served with all-flash arrays. FlashRay is a clean-sheet storage operating system that builds on what NetApp has learned from 20 years of tackling data management challenges. It allows customers to increase the amount of information they can process at all levels of the infrastructure.
Clearly flash is not a new idea for NetApp; in fact, NetApp has actually shipped more than 35 PB of flash since 2009!
But hardware is hardware. For customers who analyze data in real time to make intelligent decisions that give them a competitive edge, what differentiates any type of flash array is not only performance, but enterprise-class support along with availability, reliability, and predictability -- which ultimately lowers operational costs, increases revenue, and reduces risk.
For more information on how Avnet can enable you and your sales team to develop sales and marketing campaigns that generate new incremental business and higher revenue streams, visit: http://www.ats.avnet.com/na/en-us/suppliers/netapp/Pages/Overview.aspx