Guest post by Ran Gilboa, Technical Director
Understanding Server Flash and NetApp’s Flash Accel
Let’s start by discussing why you’d want to leverage server flash. There are two main reasons. The first is to reduce load on the storage or the network. The second is to accelerate performance for the application or virtual machine.
NetApp’s server cache solution is called Flash Accel. NetApp® Flash Accel software combines the capabilities of server-based flash technology and NetApp intelligent caching to extend Virtual Storage Tier (VST) to the server level. Flash Accel v1.2, scheduled for release on July 25th, adds support for vSphere 5.1 and iSCSI in the Windows guest OS, as well as for VMware DRS, vMotion, and VMware HA. It is available from the NetApp software download page.
Usually when you deploy a cache solution like Flash Cache, it applies to a specific node, so it covers all of the volumes on that node. Flash Pool is typically applied to an entire aggregate, so it covers all of the volumes sitting on that aggregate. The good thing about Flash Accel is that you can use it for a specific VM with specific performance characteristics.
Optimizing Flash Accel
Once you’ve decided to implement Flash Accel, there may be some questions that you have about leveraging this technology, like:
- How do I find good candidates (applications or VMs) for Flash Accel?
- How much throughput did the application or VM gain?
- How much load was reduced from the storage volume?
- Is Flash Accel being used efficiently?
These questions can be very difficult to answer without a storage resource management solution like OnCommand Insight. Insight provides visibility from the VM through the ESX host and into the storage. It monitors the configuration and performance of every element in the infrastructure and provides planning capabilities on top of that.
Finding Good Candidates
Step 1: Review volume performance characteristics
A good way to identify candidates for Flash Accel is to review the performance characteristics of a given volume. The graph below shows that Volume 114 has high latency and high user IOPS.
Step 2: Review VM performance characteristics on the ESX (Write vs. Read)
We would like to reduce the load on that volume, so next we need to determine whether Flash Accel is a good fit. Let’s look at whether the performance characteristics of the VMs are affecting that specific volume. The graph below shows the breakdown between reads and writes.
Right now, Flash Accel is optimized for read operations. The graph shows all of the IOPS contributed by the VMs sitting on that specific volume. From the read operations, you can see that VM84 (the blue one) generates most of the load, and VM81 (the cream one) is the second-largest contributor. Flash Accel will be applicable to those two VMs, which generate the majority of the load.
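The selection logic above can be sketched in a few lines of Python. This is a minimal illustration, not Insight's actual implementation: the VM names other than VM84 and VM81, the IOPS figures, and the 80% coverage threshold are all hypothetical stand-ins for the data you would read off the graph.

```python
# Hypothetical per-VM IOPS samples for the VMs on Volume 114; the real
# numbers would come from a monitoring tool such as OnCommand Insight.
vm_iops = {
    "VM84": {"read": 3100, "write": 400},
    "VM81": {"read": 1200, "write": 300},
    "VM90": {"read": 250,  "write": 500},
}

def flash_accel_candidates(vm_iops, read_share_threshold=0.8):
    """Rank VMs by read IOPS and keep the smallest set that accounts for
    the bulk of the volume's read load (Flash Accel is read-optimized)."""
    total_reads = sum(v["read"] for v in vm_iops.values())
    ranked = sorted(vm_iops.items(), key=lambda kv: kv[1]["read"], reverse=True)
    candidates, covered = [], 0
    for name, stats in ranked:
        if covered / total_reads >= read_share_threshold:
            break  # the VMs picked so far already cover the threshold
        candidates.append(name)
        covered += stats["read"]
    return candidates

print(flash_accel_candidates(vm_iops))  # ['VM84', 'VM81']
```

With these sample numbers, VM84 and VM81 together cover about 94% of the read load, so they come out as the candidates, matching what the graph shows.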
Monitoring the Load: Determining How Many IOPS the VM Gained
Now that we have applied Flash Accel to the VMs, the next step is to see if it had the intended effect. The graph below shows that once the solution was deployed, most of the IOPS came from the Flash Accel–enabled SSD card hosted on the ESX server; the IOPS going to the storage dropped significantly and are now served by the cache. We can also see that the VM went from 5,000 IOPS to 23,000 IOPS, almost five times as many. It took 34 minutes for the cache to warm up, but that’s expected. Eventually almost all of the data, 99.99% of it, is served by the cache (in blue) and almost nothing comes from the storage.
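The arithmetic behind those two figures, the speedup and the cache hit ratio, is simple enough to sketch. The ~25 residual storage IOPS below is a hypothetical value standing in for the "almost nothing" still hitting the volume after warm-up; the 5,000 and 23,000 figures come from the graph.

```python
def cache_gain(iops_before, iops_after, storage_iops_after):
    """Summarize the effect of enabling a server-side cache:
    total IOPS speedup and the fraction of IOPS served from cache."""
    speedup = iops_after / iops_before
    hit_ratio = (iops_after - storage_iops_after) / iops_after
    return speedup, hit_ratio

# ~5,000 IOPS before Flash Accel, ~23,000 after, with an assumed
# ~25 IOPS still reaching the storage volume once the cache warmed up.
speedup, hit_ratio = cache_gain(5_000, 23_000, 25)
print(f"{speedup:.1f}x speedup, {hit_ratio:.1%} served from cache")
# 4.6x speedup, 99.9% served from cache
```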
Monitoring Volume Performance: Reducing Load on the Storage
Now let’s look at the performance of the volume. This graph shows the performance of the VMDK that accesses the storage: VMDK IOPS dropped from ~5,000 to ~0.
The graph below shows the corresponding performance on the storage side, and the correlation is clear: most of the load is now served out of the cache rather than from the storage, and the storage volume IOPS dropped from ~5,000 to ~0. Instead of buying more disks or adding a more complex solution on the back-end network, the load is now served locally on the ESX server.
Assuring Efficient Use of Flash Accel
Finally, we can look at the flash card to see whether it’s being used efficiently. This graph shows all of the VMs on this specific ESX server. The green arrow on the icon indicates that there are only two running VMs, and both have cache capacity allocated.
In the VMDK Performance Distribution graph below, we had already allocated a portion of Flash Accel capacity to one specific VM. When we enabled it for the blue VM, the performance of the green one didn’t change, so the card can service both VMs without any problems.
Powerful Monitoring and Optimization
When you decide to leverage Flash Accel, or any flash technology, you want to make sure you get the most out of it. As I hope this post has shown, OnCommand Insight is a powerful storage resource management solution for leveraging flash technologies. It provides guidance for finding good candidates for the technology, determining how application and storage performance have changed, and verifying that the technology is being used efficiently.
You can learn more about OnCommand Insight by watching some short videos on the OnCommand Insight Community page.