
What’s New in Flash Accel 1.2

Posted by NETAPP_360 in NetApp 360 Blog on Jul 26, 2013 9:32:51 AM

Originally posted on SANbytes.


Earlier this year we released Flash Accel 1.1, which extended Data ONTAP capabilities to the server by creating a caching space that complements the NetApp Virtual Storage Tier (VST). This allowed us to use flash devices more effectively and to avoid potential data protection problems without creating isolated silos of data. In addition, Flash Accel provides:

 

  • Improved efficiency of back-end storage by offloading reads from the storage
  • Enhanced application performance
  • Data coherency: data in the cache is coherent with data in the back-end storage
  • Hardware-agnostic support (SSD or PCI-E flash)
  • Persistent cache across reboots
  • Data protection: all writes are performed to the back-end storage (see the sketch after this list)
  • Data ONTAP mode agnostic: supported with 7-Mode and Clustered Data ONTAP
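
To make the data protection and coherency bullets concrete, here is a minimal sketch of a write-through read cache in Python. It is purely illustrative and is not Flash Accel code: every write goes straight to the back-end storage, and the cached copy is refreshed on writes so reads never return stale data.

```python
# Illustrative write-through read cache (not Flash Accel code).
# Reads are served from server-side flash when possible; every write
# goes to the back-end storage, so the array always holds the
# authoritative copy of the data.

class WriteThroughCache:
    def __init__(self, backend):
        self.backend = backend      # back-end storage (dict of block -> data)
        self.cache = {}             # server-side flash cache

    def read(self, block):
        if block in self.cache:     # cache hit: no I/O sent to the array
            return self.cache[block]
        data = self.backend[block]  # cache miss: read from the array...
        self.cache[block] = data    # ...and populate the cache
        return data

    def write(self, block, data):
        self.backend[block] = data  # data protection: the array gets every write
        self.cache[block] = data    # coherency: cached copy matches the array


backend = {1: "old"}
c = WriteThroughCache(backend)
c.write(1, "new")
assert backend[1] == "new"          # the write landed on the back-end storage
assert c.read(1) == "new"           # subsequent reads are coherent cache hits
```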

 

What’s new with Flash Accel 1.2?

  • Support for VMware vSphere 5.1
  • VMs enabled with Flash Accel can now participate in vMotion and VMware HA events
  • Support for iSCSI-enabled LUNs in the guest VM
  • AutoSupport (ASUP) integration
  • Enhancements to the FMC UI
  • Support for Fusion-io ioDrive PCI-E cards
  • Management of Flash Accel components using VSC (Virtual Storage Console) 4.2
  • Import and export of FMC configurations, so that you can upgrade from an older FMC to a new one with minimal changes
  • Export of console logs for troubleshooting

 

Now let me brief you on how Flash Accel works and some of the technical details. To use Flash Accel, a VIB needs to be installed on the ESXi host. The flash device can then be carved up into multiple cache spaces, which are presented to the Windows guest OS (Linux support is coming in the future). You can have only one cache space per VM, but you can enable caching for up to 32 VMs (a short sketch of this allocation model follows the component list below). The guest OS must have an agent installed to leverage the flash-based cache space. All of the Flash Accel configuration work is done via the new Flash Accel Management Console, which is used for installation, provisioning, and assigning cache to VMs. This console is also available as a plug-in to VSC 4.2, which runs in VMware vCenter. If you can't decide which management console to use, here is a great knowledge base article that describes the benefits of VSC and FMC for Flash Accel enabled VMs. Flash Accel consists of three components:

 

  • NetApp Flash Accel Management Console (FMC). Configuration and management of Flash Accel are accomplished using a virtual appliance, which runs on vSphere.
  • Flash Accel host agent (installed on the ESX host). The host agent is installed on an ESX host and establishes control over locally attached devices (such as SSDs) and storage array paths according to the configuration you define using FMC. The host agent creates logical devices and presents them to the ESX storage stack as SCSI devices. Logical devices created on multiple ESX hosts with the same WWN allow ESX to treat a device as a shared device so that VMs using these devices can participate in vMotion® and VMware HA operations.
  • Flash Accel agent in the Windows VM. This is a user-level agent implemented for Windows 2008 R2 guest VMs only. The agent is required for enabling/disabling the cache on a VM, managing the cache through PowerShell cmdlets, communicating performance metrics to FMC, and integrating seamlessly with the SnapDrive and SnapManager products.
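
To make the cache-space model above concrete, here is a rough Python sketch (not Flash Accel code) of carving a flash device into per-VM cache spaces. The one-cache-space-per-VM rule and the 32-VM limit come from the description above; the device capacity and VM names are made up.

```python
# Illustrative model of carving a flash device into per-VM cache spaces.
# Not Flash Accel code; the limits mirror the rules described above:
# one cache space per VM, and caching enabled for at most 32 VMs.

MAX_CACHED_VMS = 32

class FlashDevice:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.cache_spaces = {}          # vm_name -> size_gb

    def free_gb(self):
        return self.capacity_gb - sum(self.cache_spaces.values())

    def add_cache_space(self, vm_name, size_gb):
        if vm_name in self.cache_spaces:
            raise ValueError(f"{vm_name} already has a cache space (only one per VM)")
        if len(self.cache_spaces) >= MAX_CACHED_VMS:
            raise ValueError("caching is limited to 32 VMs")
        if size_gb > self.free_gb():
            raise ValueError("not enough free space on the flash device")
        self.cache_spaces[vm_name] = size_gb


# Example: a hypothetical 400 GB SSD split between two Windows VMs.
ssd = FlashDevice(capacity_gb=400)
ssd.add_cache_space("sql-vm-01", 150)
ssd.add_cache_space("exchange-vm-01", 100)
print(ssd.free_gb())   # 150 GB left for additional cache spaces
```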

 

Flash Accel Architecture and Internals

[Figure: Flash Accel block diagram]

 

Flash Accel and vMotion

vMotion is fully supported with Flash Accel, and this support requires that a VM's cache space be reserved on all applicable hosts within the migration scope. A migration policy needs to be set before you enable vMotion for Flash Accel enabled VMs; it is configured in the FMC console under Console Settings > Migration. The migration scope can be Cluster, Data Center, or Host. When choosing the default migration scope, remember that cache space must be reserved on every host that a VM may migrate to, so choosing Cluster instead of Data Center results in less overall flash consumed per VM.
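
To see why the scope choice matters, here is a quick back-of-the-envelope calculation with made-up numbers: because the VM's cache space must be reserved on every host it can migrate to, the flash reserved for that VM grows with the number of hosts in the chosen scope.

```python
# Hypothetical example of how migration scope affects flash reserved per VM.
# Cache space must be reserved on every host the VM may migrate to.

cache_space_gb = 50        # cache space assigned to the VM (made-up value)
hosts_in_cluster = 4       # hosts in the VM's cluster (made-up value)
hosts_in_datacenter = 20   # hosts in the whole datacenter (made-up value)

print("Cluster scope:   ", cache_space_gb * hosts_in_cluster, "GB reserved")     # 200 GB
print("Datacenter scope:", cache_space_gb * hosts_in_datacenter, "GB reserved")  # 1000 GB
```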

 

[Figure: Flash Accel Management Console]

 

Flash Accel Performance

When tested with an OLTP workload, Flash Accel was able to offload 80% of the I/Os from the storage array to the server. When deployed with Flash Cache, we were able to reduce storage disk utilization by 50%. This also resulted in a 60% reduction in array CPU utilization compared to using Flash Cache alone.
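
As a rough illustration of what an 80% offload means, using hypothetical workload numbers rather than the actual test configuration:

```python
# Hypothetical numbers illustrating the 80% offload figure above.
workload_iops = 10_000            # I/O rate generated by the application (made up)
offload_ratio = 0.80              # fraction of I/Os served from server-side flash

array_iops = workload_iops * (1 - offload_ratio)
print(int(array_iops))            # 2000 IOPS still reach the storage array
```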

 

Flash Accel Integration

Now that you understand the internals of Flash Accel and how to deploy it, you may be wondering how to identify the VMs that are good candidates for Flash Accel. The OnCommand Insight team has built a plug-in for Flash Accel that provides visibility from the VM through the ESX host and into the storage. Insight monitors the configuration and performance of all of the elements in the infrastructure and gives you planning capabilities. You can find more details about it here.

 

With the release of VSC 4.2, you can manage Flash Accel enabled VMs within the same console; all you need is the updated plug-in, which you can download and install from our support site. Flash Accel is a released product and is a free download for Data ONTAP customers from the NetApp support site. A demo of Flash Accel 1.2 is available on NetApp Communities.
