pumpkin.png

I'm looking out my window anticipating the onslaught of children in costumes roaming the streets looking for their next sugar kill. With the scent of glucose and chocolate and the rustle of candy wrappers, they will descend upon my neighborhood. Oh, and this year the weather is warm as opposed to snowy or cold and dreary. So, turnout should be good.

 

My nieces and nephew have already attended a church party where they stocked up on a couple of months' worth (or a week's worth, in my niece's case) of candy. I've witnessed the carnage, with wrappers all around the house. The authorities in the home swept in to seize much of the contraband before emergency dental appointments were required.

 

It's a wonderful time of the year, and Halloween is just the tease as we enter the approaching holiday season.

 

I want to wish you all a fun and safe holiday.

 

And to the partners who created this picture, thanks for the effort.

 

Happy Halloween from SANbytes.

Clustered Data ONTAP is fantastic technology that helps our customers design their multi-site data center infrastructure in a way that maximizes IT skills and controls cost. With clustered Data ONTAP, you can set up your primary and secondary data centers to optimize for your performance, capacity, disaster recovery, and backup requirements.

 

For this post, I want to point out a few ways in which clustered Data ONTAP can optimize your multi-site environment to address performance, cost, and management needs. Consider the diagram below, which shows two data centers: a primary and a secondary. This is not an atypical configuration for our customers.


multi-site datacenter.png

The primary data center is optimized for performance. It is supported by a six-node cluster of high-performing FAS6200 storage systems with high-performance SAS or SSD drives, or a hybrid configuration. The secondary data center is optimized for capacity and cost. It is supported by a two-node configuration using a FAS3200 storage system with high-capacity SATA disks. The secondary data center has scaled up to meet its capacity requirements, while the primary data center has scaled out to achieve greater I/O performance. To take this scenario further, in order to support SAN workloads, the primary data center can be deployed with FCoE or traditional Fibre Channel, while the secondary data center may simply be piped with Gigabit or 10 Gigabit Ethernet supporting iSCSI. At either site, both SAN and NAS protocols can be supported on the same infrastructure.

 

Backup and disaster recovery are supported with the SnapVault and SnapMirror features, respectively, which are configured by volume, allowing you to protect all or a subset of volumes at the secondary site. This flexibility helps you further address the cost pressures of maintaining a secondary data center. You can also leverage NetApp's FlexClone feature to create space-efficient clones of the mirrored volumes, which can be mounted to validate that the mirror is successful, or to test disaster recovery failover without breaking the mirror – so you can test during the day on a weekday instead of at 2 a.m. on a weekend. This is a simple way to validate your DR environment without a painful failover event.
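As a rough illustration of the per-volume configuration described above, the commands below sketch a mirror plus a clone for DR testing from the clustered Data ONTAP command line. All SVM, volume, and snapshot names here are hypothetical, and exact options vary by release, so treat this as a sketch rather than a definitive procedure:

```shell
# Mirror a production volume to the secondary cluster (names are hypothetical)
snapmirror create -source-path prod_svm:app_vol -destination-path dr_svm:app_vol_mirror -type DP
snapmirror initialize -destination-path dr_svm:app_vol_mirror

# Later: clone the mirror destination from one of its Snapshot copies to test
# DR failover without breaking the mirror relationship
volume clone create -vserver dr_svm -flexclone app_vol_drtest -parent-volume app_vol_mirror -parent-snapshot <snapshot-name>
```

The clone shares blocks with the mirror destination, so the DR test consumes almost no extra capacity and the mirror keeps updating underneath it.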

 

The architecture above represents only one possible configuration. SnapMirror and SnapVault can be combined in myriad ways, including SnapVault at the local site for backup followed by SnapMirror to the remote site, or SnapMirror to the remote site followed by SnapVault from the mirror. NetApp also supports fan-in, fan-out, and daisy-chaining topologies. One unique capability: if SnapVault is used in the primary data center to back up from one node to another in the same cluster, you can change source and destination volumes on the fly – even during replication!

 

Your secondary site doesn’t have to be a cost burden simply waiting around for a disaster. Many of our customers – BBM Canada is one example – actually use secondary sites for development work. Since the ongoing mirroring and backup of data is a lightweight workload that requires little bandwidth, you can create clones of the mirrored production data and use these copies for test and development of new applications without impacting production systems. Along the same lines, instead of idle standby servers, you have active servers for dev/test, and in the event of a real disaster you can convert them to handle the production load.

 

Clustered Data ONTAP is extremely powerful and designed to accelerate your business. Do you see opportunities in your data center where clustered ONTAP can increase utilization of your valuable IT assets?

NetApp and Verizon Enterprise Solutions announced a collaboration to deliver Data ONTAP as a virtual storage appliance (VSA) for Verizon Cloud clients. Today, more than 175 NetApp Service Provider Partners deploy NetApp storage running Data ONTAP as the foundation for over 300 service offerings. The Verizon announcement represents the first time a cloud provider will deploy Data ONTAP as a virtual storage appliance.


Data ONTAP becomes part of a software-defined infrastructure management layer that adds NetApp enterprise-grade efficiency, data protection and availability on top of Verizon’s high-performance cloud infrastructure. Integrated features will also provide seamless data management across multiple clouds based on Data ONTAP, which abstracts physical storage into a set of Storage Virtual Machines and delivers IT services to application owners.


When delivered with Data ONTAP as a VSA, the new Verizon Cloud offers a rich set of enterprise capabilities to new clients and sets the standard for storage appliances delivered as virtual functions through the cloud.

 

clPicture1.png

 

Mike McNamara

This blog is from guest blogger Jim Bahn.

 

NetApp is shipping SnapProtect v10 SP3B, the latest version of our leading backup application, which now supports clustered Data ONTAP systems. SnapProtect is a virtualization- and application-aware backup application that uses NetApp Snapshot copies as backup images. SnapProtect controls and catalogs Snapshot, thin-replication, and tape operations from a single console, enabling companies to implement disk-to-tape, disk-to-disk, and disk-to-disk-to-tape solutions. Many NetApp customers use both clustered Data ONTAP and 7-Mode systems, and SnapProtect allows them to manage backup and recovery from a single pane of glass.

 

Compared with legacy backup applications, SnapProtect enables you to:

• Shrink backup windows. Low-impact NetApp Snapshot copies and replication can reduce backup windows from hours to minutes.

• Lower costs. Lower disk capacity utilization by up to 90% and significantly reduce network usage with industry-leading efficiencies.

• Build in virtualization and application awareness. Minimize downtime and help prevent data loss through integration with virtualization and major applications.

• Recover faster. Recover data in minutes, not hours, by using high-speed Snapshot copies, which are fully cataloged for rapid search and retrieval.

• Build in unified backup management. Accelerate response times and reduce costs by managing Snapshot copies, replication, and tape backup from a single source.

 

snaprotPicture1.png


How Storage QoS Works

Posted by mmike Oct 3, 2013

This blog is a follow-up to an earlier blog on QoS use cases and benefits.

How QoS Works

Storage QoS workload management allows you to control the resources that can be consumed by storage objects (such as volumes, LUNs, VMDKs, or SVMs) to manage performance spikes and improve customer satisfaction. You set throughput limits expressed in terms of MB/sec (for sequential workloads) or I/O operations per second (for transactional workloads) to achieve fine-grained control. When a limit is set on an SVM, the limit is shared for all objects within that SVM. This allows you to set a performance limit on a particular tenant or application, but it leaves the tenant or application free to manage the assigned resources however they choose within that limit.
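To make the throttling behavior concrete, here is a minimal, hypothetical sketch (not NetApp's implementation – all names are invented for illustration) of how an IOPS ceiling can be enforced with a token bucket: operations proceed while tokens remain, and the bucket refills at the configured limit, so sustained throughput never exceeds the cap.

```python
import time

class IopsLimiter:
    """Illustrative token bucket enforcing a max-IOPS ceiling."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)      # start with one second's allowance
        self.last = time.monotonic()

    def try_io(self, ops=1):
        """Return True if 'ops' operations may proceed right now."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at one second's worth
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= ops:
            self.tokens -= ops
            return True
        return False   # over the limit: caller must queue or retry

limiter = IopsLimiter(max_iops=1000)
```

A workload that bursts above 1,000 operations in a second simply sees `try_io` return False until enough time passes, which is the "not-to-exceed" behavior a QoS limit provides.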


QoS Policy Groups

Storage QoS is applied by creating policy groups and applying limits to each policy group. For instance, a policy group can contain a single SVM, or a collection of volumes or LUNs (within an SVM) used by an application.

 

                   

Qos.png

Figure 1) A QoS policy group contains a collection of storage objects such as SVMs, volumes, LUNs, or files. The limit on a policy group applies to all the objects in the group collectively, even when the objects are on different nodes in the cluster.

 

In virtual environments, a policy group could contain one or more VMDK files or LUNs within a datastore. The limit applied to a policy group is a combined limit for all the objects the policy group contains. The scheduler actively controls work so that resources are apportioned fairly to all objects in a group.

Note that the objects need not be on the same cluster node, and if an object moves within the cluster, the policy limit remains in effect.  You can set a limit on a policy group in terms of IOPS or MB/s, but not both.
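A hedged sketch of what this looks like on the clustered Data ONTAP 8.2 command line follows; the policy-group and volume names are hypothetical, and exact syntax should be verified against the release documentation:

```shell
# Create a policy group with a combined 5,000 IOPS ceiling (hypothetical names)
qos policy-group create -policy-group app1_pg -vserver vs1 -max-throughput 5000iops

# Attach two of the application's volumes; the limit applies to them collectively
volume modify -vserver vs1 -volume app1_data -qos-policy-group app1_pg
volume modify -vserver vs1 -volume app1_logs -qos-policy-group app1_pg
```

Because the limit belongs to the group, the two volumes share the 5,000 IOPS between them rather than getting 5,000 each.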


Clustered Data ONTAP delivers the features and capabilities that enable your storage environment to succeed. Whether you are managing a traditional enterprise IT environment, a private cloud or a public cloud, Storage QoS gives you another important tool for optimizing storage.

 

With QoS you can increase utilization of storage resources by consolidating more workloads while minimizing the risk of workloads impacting each other. You can prevent tenants and applications from consuming all available resources, improving the end-user experience. Pre-defining service levels allows you to offer different levels of service to different consumers and applications.

 

Mike McNamara

Guest Post by Brian Schmidt, Sr. Product Manager, NetApp

 

snowbank3.jpg

The requirement for a dynamic infrastructure is being driven by our very fluid business environment. These dynamic business environments are driving CIOs to implement dynamic storage environments, led by clustered Data ONTAP. And smart CIOs know that any change in one area of the datacenter often requires a change in other parts of the datacenter to really be effective.

 

It’s like buying a new SUV to go skiing and then putting old tires from your commuter car on it. You may have upgraded the vehicle, but if you haven’t fully equipped it with the right gear, you really aren’t going to get where you want to go, and may just get stuck in a snow bank.

 

The ability to easily scale the data infrastructure – both storage and networking – and to add the right amount of performance and capacity without disrupting application availability is vital to enabling a dynamic business infrastructure that supports non-disruptive operations.

EthernetFabric.png

 

The combined NetApp and Brocade solution of clustered Data ONTAP with Brocade’s VCS Ethernet fabric simplifies deployment and management of the storage and networking environment through automated provisioning and the ability to add and remove resources dynamically. Brocade’s zero-touch virtualization support, in combination with NetApp FAS/V-Series systems, significantly reduces management complexity and resource requirements. This allows IT admins to focus on creating more value rather than reacting to issues.

 

cDOT.png

Built on more than 20 years of innovation, clustered Data ONTAP combines the richest data management feature set with clustering for unlimited scale, operational efficiency, and non-disruptive operations. When combined with Brocade VCS Fabric technology, the joint solution reduces the management burden of scale-out NAS environments while providing the ability to grow as needed, seamlessly manage unpredictable data growth, and experience zero downtime while scaling.

 

Like that SUV needed for skiing, the right storage also needs the right network for you to quickly and safely get where you are going while avoiding a data snow bank. Together, Brocade and NetApp deliver a data center storage infrastructure that offers the performance and predictability today’s dynamic data center environments demand.

 

Check out the paper on clustered Data ONTAP and Brocade Ethernet Fabric for more information, and find out how NetApp can help you get where you need to go without getting stuck in that snow bank of data.

A NetApp storage cluster running clustered Data ONTAP can be subdivided into distinct storage virtual machines (SVMs), each governed by its own rights and permissions. SVMs make multi-tenancy possible by securely isolating individual tenants—for instance, in a service provider environment—or individual applications, workgroups, business units, and so on. Each application or tenant typically has its own SVM, and that SVM can be managed directly by the application owner or tenant.

 

Any time you put numerous workloads on a storage system or storage cluster, there is the possibility that excessive activity from one workload can affect other workloads. Storage QoS workload management, available in clustered Data ONTAP 8.2, allows you to define service levels by creating policies that control the resources that can be consumed by individual files, volumes, or LUNs—or entire SVMs—to manage performance spikes and improve customer satisfaction. QoS gives you the ability to set not-to-exceed performance limits (defined as a maximum value for IOPS or MB/s) on a group of files, volumes, or LUNs within an SVM, or on the entire SVM. This capability enables enterprise IT administrators and service providers to consolidate many workloads or tenants on a cluster without fear that the most important workloads will suffer or that activity in one tenant partition will affect another. As a result, you can push your storage infrastructure to higher levels of utilization, increasing your overall efficiency and decreasing capital costs.


NetApp storage QoS works with both SAN and NAS storage, and it runs across the entire NetApp FAS product line, from entry-level to enterprise. You can incorporate third-party storage into your NetApp storage environment using NetApp V-Series. With V-Series as a front end to SAN arrays from EMC, HP, HDS, and others, your existing storage gets the full benefit of QoS limits and other NetApp capabilities.

Storage QoS Use Cases

There are many possible use cases for Storage QoS. The following two examples illustrate some of the ways that QoS can be used.

Create Storage Performance Limits for Enterprise Workloads

The traditional approach to ensuring QoS for workloads with different service-level needs has been to house each application in its own dedicated infrastructure silo. However, this leads to lower resource utilization and higher costs.

 

Storage QoS makes it possible to run multiple workloads on a more cost-effective, shared infrastructure by creating limits for each significant workload. This can be accomplished by assigning each application workload to its own SVM and assigning a performance capacity (IOPS or MB/s) limit on that SVM, or by assigning a performance capacity limit on the group of volumes or LUNs associated with an application workload.

 

Limit Maximum Storage Performance for Cloud Tenants

In a multi-tenant cloud environment—whether private or public—the first tenants on a particular resource may see a level of performance in excess of their service level agreement (SLA). This can create a perception of performance degradation as additional tenants are added and performance decreases.

 

Storage QoS allows you to avoid this problem by assigning a performance limit to each tenant in accordance with its SLA. That way, a tenant can’t exceed the set performance limit even when resources are available, and is therefore less likely to notice changes in perceived performance over time.

 

Another advantage of storage QoS is that it makes it simple to establish tiers of service based on SLAs. For instance, Bronze service may correspond to a limit of 10,000 IOPS, Silver to a limit of 20,000 IOPS and Gold to a limit of 40,000 IOPS.
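The tiers above might be expressed as three policy groups, one per tenant SVM. All names here are illustrative, and the exact assignment syntax varies by release, so treat this as an assumption-laden sketch:

```shell
# One policy group per service tier (hypothetical names and tenants)
qos policy-group create -policy-group bronze_pg -vserver tenant_a -max-throughput 10000iops
qos policy-group create -policy-group silver_pg -vserver tenant_b -max-throughput 20000iops
qos policy-group create -policy-group gold_pg   -vserver tenant_c -max-throughput 40000iops

# Assign a tenant's SVM to its tier (assumed syntax; verify for your release)
vserver modify -vserver tenant_a -qos-policy-group bronze_pg
```

Upgrading a tenant from Bronze to Silver then becomes a single reassignment rather than a storage migration.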


Figure 1) In private or public clouds, each tenant is assigned a storage virtual machine (SVM) and a performance limit. You can provide different service levels by establishing tiered limits (Bronze, Silver, Gold).

QoSPic.jpg

                   

For more information on QoS, see this lab validation from analyst ESG: http://www.netapp.com/us/media/esg-lab-spotlight-ontap-qos.pdf

 

Mike McNamara

Industry analyst IDC wrote a paper titled “Enterprise Storage: The Foundation for Application and Enterprise Availability,” sponsored by NetApp. In the paper, IDC discusses how IT has grown to become a vital component of successful organizations, and how, while the numbers and types of application deployments may vary with organizational size and vertical, the reliance on IT solutions is making availability and uptime paramount. This desire for high application availability depends on continuous access to the supporting data, which, for a growing number of organizations, resides on networked disk storage systems. As a result, highly available networked storage systems become a crucial component.

 

By focusing on both hardware and software aspects, NetApp delivers highly available disk storage solutions that are essential in today's highly virtualized shared environments supporting multiple applications across multiple tenants.  IDC concluded that “with NetApp's rigorous approach to measuring system availability and the ability to leverage that information with frequent Storage Availability Audits, the company continually builds upon strong relationships within its customer base to improve the quality and reliability of storage deployments, not to mention improving the customer experience. With current system uptime measured by NetApp to be over 99.999%, customers can be assured that NetApp disk storage solutions can play a vital role in supporting the application and data availability needed for success.”

blogpic.png

The detailed white paper is located here: http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-111029-16&m=idc-es-foundation.pdf

 

Mike McNamara

This article is contributed by Mike Harding, Product Marketing Manager, FlexPod Solutions, NetApp.

 

FlexPod had a great summer launch with more validated designs, press, and customer wins. But lost in all the buzz may have been the role that clustered Data ONTAP is playing in the evolution of this leading converged infrastructure solution. FlexPod and Data ONTAP have been a topic on this blog series in the past, and the value and pervasiveness of clustered Data ONTAP continue to grow.

 

FlexPod Summer news

 

FlexPod-t-shirt-logo1.PNG

We recently announced the expansion of the FlexPod family, along with six new validated designs and new program tools. The FlexPod family now comprises three solution categories: "FlexPod Datacenter" offers validated designs for enterprise and service provider workloads running within a shared infrastructure; "FlexPod Express" is the new name for the former ExpressPod mid-market and branch office solutions; and "FlexPod Select" is for dedicated architectures, high-performance, and Big Data designs. FlexPod Select got the biggest news with its Big Data validations, which move FlexPod beyond a highly efficient shared IT platform to one also optimized for powerful applications like Hadoop.

 

How Clustered Data ONTAP enhances FlexPod

 

FlexPod has been publishing validated designs with clustered Data ONTAP for a number of years, and most of the new reference architectures include both 7-Mode and clustered design guides. Of the six new reference architectures just published, FlexPod Datacenter with the Nexus 7000 Series Switch for end-to-end FCoE was validated with clustered Data ONTAP 8.1.2 and includes a specific clustered Data ONTAP deployment guide. This document provides details for configuring a fully redundant, highly available FlexPod unit with clustered Data ONTAP storage. The Nexus 7000 switch enables end-to-end Fibre Channel over Ethernet, with FCoE-booted hosts given file- and block-level access to shared storage datastores, and Cisco virtual device contexts (VDCs) to logically separate IP and FCoE traffic, maximizing hardware resource utilization while providing strong security and software fault isolation. Using this enterprise-class switch along with clustered Data ONTAP was a key way to improve FlexPod system availability.

 

There are other important ways that clustered Data ONTAP makes FlexPod better.  FlexPod with clustered Data ONTAP minimizes business disruption with a pre-validated platform leveraging the latest breakthrough technology for non-disruptive storage operations.  With the clustered Data ONTAP ability to virtualize and non-disruptively move storage volumes, FlexPod further eliminates deployment guesswork and accommodates ongoing workload optimization.

 

Clustered Data ONTAP offers a unified scale-out storage solution for an adaptable, always-on storage infrastructure that accommodates today’s highly virtualized infrastructures and is unique across converged infrastructure solutions – the competition has nothing like this.  Organizations can virtualize storage across multiple HA pairs, managed as a single logical pool – and scale easily to tens of petabytes. Support for clustered Data ONTAP further differentiates the FlexPod platform, enabling customers to:

   

  • Deploy an adaptable IT infrastructure with unmatched flexibility
    • Scale single FlexPod capacity up to 69PB+ by adding storage nodes to the cluster
    • Mix controller types to provide tiered storage within the cluster
    • Provide data mobility to optimize storage resources for greater efficiency, performance, flexibility, and ROI
  • Reduce risk with an always-on environment, optimized for zero planned downtime
    • Provide redundancy at each layer of the stack (compute, network, and storage)
    • Eliminate planned downtime through non-disruptive data migration for load balancing, maintenance, and technology refreshes
  • Achieve unparalleled efficiency and lower TCO with optimized resources
    • Match the performance and cost of storage to the requirements of your data
    • Improve service levels by automatically assigning data to optimal storage tiers
    • Leverage open APIs to integrate with validated orchestration partner solutions and enable single console management and automation of the entire infrastructure
    • Rebalance performance or capacity for critical workloads

 

More info on the expanded FlexPod

 

Since 2010 we’ve published about 50 validated design documents, and we continue to grow our validation pipeline, our tools to help you learn about the offering, and the business itself. Check out the newly designed FlexPod web page, our FlexPodCommunity.com page, and the new FlexPod iPad app – all there to help you be more successful with FlexPod. Please stay engaged with us, and look for more FlexPod news in the fall.

sayers.jpg

Sayers is known for delivering exceptional client services to a variety of customers, including ESCO and hundreds of companies across many industries. Sayers’ clients are attracted to cloud-based services as a way to increase their agility and efficiency while reducing expenses. For Sayers, providing a cloud service was seen as a strategy to pave the way for future success.

 

Sayers also needed to support a variety of applications, from the custom-developed testing applications used at ESCO to Microsoft SharePoint and SQL Server databases, high-end billing systems, and Microsoft Dynamics GP for accounting. It also needed to support web servers and the file and print services used by many of its clients.

 

To deliver its cloud services, Sayers needed highly flexible, scalable storage clusters that would allow it to quickly provision storage for customers. Secure multi-tenancy was imperative to quickly and effectively partition storage into private, flexible, high-performance storage environments.

 

Instead of building out its own cloud infrastructure, Sayers tapped into an existing cloud services provider that uses NetApp FAS6210 and FAS6280 storage systems running clustered Data ONTAP. This storage architecture provided the massive throughput and scalability Sayers needed to rapidly grow its cloud services while providing superb service, performance, and uptime.

 

Sayers can now move storage as needed to lower-cost or higher-performance drives without downtime to maximize returns, as well as balance pricing and performance for clients. Clustered Data ONTAP enables effective management of Sayers’ storage infrastructure, supporting multiple network protocols—Fibre Channel, FCoE, iSCSI, NFS, or CIFS—to support customers’ varied applications.

 

Clients derived the following benefits by moving to the Sayers Cloud:

  • Savings of $250K and 55% fewer hardware resources for one client
  • 100% uptime
  • Flexible systems that enable business agility
  • The ability to provision storage in less than two hours

 

“NetApp is a key technology partner that has prepared us for the future. With NetApp and clustered Data ONTAP, we’re prepared.” – Bill Tohtz, Chief Architect, Sayers

 

The detailed success story is here.

 

Mike McNamara

This article is contributed by our guest, Ingo Fuchs, Sr. Product Marketing Manager for NAS Solutions at NetApp.

 

These days when speaking with customers about scalable storage infrastructure, the conversation often boils down to the three aspects of scalability: capacity, performance and operational scale. You can read more about them in Mike McNamara’s blog.

 

InfiniteVolume2.png

Today, I want to focus on one aspect of scale-out: capacity. Clustered Data ONTAP enables customers to scale to 69PB of capacity in a single cluster. With Infinite Volume – which we first introduced in 2012 with Data ONTAP 8.1.1 – you can now scale to 20PB in a single volume. So Infinite Volume gives you a big, 20PB bucket to store data, along with the operational scale and performance you need for large content repositories. The unique thing, however, is that you don’t lose key functionality that enterprise IT departments expect. Essentially, Infinite Volume works just like any other volume from an end-user or application perspective, and in many ways from an administration perspective as well.

   

Let’s take a closer look:

    

Administration – you create an Infinite Volume just like any other volume. Typically you would use a GUI wizard, but you can use the command line, too.
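As a sketch, creating an Infinite Volume from the command line might look like the following. The option names and all identifiers here are assumptions from memory and should be verified against the documentation for your release:

```shell
# Create a 20PB Infinite Volume on an SVM configured for Infinite Volume
# (hypothetical names; verify the -is-infinite option for your release)
volume create -vserver repo_svm -volume content_repo -size 20PB -is-infinite true -state online
```

From the client side, the result mounts and behaves like any other NFS or SMB volume, which is the point of the section above.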

   

Efficiency – deduplication and compression are supported. You can even steer data via policies into separate storage classes that compress or deduplicate data, or you can let it all go into one single repository; how the data is handled is your choice.

   

Scalability – well, 20PB of capacity and 2 billion files, and all that with only 10 nodes. So you don’t need dozens (or 144) of nodes to get to this capacity, which saves you a lot of configuration and equipment headaches. This gives you a highly capable scale-out NAS solution, with support for NFS, pNFS, and SMB/CIFS for data access.

   

Availability – the 99.999% uptime you get from Data ONTAP, even while you expand or manage an Infinite Volume or perform software updates. High-performance snapshots and replication via SnapMirror are also supported to increase data availability.

   

Multi-workloads – you are not limited to having Infinite Volume for large-scale content repositories in a cluster. You can easily add other workloads such as virtualized servers and desktops, enterprise applications and many other items while the system is running. Infinite Volume supports NFS (including pNFS) and SMB/CIFS for data access, but you can add other protocols like FC, iSCSI, FCoE etc. into the cluster for other applications. Great for large organizations and – with secure multi-tenancy supported – service providers that want to share some of that storage infrastructure with various customers.

   

In summary, with Infinite Volume you have the ability to easily create a scale-out NAS content repository for up to 20PB of data, while retaining efficiency, availability, and many other aspects of clustered Data ONTAP 8.2 – without excluding other workloads.

   

So don’t create another silo – leverage clustered Data ONTAP 8.2 for all your workloads, even those where in the past you might have chosen another, often inferior, scale-out NAS solution.

In my blog post on the new features of clustered Data ONTAP 8.2, I highlighted three key benefits: non-disruptive operations, proven efficiency, and seamless scalability. This blog explores SnapVault, a key new feature of clustered Data ONTAP 8.2, and summarizes a spotlight hands-on lab validation of SnapVault from industry analyst ESG.


SnapVault is a disk-to-disk backup product that is a core feature of Data ONTAP, with application-aware backup tailored for application-specific protection provided by the suite of SnapManager products. SnapVault operates at the block level rather than the file level, transferring only new or changed blocks, and leverages both Snapshot point-in-time dataset copies and native deduplication and/or compression, providing efficient disk-to-disk backup and restore. The result is reduced on-disk capacity and bandwidth usage, minimizing the backup footprint.

 

Storage efficiency is maintained over the wire when the backup set is replicated for disaster recovery using either SnapVault or NetApp’s SnapMirror product. Individual SnapVault backups can also be made writable using FlexClone technology and leveraged for testing and development. Backup copies are stored in their native format, resulting in quick and efficient restores, including support for end-user drag-and-drop selection of specific files to be restored. SnapVault can back up volumes within a cluster, across clusters, and across data centers.


ESG Lab performed hands-on evaluation and testing of Data ONTAP 8.2 SnapVault at NetApp headquarters in Sunnyvale, California. Testing was designed to validate rapid implementation of data protection, and near instantaneous backup and recovery of data sets.

 

Figure 1 shows an example of how SnapVault may be deployed in an enterprise-scale environment with multiple data centers. Physical and virtual servers and applications connect to a Data ONTAP 8.2 cluster in the primary data center. The primary data center is connected to a secondary data center over a WAN connection, where a Data ONTAP 8.2 cluster is the target for offsite backup and disaster recovery.

Snapvault.png

Figure 1

 

Within this example environment, SnapVault is used in three ways:

  • SnapVault creates primary backups of storage data sets, including virtual machines, databases, and user file volumes using NetApp point-in-time Snapshot copies. These backups are stored on disk in the same Data ONTAP 8.2 cluster, in the primary data center.
  • SnapVault creates secondary backups of storage data sets. These backups are stored on disk in the secondary Data ONTAP 8.2 cluster residing in the secondary data center for offsite data protection.
  • Should the need arise, the SnapVault backups in the secondary data center can be promoted to be used as primary storage for disaster recovery, using the secondary data center application servers.
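In clustered Data ONTAP 8.2, SnapVault relationships like the secondary backup above are managed with the snapmirror commands using the XDP relationship type. The sketch below uses hypothetical SVM and volume names and omits the vault policy and schedule details, so treat it as illustrative:

```shell
# Hypothetical SnapVault (XDP) relationship from the primary to the
# secondary cluster
snapmirror create -source-path prod_svm:db_vol -destination-path vault_svm:db_vol_vault -type XDP
snapmirror initialize -destination-path vault_svm:db_vol_vault

# Subsequent updates transfer only new or changed blocks
snapmirror update -destination-path vault_svm:db_vol_vault
```

The promotion step in the third bullet corresponds to breaking the relationship and serving data from the vault destination until the primary site is restored.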

 

The detailed lab evaluation is here.

Mike McNamara

Earlier this year we released Flash Accel 1.1, which gave us the ability to extend Data ONTAP capabilities to a server by creating a caching space that complements the NetApp Virtual Storage Tier (VST). This allowed us to use flash devices more effectively and eliminated potential data protection problems without creating isolated silos of data. In addition, Flash Accel provides:

  • Improved efficiency of back-end storage by offloading reads from the storage
  • Enhanced application performance
  • Data coherency: data in the cache is coherent with data in the back-end storage
  • Hardware-agnostic support (SSD or PCIe flash)
  • Persistent cache across reboots
  • Data protection: all writes are performed to the back-end storage
  • Data ONTAP mode agnostic: supported with both 7-Mode and clustered Data ONTAP
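The data-protection and coherency bullets both follow from a write-through design: reads may be served from server-side flash, but every write also lands on the backing array, so the cache never holds the only copy of any data. The following toy cache illustrates the semantics; it is an assumption-laden sketch for intuition, not Flash Accel's actual code path.

```python
class WriteThroughCache:
    """Toy server-side read cache with write-through semantics."""

    def __init__(self, backend):
        self.backend = backend   # dict standing in for the storage array
        self.cache = {}          # dict standing in for server flash
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1       # served from flash, array never touched
            return self.cache[key]
        self.misses += 1
        value = self.backend[key]
        self.cache[key] = value  # populate the cache on a miss
        return value

    def write(self, key, value):
        self.backend[key] = value  # write-through: array is always current
        self.cache[key] = value    # cache stays coherent with the backend

backend = {"blk0": "old"}
c = WriteThroughCache(backend)
c.read("blk0")            # miss, populates the cache
c.read("blk0")            # hit, offloaded from the array
c.write("blk0", "new")    # reaches the array AND the cache
print(backend["blk0"])    # "new": no data ever lives only in the cache
```

Losing the cache (server crash, flash failure) loses no data in this model, which is why the feature list can promise data protection while still offloading reads.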

What’s new with Flash Accel 1.2?

  • Support for VMware vSphere 5.1
  • VMs enabled with Flash Accel can now participate in vMotion and VMware HA events
  • Support for iSCSI-enabled LUNs in the guest VM
  • AutoSupport (ASUP) integration
  • Enhancements to the FMC UI
  • Support for Fusion-io ioDrive PCIe cards
  • Management of Flash Accel components using VSC (Virtual Storage Console) 4.2
  • Import and export of FMC configurations, so that you can upgrade from an older FMC to a new FMC with minimal changes
  • Export of console logs for troubleshooting

Now let me brief you on how Flash Accel works and some of the technical details.

To use Flash Accel, a VIB needs to be installed on the ESXi host. The flash device can then be carved up into multiple cache spaces, which are presented to the Windows guest OS (Linux support is coming in the future). You can have only one cache space per VM, but you can enable caching for up to 32 VMs. The guest OS must have an agent installed to leverage the flash-based cache space. All of the Flash Accel configuration work is done via the new Flash Accel Management Console, which handles installation, provisioning, and assigning cache to VMs. This console is also available as a plug-in to VSC 4.2, which runs in VMware vCenter. If you can't decide which management console to use, here is a great knowledgebase article which describes the benefits of VSC and FMC for Flash Accel-enabled VMs.

 

Flash Accel consists of three components:

  • NetApp Flash Accel Management Console (FMC). Configuration and management of Flash Accel is accomplished using a virtual appliance, which runs on vSphere.
  • Flash Accel host agent (installed on the ESX host). The host agent is installed on an ESX host and establishes control over locally attached devices (such as SSDs) and storage array paths according to the configuration you define using FMC. The host agent creates logical devices and presents them to the ESX storage stack as SCSI devices. Logical devices created on multiple ESX hosts with the same WWN allow ESX to treat a device as a shared device so that VMs using these devices can participate in vMotion® and VMware HA operations.
  • Flash Accel agent in the Windows VM. This is a user-level agent implemented for Windows 2008 R2 guest VMs only. The agent is required for enabling/disabling the cache on a VM, managing the cache through PowerShell cmdlets, communicating performance metrics to FMC, and integrating seamlessly with the SnapDrive and SnapManager products.

Flash Accel Architecture and Internals

FA block diagram.jpg

Flash Accel and vMotion

vMotion is fully supported with Flash Accel; this support requires that cache space for a VM be reserved on all applicable hosts in a datacenter. Migration policies need to be set before you enable vMotion for Flash Accel-enabled VMs, under FMC console > Console Settings > Migration. The migration scope can be based on Cluster, Data Center, or Host. When choosing the default migration scope, remember that cache space must be reserved on each host a VM may migrate to, so choosing Cluster instead of Datacenter results in less overall flash consumed per VM.
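The scope trade-off is easy to see with back-of-envelope numbers: because cache is reserved on every host the VM might migrate to, total flash consumed per VM scales with the number of hosts in the migration scope. The cache size and host counts below are hypothetical, chosen only to make the point.

```python
def flash_reserved_per_vm(cache_gb, hosts_in_scope):
    """Cache space is reserved on each host the VM may migrate to."""
    return cache_gb * hosts_in_scope

cache_gb = 40            # hypothetical per-VM cache space
cluster_hosts = 4        # hypothetical 4-host cluster
datacenter_hosts = 12    # hypothetical 12-host datacenter

print(flash_reserved_per_vm(cache_gb, cluster_hosts))     # 160 GB total
print(flash_reserved_per_vm(cache_gb, datacenter_hosts))  # 480 GB total
# Scoping migration to the cluster consumes one third of the flash
# that a datacenter-wide scope would for the same VM.
```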

 

FA Management console.jpg

Flash Accel Performance

Flash Accel, when tested with an OLTP workload, was able to offload 80% of the I/Os from the storage array to the server. When deployed with Flash Cache, we were able to reduce storage disk utilization by 50%, which also resulted in a 60% reduction in array CPU utilization compared to using Flash Cache alone.

Flash Accel Integration

Now that you understand the internals of Flash Accel and how to deploy it, you may be wondering how to identify the VMs that are good candidates for Flash Accel. The OnCommand Insight team has built a plug-in for Flash Accel that provides visibility from the VM through the ESX host and into the storage. Insight monitors the configuration and performance of all of the elements in the infrastructure and gives you planning capability. You can find more details about it here.

With the release of VSC 4.2, you can manage Flash Accel-enabled VMs within the same console; all you need is the updated plug-in, which you can download and install from our support site. Flash Accel is a released product and is a free download for Data ONTAP customers from the NetApp support site. The demo for Flash Accel 1.2 is available on NetApp Communities.

I remember when 100 megabits per second was pretty fast. In fact, the name of the technology was "Fast Ethernet". That's how fast it was. And when Gigabit Ethernet became available, whoa! Who would ever use that bandwidth? Then we started talking about storage over Ethernet. Particularly, block storage over Ethernet. And the requirements for performance, latency, and reliability were cast in a new light as Gigabit Ethernet was suddenly being compared to Fibre Channel.

 

Ethernet development wasn't standing still, but the price per port for the next generation of 10GbE adapters and switches was an obstacle for most use cases. Gigabit Ethernet was (and in many cases still is) good enough for many storage workloads, with the exception of high-end storage, and that use case already had other solutions addressing it. Times have changed: 10 Gigabit Ethernet is now affordable (especially as measured by $/Gb of bandwidth) and is being adopted broadly. We are even starting to see adoption of 40 Gigabit Ethernet and preparation for 100 Gigabit Ethernet deployments.

 

cisco_nexus_6004.jpg

One of the first 40GbE switch products on the market is the new Cisco Nexus 6004. This is a beast of bandwidth, with 96 40GbE ports or up to 384 10GbE ports using fan-out cables. So, you can use 40GbE as uplink ports or use the Nexus 6004 as a very high-density 10GbE switch. Either way, you get some serious bandwidth.
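The fan-out arithmetic works out as follows. This is a line-rate, back-of-envelope calculation that ignores encoding and protocol overhead.

```python
qsfp_ports = 96       # 40GbE ports on the Nexus 6004
lanes_per_qsfp = 4    # each 40GbE port fans out into 4x 10GbE lanes

# Total 10GbE ports when every 40GbE port uses a fan-out cable:
print(qsfp_ports * lanes_per_qsfp)   # 384 10GbE ports

# Aggregate bandwidth is the same either way:
print(qsfp_ports * 40)               # 3840 Gb/s line rate
```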

 

Additionally, the Nexus 6004 supports native FCoE traffic at both 10GbE and 40GbE speeds. So, when the time comes to dial your FCoE storage infrastructure up to 11, you have the switch infrastructure to accommodate that transition.

cisco_nexus_2248PQ.jpg

Complementing the Nexus 6004 is the Cisco Nexus 2248PQ fabric extender. This device allows you to aggregate 10GbE links and extend that bandwidth through 40GbE uplinks to a Nexus 6004 switch and on to other parts of your Ethernet infrastructure. As 10GbE becomes common in the access layer of the data center, aggregating bandwidth becomes a necessity to connect to the rest of your network and storage.

 

NetApp and Cisco are strategic partners and continue to extend the value of Ethernet storage networks to our customers. From a storage protocol perspective, the Nexus 6004 has already been tested and is fully supported for FCoE and Ethernet connectivity to NetApp storage controllers. As a convenience, NetApp resells Cisco switches, which, when combined with FlexPod, offer best-in-class converged data center infrastructures.

 

So, what do you think? Are you looking at 40GbE already? Let me know when you expect to make the leap.

In my blog post on the new features of clustered Data ONTAP 8.2, I highlighted three key benefits: nondisruptive operations, proven efficiency, and seamless scalability. This blog explores seamless scalability in more detail.

 

A clustered storage environment running Data ONTAP scales in three dimensions: capacity, performance, and operations. You can scale SAN and NAS capacity from terabytes to tens of petabytes transparently and without reconfiguring running applications. Performance scales linearly as cluster nodes are added; a single administrator can manage petabytes of storage. You can start with a single cluster node and expand your cluster up to 24 nodes as your business needs grow.

 

 

seamlesspic.png

 

Figure 1. Clustered Data ONTAP scales in several dimensions.

 

The nondisruptive-operations capabilities of clustered Data ONTAP make your storage environment much more flexible. You can easily rebalance capacity and storage workloads as needed. You can improve service levels by dynamically redeploying workloads and avoid hot spots by moving volumes to less active disks or spreading workloads across more controllers. Each dataset gets the right technology to meet your performance and cost targets.

 

In clustered Data ONTAP 8.2 we’ve also significantly increased many of the limits to make the platform even more scalable. This includes support for:

  • Up to 100,000 NFS clients
  • Aggregates up to 400TB in size
  • 12,000 volumes in a 24-node NAS cluster
  • 49,152 LUNs in an 8-node SAN cluster
  • Multiple Infinite Volumes, each up to 20PB in size
  • Microsoft BranchCache v2, which allows Windows servers to cache data from a NetApp cluster locally to improve performance

 

A More Flexible Approach to Scale-Out

Most scale-out storage solutions provide a single large repository with limited control. Essentially, this means a single class of service for every workload sharing the storage. The virtualized storage services and unique capabilities of clustered Data ONTAP give you much more flexibility and greater control. With NetApp Infinite Volume, you can have a single repository or several large repositories to match your requirements, each capable of scaling to billions of files and petabytes of capacity. Any SVM can include an Infinite Volume, and you can control and adjust the resources available to the SVM—and the Infinite Volume—on the fly.

Clustered Data ONTAP gives you the ability to isolate particular workloads or tenants and offer different levels of service. You can mix certain controller types, offer different storage tiers, and define QoS policies to address a wide variety of storage needs, all from the same unified infrastructure. You can also virtualize third-party arrays with NetApp V-Series, and when it’s time to retire a storage system, you can simply upgrade the controllers—keeping data in place.

 

Mike McNamara
