This lab validation report by industry analyst ESG explores how NetApp clustered Data ONTAP can help organizations create a highly efficient and scalable data storage environment that supports a shared IT infrastructure foundation. ESG Lab combined hands-on testing of Data ONTAP 8.2 performed in 2013 with a detailed audit of what’s new in Data ONTAP 8.2.1 to validate the nondisruptive operations, proven efficiency, and seamless scalability offered by clustered Data ONTAP 8.2.1.

 

 

ESG.png

ESG provided an audit of the following capabilities.

 

Nondisruptive Operations: Scalability, Availability, and Resource Balancing. As storage nodes are added to the system, all physical resources—CPUs, cache memory, network I/O bandwidth, and disk I/O bandwidth—can easily be kept in balance. Clustered Data ONTAP 8.2.1 systems enable users to add or remove storage shelves (over 23 PB in an eight-node cluster, and up to 69 PB in a 24-node cluster); move data between storage controllers and tiers of storage without disrupting users and applications; and dynamically assign, promote, and retire storage, providing continuous access to data while administrators upgrade or replace storage. This enables administrators to increase capacity while balancing workloads, and can reduce or eliminate storage I/O hot spots without the need to remount shares, modify client settings, or stop running applications.

Unified Storage Efficiency. NetApp provides storage efficiency technologies for production and backup data sets that include block-level data deduplication, compression, and thin provisioning. These technologies can be deployed individually and in combination for both SAN and NAS, allowing customers to reduce the capital costs associated with storage. The Automated Workload Analyzer (AWA) available with clustered Data ONTAP 8.2.1 removes complexity and reduces the time to deploy Flash Pool by using real-time automated learning to compute Flash Pool sizing and estimate performance gains, increasing both storage and administrator efficiency. Efficiency is also increased with in-place 32-bit to 64-bit aggregate upgrades.

Quality of Service Management. Clustered Data ONTAP quality of service (QoS) enables the definition and isolation of specific workloads. IT organizations can throttle or prevent rogue workloads, and service providers can define specific service level objectives.
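To make the throttling idea concrete, here is a minimal token-bucket-style sketch in Python. It is purely illustrative of how a per-workload IOPS ceiling isolates a rogue workload; it is not how Data ONTAP implements QoS, and the class and names are invented for this example.

```python
# Conceptual sketch (not ONTAP's implementation): a per-second budget
# that caps a workload at a fixed IOPS ceiling, the way a QoS policy
# group limit contains a rogue workload.

class IopsThrottle:
    """Allow at most `limit_iops` operations per one-second window."""

    def __init__(self, limit_iops):
        self.limit_iops = limit_iops
        self.window_start = 0.0
        self.used = 0

    def admit(self, now):
        # Reset the budget at each one-second window boundary.
        if now - self.window_start >= 1.0:
            self.window_start = now
            self.used = 0
        if self.used < self.limit_iops:
            self.used += 1
            return True          # operation proceeds
        return False             # operation is throttled

throttle = IopsThrottle(limit_iops=1000)
# A rogue workload issuing 2,500 requests within one second: only the
# first 1,000 are admitted; the rest are held back.
admitted = sum(throttle.admit(now=0.5) for _ in range(2500))
print(admitted)  # 1000
```

Workloads outside the policy group are unaffected, which is the isolation property the paragraph above describes.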

Secure Multi-tenancy. Using Storage Virtual Machines (SVMs), clustered Data ONTAP provides secure, protected access to groups of servers and applications, allowing organizations to deliver dedicated administration, IP addresses, exports, storage objects, and namespaces to consumers of IT services.

Integrated Data Protection. NetApp provides on-disk snapshot backups using capacity- and resource-efficient Snapshot technology. Customers can reduce recovery time objectives (RTOs) and improve recovery point objectives (RPOs) across all of the storage in their environment. SnapVault provides block-level disk-to-disk backup. Backup targets can be within a cluster, across multiple clusters, or span multiple data centers, delivering fast, streamlined remote backup and recovery.

FlexClone. Using FlexClone provisioning, built on Data ONTAP Snapshot technology, customers can instantly create clones of production data sets and VMs in order to meet the requests of a dynamic infrastructure without requiring additional storage capacity. Clones can speed test and development, provide instant provisioning for virtual desktop and server environments, and increase storage utilization. This capability is integrated with a number of offerings from NetApp partners including Microsoft, VMware, Citrix, and SAP.

Unified Management. NetApp OnCommand data management software offers effective, cost-efficient management of shared scale-out storage infrastructure to help organizations optimize utilization, meet SLAs, minimize risk, and boost performance. By offering a single system image across multiple storage nodes in a Data ONTAP cluster, NetApp enables organizations to automate, virtualize, and manage service delivery and SLAs through policy-based provisioning and protection.

 

Additional Capabilities Added/Enhanced in Data ONTAP 8.2.1

NetApp OnCommand Workflow Automation (WFA). Automates common administrative tasks to standardize processes and adhere to best practices. It allows the design of highly customized workflows without the need for scripting expertise, and it acts as a point of integration for third-party tools such as orchestrators.

Antivirus. Data ONTAP 8.2.1 utilizes an in-memory cache for efficiency and performance; integration with multivendor antivirus solutions provides highly available antivirus protection for SMB shares.

Data ONTAP Edge. Administrators can deploy virtualized Data ONTAP 8.2.1 to extend the environment from the core of the business to the edge, providing centralized management and administration as well as backup and disaster recovery for remote office/branch office (ROBO) environments.

Multivendor Virtualization. Data ONTAP 8.2.1 fully supports NetApp FlexArray Virtualization Software with FAS8000 series systems to virtualize storage and incorporate capacity into a Data ONTAP 8.2.1 cluster via license key activation.

SMB/CIFS Capabilities Added/Enhanced in Data ONTAP 8.2.1

LDAP over SSL. Enables secure, encrypted authentication between clustered Data ONTAP and Microsoft Active Directory or OpenLDAP servers. This minimizes vulnerabilities where network monitoring devices or software are used to view users’ credentials.

Active Directory without CIFS. For Microsoft applications that connect to Active Directory and are attached to clustered Data ONTAP over a SAN, and for VMware over NFS, organizations can utilize Active Directory for security and management. This enables IT to comply with corporate standards for Active Directory integration. The ability to search across multiple domains for user mappings allows large, complex environments to maintain their existing cross-protocol workflows. Users with identities in both UNIX and Active Directory domains can map across domains in multiprotocol deployments.

Microsoft SQL Server with SMB 3.0. NetApp Data ONTAP 8.2.1 clusters can provide users with uninterrupted access to SQL Server instances over SMB 3.0. Microsoft Hyper-V with SMB 3.0 is also supported for uninterrupted share access for virtualized applications and servers.

 

In summary, ESG recommends a serious look at the benefits that can be realized from virtualizing storage environments with NetApp clustered Data ONTAP 8.2.1. Through hands-on testing, ESG Lab has confirmed that NetApp can bring a flexible and efficient service-oriented model to heterogeneous storage environments while reducing complexity and delivering a robust infrastructure foundation for shared, on-demand IT services.

 

Mike McNamara

DuPage Medical Group (DMG) is one of the largest and most successful independent multispecialty physician groups in the state of Illinois. With more than 425 physicians practicing in 50 medical and surgical specialties, DMG continually strives to innovate through a model of quality, efficiency, and access.

 

To provide the best care, DMG wanted to deploy the latest applications and upgrades, but ran into space, compute, and storage limitations in its data center. As part of the technology upgrade, IT wanted to put a storage infrastructure in place that would allow DMG to operate with near-zero downtime. For example, if a DMG patient goes to a nearby hospital in the middle of the night, the DMG IT team can’t tell the hospital doctor to wait for an image because a firmware upgrade is under way.

integratedoncology_1.jpg

DMG engaged Meridian IT Inc., a technology solutions provider, to help it evaluate storage solutions from three vendors, including NetApp. “We decided to go with NetApp for three reasons,” says Tony Beaird, IT Manager at DMG. “First of all, NetApp gives us the capacity we need in a compact solution that takes up the least amount of rack space. Second, the clustered Data ONTAP operating system allows us to have multiple high-availability targets instead of a typical two-node cluster. The third reason is performance: it far exceeds what we had previously, which has a positive impact on healthcare operations.”

 

“We now have four storage controller nodes at each data center,” says Tony Beaird, IT Manager. “If there’s a hardware failure or we need a patch, we can roll our environment between nodes without downtime using clustered Data ONTAP and avoid service interruption. That’s vital to enabling uninterrupted delivery of care.”

Data is replicated between data centers with NetApp SnapMirror, which transfers only new or modified data blocks to reduce bandwidth utilization and accelerate data transfers. In addition, NetApp improves storage efficiency in a virtual environment with deduplication.
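As an illustration of block-level incremental replication, the sketch below compares per-block checksums between a baseline copy and the current data to find the blocks an incremental transfer would send. This is a conceptual model only, not SnapMirror’s actual mechanism; the 4 KiB block size and function names are assumptions for the example.

```python
# Illustrative sketch (not SnapMirror's mechanism): find which blocks
# changed since the baseline, so only those need to cross the wire.

import hashlib

BLOCK_SIZE = 4096  # hypothetical 4 KiB block size

def block_digests(data):
    """Checksum each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(baseline, current):
    """Return indices of blocks that differ and must be replicated."""
    old, new = block_digests(baseline), block_digests(current)
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]

base = bytearray(BLOCK_SIZE * 8)       # 8-block volume, all zeros
curr = bytearray(base)
curr[BLOCK_SIZE * 3] = 0xFF            # modify one byte in block 3
print(changed_blocks(bytes(base), bytes(curr)))  # [3]
```

A one-byte change dirties a single block, so only 4 KiB of the 32 KiB volume would be transferred rather than the whole data set.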

 

In addition to improving patient care by avoiding system downtime, DMG was able to improve performance for Citrix XenApp and other workloads, conserve data center space with intelligent caching and expand capacity quickly to onboard new physician practices faster.

 

“Now, with NetApp clustered Data ONTAP, we can operate with near-zero downtime and minimize data access issues,” says Beaird.

For more details on this customer story, click here.

 

Mike McNamara

The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is home to the world’s largest laser, over 100 times more energetic than any previous laser system. Scientists at the Livermore, California, facility are working on improving national security and developing the science to provide renewable energy sources for the future. Because of NIF’s size and the criticality of its research, avoiding downtime is essential. Each time the laser is fired at a target, nonrelational object data produced by scientific instruments (about 50TB per year) is captured in files on network-attached storage, which must be accessible 24/7 for physicists to analyze. Algorithms then generate representations of the x-rays, plasmas, and other scientific phenomena, which are stored as relational data in Oracle databases.

 

30757_Hohlraum_cut_away_with_capsule.jpg

 

 

NIF retired most of its legacy storage and deployed NetApp FAS3250 and FAS3220 storage systems running the clustered Data ONTAP operating system to provide nondisruptive operations. An eight-node NetApp cluster stores the virtual machine operating system images, while a four-node NetApp cluster stores scientific data in Hierarchical Data Format (HDF) to be ingested into Oracle SecureFiles. Some 800 Linux virtual machines connect to the NetApp NFS cluster over a 10GbE network.

NetApp Professional Services helped NIF migrate its data and decommission the older systems, performing the clustered Data ONTAP Migration Service, which is designed to help customers transition with minimal disruption.

 

NetApp’s unified scale-out architecture allowed NIF to maintain constant availability for very large amounts of data. NIF anticipates eliminating up to 60 hours of planned downtime annually, maximizing facility availability. In addition, all of the NetApp storage systems can be managed as a single logical pool that can seamlessly scale to tens of petabytes and thousands of volumes. “Having a global namespace means we can move workloads around without losing the NFS file providers, a huge win for a 24/7 research facility like NIF,” says NIF CIO Tim Frazier. NetApp block-level deduplication helps NIF make the most of its storage space, reclaiming an average of 40% of capacity for data volumes and up to 80% for virtual machine images.

“Reclaiming up to five hours a month from planned downtime is worth a lot to us, our sponsors, and the country,” says Tim Frazier, CIO, National Ignition Facility and Photon Sciences Principal Associate Directorate, Lawrence Livermore National Laboratory.

 

For more details on this success story, click here.

 

Mike McNamara

We have written quite a bit recently about the tremendous growth of flash technology adoption across our storage system families, both all-flash and hybrid. Now that flash has been fully integrated into our core platform architectures (both hardware and software), we are spending more of our time and resources helping customers determine the best fit for their application and business requirements. For those workloads that need the lowest possible latency and highest IOPS, we recommend our EF550 all-flash array, but we’ve found that for most of our customers, a hybrid approach that combines spinning disk with a small amount of flash can meet their needs at a much lower price point. So, for this latter hybrid approach, we are often asked how best to determine the optimal amount of flash to deploy and ensure the best match for specific workloads.

One of the coolest new features of Data ONTAP 8.2.1 is the Automated Workload Analyzer (AWA), which automates Flash Pool sizing for our FAS systems. AWA monitors live workloads on a Data ONTAP system in real time and, from that analysis, reports the needed cache sizes and HDD offload percentages (hit rates for different cache sizes). The result is that you can estimate the performance gain of a hybrid array even before you deploy Flash Pool.
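To give a feel for the kind of result AWA reports, here is a toy Python model that projects read hit rates for several candidate cache sizes from a block-access trace, assuming the cache holds the most frequently read blocks. This is not AWA’s algorithm; the function and the workload are invented for illustration.

```python
# Toy model (not AWA's algorithm) of projected hit rate vs. cache size:
# assume a cache of N blocks holds the N most frequently read blocks.

from collections import Counter

def projected_hit_rates(read_trace, cache_sizes):
    """Map candidate cache sizes (in blocks) to projected read hit rates."""
    freq = Counter(read_trace)
    hot = sorted(freq.values(), reverse=True)  # hottest blocks first
    total = len(read_trace)
    return {size: sum(hot[:size]) / total for size in cache_sizes}

# Skewed workload: block 0 is read far more often than the rest.
trace = [0] * 60 + [1] * 20 + [2] * 10 + list(range(3, 13))
rates = projected_hit_rates(trace, cache_sizes=[1, 2, 3])
print(rates)  # {1: 0.6, 2: 0.8, 3: 0.9}
```

The skew is the whole point: a small cache captures most of the read traffic, which is why sizing flash to the workload matters more than simply buying more of it.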

AWA graphic2.jpg

AWA works at the Data ONTAP aggregate level, and while it is a tool optimized for Flash Pool, it can also provide benefits for hybrid arrays based on Flash Cache. In the Flash Cache case there are some caveats: because Flash Cache is a controller-wide cache, AWA needs to be run on all aggregates on the system. AWA also accounts for overwrite caching, which is a Flash Pool-only capability, so projected cache sizes will reflect this and will be higher when overwrite caching is involved.

One of our key goals with AWA has been to keep things simple. With this in mind, AWA does not require any detailed understanding of workload characteristics. It can be run live on a storage system with flash installed or on one with HDD storage only. Summary output provides the following:

  • Read/write mix
  • Percentage of reads and writes that are cacheable
  • Maximum cache size, including reserve space
  • Projected cache hit rates

The bottom line is that we are taking the guesswork out of deployment of hybrid arrays. The result is that you can deploy flash faster and with predictable results while meeting the requirements of your actual workload environment.

NetApp first released Data ONTAP Edge in mid-2012, offering a new way to address the needs of distributed enterprises and remote offices. These remote offices typically have little to no IT personnel onsite but require some local storage with a mechanism to back up or replicate remotely to another site, such as a central data center. Data ONTAP Edge is ideal for these environments, delivering a feature-rich storage solution in a small footprint with advanced backup and replication to NetApp FAS or V-Series storage.

F_NA_Remote_Office_A_Infographic_102813.png


Data ONTAP Edge has offered support for FAS and V-Series systems deployed with Data ONTAP operating in 7-Mode. With the release of version 8.2.1, Data ONTAP Edge adds the ability to act as a single-node cluster. This added functionality means that you can deploy Data ONTAP Edge in the context of a distributed enterprise built on a foundation of clustered Data ONTAP. SnapMirror and SnapVault for clustered Data ONTAP have been designed with increased efficiency to reduce latency and bandwidth requirements over distance. Support for clustered Data ONTAP is only available with the Data ONTAP Edge Premium license bundle.

 

edge table.png

For more information about Data ONTAP Edge, check out the product datasheet.


New Data Storage Calculator

Posted by mmike Mar 3, 2014

The NDO Savings Calculator shows how much you could save by achieving nondisruptive operations with NetApp clustered Data ONTAP, measuring the benefits of eliminating both planned and unplanned storage downtime.

You only need to answer six simple questions, which ask for a number of people, a percentage, and counts of systems and applications. Once the questions are answered, the tool generates estimated savings from reduced OpEx (e.g., savings from eliminating the need to plan downtime and the activities performed before and after downtime), reduced CapEx (e.g., savings from not buying additional hardware, due to consolidation of siloed workloads or reduction in overprovisioning), and reduced risk (e.g., savings from eliminating deferred maintenance activities such as deferred hardware/software upgrades, thus reducing the risk of an unplanned outage), as well as a breakdown of the savings by use case.
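The arithmetic behind such an estimate can be sketched as follows. The formulas and parameter names here are assumptions for illustration only; the calculator’s actual model is not published in this post.

```python
# Simplified sketch (assumed model, not the tool's actual formulas):
# total estimated savings as the sum of reduced OpEx, reduced CapEx,
# and reduced risk.

def estimated_savings(planned_events_per_year, hours_per_event,
                      admin_hourly_cost, deferred_hw_spend,
                      outage_risk_cost, risk_reduction_pct):
    opex = planned_events_per_year * hours_per_event * admin_hourly_cost
    capex = deferred_hw_spend          # hardware purchases avoided
    risk = outage_risk_cost * risk_reduction_pct
    return {"opex": opex, "capex": capex, "risk": risk,
            "total": opex + capex + risk}

result = estimated_savings(planned_events_per_year=12, hours_per_event=5,
                           admin_hourly_cost=100, deferred_hw_spend=50000,
                           outage_risk_cost=200000, risk_reduction_pct=0.1)
print(result)
# {'opex': 6000, 'capex': 50000, 'risk': 20000.0, 'total': 76000.0}
```

The real calculator asks for inputs of the same flavor (people, percentages, system and application counts) and reports the breakdown by use case.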

If you choose, an email report will be sent to you with the estimated savings and an explanation of how the results were derived, as well as additional links to learn more.

This tool is available on NetApp.com on the Data ONTAP pages and the FAS/V-Series pages. It has also been translated into the local languages of NetApp’s international sites.

The web address for the English version is http://www.ndocalc.com/.

Mike McNamara

 

Picture1ndo.png

I always love a good Star Wars reference!
The following article has been provided by our partner Amit Jain, Sr. Product Marketing Manager at Cisco.
“Difficult to see. Always in motion is the future.” – Jedi Master Yoda
Yoda pic.png

Planning for the future can be very challenging. To prepare effectively for the unknown, it is best to design added flexibility into any plans. This is especially true in IT. With the exponential growth in electronic data, storage requirements are rapidly evolving and will continue to change, so you will need solutions offering the flexibility and investment protection to adapt to unexpected events. One focus area can be your data center infrastructure. If you can wire your data center storage network once with a flexible network technology that can be reconfigured through software to run any storage protocol, you can stay ahead of the game.

Ethernet-based data center infrastructures offer a degree of future-proofing because the physical transport can support multiple protocols and data traffic types, including block and file storage traffic. Support for both SAN and NAS traffic allows you to configure and reconfigure your logical networks dynamically without having to rewire all or a portion of your network. For data centers with large investments in traditional Fibre Channel (FC) fabrics, Ethernet offers options for you as well. While FC SANs continue to power mission-critical data centers, the availability of Fibre Channel over Ethernet (FCoE) adds the ability to run the same FC technology constructs over a lossless Ethernet transport supported with data center bridging (DCB). So, you can now leverage your expertise in Fibre Channel and transition easily to a shared Ethernet infrastructure with FCoE, while at the same time gaining the flexibility to run iSCSI, NFS, SMB/CIFS and other LAN traffic on the same Ethernet link. This is one of the primary reasons why multi-hop and end-to-end FCoE is increasingly gaining traction.
Yoda image credit: www.yodaquotes.net
endtoendunified.png
Now, if for some reason you are not ready to make the transition from FC to Ethernet right away, Cisco 10 Gigabit Ethernet networks offer Unified Port (UP) functionality to bridge the divide. Unified Port functionality takes the guesswork out of network design by allowing device ports to be dynamically configured as FC or lossless Ethernet, giving you the peace of mind to transition when it is most convenient. On the server-side network, Unified Port capability is available on the Cisco UCS Fabric Interconnect and the Cisco Nexus switch platforms. With the introduction of the NetApp unified target adapter (UTA2), this functionality is now available with 16Gb FC/10Gb Ethernet on the storage target. With Cisco and NetApp together, you now have end-to-end multiprotocol flexibility for maximum investment protection!

For more information on the flexible designs and real-world benefits of end-to-end Ethernet-based Multi-protocol storage networks, please visit the following customer adoption blog series.

The following article has been provided by our partner Rick Balderama, Director, Partner Marketing Brocade Communications Systems, Inc.

 

NetApp’s next-generation unified target adapter (UTA2), providing 10GbE or 16Gbps Fibre Channel, combined with Brocade networks gives customers the right infrastructure for the right environments. But, shockingly, not everyone is comfortable with choice.

 

Model T w-caption.png

Henry Ford famously said that customers could have any color of Model T they wanted, as long as it was black. That works great for a manufacturer – but not for the user. NetApp and Brocade provide industry-leading flexibility with NetApp’s new UTA2 and Brocade’s network choices to meet today’s data center needs.

 

The 10GbE iSCSI and iSCSI DCB capabilities of NetApp’s UTA2, along with the benefits of NetApp’s clustered Data ONTAP, are enabled by Brocade Ethernet fabric switches. We are witnessing a significant transition from 1GbE networks to 10GbE networks, driven by an increase in server-to-server network traffic due to virtualization, as well as server-to-storage traffic. Brocade’s Ethernet fabrics provide the right network architecture to support these evolving needs.

 

Brocade’s VCS Ethernet fabric technology provides quicker and simpler deployment of iSCSI storage, as the network is managed as a single logical switch. It’s designed to support the increase in server-to-server traffic by compressing two network layers into one. And it’s optimized for virtualized server infrastructures with features like VXLAN support, integrated with the VMware orchestration platform. Finally, VCS Ethernet fabrics also offer automatic prioritization of IP storage traffic.

 

The Gen 5 Fibre Channel (16 Gbps) capabilities of the NetApp UTA2 and Brocade’s Fibre Channel network solutions provide the right architecture to support Tier 1 storage workloads, including those supported by SSD, and highly virtualized environments needing extreme I/O performance.  Added to this, Brocade Gen 5 Fibre Channel features such as Fabric Vision technology enable a more efficient deployment of storage resources.  As an example, Brocade’s ClearLink technology provides improved visibility and diagnostics of the entire network – including the cabling – which enables a faster network deployment of the entire architecture. 

 

And to simplify support for its customers, NetApp provides AutoSupport (ASUP) on Brocade products using the Brocade Network Advisor software.

 

There are people who enjoy a black Model T.  But can we imagine the world without the red sports car?  NetApp and Brocade provide you the right infrastructure choices for your data center, today.

On February 19 we announced clustered Data ONTAP 8.2.1, the new FAS8000 series scale-out enterprise storage systems, and FlexArray virtualization software. The following is a summary of the new features in clustered Data ONTAP 8.2.1. Other blogs will drill down on certain features mentioned below.

 

ndoPicture1.jpg

 

Nondisruptive Operations (NDO) Enhancements

  • Nondisruptive shelf removal - provides continuous data access when upgrading or replacing storage shelves and the ability to dynamically assign, promote, and retire storage.
  • Storage refresh automation - with NetApp OnCommand Workflow Automation (WFA), automates the manual steps for upgrading controllers and shelves, accelerating storage technology refresh with fewer errors.
  • Support for Microsoft SQL Server over SMB 3.0 with continuously available shares.

 

Antivirus

  • Provides on-access virus scanning support for CIFS/SMB access
    • Integrated with vendor management applications like McAfee ePO
    • Supports multiple AV scanners for better HA and performance
    • AV status monitoring and troubleshooting for better management
    • In-memory cache to avoid repetitive scanning means improved performance
    • Broad multivendor support — McAfee, TrendMicro, Symantec (Q1 CY2014)

 

Automated Workload Analyzer (AWA)

  • Automates NetApp Flash Pool sizing for FAS/V-Series systems
    • Sizes the Flash Pool cache based on real-time characteristics
    • Estimates performance gain before deploying Flash Pool
  • Removes complexity with real-time automated learning
  • Reduces time to deploy Flash Pool

 

Data ONTAP Edge

  • Central management for backup and disaster recovery for remote office/branch office (ROBO) network
  • Can now operate as a single-node cluster and mirror and vault to a clustered environment

 

Improving Security and CIFS Management

  • LDAP over SSL enables secure authentication traffic
  • Separation between Active Directory authentication and the CIFS license
  • Multiple domain search for user mapping

 

NFS Qtree Exports

  • Enables customers to scale their NFS environments by using different export policies for qtrees
  • Eases transition from 7-Mode to clustered Data ONTAP

 

Other Features

  • OnCommand management software enhancements
    • System Manager 3.1, Unified Manager 6.1, Workflow Automation 2.2
  • Performance Manager 1.0
    • Manages storage performance issues and risks
    • Reduces mean time to resolution
    • Helps assure clustered Data ONTAP performance

 

  • Unified Target Adapter 2 (UTA2)
    • Field configurable 16Gb FC or 10GbE

 

Mike McNamara

The following article has been provided by our partner, Brian Yoshinaka, an industry marketing manager with Intel’s Communications and Storage Infrastructure Group.  @Connected_Brian

    

When two leaders like Intel and NetApp work together on storage networking, the industry should expect big things. Intel® Xeon® processor-based storage systems from NetApp, for example, are delivering new levels of performance for customers around the world who are trying to keep up with the ever-increasing amounts of data generated by their users and applications. Intel and NetApp have also collaborated on many engineering efforts to improve performance of storage protocols including iSCSI and NFS.

 

Twinville.png

This week’s announcement of the NetApp X1120A-R6 10GBASE-T adapter, which is based on the Intel® Ethernet Controller X540, is another significant development for Ethernet-based storage. Virtualization and converged data and storage networking have been key drivers of the migration to 10 Gigabit Ethernet (10GbE), and NetApp was an early adopter of the technology. Today, many applications are optimized for 10GbE. VMware vSphere, for example, allows vMotion (live migration) events to use up to eight gigabits of bandwidth and move up to eight virtual machines simultaneously. These actions rely on high-bandwidth connections to network storage systems.

 

10 Gigabit connectivity in these systems isn’t new, so why is the NetApp X1120A-R6 adapter special? For starters, it’s the first 10GBASE-T adapter supported by NetApp storage systems (including the FAS3200, FAS6200, and the new FAS8000 lines), and we believe 10GBASE-T will have huge appeal for data center managers looking to upgrade from Gigabit Ethernet to a higher-speed network.

 

There are a few key reasons for this:

  • 10GBASE-T allows IT to use their existing Category 6/6A twisted-pair copper cabling. And for new installations, this cabling is far more cost-effective than other options.
  • Distance flexibility: 10GBASE-T supports distances up to 100 meters and can be field-terminated, making it a great choice for short or long connections in the data center.
  • Backward compatibility: Support for Gigabit Ethernet (1000BASE-T) allows for easy, phased migrations to 10GbE.

     

The NetApp X1120A-R6 adapter gives data center operators a new option for cost-effective and flexible high-performance networking. For the first time, they’ll be able to use 10GBASE-T to connect from server to switch to storage system.

   

Intel and NetApp have worked together to drive the market transition to 10GbE unified networking for many years, and this announcement is another example of our commitment to bringing these technologies to our customers. 

 

If you’d like to learn more about the benefits of 10GBASE-T, here are a couple of great resources:

The following article has been provided by our partner, Keith Burnett, Director, OEM Business Development, QLogic Corporation.

 

NetApp, in partnership with QLogic, has launched the industry’s first storage platforms with ports that support heterogeneous protocol connectivity for both block- and file-based storage. These platforms provide end users with flexible options for deploying storage solutions by allowing field firmware upgrades to change between storage protocol personalities at the port level. Support for this functionality is standard with the new unified target adapter 2, or UTA2. The UTA2 provides NetApp FAS storage systems with protocol personality support for 10Gb Fibre Channel over Ethernet (FCoE), 10Gb Ethernet (10GbE), and 16Gb Gen 5 Fibre Channel connectivity.

OneWire_Hogan_2.jpg 
The new FAS8000 utilizes QLogic® technology to provide heterogeneous storage networking connectivity to support block, CIFS, or NFS file structures. The UTA2 offers an upgrade path to the same functionality for the NetApp FAS3200 and FAS6200 series storage systems. The new I/O capability is based on the latest QLogic FlexSuite™ technology and delivers superior bandwidth and I/O performance compared to previous solutions, providing NetApp customers with the ultimate in performance, flexibility, and efficiency. NetApp customers can now connect the FAS3200, FAS6200, or FAS8000 to host systems via Fibre Channel, FCoE, or Ethernet and allow future migration paths to alternative I/O options.

 

UTA2 is a next-generation interconnect for NetApp FAS and V-Series storage systems. The unique ASIC design, based on QLogic FlexSuite technology, provides both 10Gb converged Ethernet and Gen 5 Fibre Channel connectivity. The storage protocol personality can be changed at the port level via a firmware update. The update can be done in the field with no need to remove hardware from the storage system.  All that is required to change protocols for a FAS storage system with UTA2 is to update the firmware, change the optics and restart the controller.

 

The latest-generation ASIC design from QLogic deployed in the UTA2 provides NetApp customers with the versatility to handle any protocol, along with significant performance advantages over the previous UTA ASIC. The QLogic technology doubles Fibre Channel bandwidth compared to previous Fibre Channel versions, and Ethernet performance is increased by up to 90 percent compared to the first-generation ASIC used in the original UTA adapter. This means NetApp customers deploying the UTA2 will have faster access to the storage device, regardless of the protocol used, when compared to the previous offering.

 

For NetApp customers deploying MetroCluster software with FAS3200, FAS6200, or FAS8000 series arrays, the improved bandwidth of the UTA2 enhances this solution as well. With a faster I/O connection, more data can be transferred across the MetroCluster environment in a shorter time. This results in better overall application performance and faster response times.

 

Bottom line, if you are looking for faster responses to data from your databases and applications, more flexibility to your infrastructure deployments, and better efficiency in your storage system, you need not look any further than the FAS3200, FAS6200 or FAS8000 series storage systems with UTA2 from NetApp.  You can build the data center of tomorrow today by combining the technology from industry leaders like NetApp and QLogic.

The Fibre Channel Industry Association recently announced the development of Gen 6 Fibre Channel, representing the industry’s fastest networking protocol, enabling SANs of up to 128GFC.

 



At 32GFC, Gen 6 doubles the 3,200 megabytes-per-second (MBps) full-duplex data throughput of 16GFC, enabling full-duplex speeds of 6,400 MBps. Gen 6 also provides an option to quadruple 32GFC throughput to 128GFC, achieving full-duplex speeds of 25,600 MBps.
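The generation-to-generation math quoted above is simple doubling (and an optional quadrupling for 128GFC). A quick sketch of the arithmetic, using the per-direction MBps figures implied by the announcement:

```python
# Full-duplex throughput math for Fibre Channel generations.
# Figures are MBps per direction, as implied by the FCIA numbers above.
def full_duplex_mbps(one_way_mbps):
    """Full duplex carries traffic in both directions simultaneously."""
    return 2 * one_way_mbps

GFC16 = 1600          # 16GFC: 1,600 MBps each way -> 3,200 MBps full duplex
GFC32 = 2 * GFC16     # Gen 6 doubles 16GFC
GFC128 = 4 * GFC32    # optional variant quadruples 32GFC

print(full_duplex_mbps(GFC16))   # 3200
print(full_duplex_mbps(GFC32))   # 6400
print(full_duplex_mbps(GFC128))  # 25600
```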


In addition to faster speeds, key features of Gen 6 Fibre Channel include:

  • Forward Error Correction (FEC): Improves the reliability of links through the automatic detection and recovery from bit errors that occur in high speed networks.
  • Energy Efficiency: Lower energy consumption is achieved by allowing the Fibre Channel optical connectors to operate in a stand-by mode (or “nap”) multiple times each second.
  • Backward Compatibility: 128GFC and 32GFC Fibre Channel are fully backward compatible with 16GFC, 8GFC, and 4GFC networks.

 

Solutions are expected to be available in 2016. Do you think we will see 128GFC become a reality in the future? Will it be needed?

 

Mike McNamara

NetApp's Michael Goddard has posted useful directions on how to automate SnapProtect backups using OnCommand WFA (Workflow Automation). It's on the WFA/OnCommand Community site: https://communities.netapp.com/docs/DOC-30542

 

It includes a data model gathered from the SnapProtect SQL database for efficiency, and two example workflows with commands that will back up existing subclients or restore VMs in place.


Service-Based Backup Tiers

Posted by mmodi Feb 6, 2014

Refer to the community site below, where I have posted technical workflows that enable a service-based approach to provisioning data protection tiers:

https://communities.netapp.com/message/122634

 

  • Operational Recovery (using Snapshots)
  • Backup Copy (using SnapVault)
  • Disaster Recovery (using SnapMirror)

 

Notes:

  • The workflows assume that storage-based crash-consistent snapshots are sufficient.
  • All schedules are policy based and truly integrated within clustered Data ONTAP.
    • No external scheduler or element manager is required once provisioned.
    • Restore procedures are manual.
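As a rough illustration of how the three tiers map onto clustered Data ONTAP primitives (the vserver and volume names here are hypothetical, and exact syntax varies by ONTAP release), the provisioning steps behind such workflows look along these lines:

```
# Operational Recovery: local Snapshot copies via a snapshot policy
volume modify -vserver vs1 -volume vol1 -snapshot-policy default

# Backup Copy: a SnapVault (XDP) relationship to a secondary volume
snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_sv -type XDP
snapmirror initialize -destination-path vs2:vol1_sv

# Disaster Recovery: a SnapMirror (DP) relationship to a DR site
snapmirror create -source-path vs1:vol1 -destination-path vs_dr:vol1_dr -type DP
snapmirror initialize -destination-path vs_dr:vol1_dr
```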

Analyst firm Evaluator Group recently completed testing of storage networking connectivity between blade servers and solid-state storage, evaluating Fibre Channel (FC) versus Fibre Channel over Ethernet (FCoE), and published a report. The report was funded by Brocade, a long-time FC vendor. All testing occurred at Evaluator Group labs using a combination of Evaluator Group and Brocade equipment.

 

The testing focused primarily on network performance and its impact in solid-state storage environments. The goal was to understand the impact of storage connectivity on high-performance, enterprise applications as customers adopt solid-state storage, particularly in virtual server environments.

FCOE.png

The tested configuration showed the following interesting results which I’m sure surprised a lot of people:

  • FC provided lower response times as workloads surpassed 80% SAN utilization
    • FC response times were one-half to one-tenth of FCoE response times
  • FC provided higher performance with fewer connections than FCoE
    • Measured FC response times were lower, using 50% fewer connections than FCoE
    • Lower variation in FC results provided more predictable response times
  • FC used 20% to 30% less CPU than FCoE
  • A single 16Gbps FC connection outperformed two 10Gbps FCoE connections as measured by application latency.

 

The full Evaluator Group report is located here.

 

Anyone surprised by the results?

 

Mike McNamara
