I conducted a customer survey to better understand the dynamics behind eliminating planned storage downtime. The survey was administered online using SurveyMonkey, was listed and promoted in NetApp Communities, the SANbytes blog, Tech OnTap, Facebook, LinkedIn, and Twitter, and ran for 5 weeks. A total of 208 responses were collected.

 

The survey respondents were all IT professionals: approximately 44% storage or server administrators, 23% architects, and 19% application developers. Most respondents came from the following industries: financial services, technology, manufacturing, government, and healthcare. The majority of respondents have EMC and/or NetApp storage installed, and 70% had a good understanding of scale-out/clustered storage.

 

The following summarizes some of the results.

 

Who does upgrades, maintenance?

 

Mainupgrades.jpg

 

Top challenges addressed by clustering?

 

Top challenges.jpg

 

# of times systems taken offline?

#timessystems taken offlie.jpg

Time down for maintenance?

#timessystems taken offlie.jpg

Effect of planned downtime?

  1. Lost productivity
  2. Lost sales
  3. Poor customer satisfaction

 

Financial impact of planned downtime per instance?

 

financialimpact.jpg

Reasons for planned downtime?

  1. Firmware upgrade
  2. Controller upgrade
  3. Adding disk shelves
  4. OS upgrade

 

Top vendors for eliminating planned downtime?

 

topvendros.jpg

 

 

Mike McNamara

Virtustream leverages virtualization and cloud computing expertise as well as its own proprietary cloud management solution to deliver private, public, and hybrid cloud solutions. The company’s enterprise-class cloud computing platform, xStream, is available to customers as software, an appliance, or as a managed service.

 

Virtustream's challenge has been to create flexible, scalable private and public cloud environments that meet the diverse needs of enterprise customers. After comparing several storage solutions, the company selected the NetApp FAS6210 clustered storage solution for its xStream cloud platform, deploying NetApp systems in its primary and secondary data centers in California and Virginia.

 

The systems run clustered NetApp Data ONTAP 8.1, which Virtustream considers essential to achieving the scalability and responsiveness needed for the xStream cloud platform. "Teamed with clustered Data ONTAP 8.1, we can provide the highly flexible, scalable, and manageable storage platform needed to support our on-demand cloud model," says Simon Aspinall, Chief Vertical Markets, Strategy, Marketing at Virtustream.

xStream.jpg

Virtustream relies on the high performance of clustered Data ONTAP 8.1 to deliver its xStream and micro-VM capabilities, including reliable performance and consumption-based pricing. It also helps xStream customers comply with industry standards such as SSAE 16, ISO 27001, HIPAA, PCI 2.0, and SAS 70.

Clustered Data ONTAP 8.1 also provides added insurance by protecting against double disk failures. If a virtual machine or set of machines within a pool fails, recovery will take place on another node within the FAS6210 cluster. In a disaster scenario, clusters can be recovered at the remote site in 60 minutes via NetApp backup and recovery capabilities.

Each FAS6210 cluster stores approximately 500TB of data, features a combination of SAS, SATA, and SSD drives, and supports numerous connectivity protocols, enabling Virtustream to meet the diverse application and storage needs of its customers. The company leverages multiprotocol flexibility to move highly complex heterogeneous storage environments to the xStream cloud platform.

Based on NetApp storage, the xStream cloud platform is compatible with all leading hypervisors and compute hardware, working seamlessly with existing enterprise and service provider infrastructures. NetApp scalability offers Virtustream additional flexibility in meeting a wide range of customer needs. “Clustered NetApp Data ONTAP 8.1 allows us to scale up and down efficiently as IT and customer requirements change,” says Aspinall. “We can easily bring new enterprise customers into our multi-tenant cloud environment and respond dynamically to whatever demands arise as they use our xStream solutions and services.”

 

This blog is part of a series on clustered Data ONTAP.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering                               

12.    Data ONTAP #1 Storage Operating System 

13.    Do You Prefer Your IOPs Without Your Extra Latency

14.    Finding a Needle in 20 Million Haystacks – a Clustering Success Story

15.    Eliminating Planned Downtime

 

Mike McNamara


Eliminating Storage Downtime

Posted by mmike Dec 7, 2012

The cost of downtime is a major concern for organizations of all types and sizes. A disaster that causes an extended outage has the potential to put an entire company out of business. Because of this, storage professionals have designed their storage infrastructure to survive these rare events.

 

The downtime from storage maintenance and lifecycle management, however, is both predictable and preventable. It has been estimated that planned downtime accounts for nearly 90 percent of all outages, while unplanned downtime is responsible for only 10 percent. Because it is more common, planned downtime can also be far more disruptive to the typical organization, with a greater impact on both business applications and operations.

 

The cumulative downtime from maintaining multiple storage systems can span from hours to days or even weeks, depending on the size of the organization. Additionally, the timing and urgency for these types of updates can be unpredictable, even though the maintenance itself can usually be scheduled.

 

Compared to maintenance operations, lifecycle management changes are fairly predictable and provide storage professionals with more time for planning. On the other hand, these types of changes can be even more disruptive than those resulting from maintenance operations.

Fortunately, the causes of planned downtime for storage systems are easy to identify, and clustered Data ONTAP provides the ability to prevent these types of outages. As discussed in prior blogs, clustered Data ONTAP supports nondisruptive operations by design and can transparently migrate data and network connections anywhere within a scalable, highly available storage cluster.

 

 

NetApp-VMware-Virtualize.png

Note, this blog is part of a series on clustered Data ONTAP.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering                               

12.    Data ONTAP #1 Storage Operating System 

13.    Do You Prefer Your IOPs Without Your Extra Latency

14.    Finding a Needle in 20 Million Haystacks – a Clustering Success Story

 

Mike McNamara

On November 5, NetApp announced new models in its FAS3200 Series with the release of the FAS3220 and FAS3250. The new FAS storage systems help enterprises and midsized businesses that are consolidating infrastructure operations onto a shared storage platform improve performance and increase storage capacity. 

product-lp-fas3220-222x120.png

The FAS3220 delivers 80% higher performance, 100% more capacity (up to 480 disk drives), and three times the PCIe slots for greater flexibility compared to the FAS3210. The FAS3250 delivers 70% higher performance and 20% more capacity (up to 720 disk drives) than the FAS3240.

Like prior models of the FAS3000 family, these models support clustered Data ONTAP and help eliminate downtime, especially planned downtime.  These systems are ideal for unified scale-out NAS and SAN environments, and are flash-optimized for more choice and flexibility.

For more specifics on the announcement, click here.

 

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides an overview of clustered or scalable SAN.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering                               

12.    Data ONTAP #1 Storage Operating System 

13.    Do You Prefer Your IOPs Without Your Extra Latency

14.    Finding a Needle in 20 Million Haystacks – a Clustering Success Story

Storage clustering has often been associated with NAS and technical applications, and several years ago this was where NetApp first focused. However, with the release of Data ONTAP 8.1 in September 2011, block storage protocols (FC, FCoE, iSCSI) were added, and NetApp now supports a unified scale-out offering with both file and block protocol support. Both enterprise and technical applications are supported with clustered Data ONTAP.

Enterprise applications such as Microsoft Exchange, SQL Server, and SharePoint benefit from the nondisruptive operation capabilities of a clustered SAN (scalable SAN). Because Fibre Channel SANs are so prevalent in mission-critical environments, moving to a clustered SAN for a technology refresh, a new data center build-out, or a greenfield opportunity is the right and logical move for the reasons I have blogged about before: nondisruptive operations, storage efficiency, multitenancy, linear performance scaling, and more.

SANspickk.png

 

NetApp’s clustered or scalable SAN has unique benefits.

 

Capacity for scale

  • 12 PB+ of capacity
  • 192 I/O connections
  • 48TB+ FlashCache

Performance for consolidation

Efficiency for Shared Infrastructure

  • Flash Pool (SSD), deduplication, compression, thin provisioning, virtual clones, snapshots

Designed for Multi-Tenancy

  • Vservers for isolation
  • NPIV for virtualized fabrics
  • iSCSI, FC, FCoE for unified fabrics

3rd party storage support

  • Support for storage from EMC, HDS, IBM and other vendors with V-Series

Mike McNamara

We are trying to better understand the market requirements for eliminating planned downtime so we can better position and message clustered Data ONTAP. Your feedback is very important to us, and the survey should take only about 5 minutes of your time. Your answers will be completely anonymous, and by completing the survey you will be entered into a drawing to win one of many $25 Amazon.com gift cards.

Click https://www.surveymonkey.com/s/2376HZQ to complete the survey.

Thank you for taking the time to complete this survey.   For additional information on this topic, you can read my blog on nondisruptive operations.

 

Mike McNamara

speed2.jpg

 

 

Today there is support for 16GFC and 10GbE, and the next speed increases on the technology roadmap are 32GFC and 40Gb Ethernet/FCoE. Host and network interfaces are always the first to move to higher speeds, with storage systems transitioning last. The chart below compares the speeds.

 

| Speed    | Clocking (Gbps) | Encoding (data/sent) | Data Rate (MBps) |
|----------|-----------------|----------------------|------------------|
| 8GFC     | 8.500           | 8b/10b               | 1600             |
| 10GFCoE  | 10.3125         | 64b/66b              | 2400             |
| 16GFC    | 14.025          | 64b/66b              | 3200             |
| 32GFC    | 28.050          | 64b/66b              | 6400             |
| 40GFCoE  | 41.225          | 64b/66b              | 9600             |
| 100GFCoE | 103.125         | 64b/66b              | 24000            |
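The data rates above can be sanity-checked from the clock rate and encoding: multiply the clock rate by the encoding efficiency, convert bits to bytes, and double for bi-directional traffic. A minimal sketch (note that published FC data rates are nominal figures rounded down, so the raw arithmetic lands slightly higher, e.g. about 3400 MB/s for 16GFC versus the quoted 3200):

```python
def payload_MBps(clock_gbps, data_bits, sent_bits, full_duplex=True):
    """Usable payload rate in MB/s from line clocking and encoding overhead."""
    line_gbps = clock_gbps * data_bits / sent_bits  # strip 8b/10b or 64b/66b overhead
    one_way = line_gbps * 1000 / 8                  # Gbps -> MB/s, one direction
    return one_way * 2 if full_duplex else one_way

# 16GFC: 14.025 Gbaud with 64b/66b encoding
print(round(payload_MBps(14.025, 64, 66)))  # 3400 MB/s raw vs. the quoted 3200
```

The same formula shows why 10GFCoE outruns 8GFC despite similar clock rates: 64b/66b encoding wastes only ~3% of the line rate, whereas 8b/10b wastes 20%.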

 

32GFC is expected to be available in 2014, and it is expected that 40GFCoE and 100GFCoE, which are based on 2010 standards, will be used exclusively for inter-switch link cores for the near future, keeping 10GFCoE the predominant FCoE edge connection.

We all know that the volume and size of data are growing like crazy, but in many environments today's interface speeds are sufficient and not fully utilized. However, moving to a higher interface speed provides the following benefits:

  • Increased throughput (2-3x)
  • Fewer HBAs, NICs, and switch ports required to achieve similar performance
  • Lower cabling investment
  • Simplified manageability with reduced port counts
  • Lower power consumption due to fewer HBAs, NICs, and ports
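The port-reduction benefit is simple arithmetic: for a fixed aggregate bandwidth target, doubling per-port speed halves the ports, HBAs, and cables required. A quick sketch using a hypothetical 12,800 MB/s aggregate requirement (the target figure is illustrative, not from the post):

```python
import math

def ports_needed(target_MBps, per_port_MBps):
    """Ports required to meet an aggregate full-duplex bandwidth target."""
    return math.ceil(target_MBps / per_port_MBps)

# hypothetical aggregate requirement of 12,800 MB/s
print(ports_needed(12800, 1600))  # 8GFC at 1600 MB/s per port: 8 ports
print(ports_needed(12800, 3200))  # 16GFC at 3200 MB/s per port: 4 ports
```

Halving the port count cascades into the other bullets: fewer cables to buy, fewer ports to manage, and less power drawn.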

What are your thoughts? Are current interface speeds sufficient or do we need 32GFC and 40GbE now?

 

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides an overview of a recent customer success story.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering                                   

12.    Data ONTAP #1 Storage Operating System 

13.    Do You Prefer Your IOPs Without Your Extra Latency

 

 

LHCtunnel1.jpg

 

CERN, the European Organization for Nuclear Research, is one of the world's largest and most respected centers for scientific research. Its business is fundamental physics: finding out what the universe is made of and how it works. Physicists at CERN crash trillions of protons together in the Large Hadron Collider (LHC), the world's largest particle accelerator, in the hope that even one collision will produce a Higgs particle. The elusive Higgs boson is one of the last pieces of a puzzle that describes how all matter fits together, from our DNA to the billions of galaxies in our universe.

 

In 2007, after a public tender process, CERN selected NetApp technology for the LHC logging database built on an Oracle database with Real Application Clusters (RAC) technology.  Since that time, CERN has unified its entire Oracle infrastructure on NetApp and today stores 99% of all Oracle data on NetApp solutions.  CERN has not lost a single data block on NetApp. 

 

CERN recently deployed Data ONTAP operating in Cluster-Mode for more efficient data mobility. Eric Grancher, database services architect within CERN IT, said, "We welcome Data ONTAP Cluster-Mode that lets us move data—for load-balancing, for moving less-used or inactive data to lower-cost drives, or for technology updates—without having to stop the application." If CERN's databases don't run, the accelerator doesn't run, and physics doesn't function.

 

NetApp Data ONTAP clustering makes it possible to maintain peak application performance and storage efficiency by adding storage and moving data without disrupting ongoing operations.  In CERN’s environment, no application can be stopped, so the infrastructure must deliver continuous availability with nondisruptive upgrade and other administrative operations.

 

Through NetApp's agile data infrastructure and clustering, CERN is able to deliver nondisruptive operations, achieve its "keep-forever" data strategy, reach storage utilization rates of up to 75%, and cut its IT footprint in half.

 

The detailed case study is located here.

 

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides more analysis of a recent SPC-1 performance benchmark.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering                               

12.    Data ONTAP #1 Storage Operating System

 

I posted another blog here that discussed our recent NAS and SAN cluster benchmark performance. This analysis delves into the SPC-1 results in more detail.

We analyzed SPC-1 results to compare multiple disk-based systems (highly reliable, general-purpose systems that can provide high performance, low latency, and high capacity) using a response-time threshold of approximately 3 milliseconds. We chose this threshold because, for the great majority of database workloads, very low I/O latencies are vastly preferred to higher latencies. The analysis shows that the NetApp SPC-1 results are among the best for enterprise disk-based systems, given the low latency delivered for the IOPS provided. See the Figure 1 table below for details.

chart for blog.jpg

Figure 1
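The filtering step behind this kind of analysis is straightforward: keep only the results at or under the latency threshold, then rank the survivors by IOPS. A sketch with made-up, illustrative numbers (these are not actual SPC-1 submissions):

```python
# Hypothetical SPC-1-style results: (system, SPC-1 IOPS, response time in ms)
results = [
    ("System A", 250_000, 2.7),
    ("System B", 500_000, 8.5),
    ("System C", 120_000, 1.9),
]

THRESHOLD_MS = 3.0  # the ~3 ms cut-off used in the analysis

# keep low-latency systems only, ranked by IOPS
low_latency = sorted(
    (r for r in results if r[2] <= THRESHOLD_MS),
    key=lambda r: r[1],
    reverse=True,
)
for name, iops, rt in low_latency:
    print(f"{name}: {iops:,} IOPS at {rt} ms")
```

Note how System B drops out despite posting the highest raw IOPS: the point of the threshold is that IOPS delivered at high latency are of little use to latency-sensitive database workloads.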

 

A similar analysis of SPECsfs results showed that, unlike scale-out solutions optimized specifically for sequential throughput, a NetApp storage cluster running the NFS protocol provides predictable latency behavior that makes it suitable for a wide range of workloads, including databases and highly virtualized environments.

 

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides an update on product status.

 

Data ONTAP 8.1 blog series:

1.       Why Scale-Out & What’s New.

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

11.    PeakColo Guarantees 100% SLA for the Cloud with Data ONTAP 8 Clustering

 

As of June 2012, data shows that more customers trust Data ONTAP than any other storage operating system. NetApp strengthened its reputation as an innovation leader with the latest release of Data ONTAP, the #1* storage operating system.

 

 

Picture1.jpg

 

Why is Data ONTAP the #1* storage operating system? Because of NetApp capabilities and features such as storage efficiency, Virtual Storage Tier, integrated data protection, unified architecture, and secure multi-tenancy. In addition, Data ONTAP 8.1.1 reached general availability in late August.

 

*Source: NetApp internal estimates of Revenue and Storage Capacity in the Worldwide Open-Networked Storage Market, as of June 2012. VNX, VNXe, and Celerra NS can run any Flare or Dart operating system. The contribution of these products to the OS share has been estimated based on the proportion of NAS and SAN installations in these products (NAS – Dart; SAN – Flare).

 

Mike McNamara

MikeWong.jpg

This is a summary of an interview between SafeNet and NetApp’s Mike Wong, a technical marketing engineer and acting product manager responsible for NetApp storage security solutions.  The impetus for this interview was SafeNet’s recent announcement of StorageSecure in partnership with NetApp.

Q: How did NetApp get into the encryption game?

A: NetApp customers store their most valuable data on our equipment and we’ve always believed in providing the strongest security technologies available. In 2005, NetApp acquired Decru, whose flagship products were storage encryption solutions, and I actually came on board in that acquisition. As part of NetApp, we’ve developed innovative ways to protect data at rest, and we’ve also looked to foster partnerships with industry leaders who complement our solution delivery. One of these partners is SafeNet. SafeNet has demonstrated leadership in the encryption and key management space, and has been able to help take our encryption product line to the next level.

Q: So what solutions are available today from SafeNet and NetApp?

A: We currently have KeySecure and StorageSecure. KeySecure is a key manager and the successor to the NetApp LKM appliance. StorageSecure is an Ethernet-based encryption solution that is the successor to DataFort. The SafeNet StorageSecure appliance brings a number of improvements to the original platform. For example, where DataFort was available only in 1 GbE, StorageSecure has both 1 GbE and 10 GbE models to handle the increasing data storage needs of our customers today. KeySecure can store and manage keys not just for StorageSecure, but for a plethora of other encryption products that support the Key Management Interoperability Protocol (KMIP).

The way I like to explain the product interaction between SafeNet and NetApp is that NetApp is the storage at the end of the data path, the customer is the host, and StorageSecure sits in between to encrypt information at the storage level, and then decrypt data at the host level. NetApp is the storage vendor and SafeNet offers products to help our customers protect that storage.

Q: What are some of the common use cases where organizations would need encrypted storage?

A: One of the biggest use cases for encrypted storage is virtualization, which is an area of expertise for NetApp. Many service providers want the ability to compartmentalize their storage systems to offer multi-tenancy. In the old paradigm, if a storage provider had customers A, B and C – who may all be competitors – they would need three separate systems to ensure separation of data. Now, providers are able to combine systems and compartmentalize with virtual storage running a single system. From the customer’s point of view, it looks like they have a separate, dedicated storage system, but really it’s just a virtual environment running on one central machine.

The financial sector has always been keen on encryption. Banks, for example, have been interested for a long time and are using encrypted storage. There’s also been a resurgence in the healthcare industry. This past year, numerous healthcare organizations have been asking for encrypted storage for HIPAA and HITECH compliance.

Many service providers tell me that their customers in other industries are coming to them and asking for encryption options, primarily for regulatory compliance such as PCI and California SB 1386.

Q: What’s unique about StorageSecure and how does that help NetApp customers?

A: The unique thing about StorageSecure is that its encryption is so granular. Storage admins are able to enforce policies, compartmentalize, and separate data in ways that no one else can today. StorageSecure provides granular encryption for data at rest, encrypting at the CIFS and NFS level. Storage providers can choose to encrypt at the vFiler level, so the entire volume is encrypted, or just shares within the virtual construct. NetApp customers such as ISPs can now offer their clients different tiers of storage, depending on whether they want just compartmentalized storage or compartmentalized and encrypted storage.

Q: Where can people go to find out more about StorageSecure and NetApp storage solutions?

A: Both NetApp and SafeNet will be at VMworld next week, so attendees can stop by either of our booths for information. NetApp is at booth 1402 and SafeNet is at booth 1901. I’ll actually be presenting in the SafeNet booth at 3pm on Monday and Wednesday about securing storage in virtual environments. We also have several digital resources available on the web. My sessions will be posted to NetAppTV and NetApp.com is always a fantastic resource. For information on StorageSecure, visit http://www.netapp.com/us/products/storage-security-systems/storagesecure-encryption/, and for information on KeySecure visit http://www.netapp.com/us/products/storage-security-systems/key-management/keysecure/.

Mike McNamara


Why 16Gb Fibre Channel?

Posted by mmike Aug 17, 2012

The decision to upgrade an existing SAN (storage area network) infrastructure involves a variety of factors, some related to business processes and others related to technology choices. Customers need to evaluate whether everything is working adequately and whether the SAN requires changes to any of its components, including servers, switches, storage systems and applications. If the SAN infrastructure is handling current workloads efficiently and is expected to adequately support anticipated growth, then an infrastructure upgrade is not needed in the near term. However, if business and data are growing at a rapid pace, or if network and application performance is becoming an issue, a change is probably warranted before performance degradation occurs.

 

Data center and technology drivers such as multicore processors, high-density servers, increased performance in server I/O, SSD storage arrays and server virtualization are driving the need for increased performance and bandwidth. 16GFC delivers the following benefits that support these technology trends.          

 

      • Doubled Throughput, Higher IOPS - 16GFC delivers 3200 MByte/sec bi-directional throughput (double 8GFC's 1600 MByte/sec) and more than one million IOPS, adequately supporting deployments of densely virtualized servers, increased scalability, and the performance of multi-core processors and SSD-based storage infrastructure.

Jetpackimage.jpg

 

 

 

 

      • Backward Compatibility - 16GFC is backward compatible with 8G/4G/2GFC and will auto-negotiate down to the fastest speed supported by both ports. This allows 16GFC devices and switches to be seamlessly integrated into expansion segments of existing FC networks without a forklift upgrade. In addition, existing 8G/4G/2GFC OM1-OM4 optical cabling infrastructure can carry 16GFC traffic.
      • Reduced Capital Expenses (CAPEX) - Higher speed 16GFC reduces the number of expensive HBAs and switch ports required to achieve similar performance, resulting in lower front-end CAPEX. Fewer links also mean less structured cabling investment, which runs to hundreds of dollars per port.
      • Reduced Operating Expenses (OPEX) - Installing two 8GFC HBAs to double bandwidth could result in a 100 percent increase in power consumption, while installing a single 16GFC HBA could deliver the same result for approximately a 40 percent increase in power. Reduced port counts also simplify manageability.
      • Investment Protection Roadmap - Mission-critical applications have long relied on a dedicated FC network to deliver the required performance, resiliency, and serviceability. Deploying 16GFC not only preserves this architecture but also future-proofs it: standardization of 32GFC is in process, ensuring the continuity of FC technology.
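The OPEX bullet's power arithmetic can be made concrete. Assuming a hypothetical 10 W draw for a single 8GFC HBA (the wattage is illustrative; only the percentage increases come from the estimate above):

```python
def power_increase_pct(new_w, base_w):
    """Percentage increase in power draw over the baseline."""
    return (new_w / base_w - 1) * 100

base_w = 10.0                  # hypothetical draw of one 8GFC HBA
dual_8gfc_w = 2 * base_w       # second 8GFC HBA added to double bandwidth
single_16gfc_w = 1.4 * base_w  # one 16GFC HBA (per the ~40% estimate above)

print(power_increase_pct(dual_8gfc_w, base_w))     # 100.0
print(power_increase_pct(single_16gfc_w, base_w))  # ~40.0
```

Either path yields the same 3200 MB/s of bandwidth, so the 16GFC route delivers it for roughly 60% less incremental power, on top of the halved port count.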

 

Mike McNamara

Every organization needs to ensure that intellectual property and confidential information are protected from unauthorized disclosure.  Explosive growth in data, accelerating trends in virtualization and multi-tenancy, increasingly sophisticated information security breaches, and more stringent government regulations are creating new challenges that must be met.

 

  • 92% of US data security breaches were discovered by a third party
  • 96% of victims subject to PCI DSS were not compliant
  • 31% increase in hacks and 20% increase in use of malware from 2010 to 2011
  • 58% of data was stolen by hacktivists
  • 97% of attacks were avoidable

 

NetApp has partnered with industry leaders to deliver a portfolio of solutions that help support a multipronged approach to data security:

  • NetApp Storage Encryption (NSE) provides transparent set-and-forget protection of your information. Both deduplication and data compression are supported.
  • Brocade Encryption Switches and blades (BES) encrypt data at rest on disk and tape in Fibre Channel environments.
  • The new SafeNet StorageSecure inline encryption appliances support granular encryption at the CIFS and NFS share, export, or volume level with the ability to compartmentalize shared storage into cryptographic silos.
  • The new SafeNet KeySecure key management appliance manages your encryption keys from a single platform, strengthening and simplifying your long-term key management needs.
  • NetApp® MultiStore® software creates multiple virtual storage systems within a single physical storage system. With MultiStore software, you can enable multiple users to share the same storage resource without compromising privacy and security.
  • Integrated antivirus scanning is a must-have feature to protect corporate data from malware attack. NetApp partners with leading vendors to protect corporate data from computer viruses.

Graphic for security.jpg

NetApp storage security systems deliver nondisruptive, comprehensive integrity and confidentiality of your data, protecting sensitive information across the enterprise.

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides an overview of a recent customer success story.

 

Data ONTAP 8.1 blog series:

 

1.       Why Scale-Out & What’s New.

2.       Unified Scale-Out – What’s Supported & Why It’s Unique

3.       Nondisruptive Operations – What Does it Mean?

4.       Enterprise Ready Data ONTAP 8 Cluster-Mode

5.       What’s Included In A Cluster Configuration

6.       Data ONTAP 8 Clustering – Multitenancy Designed In

7.       ESG Lab Validation of Data ONTAP 8.1 Unified Clustering

8.       Storage Monitoring Made Easy:  A Video Demonstration of Data ONTAP 8 Clustering

9.       Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

10.    Announcing Data ONTAP 8.1.1 operating in Cluster-Mode

 

Overview

 

Founded in 2006, PeakColo is a cloud services provider headquartered in Denver, Colorado. The privately owned company focuses on infrastructure-as-a-service (IaaS) cloud computing, delivering public, private, hybrid, disaster recovery, and custom solutions.  To further its competitive edge, the company recently launched its PeakColo WhiteCloud Services, a unique IaaS solution that allows value-added resellers, systems integrators, and managed service providers to rebrand the solution for their own cloud services.

 

The Challenge

 

To deliver its competitive WhiteCloud Services, PeakColo needed to enhance its data centers with a flexible, cost-effective storage solution and robust storage management capabilities. The company needed highly flexible, scalable storage clusters that would allow it to quickly provision storage for new service provider customers. Secure multitenancy was imperative to quickly and effectively partition storage into private, flexible storage environments.

 

PeakColo leveraged strategic technology partnerships with VMware, Brocade, and Dell to build a state-of-the-art cloud infrastructure to support its WhiteCloud Services. It selected NetApp to serve as the storage foundation for the enterprise-class IaaS solution.

 

The Solution

 

PeakColo, a participant in the NetApp Partner Program and a NetApp Gold Level Service Provider, deployed 10 NetApp FAS3270 and FAS3240 storage systems across its five SSAE 16 Type II/SOC 1 data centers. NetApp Data ONTAP 8 operating in Cluster-Mode provides the massive throughput and scalability needed to rapidly grow its WhiteCloud Services while delivering superior cloud services to existing customers. NetApp Vserver technology helps PeakColo achieve maximum flexibility in its shared storage infrastructure with secure multitenancy.

 

“PeakColo is in the business of enabling our service provider customers to be successful by helping them achieve new levels of cost efficiency and service flexibility,” says Luke Norris, chief executive officer at PeakColo. “NetApp is a business enabler for PeakColo. NetApp Data ONTAP 8 Cluster-Mode helps us optimize our NetApp storage environment, securely host multiple customers on individual storage systems, and easily scale to manage 300% annual growth.”

 

Data ONTAP 8 operating in Cluster-Mode is configured in pairs for high availability, which helps PeakColo meet its stringent 100% uptime SLAs. The Data ONTAP operating system equips PeakColo with the flexibility to add up to dozens of FAS controllers and storage as needed to scale to multiple gigabytes per second of throughput and petabytes of capacity. The company balances performance and cost with NetApp multiprotocol support, which enables PeakColo to leverage both NFS and iSCSI investments. NetApp deduplication enables PeakColo to reduce storage requirements by up to 73%, decreasing hardware, power, and cooling costs.
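To make the storage efficiency claim concrete, here is a minimal sketch of how a deduplication savings ratio translates into physical capacity requirements. The 73% figure comes from the text above; the 2PB logical-capacity value is a hypothetical example, not a PeakColo measurement.

```python
# Illustrative sketch: converting a deduplication savings ratio
# (73%, per the text) into the physical capacity actually required.
# The logical-capacity figure below is hypothetical.

def raw_capacity_needed(logical_tb: float, dedup_savings: float) -> float:
    """Physical capacity required after deduplication removes
    a `dedup_savings` fraction of the logical data."""
    return logical_tb * (1.0 - dedup_savings)

logical_tb = 2000.0  # 2PB of logical data, expressed in TB (hypothetical)
physical_tb = raw_capacity_needed(logical_tb, 0.73)
print(f"{physical_tb:.0f} TB physical capacity needed")  # 540 TB
```

The same physical reduction flows straight through to hardware, power, and cooling costs, which is why the savings percentage matters so much at service-provider scale.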

 

Business Benefits

 

Nondisruptive operations

 

nonstop.jpg

“NetApp Data ONTAP 8 Clustering is key to providing our customers the flexibility to leverage the same storage environment to meet the diverse and changing needs of their customers. Simply put, NetApp Data ONTAP 8 Clustering is a game changer.” – Luke Norris, chief executive officer, PeakColo

 

NetApp helps PeakColo meet the changing service requirements of its customers with Data ONTAP 8 Cluster-Mode, allowing PeakColo to deliver mixed workloads in a balanced, nondisruptive way. Whether customers demand high random reads, high random writes, nonrandom reads, or other types of workloads, IT managers can easily move data between controllers and storage tiers without disrupting users and applications. They can also seamlessly perform maintenance and upgrade activities by moving data among controllers, with no disruption to systems and business operations. In addition, PeakColo can easily scale the storage as a single logical clustered unit.

 

“The flexibility of NetApp Data ONTAP 8 Clustering is invaluable to us as a service provider, allowing us to be much more nimble than our service provider competitors,” notes Norris. “I don’t know any storage operating system technology that comes remotely close to it.”

 

Easy storage provisioning

 

Data ONTAP 8 Clustering technology enables PeakColo to build individual virtual SANs for its customers and connect the SANs to each customer’s private VMware cloud environment. Quickly provisioning virtual SANs is vital to customer satisfaction and revenue generation.

 

“NetApp Data ONTAP 8 Cluster-Mode enables us to provision storage in less than two hours and establish the entire virtual SAN for a new customer in less than four hours,” says Norris. “From there, customers can provision the resources needed to meet the requirements of their own customers. Our customers log into the Data ONTAP 8 Web interface and view and manage the storage like their own private SAN. We can manage the entire NetApp cluster from a central portal. Less than two full-time storage administrators manage 2PB of NetApp data. NetApp helps us reduce IT management costs, and we can pass along those savings to our customers.”

 

Scale-out as needed

 

Besides streamlining storage management, Data ONTAP 8 Clustering allows PeakColo to scale out at the pace needed to be competitive in the cloud market. NetApp performance enabled by the Virtual Storage Tier, 10GbE interfaces, and other capabilities helps PeakColo achieve 100% availability and meet rigorous SLAs with its customers. NetApp also helps the company close deals when service providers experience the superior I/O throughput delivered by NetApp during proof-of-concept (POC) trials.

 

The next blog in this series will focus on a recent white paper by an industry analyst.

 

 

Mike McNamara

This blog is part of a series on Data ONTAP 8 clustering, and provides an overview of the new features in Data ONTAP 8.1.1. 

 

Data ONTAP 8.1 blog series:

1. Why Scale-Out & What's New
2. Unified Scale-Out – What's Supported & Why It's Unique
3. Nondisruptive Operations – What Does It Mean?
4. Enterprise Ready Data ONTAP 8 Cluster-Mode
5. What's Included in a Cluster Configuration
6. Data ONTAP 8 Clustering – Multitenancy Designed In
7. ESG Lab Validation of Data ONTAP 8.1 Unified Clustering
8. Storage Monitoring Made Easy: A Video Demonstration of Data ONTAP 8 Clustering
9. Data ONTAP Unified (SAN and NAS) Cluster Benchmark Performance

 

NetApp announced a new release of Data ONTAP operating in Cluster-Mode, version 8.1.1, which is now shipping. The release includes the following enhancements.

 

Data ONTAP 8.1.1

  • Flash Pool
  • Infinite Volume
  • Scalable SAN (6 nodes)
  • Performance improvements
  • Improved manageability and supportability
  • FAS2220, DS4486, and 4-port 8Gb FC adapter support

 

As a reminder, these are the features we started shipping in September 2011.

 

Data ONTAP 8.1

  • Improved nondisruptive operations
  • Scalable SAN (4 nodes)
  • On-box antivirus
  • NFSv4, NFSv4.1, and pNFS
  • Storage efficiency improvements
  • Asynchronous mirroring
  • vStorage APIs for Array Integration (VAAI) for NFS
  • Server Message Block (SMB) 2.1
  • Improved manageability and supportability

 

I will focus on three key new enhancements: Infinite Volume, Flash Pool, and 6-node SAN support.

 

Infinite Volume

 

With Data ONTAP 8.1.1, Infinite Volume support for NFS enterprise content repositories provides a single NFSv3 mount point that can scale up to 20 petabytes or 2 billion files, all contained in a single Vserver. Infinite Volume currently supports up to five HA pairs (10 nodes) in Data ONTAP 8.1.1.

 

An Infinite Volume is a compound volume in which data is distributed across multiple constituent volumes (which we refer to as constituents) spread across all the nodes of the cluster. The namespace hierarchy is stored in a single active namespace constituent volume for the entire content repository; NFS clients see the content of this volume. All metadata lookups (directory scans, file opens, get attributes, and so on) are performed on the namespace constituent volume. Subsequent reads and writes go directly through the node that owns the data constituent containing the file being accessed. Data is automatically load balanced across the Infinite Volume's data constituents at ingest.
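The split between one namespace constituent and many data constituents can be pictured with a toy model. This is an illustrative sketch only, not NetApp's implementation: the class, the round-robin balancer, and the dictionary-based "constituents" are all simplifications invented for this example.

```python
# Toy model (NOT Data ONTAP internals) of the Infinite Volume layout:
# one namespace constituent maps each path to the data constituent
# that owns its contents, and new files are load-balanced across
# data constituents at ingest.

class InfiniteVolumeModel:
    def __init__(self, num_data_constituents: int):
        self.namespace = {}  # path -> index of owning data constituent
        self.data = [dict() for _ in range(num_data_constituents)]
        self._next = 0       # simple round-robin placement at ingest

    def write(self, path: str, contents: bytes) -> None:
        idx = self._next % len(self.data)  # choose a constituent at ingest
        self._next += 1
        self.namespace[path] = idx         # record location in the namespace
        self.data[idx][path] = contents    # store contents on that constituent

    def read(self, path: str) -> bytes:
        idx = self.namespace[path]         # one metadata lookup...
        return self.data[idx][path]        # ...then direct to the owning node

vol = InfiniteVolumeModel(num_data_constituents=4)
for i in range(8):
    vol.write(f"/repo/file{i}", b"payload")
print([len(d) for d in vol.data])  # files spread evenly: [2, 2, 2, 2]
```

The point of the sketch is the access pattern: every open touches the namespace map once, after which reads and writes bypass it entirely, which is why a single namespace constituent can front a multi-petabyte repository.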

 

Infinite Volume provides simplified management using OnCommand System Manager 2.1. Snapshots for data protection and replication purposes are performed at the Infinite Volume level and are coordinated across all constituents in the repository to provide data consistency.  Infinite Volume provides all the storage resiliency and high availability features of a Data ONTAP cluster including nondisruptive operations and advanced storage efficiency features.

Infinite volume.jpg

 

Figure 1) NetApp Infinite Volume.

 

Flash Pool

 

Data ONTAP 8.1.1 also adds new Flash Pool technology, further boosting the scalability and performance of both 7-Mode and Cluster-Mode configurations. Flash Pool is supported on all NetApp storage systems, including entry-level systems. No other vendor offers this type of functionality for the entry-level storage market.

 

Flash Pool is a persistent, aggregate-level, read and write cache. It lets you add RAID groups consisting of SSDs to an aggregate containing HDDs, with the goal of delivering performance comparable to that of an SSD-only aggregate while keeping cost closer to that of an HDD-only aggregate. A relatively small number of SSDs in an aggregate is used as a persistent cache to accelerate both random reads and writes.

 

Flash Pool is part of the NetApp Virtual Storage Tier (VST) and operates in a manner that is in many respects similar to NetApp Flash Cache. Flash Pool shares the same 4KB granularity, works in real time, is fully automatic, and works in conjunction with NetApp storage efficiency and data protection technologies. In addition, Flash Pool adds the ability to cache randomly written data and provides consistent performance during failover and takeover operations, because the aggregate-level data cache remains accessible and available during these events. Flash Pool and Flash Cache can coexist on the same system, and existing aggregates can be converted nondisruptively to use Flash Pool.
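The core idea of a small SSD tier caching 4KB blocks in front of HDDs can be sketched as follows. This is a deliberately simplified illustration, not Data ONTAP's caching or destaging logic: the class, the LRU eviction policy, and the write-through behavior are assumptions made for clarity.

```python
# Toy sketch (illustrative only, NOT Data ONTAP internals) of the
# Flash Pool idea: a small SSD cache fronting HDDs, holding 4KB
# blocks for both reads and writes, with least-recently-used eviction.

from collections import OrderedDict

BLOCK_SIZE = 4096  # Flash Pool caches at 4KB granularity

class HybridAggregate:
    def __init__(self, ssd_blocks: int):
        self.ssd = OrderedDict()  # block number -> data, in LRU order
        self.hdd = {}             # backing HDD store
        self.capacity = ssd_blocks

    def write(self, block_no: int, data: bytes) -> None:
        self.hdd[block_no] = data    # persist on HDD (write-through here)
        self._cache(block_no, data)  # and cache on SSD for later reads

    def read(self, block_no: int) -> bytes:
        if block_no in self.ssd:            # SSD hit: fast path
            self.ssd.move_to_end(block_no)  # refresh recency
            return self.ssd[block_no]
        data = self.hdd[block_no]           # miss: fetch from HDD...
        self._cache(block_no, data)         # ...and promote to SSD
        return data

    def _cache(self, block_no: int, data: bytes) -> None:
        self.ssd[block_no] = data
        self.ssd.move_to_end(block_no)
        while len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)    # evict least recently used

agg = HybridAggregate(ssd_blocks=2)
for blk, data in enumerate([b"a", b"b", b"c"]):
    agg.write(blk, data)
print(sorted(agg.ssd))  # [1, 2] -- block 0 was evicted to make room
```

Because every block also lives on HDD in this model, losing the SSD cache never loses data, which loosely mirrors why the real aggregate-level cache stays available across failover and takeover events.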

 

Flash Pool.jpg

Figure 2) NetApp Flash Pool.

 

6-Node SAN Support

 

Details on NetApp unified scale-out have already been explained in this blog series, and our recent 6-node SAN SPC-1 benchmark is summarized in an earlier post. NetApp is the only storage vendor to support a unified architecture at scale, with support for iSCSI, Fibre Channel, FCoE, NFS v3, v4, and v4.1 (including pNFS), and SMB 1 and 2. Data replication, management software, and storage efficiency features are seamlessly supported across all protocols in Cluster-Mode. Additional information is also found in this article: http://searchstorage.techtarget.com/Use-a-Scalable-SAN-to-Keep-Up-With-Rapid-Data-Growth

 

 

The next blog in this series will focus on a customer success story.

 

 

Mike McNamara

 

 

 
