
This week I find myself in Dublin, ready for NetApp’s Insight EMEA (Europe, Middle East & Africa) event later in the week. It’s our flagship event, where the technical community supporting NetApp customers gets together. As I wrote last year, a technical event for technical people! More on this year’s event later in the week…

 

One of the people who will be on stage more than most this week is Manfred Buchmann, Senior Director Systems Engineering EMEA for NetApp. Based in south-west Germany, he has been with NetApp since August 2000, so he knows a thing or two about storage. Following last week’s launch of NetApp’s new FAS3200 mid-range storage platform (part of our agile data infrastructure strategy), I spent some time with Manfred discussing the most commonly misunderstood use of NetApp systems: Storage Area Networking (SAN). I hope you find the discussion useful.

 

JR: For those not familiar with the storage market, what is SAN storage?

MB: SAN stands for Storage Area Network: one or more shared storage units that provide LUNs (a kind of virtual disk) to servers over the Fibre Channel, iSCSI, or Fibre Channel over Ethernet (FCoE) protocols.
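
To make the ‘virtual disk’ idea concrete, here’s a minimal sketch of how a Linux server might attach an iSCSI LUN using the standard open-iscsi tools, driven from Python. The portal address and target name are hypothetical placeholders; only the iscsiadm commands themselves are real.

```python
import subprocess

# Hypothetical values for illustration only -- substitute your
# storage system's iSCSI portal address and target IQN.
PORTAL = "192.0.2.10"                       # documentation-range IP
TARGET = "iqn.1992-08.com.example:target1"  # placeholder IQN

def run(cmd):
    """Echo a command, run it, and raise on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Ask the portal which iSCSI targets it offers.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the target; the LUN then appears to the OS
#    as an ordinary block device (e.g. /dev/sdX).
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```

Once logged in, the server treats the LUN exactly like a local disk, which is why SAN storage is often described as ‘block’ storage.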

 

JR: What applications do companies usually run on a SAN system?

MB: Traditionally, everything from virtual infrastructure, messaging, and databases up to ERP (Enterprise Resource Planning) applications. More recently we also see Cloud and Big Data applications.

 

JR: Who sells SAN systems?

MB: Nearly every IT vendor (including NetApp for the last 10 years) has SAN storage in its portfolio – typically a single box with multiple disks, RAID protection, and some data-mirroring capabilities.

 

JR: SAN devices typically don’t get changed or replaced very often – why is that?

MB: The complex design and management of traditional SAN systems, as well as the complicated networking and connectivity involved, make them very hard to change. Typically, only a few specialists in an organisation know how to manage their SAN dynamically. The lack of virtualisation at the storage level (of the kind VMware provides for servers, for example) doesn't allow much agility. Data migration between old and new systems is usually very time-consuming and expensive. And because it's a full-time job just to keep tuning the performance of the existing system, there's little time left to think about new ones.

 

JR: What are the limitations of a traditional SAN device or approach?

MB: Traditional SANs today are islands – single storage units, each shared by a certain number of servers – with no real sharing and no workload balancing between the islands. Efficiency is a nightmare, because free space on one island can't be used by another, which makes operations difficult. Amazingly, it's not unusual for 60% of the capacity on a traditional SAN to be unused at any given time. Think about how much IT budget is being wasted!
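
To put a rough number on that waste, here's a back-of-the-envelope sketch. The island sizes, utilisation figures, and cost per terabyte are all invented for illustration; only the ‘around 60% unused’ observation comes from Manfred.

```python
# Back-of-the-envelope cost of stranded SAN capacity.
# All figures below are hypothetical, for illustration only.
islands = {  # island name -> (raw TB purchased, fraction actually used)
    "erp_san": (100, 0.45),
    "mail_san": (60, 0.35),
    "vmware_san": (80, 0.40),
}
COST_PER_TB = 3000  # assumed fully loaded cost in USD per TB

total = sum(tb for tb, _ in islands.values())
used = sum(tb * u for tb, u in islands.values())
stranded = total - used

print(f"Purchased: {total} TB, used: {used:.0f} TB")
print(f"Stranded:  {stranded:.0f} TB ({stranded / total:.0%}) "
      f"~ ${stranded * COST_PER_TB:,.0f} wasted")
```

With these made-up numbers, 142 of 240 TB (about 59%) sit idle – and because each island is its own pool, none of that free space can help a neighbouring island that runs out.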

 

JR: NetApp recently launched a revolutionary approach to SAN storage – tell us more?

MB: Imagine you have a storage system which you never have to switch off. Ever. An agile data infrastructure with 100% uptime! You can replace or upgrade storage controllers ‘on the fly’, without any downtime. You can dynamically move workloads between storage units, or between different disk types – from the fastest SSD/Flash down to the most cost-effective SATA drives – depending on service-level and cost requirements. You can retire old equipment without downtime or migration effort. Most people know what VMware does at the server level and how simple it is to vMotion VMs between servers – we do the same thing with clustered Data ONTAP SANs. That is, we virtualise the physical storage unit into multiple virtual storage units. These virtual storage units can sit on any physical storage controller in the cluster, are reachable from any physical node in the cluster, and can be moved between nodes non-disruptively – the data moves without any disruption to the application! No more downtime, no more data migration. Your next migration could be your last…
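
The vMotion analogy is worth pinning down. The toy model below is not NetApp code and not the Data ONTAP interface – just an illustrative sketch of the core idea: clients address a virtual storage unit by name, so the physical node hosting it can change underneath without the client-visible path changing.

```python
# Illustrative model only -- not the Data ONTAP API.
class Cluster:
    """Physical nodes hosting relocatable virtual storage units."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.placement = {}  # virtual unit name -> physical node

    def host(self, vunit, node):
        self.placement[vunit] = node

    def move(self, vunit, new_node):
        # Clients keep addressing `vunit` by name; only the mapping
        # from virtual unit to physical node changes, which is why
        # the move looks non-disruptive from the application's view.
        assert new_node in self.nodes
        self.placement[vunit] = new_node

    def resolve(self, vunit):
        return self.placement[vunit]

cluster = Cluster(["node1", "node2"])
cluster.host("vs_erp", "node1")
print(cluster.resolve("vs_erp"))  # node1
cluster.move("vs_erp", "node2")   # e.g. to retire node1's hardware
print(cluster.resolve("vs_erp"))  # node2 -- same name, new hardware
```

The whole of the interview's ‘your next migration could be your last’ claim rests on that indirection: once applications bind to the virtual name, hardware can come and go beneath it.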

 

JR: Two of the major trends in IT are Cloud and Big Data. Will those solutions be built using SAN storage or some other technology?

MB: Big Data and Cloud require shared storage. SAN is one option – a number of media applications and HPC environments, for example, are looking for large-scale, high-performance SAN. Other requirements, like content repositories, will rely either on NAS (Network Attached Storage) or on object stores. The key, whatever you choose, is that the storage needs to be efficient, with de-duplication and compression capabilities built in – we're talking about petabytes of data here, where floor space, power, and cooling savings are key.
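
De-duplication is easy to illustrate. The sketch below is a toy fixed-block de-duplicator, not NetApp's implementation: it splits data into blocks, keeps one copy of each unique block by hash, and reports the saving.

```python
import hashlib

BLOCK = 4096  # a common block size; real systems vary

def dedupe_stats(data: bytes):
    """Toy fixed-block de-duplication: count unique blocks by hash."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks), len(unique)

# Ten identical dummy "VM images" -- a classic dedupe-friendly workload.
image = b"".join(bytes([i]) * BLOCK for i in range(10))  # 10 distinct blocks
data = image * 10                                        # stored 10 times
total, unique = dedupe_stats(data)
print(f"{total} blocks reduce to {unique} unique blocks "
      f"({1 - unique / total:.0%} space saved)")
```

The toy workload stores 100 blocks but only 10 unique ones, a 90% saving; at petabyte scale, even far more modest ratios translate directly into racks, power, and cooling you don't have to buy.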

 

JR: What will the SAN market look like in 3-5 years’ time?

MB: There are three key trends that will disrupt the traditional SAN market: 1) the on-going shift from Fibre Channel to Ethernet; 2) virtualised infrastructures and hypervisors (like VMware and Microsoft) moving to NAS; and 3) the islands of SAN being transformed into one large shared pool, or cluster. We are ready for this, as we offer SAN and NAS in a single cluster – i.e. as a shared storage pool. I believe our technology will disrupt the SAN market: traditional, complex ‘frame arrays’ will be replaced by it, and smaller organisations will adopt it broadly over time. My take in general is that there will be a huge move to Cloud – private or public, it doesn't really matter from an infrastructure point of view. There will be large-scale data centres with massive storage deployments, at service providers or in internal IT, but everything will be managed like a Cloud. That requires shared storage systems offering both SAN and NAS access, combined with efficiency and agility.

 

JR: What would your advice be to someone looking to purchase or sell a SAN system today?

MB: My advice would be to select a SAN storage system that allows you to build a shared pool of storage out of multiple units. Check that both Fibre Channel and Ethernet connectivity are built in. Make sure efficiency is part of the design. In short, build towards an agile data infrastructure, not a silo for your current project. And make sure that proven integrations with hypervisors and applications are available for the systems you choose.
