Jay's Blog


We are really excited that Greg Gardner has joined NetApp as our Chief Architect of Defense Solutions. Greg was previously the Deputy Chief Information Officer for the Intelligence Community, and also spent time at Oracle following a successful 30-year career in the US Army. Greg graduated from the United States Military Academy and holds master's degrees from Purdue University, the Army's School of Advanced Military Studies, and the Naval War College.


Greg and I had a chance to catch up and talk about his arrival at NetApp and what he would be focused on.



1) What were you doing before you came to NetApp?


I was the Deputy Chief Information Officer (CIO) for the U.S. Intelligence Community. In this role, I assisted the IC CIO in developing the information sharing and management systems that enable our integrated, agile intelligence enterprise. The Intelligence Community is a coalition of 17 agencies and organizations within the executive branch that work both independently and collaboratively to gather the intelligence necessary to conduct foreign relations and national security activities. Its primary mission is to collect and convey the essential information the President and members of the policymaking, law enforcement, and military communities require to execute their appointed duties.


2) Why did you decide to join the NetApp team?

Several reasons: First, I had worked beside a number of members of the NetApp family when I was with Oracle. They are good friends and they raved about the culture and the people at NetApp. Second, I worked closely with the Federal CIO Council and had a sense of government-wide data management challenges. I did my homework and determined that NetApp, which had a strong presence throughout the Intelligence Community and the DoD, was the best possible place to go to help address those issues.


3) What will your role be at NetApp?


I am Chief Architect for Defense Solutions, working for Mike Walsh, but am available to help anywhere in the company I can be of assistance. I am happy to leverage the relationships I have made and nurtured over the years to help educate customers and potential customers from all sectors on NetApp's exceptional value and quality.


4) I know you've only been at NetApp for a short time, but what do you see as the one or two biggest opportunities for growth for NetApp within the DoD market?


I believe there are tremendous opportunities in the general area of data center optimization. Each service and each command does it differently, but all face very similar challenges that our solutions directly address.


5) Given you're relatively new to NetApp, I'm sure you have an interesting outsider perspective. What do you think makes NetApp different from its competitors?


In my view, NetApp’s discriminators are, first and foremost, its people and its culture. I cannot overstate their importance from a customer standpoint.  Second, NetApp has a business model squarely focused on increasing efficiency and reducing costs for customers.  I first heard that unique message at Oracle OpenWorld 5 years ago and, as a customer, realized it was completely true.  NetApp’s simple, holistic architecture, quality products, and very effective partner arrangements combine to create real value in the marketplace.


6) When you’re not wearing your NetApp hat, what things keep you busy outside of work?


I’m working on a PhD in IT Management and that pretty much keeps me off the street.  For exercise, I love to hike the local trails both outside and inside DC.


Welcome, Greg!

Several years ago, I was having dinner with a customer who was lamenting the unfairness of getting blamed for applications that had run slowly the night before. He was responsible for managing the shared storage, and since the storage was shared, it had to be the problem. He wished for some way to know what constituted 'normal' performance for the storage, and for tools that would prove whether the storage was performing within 'normal' bounds - not just for current performance, but looking back over a period of time.

I encountered Akorri a few years later, when they were starting out. I was impressed with the idea: to be able to monitor performance of the whole stack of devices in the path between a VM and storage, establish a metric that defined 'normal' performance, and then assist in determining when any device in the path was no longer operating within the 'normal' range. At that time, Akorri was new, but the idea was almost exactly the solution my dinner companion had asked for years earlier.

On Monday, Akorri became a part of NetApp. In a world where servers, networks, and storage are all virtualized and shared, managing performance without some kind of end-to-end tooling is like crossing a street in Bangalore wearing a blindfold. Akorri's BalancePoint product provides the information to know what is happening in the paths between VMs and storage, and tells you when performance is varying from the norm. It gives you the view you need to get out ahead of performance problems, and ideally to address them before the application users ever see the impact.

Coupled with NetApp SANscreen, which performs a similar function for connectivity and capacity, these two products will be a powerful combination in the world of managing shared virtual infrastructure.   And just like SANscreen, BalancePoint works with a wide range of storage devices, servers, and networks from different vendors.   We are absolutely committed to maintaining the heterogeneous capabilities of these products as we go forward.

For a short clip of Paul Turner, GM of NetApp's Storage Management BU, and me talking about this, see http://www.youtube.com/watch?v=12zH0bK7i1k

I wish I could remember exactly who had raised this issue to me so many years ago.   I need to buy him dinner again.   Or maybe it is his turn to buy...



Imagine Virtually Anything

Posted by kidd Jan 26, 2010

Today, Cisco, NetApp and VMware announced a collaboration on a reference architecture for building Secure Multi-Tenant IT infrastructure. The architecture, published as a Cisco Validated Design, lays out the recipe for linking the capabilities of the three companies to create a shared server/network/storage infrastructure that can securely host multiple workloads or 'tenants' with confidence that none will interfere with the others.


One of the great dilemmas in IT is balancing the economic benefits of consolidating equipment against the risks of one application stepping on the resources needed for another.   This is a challenge for internal IT, but it becomes a serious business issue for IT service providers.   If you can't share servers, networks, and storage between clients, an IT service provider will struggle to deliver that service at a profit.


Most IT infrastructure exists in silos. Servers are dedicated to an application. Storage is dedicated. Purchasing is done on a per-application or per-project basis. Budgets are diffused. Only networks have truly become horizontal infrastructure. The maturing of VMware and virtual server technology has made sharing servers across many applications viable. NetApp has offered the ability to logically partition a set of storage systems to allow workload isolation on shared infrastructure. With Nexus, Cisco has built the capability to link the virtual servers to the partitioned storage in a way that assures end-to-end isolation of workloads across the whole stack.


None of these technologies are brand new; they have all been deployed individually in many environments.  And our three companies have been working together on technology integration and customer deployments for some time.    What is new is the collaboration between Cisco, NetApp, and VMware to document the best practice for using these technologies together to create a Secure Multi-Tenant Infrastructure solution.    We have also lined up our respective support organizations to ensure resolution of an issue with a single call to any of the three companies. 


The spirit of this partnership is open. And a critical part of that openness is the involvement of our collective integration partners, who have been trained on this solution and are empowered to integrate this secure multi-tenant infrastructure into their solution delivery. We believe the new datacenter architectures that are emerging are too dynamic and too new to be held captive by any single vendor or alliance of vendors. Nothing retards innovation like a closed ecosystem, and customers have historically preferred an open, best-of-breed approach in their vendor selections.


New datacenter architectures only come along about every 20 years. It takes a confluence of new technologies, along with compelling economic reasons to change, to overcome the comfort of the old ways. These forces are converging now and the field is ripe with possibilities. You will see more from Cisco, NetApp, VMware and our partners, both individually and perhaps collectively. We share a common vision of how datacenters will evolve, and we believe that with the right set of partners, virtually anything is possible.


The Importance of Being Open

Posted by kidd Nov 3, 2009

EMC and Cisco announced their Virtual Computing Environment Coalition today with the goal of accelerating the move to virtualization and private clouds. This has been one of the industry's worst-kept secrets for several months, with wild rumors circulating about what it would and would not be.




Like most of these multi-company alliances, the reality has not lived up to the hype. The joint venture that was supposed to compete with HP or IBM to sell full-stack solutions of servers, networks, storage, virtualization, and services has devolved into a reference architecture called V-block (which you build by buying the components separately from the three companies), some high-profile marketing, cooperative support (who doesn't cooperate with their partners to support customers?), and a scaled-back JV that will build and operate a V-block-based solution for you, as long as they get to transfer it somewhere pretty quickly. I am sure everyone involved was disappointed.




This is good for Cisco because they get ushered in to EMC's installed base to sell UCS.   It is not clear that EMC gets much from this since UCS has no installed base at this point, and nothing about this announcement increases EMC's competitiveness in storage for virtual servers.    VMware is part of this because EMC needed their brand presence to give the solution credibility, but VMware is keeping a low profile to avoid angering their two largest customers - HP and IBM. 




I just can't figure out if it is a good thing for any particular class of customer. For someone who wants to turn over decisions about the technologies they use and how they put it together, this might be a fit. But those customers most likely already have a trusted integrator, or else they are well on their way to eliminating the bulk of the infrastructure and moving to IT as a Service anyway. For customers who are interested in best-of-breed technology choices and want someone to help them integrate them, this is a non-starter. The V-block architecture V-blocks you from any choices that are not UCS servers, Nexus switches, EMC storage, and VMware. Even if you do like that combination of technologies, you don't need this coalition to acquire them and deploy them. Perhaps the Acadia Services JV might bring something that your integrator can't do, but they appear to have a very short-term BOT focus - build, operate, transfer. Where's the commitment to your success in that?




NetApp has been working with VMware for over 2 years to deploy virtualized data center and private cloud infrastructure at companies like Telstra, BT, Sprint, Intuit, T-Systems and others. More recently, we have been working with Cisco and VMware on virtualized data centers that include UCS and NetApp storage at companies such as Terremark. Various integrators have been involved in these programs, bringing the expertise and the customer knowledge to bear to build what is not only transformative, but also efficient. None of this required a formal coalition - it was simply vendors cooperating to transform a customer's datacenter.



In all likelihood, this initiative just won't matter (anyone remember the ACE initiative in 1992?). All of the partners involved remain free to work with other companies that compete with another coalition partner. NetApp will continue to work with VMware and Cisco as well as Microsoft, IBM, Fujitsu and a host of other technology and integration partners. An open approach is the best one right now.


You Really Do Know Clouds

Posted by kidd Aug 25, 2009

NetApp made a number of significant product announcements today as part of our Cloud strategy.


When I talk with customers about Cloud, I find they are of two minds.   There are those that think Cloud is the next wave of IT and the future of everything, and there are those that think it is just a repackaging of things they already use.    I believe they are both right.


Whenever a new trend like Cloud hits the IT scene, there is the temptation to believe that there must be a revolutionary new technology behind it that will enable the bright new future, and by the way, render everything you have now obsolete over time.   In reality, the trends that actually do survive are the opposite.   They are not built on revolutionary new technologies, they are built on using proven technologies in new ways.   They don't render everything obsolete,  they build on it and enable a graceful adoption of the trend alongside existing solutions.   Brand new technologies make small markets.  Small markets prove out technologies.  Big markets are made from proven technologies used in new ways, and they get big simply because late adopters are more plentiful than early adopters.


For many enterprise customers, Cloud is the same journey they have been on to deliver more efficient, more agile IT.  What is new is the manner in which they may deliver it.   Some have built this infrastructure themselves and deliver IT as a Service within their companies.   Others will opt to purchase it from those who have already built it and decide to offer it as a service.   Like most discussions these days, it comes down to the economics. 


We have been working with a number of customers such as Telstra, Sprint, Transplace, T-Systems and others over the past several years to build out very flexible storage infrastructure that speeds up the provisioning of new applications. These companies had a vision of offering IT-as-a-Service and sought the best technologies to build their infrastructure. NetApp offered the most flexible and most efficient (in terms of utilization and operational cost) storage. VMware offered the most mature server virtualization technologies. Combined, they could build "Clouds" that were cost effective and highly responsive to changes in workload.


We developed expertise in how to optimally design, build, and run these IT-as-a-Service infrastructures. With each engagement, we refined the best practices we had learned, and we have now codified them into a solution called the NetApp Dynamic Data Center Solution (NDDC). The first step is a "Fast Start" workshop to help enterprise customers get started in building their own internal cloud, backed by a full service offering to help them do it. For those customers who then use this infrastructure to offer an IT service to external clients, we can help the end users they work with understand the benefits of their architecture.


Compared to our larger storage system competitors, our Cloud strategy is unique - we will be the technology partner of choice for storage and data management rather than competing with our customers by offering a branded Cloud service.    Our customers and partners have responded very well to this cooperative approach, especially those who are using NetApp to build their own service offerings.


We are proud of the technology we offer to enable IT-as-a-Service, and we are proud to be announcing some new products that extend our offering. We offer the ability to build secure, multi-tenant storage systems where multiple clients can share and even manage storage infrastructure with assurance that no other client can interfere with their data - essentially a virtual storage controller. We are extending that with a product called Data Motion, which enables all of the data volumes a set of applications uses to be migrated between controllers without disruption to the applications. We are also extending our Flash strategy with an add-in Performance Acceleration Module that uses flash memory as cache in the NetApp controllers and can accelerate disk-bound applications far more cost effectively than adding racks of mostly empty disks.


We are also announcing Data ONTAP 8. This is the next major version of Data ONTAP and will be the technology foundation for all versions going forward. ONTAP 8 combines ONTAP 7G and ONTAP GX into a single code base, which enables us to focus our development resources more sharply. We have put a tremendous focus on compatibility with ONTAP 7G, so that existing ONTAP 7G customers can nondisruptively upgrade to ONTAP 8 and operate in exactly the same way with no retraining of staff. Many will see better performance, and all will be able to build bigger aggregates and use larger disks. Over time, they will be able to take advantage of the scale-out capabilities of ONTAP 8, but they will do that at their own pace without facing a major upheaval in their infrastructure.


IT-as-a-Service, or Cloud if you like to call it that, is here to stay.   But this is true because it has already been here for some time in slightly different forms.   Economic downturns tend to give rise to new approaches based on technologies that are proven, but not yet widely accepted.   Virtual server technology shows compelling cost savings.   Storage efficiency technologies like deduplication, cloning, and thin provisioning show compelling cost savings.  And these technologies are now being used in place of the legacy choices to build infrastructure that is more flexible and more cost effective, which in turn is  enabling the rise of large scale service-based IT offerings that were simply not economically feasible before. 


The Clouds may be new, but they are made from technology you probably know and love.  

Last month, I blogged about the potential impact of the new Unified Computing Vision from Cisco and how a compute infrastructure designed from the ground up to integrate with a virtualized, unified fabric would deliver a whole new level of agility to data center deployments.


Well, it’s here. And it’s big. The combination of Unified Computing Systems (UCS) and technology like VMotion fills in the gaps in the vision of bladed servers – flexible, pooled compute resources that applications can use dynamically without constant human intervention. Virtual machines (VMs) are now really free to move about the network without being constrained to the same VLAN or subnet or other addressing constraint. In an era where staff is short and everyone is looking to get a more agile infrastructure with less management effort, this is a home run.


Cisco’s UCS is perfectly aligned with NetApp’s storage solutions.


First, the level of virtualization in the UCS is a perfect match for the NetApp Unified Storage platform we have been shipping since 2003. Just as UCS brings simplified scale-out compute to the datacenter, NetApp has been shipping scale-out storage for 3 years now, and will take this even further with our Data ONTAP 8 release later this year.


Second, UCS uses 10G Ethernet as the unified wire to network and storage. NetApp is the clear leader in Ethernet-attached storage, with strong support for FCoE, iSCSI, NFS, and CIFS across the entire product line. Both platforms also fully support connection to existing Fibre Channel fabrics, so there are no hard tradeoffs to make when it comes to connectivity.


Finally, this announcement builds on the strategic partnership between Cisco and NetApp. We partnered along with VMware last fall to announce a Fibre Channel over Ethernet product. NetApp is using the Cisco Nexus products in our 1500-node KiloClient lab. And we have a number of very strong channel partners capable of building a “Data Center of the Future” based on Cisco UCS and NetApp storage.


Today’s economy sucks, and customers are actively looking for new approaches that materially reduce their operating costs. I was out over the last two weeks talking to customers about NetApp’s advantages in Storage Efficiency and it is clear that people’s minds are open to new ideas and new vendors to change the game in their infrastructure.



Cisco and NetApp. It’s a new game.


Cisco and Unified Computing

Posted by kidd Feb 7, 2009

The buzz about Cisco and their new approach of Unified Computing is building. As far as I’m concerned, it can’t arrive soon enough. The exact details are still pretty closely held, but from Cisco’s blog post (Introducing Unified Computing to the Data Center) it is clear that Cisco is looking to make a big step in virtualizing the data center network.


This could be huge, in that it could fill in the last piece of the virtualization trinity. Server virtualization is now mainstream in the data center. The consolidation benefits are unquestioned and the ability to dynamically move workloads between compute resources is compelling if still a bit complicated. Storage is virtualized at some level on a wide scale, especially in the more modern architectures like Data ONTAP.


But the network has been a barrier to true dynamic mobility of applications and data since the VM typically has to make some assumption about a physical network address. On the storage side, volumes on iSCSI or file systems on NFS or CIFS are more dynamically addressed but they generally tend to stay in one place. This could change with a unified computing approach and systems like NetApp Data ONTAP 8 where volumes will move dynamically between nodes in a storage cluster but still be accessible at the same address.


Virtualizing the network with a unified computing approach frees up the last bond that tied applications and data to physical locations. In some ways, it is like the transition from a home phone number (tied to your house) to a cell phone number (always with you). Most people are now more easily reachable by cell, since their “address” (phone number) moves with them. This kind of freedom of motion with continuity of access has transformed our day-to-day lives.


In future network-based unified computing environments, VMware’s VMotion and NetApp’s Data ONTAP will bring a new level of dynamic agility to the modern data center. When apps and data are not tied to physical systems, yet the network can still find them, maintenance and change management are simplified dramatically. This approach could be a great partner to NetApp Unified Storage. Both focus on lowering the cost of operating the data center and increasing the ability of the infrastructure to rapidly adapt to new business requirements.


It could get interesting...


Flash and Cache Dash

Posted by kidd Feb 3, 2009

Last fall, I wrote about our plans to incorporate Flash technology into our offering, as well as our expandable memory cache offering, the Performance Acceleration Module.


We made some announcements today moving forward on this strategy that produce some pretty cool results.


We announced that the Texas Memory Systems RamSan-500 product can be used with the NetApp V-Series, effectively creating the industry’s only enterprise Flash storage system that supports thin provisioning, fast snapshots, remote mirroring, and data deduplication (very important for Flash, since this stuff is not cheap). These systems offer a much higher IOPS rate on much less storage capacity, and therefore less power, less space, and in many cases a lower price.


NetApp SAN systems are already fast – our SPC benchmarks have us faster than EMC by 20%+ – but there are always customers who want more IOPS than any rational number of 15,000 RPM disk drives can muster. The only way to get this level of IOPS today is to string more and more drives together. Soon, you have 40TB of 300GB disks not because you need the capacity, but because you need that many disks to deliver the IOPS. This is like having a herd of mice pull a heavy load - at some point you need a different type of storage animal.
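The spindle arithmetic behind this is easy to sketch. A rough illustration in Python (the ~180 IOPS-per-drive figure is a common rule of thumb for 15,000 RPM drives, not a NetApp spec):

```python
# How many 15K RPM drives does an IOPS target force you to buy?
# The per-drive figures are rough rules of thumb, not vendor specifications.
IOPS_PER_15K_DRIVE = 180        # typical random IOPS from one 15,000 RPM drive
DRIVE_CAPACITY_GB = 300

def drives_needed(target_iops, target_capacity_gb):
    """Drives required, whichever constraint (IOPS or capacity) dominates."""
    for_iops = -(-target_iops // IOPS_PER_15K_DRIVE)           # ceiling division
    for_capacity = -(-target_capacity_gb // DRIVE_CAPACITY_GB)
    return max(for_iops, for_capacity)

# An application needing 25,000 IOPS but only 4 TB of data:
n = drives_needed(25_000, 4_000)
print(n, "drives,", n * DRIVE_CAPACITY_GB / 1000, "TB raw")   # 139 drives, 41.7 TB raw
```

With these assumed numbers, the IOPS requirement alone forces roughly 139 drives and about 42 TB of raw capacity to hold 4 TB of data - the herd-of-mice problem in a nutshell.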


V-Series is a well-proven solution for bringing the unique NetApp data management capabilities to other storage capacity solutions. Texas Memory Systems builds a screaming IOPS box. Tastes great, less filling.


Faster File Serving at Lower Cost


We also announced a set of new industry-standard benchmark results (SPECsfs2008_nfs.v3 results) on a dual-controller FAS3140 NetApp system using our cache expanding Performance Acceleration Module (PAM) and the results are stunning.


On the baseline system with no PAM cards, we saw throughput of 40,109 ops/sec and an overall response time of 2.59ms. Good results by themselves. We added a 16GB PAM PCIe card to each controller and were able to achieve the same throughput with a 35% improvement in overall response time (down to 1.69ms) – but more importantly, with 50% fewer Fibre Channel 15,000 RPM disks! This takes 27% out of the cost of the configuration.


But wait, there’s more!


We also ran the benchmark with SATA drives. We have seen a steady increase in the use of SATA for enterprise applications as customers realize how fast it can be when used in a NetApp system. In this test, we swapped the 112 FC drives for SATA, kept the PAM cards in place, and got almost the same throughput (40,011 ops/sec) and overall response time (2.75ms) as the baseline, but with 75% more capacity – and at a 27% lower cost than the original.
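The percentage claims follow directly from the quoted figures; a quick check of the arithmetic (all numbers taken from the results above):

```python
# Sanity-check the benchmark deltas quoted above.
baseline_ms = 2.59      # overall response time, no PAM cards
pam_ms = 1.69           # with one 16GB PAM card per controller

improvement = (baseline_ms - pam_ms) / baseline_ms
print(f"response-time improvement: {improvement:.0%}")               # 35%

baseline_ops = 40_109   # baseline throughput, FC drives
sata_ops = 40_011       # throughput with SATA drives + PAM
shortfall = (baseline_ops - sata_ops) / baseline_ops
print(f"SATA throughput within {shortfall:.2%} of the FC baseline")  # 0.24%
```

The SATA configuration gives up less than a quarter of a percent of throughput against the FC baseline, which is why the capacity and cost numbers are the interesting part of that result.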


The ability to expand cache via a simple PCIe plug-in board is creating some great results in our customers’ applications. We are still on track to offer a Flash-based version of the PAM this year, and will also be bringing Flash disks (SSDs) in a shelf slot as well.   So this is just the beginning.


We have been talking a great deal about Storage Efficiency – how to store more data on fewer spindles. We have a broad set of technologies that reduce the amount of capacity you need to store the data. These Flash and cache solutions bring a new dimension to this story by reducing the cost of disks purchased mainly for IOPS rather than capacity. Either way, it costs less, stores more, and goes faster.


A Great Place to Work

Posted by kidd Feb 1, 2009

We had some big news last week – NetApp was ranked #1 on Fortune Magazine's "Best Companies to Work For" list. This is great recognition for a company that has carefully built and sustained a culture that is truly unique.


It has been interesting in the past week how customers who were unaware of NetApp now seem more interested. Even friends of mine who would start making mental shopping lists when I described what NetApp does are now more curious. And oh, the resumes, LinkedIn requests, calls from Bobby who I shared a candy bar with in 1st grade, etc. It is all pretty cool.


While being #1 is a surprise, it is not a shock. NetApp is truly a great place to work. Benefits like paid time off to volunteer, or autism and adoption coverage, are unique in themselves, but they speak to a culture that cares very deeply about more than just the numbers.


We have a piece of collateral called "Create a Model Company" which is a set of cultural principles that was laid out very early in the company’s history. The principles there find their way into many other documents we produce but they also come up every day in thousands of conversations.


I have been at NetApp a little over 3 years, and my history with Dan Warmenhoven and many of the management team goes back further than that. I had always heard that NetApp had a unique culture, but it was not until I joined that I really understood what made it different. I attended an executive planning session just before I started and heard a presenter being very critical of engineering. He was the engineering guy. Sales leadership was critical of sales. They were also complimentary about their own functions as well as others'. Everyone who spoke was as objective and open about their own problems and successes as they were about those of others. It was really "One Team." At the end of the meeting, Dan asked everyone in the room to comment on the content, pace, and candor of the meeting. The “honesty check” at the end made sure that we had talked about what needed to be discussed and were not hiding something from ourselves or each other. It is a major force in keeping the leadership of the company aligned.


This integrity is widespread in the company. It is always safe to tell "truth to power" at NetApp. I have come to realize this was a missing element in many other places I have worked. It does keep us on track and focused on what really matters – delivering value to our customers.


No company is perfect and we certainly have our faults. But we are not afraid to confront them and change where we need to. Some people may not like honesty and a propensity to change. For them, NetApp is not a great place to work and they usually don’t last. But for me, and the many friends I have made working here, it is a great place to work, and worth the work to keep it great.

I worked for Brocade, a Fibre Channel switch company, from 2000 to 2005. About my third week on the job, Nishan ran full-page ads in the Wall Street Journal announcing Storage over IP (SoIP) and the imminent death of Fibre Channel. I suspect the ads cost Nishan more than the entire lifetime revenue of the company, but it did kick off a war that lasted 6 years - the iSCSI vs. Fibre Channel war.


In the beginning, the debate was like cable-news Hardball - lots of dogmatic arguments and very little listening. When iSCSI products came to market and matured, it became clear that iSCSI would dominate the market for low-cost block network storage, and Fibre Channel would remain king in the high end. The problem with this for customers is that it forced them to make a major choice in storage switching infrastructure based on some pretty subtle differences. Lots of money was wasted putting low-end application servers on expensive Fibre Channel networks.


Never bet against Ethernet in the long run (remember Token Ring?). The Fibre Channel community - including Brocade, Emulex, QLogic, and Cisco - has come together with the Ethernet community and defined two new standards that will allow a graceful migration of Fibre Channel networks to 10G Ethernet over the next several years. Data Center Bridging (DCB) is a set of extensions to 10G Ethernet that add the flow control and traffic prioritization that made Fibre Channel well suited to storage traffic. Fibre Channel over Ethernet (FCoE) makes the Fibre Channel framing and management protocols work over layer 2 Ethernet. It can't be directly routed over WANs, since it does not use the IP layer - but then neither could Fibre Channel. FCoE also leverages much of the management tooling and host-side driver work in the new Converged Network Adapters (CNAs) that attach to the DCB 10GE network.
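The layering can be sketched conceptually: FCoE drops a native Fibre Channel frame directly into an Ethernet frame (EtherType 0x8906) with no IP header in between, which is exactly why it cannot be routed across WANs. A simplified illustration in Python - field sizes and encodings are elided, so this is a mental model, not a wire-format implementation:

```python
# Conceptual sketch of FCoE layering: an FC frame carried directly in Ethernet.
# Illustrative only; real FCoE adds an FCoE header, padding, and checksums.
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

@dataclass
class FCFrame:
    source_id: str       # FC source port ID (S_ID)
    dest_id: str         # FC destination port ID (D_ID)
    payload: bytes       # SCSI command/data carried by Fibre Channel

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    ethertype: int
    payload: object      # for FCoE, the encapsulated FC frame -- no IP layer

def encapsulate(fc: FCFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
    """FCoE: place the FC frame directly into a layer 2 Ethernet frame."""
    return EthernetFrame(dst_mac, src_mac, FCOE_ETHERTYPE, fc)

frame = encapsulate(FCFrame("0x010100", "0x010200", b"SCSI READ"),
                    "0e:fc:00:01:01:00", "0e:fc:00:01:02:00")
print(hex(frame.ethertype))  # 0x8906 -- Ethernet carries FC directly; no IP to route
```

Because there is no IP layer in the stack, routers have nothing to route on - the frame lives entirely within the layer 2 DCB domain, just as native Fibre Channel lived within its fabric.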


DCB is not just for FCoE. The “lossless” characteristics will also help other services such as NFS, CIFS, and iSCSI. All of these can run alongside each other on the same physical network. This is the real win for end users.


It will take a little while for all the standards to settle out. FCoE should be final by the middle of 2009 and DCB by the end of 2009, but there are first generation products out now. NetApp will ship FCoE target connectivity in our FAS systems around the end of 2008.


The net effect is twofold. Customers looking at building new data centers in the 2010-and-after timeframe can choose to use a unified fabric technology - 10GE with DCB - for all of their server-server and server-storage needs. This kind of volume adoption will drive costs and prices down - something the duopolistic nature of the Fibre Channel industry could never achieve.


In the near term, Fibre Channel customers can extend their fabrics using switches that bridge 10GE to Fibre Channel, like the Cisco Nexus systems. New servers can attach to 10GE using CNAs and access the Fibre Channel storage already in place. So customers can migrate gradually, or do it all at once with a new facility.


So does this mean the death of Fibre Channel? Not any time soon since there is so much of it out there. But I would bet that the generation beyond 8G FC will never see much adoption. By the time it might be available, 10GE adoption will be well along and 40Gbit Ethernet will be on the horizon.


I have also been asked if this means the death of iSCSI. Absolutely not. First, customers can run iSCSI and FCoE over the same 10G Ethernet DCB fabric, with some servers using iSCSI and some using FCoE depending on their needs and history. The physical network - the real investment - is the same. iSCSI will also continue to be the only block data protocol running over 1G Ethernet, which will be around in data centers for a decade or more.


The future is set. The only question is how fast it gets here. I believe that half the applications using Fibre Channel attachment today will be migrated to Ethernet within 5 years - the end of 2013. Virtualization will lower the absolute number of servers and ports, but by that time the trend will be unstoppable.  


So what should IT managers do? If you are not planning a new storage fabric, there is no rush. If you are adding to your Fibre Channel fabric a few ports at a time, keep doing that since it works. Let the early adopters get some experience with the FCoE adapters and DCB switches over the next year. If you are building a new data center or storage fabric for deployment in 2010 or later, you need to understand the 10GE option. It will most likely save you money, and it is definitely the way the industry will go in the long run. I would hate to be the guy who put in the LAST new Fibre Channel network.

Back in June, I had the fortune of attending Game 4 of the NBA finals between the Lakers and the Celtics courtesy of a good NetApp partner, Insight Investments. I also had the misfortune that night of having my briefcase stolen from the rental car in the parking lot.  


That night gave me a personal glimpse into the importance and complexity of key management.


If your laptop is like mine, you have all kinds of website passwords stored on it for the convenience of not having to remember them when you travel. As I flew home, my level of panic grew as I calculated the financial havoc the thief could inflict if they broke through the top-line login. I got home at midnight and spent the next few hours changing logins and passwords on dozens of financial, storefront, and other sites. In doing this, I realized I had used the same two or three passwords for everything because it was easy for me. Which made it easy for the thief. This prompted me to develop a more secure method of creating, using, and remembering personal passwords for the diversity of digital domains in which I dwell. My "system" is separate from my laptop or desktop so I can use it with either device, which avoids the problem of someone stealing it along with my data. I put my "system" in more than one place to protect against physical loss. I also thought about what a pain it was and how it would not scale beyond the few dozen sites I use now.
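The post never reveals what the "system" actually is, but the general idea can be sketched: derive a unique password for each site from a single memorized master phrase, so nothing secret needs to be written down and one stolen password compromises only one site. A minimal sketch in Python; the master phrase, site names, iteration count, and length below are illustrative assumptions, not a vetted recommendation:

```python
import base64
import hashlib

def site_password(master: str, site: str, version: int = 1, length: int = 16) -> str:
    """Derive a per-site password from one memorized master phrase.

    Bumping `version` after a compromise rotates that one site's
    password without affecting any other site."""
    salt = f"{site}:{version}".encode()
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(key).decode()[:length]

# Each site gets a different password; the master phrase is never stored.
pw_bank = site_password("correct horse battery staple", "bank.example.com")
pw_shop = site_password("correct horse battery staple", "shop.example.com")
```

Because the derivation is deterministic, the same master phrase regenerates every password on any device, which is exactly the "works from laptop or desktop, survives theft" property described above.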


I'll get back to this in a minute.


NetApp and Brocade announced a data security partnership today. Brocade has new blindingly fast Fibre Channel switches and director blades that integrate almost 100 Gb/s of encrypting bandwidth. We worked with Brocade to ensure that the encryption/decryption capability of this switch is compatible with the NetApp DataFort, and NetApp will resell the Brocade products as our next-generation FC DataFort. We always expected that encryption would become a feature of storage devices, tape drives, and fabric switches, and this was our strategic intent when we acquired Decru three years ago.


This kind of interchangeability of encryption devices depends on centralized, strong key management. NetApp’s Lifetime Key Manager was designed to support multiple encrypting devices. It supports DataForts, Oracle Advanced Security Option (come see this at Oracle OpenWorld in San Francisco this week), and now Brocade. It also enables millions of keys to be shared between multiple locations. Keys can be automatically restored to a device that has been replaced, and are protected in a system secured to the FIPS 140-2 Level 3 standard.


Encrypting data mitigates a broad class of unauthorized-access risks. But encryption requires keys. Unless a company decides to use the same key for all the data they encrypt (which has about as much security as Sarah Palin's email), they need to manage those keys. And change them. And be able to move them to DR sites. And be able to recover them. It is not a trivial task.


Unlike my little system for keeping track of passwords, it is certainly not something that you can do manually.  The NetApp Lifetime Key Management (LKM) system will do all of this for you across a range of encryption devices.
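The post doesn't describe LKM's internals, so the core idea of centralized key management is worth sketching with a toy model: one protected master secret from which distinct, versioned data keys are derived per volume. Rotation is a version bump, and a replacement device needs nothing restored except what the key manager can re-derive. This is a simplified illustration under those assumptions, not the LKM design:

```python
import hashlib
import hmac
import secrets

class ToyKeyManager:
    """Toy centralized key manager.

    Derives a distinct data key per (volume, version) pair from one
    master secret, so keys can be rotated per volume and re-issued to
    replacement hardware without any key ever leaving the manager."""

    def __init__(self) -> None:
        self._master = secrets.token_bytes(32)   # real systems keep this in an HSM
        self._versions: dict[str, int] = {}      # current key version per volume

    def data_key(self, volume_id: str) -> bytes:
        v = self._versions.setdefault(volume_id, 1)
        msg = f"{volume_id}:{v}".encode()
        return hmac.new(self._master, msg, hashlib.sha256).digest()

    def rotate(self, volume_id: str) -> bytes:
        """Retire the current key for a volume and issue the next one."""
        self._versions[volume_id] = self._versions.get(volume_id, 1) + 1
        return self.data_key(volume_id)
```

The point of the sketch is the asymmetry: devices are disposable, the manager is not. Protect and replicate the one master secret (and the version table) and every data key is recoverable at the DR site.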


There are several thousand DataFort systems installed now at companies like Iron Mountain, Qualcomm, CNL Financial, and Regulus Group.   There are hundreds of thousands of disk volumes and tapes encrypted with DataForts using keys stored in LKMs. The combination of Brocade's new fabric-based encryption with NetApp Lifetime Key Management will advance the state of the industry in making data in enterprise datacenters more secure.     


Flash Forward

Posted by kidd Jul 21, 2008


A friend of mine asked whether he could string together a bunch of flash-based iPods and build an enterprise storage array. While this seems like a crazy idea (imagine the mountain of discarded white ear buds), there is no question that flash memory in some form will become a big part of enterprise storage. Flash is fast – it is somewhere between DRAM and 15k rpm disk in terms of IOPs/sec. It is also expensive – at least 10x the cost/GB of the fastest disk, but the prices are falling fast. The performance assures that flash will be of great benefit when used to store the most active data on an array. The cost will determine just how much of the disk market ultimately converts to flash. No matter what, it will be there in a very important way.

Flash will emerge in several forms in enterprise storage. Enterprise-quality SSDs are becoming available now as an alternative to 15k rpm disks in storage shelves. While performance varies greatly across vendors and read/write mix, they are very fast – 5000 IOPs and up vs. about 300 IOPs for a 15k FC disk. This means a few SSDs will deliver more IOPs than a full shelf of partially filled 15k drives. They take less power and less space too. NetApp is in the process of certifying enterprise-grade SSDs that you can use in our existing storage shelves.
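A quick back-of-envelope check of that claim, using the rough per-device figures quoted above (not benchmarks) and assuming a typical 14-drive FC shelf:

```python
# Rough IOPs figures from the text; shelf size is an assumption.
SSD_IOPS = 5000
FC15K_IOPS = 300
SHELF_DISKS = 14

shelf_iops = SHELF_DISKS * FC15K_IOPS       # full shelf of 15k drives: 4200 IOPs
ssds_needed = -(-shelf_iops // SSD_IOPS)    # ceiling division: SSDs to match it
```

By these numbers a single SSD out-delivers an entire shelf of 15k drives on random IOPs, which is why "a few SSDs" is not hyperbole.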

But flash is memory and is fast enough to be a layer of cache in a storage system. Imagine having a terabyte or more of very fast and low-latency cache to hold the most frequently accessed data in your array. NetApp is shipping a plug-in DRAM cache card today, and we will offer a version using flash chips next year.

One compelling advantage of using a cache approach is that you don’t have to manage another “Tier” of storage – the system automatically puts your most active data blocks into the fast flash storage.

This makes many disk data placement science projects unnecessary since the most active data will remain in the large flash cache. Not just the data you ‘think’ will be hot – actually the data that is hot. Manually planning disk data placement for performance reasons was fun in the 80s, but customers I talk to seem to care much more about saving time and increasing the agility of their infrastructure than mastering the eccentricities of their storage systems.
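The "most active data stays in cache automatically" behavior described above is, at its core, LRU (least-recently-used) caching. A minimal sketch; the block numbers and the fetch callback are illustrative, not any vendor's implementation:

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU block cache: recently accessed blocks stay in fast
    memory automatically; cold blocks fall out. No tiering policy for
    an administrator to plan -- access patterns drive placement."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._blocks: "OrderedDict[int, bytes]" = OrderedDict()

    def read(self, block_no: int, fetch_from_disk) -> bytes:
        if block_no in self._blocks:
            self._blocks.move_to_end(block_no)   # hit: mark block as hot
            return self._blocks[block_no]
        data = fetch_from_disk(block_no)         # miss: go to slow storage
        self._blocks[block_no] = data
        if len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)     # evict the coldest block
        return data
```

Whatever blocks are actually hot stay resident; nobody has to guess in advance which ones those will be, which is the contrast the paragraph draws with manual tier planning.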

In addition, that cache can be deduped so that it won’t fill up with identical blocks from multiple VMware images (NetApp does this today). If you define a policy that certain data volumes are more important, they can either be pre-loaded into cache or designated to never be kicked out of cache. Or you can pin metadata in cache ahead of data. Lots of ways to optimize here using policy, not people.
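A deduplicated cache can be sketched as a content-addressed store: blocks are keyed by a hash of their contents, so identical blocks from many VM images occupy a single slot. A toy illustration of the principle (not NetApp's implementation):

```python
import hashlib

class DedupCache:
    """Content-addressed cache: each unique block is stored once, keyed
    by the hash of its contents. Many logical block addresses (e.g. the
    same OS block in many VM images) can point at one stored copy."""

    def __init__(self) -> None:
        self._store: dict[bytes, bytes] = {}   # content hash -> block data
        self._index: dict[int, bytes] = {}     # logical block  -> content hash

    def put(self, block_no: int, data: bytes) -> None:
        digest = hashlib.sha256(data).digest()
        self._store.setdefault(digest, data)   # identical data stored only once
        self._index[block_no] = digest

    def get(self, block_no: int) -> bytes:
        return self._store[self._index[block_no]]

    def unique_blocks(self) -> int:
        return len(self._store)
</x>```

Ten VMware images booting the same guest OS read largely identical blocks, so a dedupe-aware cache holds one copy where a naive cache would hold ten.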

For the next few years, you won’t be using a lot of flash capacity in your systems, not just because of the costs. At 10x or more the IOP rate of hard disks, it only takes a small number of SSDs in disk slots to saturate the performance of the array controller. It’s like trying to fly a model airplane in your living room – you’ll run into a system performance wall long before you hit capacity limits. This is another reason that flash as cache is economically efficient – it puts the necessarily small amount of very fast storage at a point in the architecture where you can best exploit the performance.

Flash is hot. While there is probably more smoke than fire right now, it will definitely produce significant improvements in enterprise storage and application performance. SSDs will be the first wave and will be easy to plug in. But the real innovation will be in how enterprise array designs adapt to embrace flash. Then the fun starts.


We announced several new products today at NetApp that I am especially excited about. A new midrange storage system, the FAS3100, and a pair of new caching technologies called the Storage Acceleration Appliance (which is an NFS cache appliance that uses our FlexCache software) and the Performance Acceleration Module (which lets you expand the memory cache inside of the NetApp storage controllers). 


Many of our customers use NetApp storage systems to hold the data for engineering, HPC, scientific and other "technical" applications. This includes software development, seismic analysis and research, genomics, semiconductor design, computer animation, and many more. These are the applications that drive the revenue of these companies - the top line - as opposed to the applications that drive efficiency of operations - the bottom line. Anything that can be done to make these revenue-producing applications run faster has a meaningful impact on the revenue growth of these companies.


Most of these applications need a lot of compute, but most are also constrained by the speed of their storage.    Compute speeds have grown much faster than disk drive IOPS over the past several years so anything that can be done at reasonable cost  to speed up the delivery of data to the app is a good thing.


The announcements NetApp made today do just this. The Storage Acceleration Appliance is an easy-to-manage caching appliance that allows many more application servers to get access to the same set of files by spreading copies of them across more storage controllers and more drives. These appliances can also be deployed around the world to deliver high performance for distributed workgroups. NetApp uses them in our own software engineering groups. Since they are caches, the data on them does not need to be managed; all backup and other management is done on the 'source' system that feeds the caches.


The Performance Acceleration Module expands the size of the DRAM cache inside the NetApp storage controllers in a smart way. Since DRAM is several orders of magnitude faster than spinning disk, an application request for data in cache will be serviced with much lower latency. More cache means more data blocks will be served from memory. You can also choose to cache just metadata, or both data and metadata. A test of an average NFS workload doubled IOPs at constant latency. Nice. Plus, it can be added to many of the existing NetApp systems installed in the field, and we have a software tool to verify in advance that a bigger cache will help. Even nicer.


I love working with the customers who run these types of Apps.    They are working on the frontier of knowledge and are building the future.    They are also constantly pushing the envelope of what their storage and compute systems must do in the quest for faster product development, faster data analysis or better science.     By doing a better job for them, we do a better job for all of our customers.



Go further, faster.   


Marketing With Integrity

Posted by kidd May 28, 2008

Steve Duplessie referred to me as "a marketing guy" in a recent post. Kind of understandable, given my CMO title. But I also bristle at the label a bit since it feels confining.


My wife is a teacher. I think we have stayed married for almost 25 years because we both enjoy seeing the light go on in someone's eyes when they realize they have learned something new.  She got to do that with kids in the classroom.  I get to do it every now and then when I talk with customers. Helping people understand how to solve a problem is what really excites me.  I get to learn from them as they describe what they are trying to do, and sometimes I can describe a new product or technology that will help them get it done.   It is a win-win.


While there are many aspects to the function of marketing, I believe the essence of marketing is about teaching.   Good marketing focuses on how customers learn, and how to reach them with just the right information at the right time to address the challenge they are trying to solve.  Marketing often gets a bad rap because the timing of problem and solution don't align, and customers find themselves barraged with information they can't use at that time.    Reminds me of 10th grade biology where I was buried in information that had no relevance to me finding a date for that Friday night.  


I have done "marketing" roles for most of my career, but those roles included technical teaching, product planning, and conducting technical customer councils.   I also did some software engineering management, and was even a Chief Technology Officer at Brocade.   Lots of time talking technology with practitioners of IT.    I think of marketing as the function that translates between customers with business and operational needs and engineering teams anxious to make a difference by solving them.   


Remember the teacher that really lit up your mind at some point in grade school?   For many people, it was a single teacher that set them on a career path by finding the link between their passion and the real world.     But also remember how hard that teacher had to work to simply get and hold the attention of the class?   He or she was "marketing" learning to you - sometimes you were buying and sometimes you were not.     A great deal of marketing activity is simply to try to catch your attention.


The analogy is not perfect.  As a child, you had required courses to take.    And the teacher did not starve if you did not learn.   But the principles were similar.    If marketing is the mutually profitable exchange of knowledge between consumer and provider with the goal of stimulating informed action, then I am proud to be a "marketing guy."


Scattered Clouds

Posted by kidd May 14, 2008

I have been hearing more and more about “cloud computing” and how it will become the nexus of everything new in computing. This trend reminds me of an alien abduction – when an innocent victim is grabbed from their bed by beings from a different world, poked and probed, and returned with a vague notion of having been violated.


Amazon, Google and IBM were the first to really put cloud computing out in the public eye. Amazon targeted small businesses with their S3 and EC2 services offering compute and storage for a metered rate. Google came out with the idea of allowing 3rd parties to run massively parallel applications on the Google compute cloud. IBM jumped in with their Blue Cloud initiative targeted at a similar audience of people looking to build new types of applications that require public access to a massive shared grid of compute nodes. Many companies in research and commercial industries have built grids of compute nodes, but these are all private facilities that run that company’s applications. Amazon, Google and IBM have a new idea – or at least are talking about a new way of democratizing the technology – and it deserves a new term. Cloud is cool.


Like any new term that catches the imagination of the market, a lot of companies tend to pile on and abduct it. They then use the new buzzword as an umbrella term for a wide range of things that already exist, may never exist, or never deserved to exist in the first place. I wonder if the guys at Google, Amazon, and IBM are feeling that vague sense of having been violated.


I have heard cloud computing, Web 2.0, Software-as-a-Service (SaaS), Online backup, Enterprise grids, and even email and messaging all be jumbled up in conversations with usually savvy people who have just been confused by the blizzard of abstractions. In a recent Business Week interview, Shane Robison of HP refused to define cloud computing, which is probably wise – it is too new an idea to constrain yet. However, I believe there are several things being called cloud that definitely are not new, and should not be used to pollute what cloud will become.



Software-as-a-Service (SaaS) – SaaS is application software that runs on the service provider’s shared infrastructure that you can use via a web interface. Salesforce.com is probably the best example. Google apps are an emerging example. It is different from application hosting, where a client company rents computing infrastructure and applications that are dedicated to them. Hosting has been around for years. Oracle and SAP offer large enterprise application hosting services. SaaS is new, and is a new way to offer software, but it is different from cloud computing in that the applications available are chosen by the service provider, not the client customer.


Enterprise Clouds – I have heard people refer to their company’s compute grid as a “cloud” which may be conceptually accurate, but it is a very different animal than a “public cloud.” We’ve called these ‘grids’ for some time and people generally know what that means. Why fuzzy it up?


Managed Backup Service – A number of companies are offering online storage capacity for a metered fee. There are scads of consumer- or SMB-oriented companies offering this service, and a handful of enterprise-level services from companies like Iron Mountain or Symantec have also emerged. This is a pretty well understood idea. Definitely not new. Definitely not “cloud.”


Storage-as-a-Service - This idea goes back a long way, with Storage Networks being the most spectacular failure in the enterprise segment of this market. The idea of putting the data in the network accessed by applications either on premise or in another network just seems to add more complexity than is needed. Either remote it all (application hosting) or keep it all (enterprise computing). Amazon’s S3 is this type of service, with the expectation that only applications that can live with the service level and performance will use S3. I would bet that most S3 customers also pick up the companion offering for compute, called EC2. The combination of S3 and EC2 definitely qualifies as “cloud computing.” Individually, S3 is a stretch to be called cloud computing.


Web 2.0 – This is a great example of coining a generic term and then allowing the definition to evolve. To most people, Web 2.0 means web services that support interaction and collaboration over the internet, as opposed to Web 1.0, which was either reference content or commerce. Facebook, MySpace, photo sharing, instant messaging, even email would count under this very broad umbrella term. Ultimately, Web 2.0 and cloud computing may come together if a new class of cloud-based social networking applications emerges, but that is not here yet.


I think what contributes to the confusion is that all of these ideas depend on common infrastructure technologies to deliver. All of them need scalable compute. All of them need scale-out storage to support the large data requirements. This encourages the infrastructure vendors to generalize and call them all by the hot new cloud computing term. To the vendors, the distinction between these services is inconvenient. To the providers and users of these services, the distinction is everything. New ideas need some room to be different and establish a unique added value. Most of all, they need to be developed and defined by the people delivering them, not the vendors supporting them.


It’s sort of like “Green” computing. I am all for environmental awareness and reduction in resource consumption. But we’ve all seen some pretty routine activities lumped under a company’s “green” initiatives. It’s like the marketing groups woke up one day and suddenly had to tell their story with a “green” filter. Perhaps they had been brainwashed in the night. Perhaps they had alien visitors.


Perhaps they were little green men….
