
JR's IT Pad


According to Gartner, 63% of a typical IT budget is spent running current IT infrastructure, 21% on ‘Grow’ to meet the natural growth in application performance requirements and data, and only 16% on New Opportunities. I wrote previously about why that needs to change: The Evolution of IT - CIO's move from builder to broker. But imagine the impact on a business when 100% of IT budget and resources are focused on running current IT infrastructure, because it’s not working and customers are leaving…


A tale of 3 (very different) meetings

I first met Johan Kummeneje, CTO of Binero – one of the largest web hosting companies in Sweden – in May last year. And they were not in a good place (not Sweden, which is a very nice place, but the state of their business). Having built the company from nothing in 2002 to an impressive 20% market share by 2012, they were suffering severe service outages every few weeks. As you can imagine, thousands of customers were affected (they handle 1.5 billion e-mails each year, for example!), meaning stories of the business impact to their customers appeared regularly in the media. And, for the first time in their history, they were losing customers. All due to underlying data storage platform problems. Nearly the whole of Johan’s team was dedicated to trying to keep it working. They had to take a different approach to restore an increasingly damaged reputation.


The second time we met was in October last year. A very memorable meeting – I think it was the first time in all my travels that I have been greeted at a customer by a small dog! The other reason it was memorable was that Johan had decided to move to a NetApp Clustered Data ONTAP storage foundation a few months earlier. The system was now up, running and performing very well. The impact? Less bad press. Happier customers. And storage no longer needed 4 people dedicated to it, just 1 working part-time. Meaning the other 3 members of the team could start thinking about new services to drive business growth again.


The third time we met was a few weeks later, at VMworld Europe in Barcelona. The NetApp storage platform was by now fully in production and they could see the true value of Non-Disruptive Operations. Our conversation was almost entirely focused on what exciting New Opportunities lay ahead for the team at Binero.


I think this story is a good example of how important choosing the right data storage foundation is for a business. Especially as organisations look to spend less time operating IT, and more time using IT (and data) as a competitive weapon. It is also a good reminder that, while agility, performance and efficiency capabilities are typically what differentiate one storage platform from another, reliable data availability will always be the top priority.



No doubt the IT industry is undergoing a significant transition. The sort that only comes along every 10-15 years or so. Broadly the idea is that companies will run less of their own IT over time, with specialist ‘Cloud’ service providers doing it for them instead. Not a new idea, but technology is finally making it possible. And innovative new services and applications are proving to be the catalyst.

Last week was a good one for me to assess where we are on the journey towards Cloud Computing in Europe. I started the week with the annual NetApp Cloud Business Summit, a gathering in London of ~200 Service Providers from across EMEA, plus several industry Analysts. On Wednesday I met with 5 CIOs on a panel for Meet the Boss TV. And finally I went to Cloud Expo Europe, along with 7,000 other IT folk. No wonder I'm not getting any proper work done!


Where are we with the shift to Cloud?

Honest answer - most people don't really know. They know it's happening, and needs to happen, but are not sure how fast it will go. And those who tell you they do know are almost certainly bluffing, or biased. Everything and every project now seems to be some sort of Cloud. And if you ask 10 people for a definition of what Cloud Computing really is, you get 11 answers…


A more pragmatic view, what do we know?

Despite the confusion, Cloud is appealing to end users and IT providers alike. However, IT services and applications delivered using shared Cloud services make up only a very small proportion of IT spend today – tens of billions of dollars in a $3 trillion IT market. But the consensus is that they are growing an order of magnitude or two faster than traditional IT, as users increasingly want to purchase services and infrastructure as they need it, rather than buying in big chunks of investment.


When it comes to Cloud, users want choice. The sensible ones see an infrastructure built up using many clouds - private, public and hyper-scale over time. What is becoming commonly known as a ‘hybrid’ model. Ideally connected together via a 'common data platform'. They don’t want to be locked-in to any one vendor. Nor do they want to be limited to any one location, although consistent data location is critical, especially in Europe. Trust and perception of risk are more important than technology when it comes to Cloud service adoption.


Of the hyper-scale players, AWS (Amazon Web Services) is the elephant in the room. Estimates are they account for $4bn per annum of Cloud service business today, admittedly with estimated costs of $6bn per annum! Google and Microsoft are the closest challengers, with VMware the new kid on the block with its vCHS service – launched in Europe last week. These companies are also pitched against the OpenStack community, which seems to be gathering significant momentum.


Developers are becoming far more important, and their access to Cloud services is accelerating innovation. Hot topic last week - the ‘value’ created by WhatsApp, bought recently by Facebook for $19bn. 1 app, 55 people, 35 developers. $19 billion! Allegedly all developed on the IBM SoftLayer Cloud.


Finally, there is much debate on whether Private Cloud is really Cloud at all. I think it should be included, as long as it offers users self-service access to a catalogue of automated on-demand services. Not just a planned IT project renamed as a Cloud project to keep your bosses happy. Assuming you include it in the market sizing numbers, Private Clouds no doubt account for the vast majority of projects today. Although getting an accurate figure is near impossible given the vast number of definitions out there.


In summary, as you build your cloud approach, make sure you’re not falling for the hype in either direction - neither ‘no Cloud’ nor ‘a single Cloud for everything’ is a sensible, pragmatic or sustainable strategy for any IT department or business.


Also see: 9 questions you should ask before putting your data in any Cloud

This week at the HiMSS14 event in Orlando our partner BridgeHead is announcing a new healthcare data management solution based on the FlexPod Select architecture.


As I’ve written about before, I often get asked by non-IT folk (also known as ‘normal people’) what work I do. I usually respond by telling them I work in IT, for a great company called NetApp. We do storage systems and data management. And if that doesn’t work (it usually doesn’t), then I move on to explain that the computer systems we sell are the sort that companies rely on to store their – and your – most valuable information. I find the two best examples are your banking details and, perhaps even more importantly, your medical record.


Did you know that 30% of the world’s stored data is now devoted to healthcare? Largely as a result of the high growth of electronic patient data, multi-ology images, human genetics, clinical trial data, population information, increased regulation and many new technologies. Healthcare organisations need a more efficient and more integrated solution to deal with all this critical data and information. HealthStore for FlexPod can help. The solution combines the best in storage, network and healthcare-integrated software to create a pre-validated foundation for data management in hospitals. It has been developed to help hospitals manage all of the clinical and administrative data across the hospital enterprise – from mission critical applications such as Electronic Patient Record (EPR) systems through to Picture Archiving and Communication Systems (PACS).



HealthStore integrates BridgeHead’s Healthcare Data Management (HDM) Solution with FlexPod Select, the latest addition to the FlexPod family for big data environments. The fully validated and tested solution combines NetApp E-Series storage, Cisco Unified Computing System servers and Nexus fabric switches into a single, flexible architecture. The solution also includes BridgeHead’s HDM software, meaning hospitals can place their (or should that be ‘your’?) data into a central repository that is fully indexed for easy search, protected in the event of corruption, accidental deletion, system outage or disaster, and available from multiple locations when it’s needed most.


Today, more than 3,500 healthcare organisations worldwide use innovative clinical data and IT storage solutions from NetApp to share patient data across the continuum of care. 


For those who are attending HiMSS14 – please stop by our exhibition stands – NetApp (Stand 2773) or BridgeHead (Stand 3283) to find out more.

For more information about HealthStore for FlexPod, please visit www.BridgeHeadSoftware.com


And if you’re not in IT, rest assured that very soon your medical record could be that bit better protected, and available that bit quicker if you should ever need it.

I don’t get every decision right. Nobody does. The beard I grew at university would be a good example of many I have got very wrong. Thankfully that was in the days before camera phones and Facebook. In IT, the traditional decision-making process usually starts with a business problem; an application is chosen to best solve that problem, then a software and hardware stack is chosen to run that application. Lots of big, expensive decisions. After the project is in production, changing the hardware you chose becomes near impossible. Or somebody else’s problem when you’re in a new job 3 years later. Especially with storage systems, when you potentially have lots of very valuable data to migrate. And different types of SANs (Storage Area Networks) have never been very compatible with each other. Very reliable. Not very efficient. Not very flexible. Hard to change.


Start by asking the right question

Over the last year or so, when I meet NetApp customers for the first time, I have started asking a simple question about their decision to use our products - “What is the most important thing you buy from NetApp?” Most answer a FAS2000, FAS3000 or FAS6000. Or some combination of these and the other storage systems we sell. And I like to point out that they are wrong (politely of course, I’m British). They are in fact buying into our Data ONTAP software platform, integrated within a FAS hardware appliance. It may seem a subtle difference, but ONTAP can also run in the Cloud as a VSA (Virtual Storage Appliance), on non-NetApp ‘white box’ hardware as ONTAP Edge, or to virtualise other storage arrays. Traditional SANs don’t let you do this. They do what they do well, but not much else. Great if you find you made the right decision. Not easy if you didn’t, or requirements have changed. In other words, the data management software decision you make is far more important than the hardware you choose to run it on. Some call this a Software-Defined approach. Getting this decision right means you should have a far more flexible infrastructure. You might even find it more reliable, as I wrote about here.


The News: A much more Flexible storage approach.

Today we are launching FlexArray storage virtualisation software. A new capability in our Data ONTAP OS, available on the shiny new FAS8000 series – also launched today (with 2x faster performance and 3x more Flash!). It means you can fully virtualise traditional SANs and add a layer of Data ONTAP intelligence to EMC and HDS arrays, directly from the FAS8000 system. Once virtualised with ONTAP, you’ll find your storage infrastructure becomes far more flexible – unified SAN/NAS workloads, scale-out for greater performance and scale, unified management, multi-vendor back-up/recovery, much faster data protection with Snapshot copies, thin provisioning, cloning, deduplication, compression, integrated Flash acceleration, virtual storage tiering, proactive automated support, and access to integrations across the vast NetApp partner ecosystem. And perhaps the two most important things you’ll gain: 1) typically 35% storage space efficiency savings, meaning you need to buy 35% less SAN today, and 2) the potential to connect into more than 300 private, public and hybrid Cloud services based on Data ONTAP.


In summary, FlexArray gives you 35% efficiency savings to justify your decision today. And the flexibility of direct access to a world of new Cloud possibilities if you find your traditional SAN is not the right decision for you tomorrow.
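
That 35% figure is simple to sanity-check with back-of-the-envelope arithmetic. A minimal sketch (the function and numbers below are illustrative, not NetApp sizing guidance):

```python
def raw_capacity_needed(logical_tb: float, efficiency_saving: float = 0.35) -> float:
    """Raw capacity to provision once dedupe, compression and thin
    provisioning reclaim `efficiency_saving` of the logical footprint."""
    return logical_tb * (1 - efficiency_saving)

# 100 TB of logical data with the typical 35% saving quoted above:
print(raw_capacity_needed(100))  # 65.0 TB to buy, instead of 100 TB
```

In other words, the saving compounds with every refresh cycle: each terabyte you don't have to buy today is also a terabyte you don't migrate or power tomorrow.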

At the end of last week I got the chance to join a briefing by the joint NetApp / Fujitsu team working on vShape. Launched in 2013, vShape is a reference architecture for converged virtualised infrastructure combining the best bits of Fujitsu (Primergy servers), VMware (vSphere software), Brocade (ICX switches) and NetApp (FAS storage running Data ONTAP software). All combined with the global scale, support and professional service expertise of Fujitsu to deliver the full, integrated solution anywhere you need it, and far faster than you could ever build it yourself.


The traditional approach would be for a customer to spend much time designing, integrating and testing before moving an IT solution into production. Sometimes a project lasting many months, or even years. Only to find the solution they chose is not quite right, technologies and needs have changed, and so the project cycle starts all over again. With a solution like Fujitsu vShape, Fujitsu (with the support of an ecosystem of partners like ourselves at NetApp) does most of that sizing and integration work before it is shipped, leaving the customer to concentrate on getting to production faster and fine-tuning to support the specific needs of their business. With the knowledge that the solution is designed to scale from fewer than 25 virtual machines to as large as you could ever need with vShape Enterprise.


As the IT industry spends a lot of time working out what Cloud Computing is really for, it’s worth remembering that there is a huge amount of non-virtualised IT infrastructure out there – in fact most estimates are that this is still the majority of IT in use today. Even if it feels like Virtualisation is standard (and all ‘a bit 2009’) to many of us working in and around the data centre, lots of remote offices and smaller sites especially have yet to be virtualised fully. There is no reason they can’t be. If you’re reading this and are the lucky owner of some of this type of ‘2009 IT’ still today, take a look at vShape, you might find it’s faster, cheaper and easier than you think to get to a fully virtualised architecture (server, network and storage). And you might find the next logical architecture steps – automating, then moving to a hybrid cloud at some point – might be possible a bit sooner than you thought.



It’s 70 years today since Colossus, the world’s first programmable electronic computer, was used at Bletchley Park to crack messages sent by Hitler and his generals. It is thought these machines and their 2,400 valves shortened WW2 by up to two years! Still perhaps the most impressive impact of computer hardware in history. If you visit The National Museum of Computing, you can walk around all 5 tonnes of it. Colossus was kept Top Secret for 30 years because of the sensitivity of the work it did. Not surprising, then, that it’s only in the last 10-15 years or so that Colossus and Bletchley Park have started to get the recognition they deserve.


I suspect not much attention was paid to the aesthetic appearance of Colossus. But as computers got smaller and more powerful in the years that followed, valves were replaced by transistors and then microchips. To the point where the interesting shiny hardware bits got hidden totally from view. Usually behind a cover known as a ‘bezel’ – the first of which was invented by Dr Alfred von Bezel in the late 1960s*. Computer hardware has continued to become more powerful ever since, as Moore’s Law predicted.
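
The scale of that prediction is easy to illustrate with a rough sketch (starting from the ~2,300 transistors of Intel's 1971 4004; the two-year doubling period and the projection are an illustration, not a precise history):

```python
def transistors_after(years: float, start: float = 2300.0) -> float:
    """Project a transistor count forward assuming a doubling every
    2 years, Moore's Law style (Intel's 1971 4004 had ~2,300)."""
    return start * 2 ** (years / 2)

# Roughly 43 years on from 1971, i.e. around the time of this post:
print(f"{transistors_after(43):,.0f}")  # into the billions, as modern CPUs are
```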


Over recent years, innovation and differentiation in the computer systems industry has come mainly from software, with hardware becoming more and more commoditised. There are only a few disk drive, memory and CPU manufacturers in the world. Most computer systems companies (at least the successful ones) understand that the value they add is packaging these components in the optimum way, with unique software, through partners, to solve customer problems.


Much has been written on how ‘Software-Defined’ architectures change everything. As hardware components appear to become ever cheaper, many people are starting to question the value of integrated computer systems, or appliances – why not buy / build your own software and buy these commodity components direct from the manufacturers? Wouldn’t that save a lot of money? And add Agility? The utopia of the cheapest ‘dumb’ hardware, with everything and anything possible through software intelligence.


I recently bought a 2TB WD My Passport Ultra drive for £89. And it includes free ‘cloud back-up for my precious memories’. Not surprising, then, that people in organisations spending millions on data storage are looking seriously at what this could mean for the Enterprise. VMware (with vSAN, part of vSphere), Microsoft (Storage Spaces, part of Windows Server) and several much smaller, VC-backed companies have started to talk about the concept of replacing storage appliances entirely with software capabilities. With the assumption that you buy your own hardware. NetApp also has a product in this space – Data ONTAP Edge – which runs in a virtual machine on a standard enterprise server.


However tempting this may be, the realities are somewhat different. These software-only solutions are probably a good bet for small businesses or remote sites, but for core data centre use they lack a number of critical capabilities – the ability to scale beyond a few tens of terabytes, the necessary performance, storage efficiency features (workable deduplication, compression and the like), support for multiple software stacks to avoid software vendor ‘lock-in’ and, most importantly, the ability to manage hardware availability efficiently. They also assume that IT departments have plenty of spare time to spend on integration, testing, scripting and so on. And that, rather than storing intelligently, it is acceptable to store ever more copies of data. Most CIOs I talk to want to spend less time building stuff, and more time delivering new services to the business. Less time and resources operating storage. More time exploiting data.


In summary, software is about as much use as a chocolate teapot without the right hardware to run it on. No doubt innovation and differentiation are in software – NetApp spends 95% of our R&D efforts there. But disk drives have lots of moving parts and fail. Flash storage has no moving parts, but still fails, just in different ways. Companies like NetApp have 20+ years of experience building solutions to deal with this efficiently and at very large scale (a single NetApp customer now has more than 1,000,000 TB of storage to manage!). And we already outsource our hardware manufacturing: unique Data ONTAP and SANtricity software, bought and sold mainly as optimised FAS, E-Series and EF-Series systems built with ‘commodity’ components.


No doubt more choice is needed, a software-only solution will make sense for some people and those web-scale companies that can afford to build things their own way, but the vast majority will need packaged, price / performance optimised storage appliances for many years to come. The focus should be the value storage software can bring you on top of this, whether you run it in your data centre or buy a service from the Cloud. Uniquely, the best storage software can abstract pools of data from hardware and connect them across clouds to offer a whole new world of architecture possibilities for application owners. But you’re still going to need some hardware sitting in a data centre somewhere, probably with a shiny bezel or two, to run it on.


*This bit about Dr. Bezel is not true, but somehow the idea came up in conversation at the NetApp Coventry office Christmas lunch last year. I can’t remember why.

Most people and organisations I speak to love the idea of ‘pay-as-you-use’ cloud offerings. But, as I wrote about in this previous blog, they really don’t like the idea of being ‘locked in’ to one cloud provider, they have both legitimate and perceived security concerns, and they are increasingly worried about the growing issue of Shadow IT (IT bought by individuals or business units outside the control and governance of the IT department). They want access to open, yet secure, hybrid clouds. With the ability to securely manage, govern and efficiently transport applications and data across those clouds. And ultimately focus on improved service delivery to their end customers.

At Cisco Live in Milan this morning, Cisco announced an expansion of their Cloud strategy and, more specifically, Cisco InterCloud. This is a significant announcement from the world’s largest networking (and fastest growing server) vendor. Especially as all of us in the IT ecosystem try to work out how to help IT departments move from being builders of data centres to brokers of services. This has to be based on the ability to offer users a choice of cloud services: owned (Private), borrowed (Public) and combinations of the two (Hybrid). NetApp is one of Cisco’s partners for the launch of InterCloud (see Phil Brotherton’s blog post for more details), together with Citrix, service providers including BT (who announced their intention to integrate InterCloud into their Cloud Compute portfolio, based on FlexPod) and others.


In other news, again together with Cisco, we also announced 6 new FlexPod converged infrastructure designs, including for Microsoft Private Cloud, Citrix CloudPlatform (powered by Apache CloudStack), VMware vSphere 5.1, and the recently announced Cisco Nexus 9000 series – the building block for Application Centric Infrastructure (ACI).

And finally, UShareSoft, in collaboration with Cisco France, APX and Cloudwatt, launched Plug2watt, the first integrated Hybrid Cloud solution based on FlexPod.


Based on this evidence, will 2014 finally be the ‘Year of [Hybrid] Cloud’?


Amazingly, we are now 3 years on from the launch of FlexPod in late 2010. Time flies! The interest (and our joint investment) in the optimum combination of Cisco and NetApp to help customers with their IT infrastructure challenges shows no sign of slowing. Cisco Live Europe is approaching fast, and I am very pleased to reveal our plans for the show. After 3 years in London, we will be travelling to Milan from January 27th-31st, 2014 for Cisco’s largest technical conference in Europe. NetApp is a very proud sponsor again this year. If you are there, please do stop by the NetApp stand to meet with us, our executives, partners and FlexPod experts. We’d love to discuss ways to accelerate your business with FlexPod Integrated Infrastructure, speed application deployment with clustered Data ONTAP and Cisco Application Centric Infrastructure (ACI), handle data growth and reduce IT disruptions by using NetApp Data ONTAP, the #1 storage operating system, and ultimately scale to any cloud solution. We’ll also have some very exciting new announcements to share at the show…


Book 1-on-1 meeting

Contact our team today to arrange a meeting with our Executives and Technical experts.


Breakout: Cisco and NetApp Strategy Update and Panel, Data Center technology trends and FlexPod Innovations   

  • Register here
  • Wednesday 29th @ 11:30am.
  • With me, Kerry Partridge from Cisco, our technical experts, plus very special guests from VMware, Citrix, BT and more


Webcast: Transition to any cloud with FlexPod

The leading infrastructure solution from Cisco and NetApp

  • Register here
  • Wednesday, 29th @ 13:00
  • Alan Gerrish, Consulting Systems Engineer, Cisco Systems
  • Patrick Strick, Worldwide Technical Engineer, NetApp

Technical Demos

Our FlexPod experts from California and EMEA will be available in the World of Solutions to showcase our latest innovations.


Meet FlexPod Premium Partners

We have special guests on our booth this year! APX, Computacenter UK, Axcess Denmark and Sinergy Italy.


Join the FlexPod Hero Challenge

Meet the challenge on the NetApp stand and you could win “FlexPod Hero” shirts and a daily prize of an Xbox One.


Follow-us at the show

Whether you're in attendance or just following along from home, you can join the conversation on Twitter using the #NetAppCiscoLive and #FlexPod hashtags. We’ll also have plenty of NetApp folks and bloggers participating: me, @JohnRollason, and my friends @davegingell, @Stephanie1508, @KateLechowicz, and @antoniolupo


See you there!

Last week I participated in an excellent round table hosted by www.meettheboss.tv with the CIOs of several large European organisations. As with most IT people of that level I meet, one of the main topics of discussion was how to take advantage of, and avoid the risks of, ‘The Cloud’. No doubt, on paper (or should that be screen these days?), the proposition on offer from Service Providers and the Hyperscale companies – mainly Amazon Web Services – is tempting. Limitless compute power, cheaper to run, simple pricing, pay as you use, global access, end-user transparency. It does indeed sound like a fabulous place. However, there seem to be quite a few catches that need careful thought before heading lock, stock and two smoking barrels towards The Cloud. And most of them have to do with your data. I could think of 9 major topics…


  1. Who in your business ‘owns’ overall data governance and what do they think? IT, Privacy Officer, Security Director, others?
  2. What happens if the NSA or GCHQ want access to your data? Is that a concern?
  3. What happens if you want to change suppliers – either in an emergency, as happened with Nirvanix or 2E2, or simply because a newer service might be better for your needs – e.g. the recently announced Google Compute Engine or VMware Hybrid Cloud Service?
  4. What data structure(s) will you use? Open standards or otherwise?
  5. What level of performance guarantee and Quality of Service do you have? Like broadband, is your service level dependent on how others choose to use the same Cloud?
  6. Have you considered any network distances involved, the speed of light and latency?
  7. Are you going to be breaking any laws - Data Privacy, Data location, etc. – by moving your data to the cloud?
  8. How will you audit your data in the Cloud, or across multiple Clouds, if you need to? If you have one, what does your regulator think?
  9. And, even if the cost for storing it is next to nothing, what does it cost to retrieve your data when you need it? How long will it take?
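
Question 6, at least, is easy to put numbers on. A back-of-envelope sketch (the distance and fibre factor are rough assumptions, and real round trips add routing and queuing on top):

```python
def min_round_trip_ms(distance_km: float, fibre_factor: float = 0.67) -> float:
    """Physical lower bound on network round-trip time: light in optical
    fibre travels at roughly 2/3 of c, and the signal must go both ways."""
    c_km_per_ms = 299_792.458 / 1000  # ~300 km per millisecond in a vacuum
    return 2 * distance_km / (c_km_per_ms * fibre_factor)

# London to a US East Coast data centre is very roughly 5,600 km each way:
print(f"{min_round_trip_ms(5600):.0f} ms")  # ~56 ms before any routing or queuing
```

No amount of Cloud spend changes the speed of light, which is why the location of your data relative to its users matters so much.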


Despite all this, for many workloads – especially those with fluctuating and/or very high scale resource requirements – no doubt the speed and cost benefits do, or will, outweigh these risks. And, unlike traditional IT, Cloud services don’t have to be purchased by the IT department. So there is no real way of stopping them, even if the IT or Security departments want to. Most organisations should plan a hybrid of Private Cloud (essentially better-run internal IT), Cloud Service Providers (who can offer bespoke IT and applications run ‘as a Service’ in their data centres) and the Hyperscale players (who can offer global economies of scale for IT infrastructure that are going to be difficult to replicate in any other way). The key to this will be building a data platform that allows the IT department (led by the CIO) to offer users the choice of services they have come to expect, but with the appropriate long-term risks and the 9 questions above taken into account. Phil Brotherton, Vice President, Cloud Solutions Group (CSG) at NetApp, explores here why everyone needs to think like a Service Provider.


Over the past 6 months, I’ve heard a few very senior IT people say they are ‘all-in’ for the Cloud. I don’t really know what that means, but I do hope they have thought about what they would do if they decide they need to be ‘all-out’ at some point in the future. The Cloud should be an extension of your IT strategy, not a replacement for one.

One of the great things about the English language is the collective noun. We could use the word ‘group’, as I suspect the Germans do, but instead we have thousands of words, depending on the thing we want to group. One of those oddities that makes English such a special language. And so confusing for non-native (and even native) speakers to master. Although I did hear an excellent speech by the Dutch Gartner analyst Frank Buytendijk this week on the future of Big Data, which included the phrase ‘airy fairy’. Proving some non-native speakers can indeed master it brilliantly.


There are always many oddities in the language of IT, but the one causing perhaps the most confusion at the moment is around Flash memory and storage technologies. For background on the raw technology itself and how it is disrupting the storage market, see Laurence ‘Flashman’ James’ blog. For me, though, that’s not really where the confusion lies. It’s in how the market and ecosystem are evolving around it. And in the fact that Flash is useful and transformative across the IT infrastructure stack – host, server, network, storage controller, storage media – so there are lots of possible options.


Concentrating on the Flash storage array market, there are currently three groups of vendors selling them – very small, large and very large. In the very small category there are 53+ start-ups who just do Flash, and are counting on it being a big enough market disruption for them to survive long term or be acquired. In the very large category there are IBM, HP and Hitachi, who do everything. And in the middle there are the storage market leaders, NetApp (by software OS) and EMC (by hardware). Starting to sound a little like Goldilocks and the Three Bears…


Broadly there are two types of Flash array – the ‘All-Flash Array’ (AFA), and the Hybrid Array, where disk (the capacity tier, cheaper per gigabyte) and Flash (the performance tier, cheaper per IO per second) are combined and optimised based on workload requirements. The AFA is best suited to specific applications where speed is the main or only requirement – e.g. very large mission-critical applications and databases, analytics / high-performance computing, or some virtual desktop environments with unusual performance needs – usually to replace large, ageing, monolithic storage arrays. The Hybrid Array is the best option for the other 99.3% of the market. Why? Disk continues to be 6-8x cheaper per GB, and deduplicated disk is still 6-8x cheaper than deduplicated Flash.
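
That 6-8x gap is the whole argument for hybrid, and it is easy to sanity-check. A rough sketch of blended media cost per GB (all prices here are illustrative assumptions for the arithmetic, not vendor list prices):

```python
def blended_cost_per_gb(flash_fraction: float,
                        disk_cost: float = 0.05,
                        flash_multiple: float = 7.0) -> float:
    """Blended media cost per GB for a hybrid array, using the post's
    figure that disk is 6-8x cheaper per GB than Flash (midpoint 7x)."""
    flash_cost = disk_cost * flash_multiple
    return flash_fraction * flash_cost + (1 - flash_fraction) * disk_cost

# A hybrid array with a 10% Flash performance tier vs an all-Flash array:
print(blended_cost_per_gb(0.10))  # ~$0.08/GB
print(blended_cost_per_gb(1.00))  # ~$0.35/GB - over 4x the hybrid cost
```

Even a thin Flash tier captures most of the hot data's performance benefit while leaving the cold majority on the cheap tier, which is why hybrid wins on economics for most workloads.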


NetApp Hybrid Array options are 1) FAS systems running Data ONTAP – in fact, 70% of the many thousands of systems we now ship each quarter include Flash (either Flash Cache or Flash Pools) – and 2) E-Series running SANtricity (SSD Cache).


The NetApp All Flash Array is EF540. For some reason lots of people think we don’t have one on offer. In fact, it has been in the market for more than 9 months now, based on technology proven over many years. We have sold them all across EMEA.  To a supercomputing customer in Poland. To Media customers and a major Insurance company in England. To a University in Scotland. To a global Pharmaceutical in Belgium. To a Telecoms company in Israel. To an Energy company in Turkey. To Public sector and media organisations in France. To several Finance and Tech customers in Germany. And quite a lot more…….. If you’re in the market for an AFA, make sure you don’t get blinded by seemingly amazing price offers from start-up vendors with little proven track record (and no support team) as they try to buy your business. Do you really want to take the risk for your Tier 1 data?


Of course there is a lot of room for innovation in the Flash market over the next few years. Hope that helps at least a bit with some of today’s confusion – let me know if not. There are lots of opinions and options out there.

It doesn’t make sense for each application owner or business unit to buy their own choice of IT hardware or infrastructure*, just as they can’t rent their own building. While an expanded end-user BYOD (Bring Your Own Device) strategy has benefits for both them and the IT department, there are too many economies of scale and business-risk requirements to let people continue building these ‘vertical silos’ in the data centre. As a result, IT departments have evolved to have Infrastructure (storage, network, server) and Application teams supporting end users – specialists whose job it is to buy the right IT for the organisation as a whole. The problem is that those specialist infrastructure teams can become fiercely independent, with their own choices of vendor, partners and architectures. This internal competition leads to individually managed ‘horizontal silos’ of control, which frustrates the application owners – in many cases causing them to go back to buying their own infrastructure, either from the public cloud (with its associated risks) or as over-priced and inflexible ‘engineered systems’.


This problem has grown in recent years as new applications, speed, virtualisation, data growth, Software-Defined infrastructure and innovative new cloud service options become ever more important to CIOs and the people running their applications. And so the IT industry is increasingly focused on helping customers manage their horizontal silos of infrastructure in a way that is optimised for applications – old and new, in the corporate data centre and in the cloud.


The optimum infrastructure: application-centric, cloud integrated

Applications should be able to provision the services their end users need, when they want them, based on service levels and policies defined by the IT architecture team – who then spend their time managing the infrastructure as efficiently as in-humanly possible. To do this, there has to be better integration between your choice of hardware and software vendors, and a profound shift to software platforms controlling data centre infrastructure. No longer can an isolated specialist team simply buy an appliance (or even a cloud service), throw it in the data centre and hope for the best. That might be a short-term fix, but longer term it will cause far more problems in today’s more complex world.


IT vendors like NetApp need to continue to invest heavily in this area to allow customers to better Control (innovative device management and monitoring), Automate (perform repetitive workflow tasks in software) and Analyse (service management, monitoring and reporting) their infrastructure. But also, and maybe more importantly, in integrations with an ecosystem of partners, allowing them to manage and control our storage platform innovations from their application-centric and cloud orchestration platforms. Yesterday saw some major announcements from industry heavyweights Cisco and Fujitsu supporting this trend. Naturally, both are long-term strategic partners for NetApp.


Fujitsu on Cloud and SAP

At Fujitsu Forum 2013 in Munich (where I find myself this week, along with an impressive 9,999 others) Fujitsu announced their Cloud Integration Platform to allow multi-vendor, hybrid clouds to be securely controlled from a single management portal. They also announced the general availability of FlexFrame Orchestrator for SAP (another major partner for NetApp). This allows organisations to manage their entire SAP environments from a single integrated software platform, including both traditional and HANA in-memory applications. You can add it to your existing infrastructure or buy it as part of an integrated appliance from Fujitsu. More than 300 clients of Fujitsu are running SAP HANA already.


Cisco on changing the world

And late yesterday Cisco launched their Application Centric Infrastructure (ACI) strategy at a major event in New York. They announced new products including the Application Policy Infrastructure Controller to automate network provisioning, Application Network profiles (seemingly based on learning from Service Profiles in UCS),  Nexus 9000 switches (from their Insieme team ‘spin-in’) and Application Centric Infrastructure Security which integrates physical and virtual security into the ACI framework. Our CTO Jay Kidd was there – read his thoughts on ACI here.


Although this application and cloud integrated management trend cannot be defined by any single new product from any one vendor, it certainly feels like it is a core focus for a lot of people in IT at the moment. To win in this new world, companies will need to have global scale, unique software innovation, and openly partner to develop the choices of integration needed to meet increasing customer demands for speed, differentiation and control.


*There will always be exceptions for Big Data, High-Performance Computing and the like, but these should eventually account for way less than 10% of most organisations’ IT (and real estate, I guess) spend.

Every now and again an IT Industry term seems to capture the direction of this slightly bizarre industry very well. It feels like ‘Software-Defined’ has been that term for the past 12 months*. VMware took the term from the emerging Software-Defined Networking (SDN) trend around the time of VMworld US & their Nicira (now VMware NSX) acquisition last year. And invented the term Software-Defined Data Centre (SDDC) as a way of defining their broader strategy. Last week was VMworld Europe and it certainly felt like there is no stopping the Software-Defined juggernaut as it continues to become an accepted industry term. See David Gingell’s review of the event.


Inevitably the term has moved to Storage, so we have ‘Software-Defined Storage’ or SDS. And a lot of noise and confusion in the industry as always. But what does it mean? And does it change everything you need to think about when you’re building your short, medium or long term storage strategy? As nearly always in IT, it depends……


I think it makes more sense to go back to the customer problem we’re all trying to deal with. There is no doubt the vast majority of IT is still trapped in inflexible, inefficient hardware silos. This ‘museum of past decisions’ means that 70% of IT resources tend to be spent on maintenance – i.e. stopping anything getting anywhere near a fan. Add in the promise of the economics and agility of the Hybrid Cloud and it’s clear that things have to change.


For me, Software-Defined Storage means that users – not a storage admin – provision storage services, based on policies and service levels set by the storage admin (who now has far more time to set them), from their choice of application. And there should be a choice of both hardware and cloud services that can be combined (i.e. in a hybrid fashion) to deliver those services in the most efficient and flexible way possible at any given time – the right answer today might not be right tomorrow.
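As a concrete sketch of that split of responsibilities, here is a hypothetical example. The service-level names, attributes and `provision` function are all invented for illustration – this is not a real NetApp or industry API, just the shape of the idea: the admin defines policies once, and application owners self-serve against them:

```python
# Hypothetical sketch of policy-driven provisioning. Service levels, field
# names and the provision() call are invented for illustration only.

SERVICE_LEVELS = {
    # Defined once by the storage admin, not per request.
    "gold":   {"media": "flash",  "iops_per_tb": 10_000, "snapshots": True,  "replicated": True},
    "silver": {"media": "hybrid", "iops_per_tb": 2_000,  "snapshots": True,  "replicated": False},
    "bronze": {"media": "disk",   "iops_per_tb": 500,    "snapshots": False, "replicated": False},
}

def provision(app_name, size_tb, service_level):
    """Called by the application owner (self-service), not the storage admin."""
    policy = SERVICE_LEVELS[service_level]
    volume = {"app": app_name, "size_tb": size_tb, **policy}
    # A real implementation would call the storage platform's API here;
    # this sketch just returns the resolved request.
    return volume

vol = provision("crm-db", size_tb=2, service_level="gold")
print(vol["media"], vol["iops_per_tb"] * vol["size_tb"])
```

The point of the pattern is that the application owner only ever names a service level; which hardware (or cloud) actually backs it can change underneath without the request changing.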


To do this, storage resources must be defined and controlled in software that can run on a variety of hardware options – disk, Flash, integrated appliances, cloud, etc. And perhaps the most important piece of the jigsaw, once this universal software platform has been built, is comprehensive integration with applications and cloud automation software.


So, does this exist today? From a NetApp point of view, the answer is essentially yes. Of course there is more to do, but the vast majority of our efforts have gone into a single software platform – Data ONTAP – for many years. This is what you buy into when you choose NetApp. It’s no exaggeration to say that this approach is the single thing that differentiates us most in the storage industry (see here for Chris Mellor’s view). The hardware is your choice – a NetApp appliance of varying sizes (FAS, with or without Flash), 3rd-party hardware (V-Series, ONTAP Edge) or, more recently, a cloud service (examples being with Verizon & Amazon Web Services). There is application self-service through the widest ecosystem of application and infrastructure partners, both vendor-specific and open source. And we now have completely virtualised storage services, thanks to the Virtual Storage Machine (VSM) – available since the advent of clustered Data ONTAP a few years ago and recently enhanced in ONTAP 8.2. This means far fewer hardware location barriers and boundaries for your data.


You might read lots about Data Planes, Control Planes, Supersonic Planes and Very Important Press Releases, but Data ONTAP is proven as the world’s best-selling storage OS today according to IDC. And as the world moves away from Hardware-Defined Storage (co-incidence that is HDS?) we expect more and more people will realise the true advantage of working with NetApp and Data ONTAP. The benefits in a nutshell? 1) Big efficiency gains, thanks to the economies of scale of a single storage OS platform, 2) Non-disruptive operations, data motion and portability as storage services are completely abstracted from hardware, and 3) Choice as NetApp is the only independent storage player left in the industry free to work with all the people you need us to.



*Smugness alert: as predicted at the start of the year.

Around the time I started university I remember seeing adverts everywhere in the UK that simply said ‘the future’s bright, the future’s Orange’. No indication of what on earth it was about. But it stuck in my mind. And it turned into one of the most iconic taglines. Even now anyone in the UK [over a certain age] would know it. At that time, Orange brought their innovation into the fledgling UK mobile market with impressive results. In 2010, nearly 20 years on [that dates me!], Orange Business Services, the business services arm of Orange, saw the emerging market for cloud computing was ripe for similar disruption.



Orange Business Services launched their innovative Flexible Computing Infrastructure-as-a-Service solutions:

  • Flexible Computing Express – an on-demand cloud solution, meaning you can instantly and securely manage your virtual infrastructure from anywhere, at any time.
  • Flexible Computing Premium – a more bespoke and comprehensive catalogue of virtualized IT components and service management.



Given Orange Business Services has the world’s largest data network, operates in 220 countries and serves more than 2 million businesses, it’s not surprising some of the biggest multi-national organisations are taking advantage of these services today. They include Tiens Group, GFI Software, Exaegis and a global luxury goods company. This is not some emerging service either: Orange expects €500m of revenue from cloud services by 2015. Predictable, secure cloud computing at global scale.



So what does Flexible Computing give you that on-premise IT doesn’t? Answer - quite a lot. You can self-provision infrastructure remotely and only pay for what you use each day. You can select the service level appropriate for your need – across compute, network and storage. Provisioning is automated from a shared pool, meaning you don’t need to waste money by over-provisioning or worry about running out of resources. Finally, you can view real-time and historical usage reports, meaning you know exactly what you’re paying for and what is really being used.  And as a result, with cloud computing solutions as advanced as this, you can launch innovation fast without the need for huge amounts of capital and long IT procurement projects. You can make smarter decisions about how you apply IT to business problems.



I am very proud that Orange Business Services chose to work with NetApp and build their Flexible Computing services with NetApp Data ONTAP and over 35,000 terabytes of our storage! And that they have partnered with us to share their success. I look forward to more people understanding the power of our global partnership in the market for cloud computing services. We are starting this week at VMworld Europe in Barcelona – I look forward to meeting you if you’re planning to be here.



For more details of the story, see here.

#NetAppVMworld     #NetAppOrange     #NetAppCloud

It’s that time of the year again when I can reveal what NetApp is planning for VMworld Europe. My and NetApp’s 6th year! Time flies when you’re having fun. (If you want to read about last year, see here.) What’s in store for attendees this year? We have put together the following guide. I’ll be there…twice!! All part of the NetApp ONTAP Mini-Me Experiment – new for this year. Hope to see you there…..


If you are in Barcelona from October 15-17th, there is one place to be – The NetApp Booth #D206 at VMworld 2013. Together with VMware and our partners, we will show ways to simplify your IT and drive business success with virtualization and cloud solutions. Learn how NetApp supports the Software-Defined Data Center,  hybrid cloud, and end-user computing with unique VMware® vSphere®, vCloud® Suite and Horizon Suite integration and NetApp clustered Data ONTAP software as the foundation.


Meet our executives & technical experts

Don’t miss the opportunity to meet NetApp. You can book personal meetings with NetApp executives and technical experts by clicking here


The NetApp Cloud Lounge – endless possibilities

Our cloud experts will be available for consultation onsite in the NetApp Cloud Lounge. Share your experience with NetApp using #NetAppVMworld and pick up fresh cocktails in the Lounge: NetApp Unified Fruit Smoothie, Orange Epic Elixir & Data ONTAP Clustered Fruit Cocktail. We’ll also be hosting a Drinks Reception at the NetApp Cloud Lounge on Tuesday 15th, 5.30pm to 6.30pm – please collect your pass on the NetApp booth.


Demos of the Latest NetApp Technology

Our experts will be showcasing new technical demos on the NetApp booth, showing how clustered Data ONTAP is revolutionizing virtualization and cloud solutions with software-defined storage and nondisruptive operations.

Join our Hands on Labs:  HOL-PRT-1301 NetApp Virtual Storage Console. Take a guided tour of the NetApp Virtual Storage Console (VSC) and see how VMware and NetApp solutions work together to simplify storage management. Please remember that Hands-on-Labs are first come, first served and cannot be scheduled in Schedule Builder.


NetApp breakouts

**You could win an exclusive Helicopter Tour of Barcelona by joining any one of the sessions!**

  • Tuesday 15th October 2013, 14:00 - 15:00, EUC5400
    • VMware Horizon Suite, Innovations for Storage Scalability, Performance and Data Protection
    • Chris Gebhardt, Sr. Technical Marketing Engineer, NetApp
  • Tuesday 15th October 2013, 12:30 - 13:30, STO5339
    • Implementing Software Defined Storage with SDDC to Deliver Increased Agility to Apps & End-Users
    • Joel Kaufman, Senior Manager, VMware Technical Marketing, NetApp
  • Thursday 17th October 2013, 09:00 - 10:00, STO5423
    • Accelerate Your Existing Storage with Server Caching
    • Chittur V Narayankumar, Sr. Technical Marketing Engineer, Server Caching Solutions, NetApp
    • Larry Touchette, Sr. Solutions Architect, NetApp
  • Thursday 17th October 2013, 10:30 - 11:30, STO5492
    • Extending the Benefits of Your Storage Arrays to Remote Offices and the Cloud
    • Pete Flecha, Solutions Architect, NetApp


NetApp Live Theater

Our NetApp experts and partners will host technical presentations on our booth, covering NetApp clustered Data ONTAP, FlexPod, Flash, Cloud, Converged Infrastructure and end-user computing. You could also win a professional racing bike for your child by attending any of these sessions! Learn more on our Community page.

NetApp Sessions

  • FlexPod: why converged stacks matter
  • ONTAP: The Foundation for Software-Defined Storage
  • Flash! Ahhh! It’ll save every one of us! (JR: winner of worst presentation title 2013?)
  • Hybrid Cloud: Maintain Control of Your Data Across Any Cloud
  • NetApp and Software Defined Data Center
  • Disaster Recovery of Business Critical Apps with Site Recovery Manager (SRM)


Partner Sessions

  • Why should you consider the new Bull Converged infrastructure offer (Bull)
  • Reducing Costs & Accelerating Modernization with Private Cloud (Computacenter)
  • ROI not DIY (Fujitsu)
  • Europe’s Enterprise Cloud: Delivering fast, High Availability Storage as a Service (Interoute)
  • Karolinska University: FlexPod with VMware, 3PB reference (Atea)
  • Case study about FlexPod Technology (Ermestel)
  • Generate higher value by NetApp Technology & IBM Services (IBM)


The NetApp ONTAP Mini-Me Experiment

At VMworld, you will get the chance to be part of the NetApp ONTAP mini-me Experiment – visit our Booth for more information. 


Check your agility with Team NetApp-Endura!

Take part in the Team NetApp Challenge and win from a range of exciting prizes, including a fabulous prize of a place in a team car at a forthcoming bike race.


Keep in the loop on NetApp activities, opportunities and news.

The IT market is changing faster than ever. Industry trends including Cloud, Big Data, Mobile, Flash and the Software Defined Data Centre mean this will only accelerate. The landscape is changing for CIOs as workloads become more distributed and hybrid cloud becomes reality. So, given this complexity, what should CIOs and the rest of us in IT be discussing and thinking about? Where should you be looking to save money, and where should you invest your time? The following ‘3-stage framework’ is based on a talk Matt Watts, our EMEA Director of Technology & Strategy, has given at a few events recently. I thought it was worth sharing.

Evolution of IT.jpg

1) The cost of Commodity IT

The things IT has to do to support the business: necessary and time consuming, but typically adding little value. Ask yourself the question ‘If I started today from scratch, what would IT do and not do?’ Once you’ve identified these Commodity areas you can make decisions about how to deal with them. Do you have them delivered to you as a Cloud service? Or do you build shared infrastructures that are highly efficient and automated to reduce the costs of running them? This is typically a discussion focused on operational excellence and legislative requirements, rather than technology.

2) The opportunity of Business Value IT

The things that create real value for your business: the applications that power your business, or the products that your business creates. With more focus here, it’s amazing how much additional value IT can add through aggressive application of technology innovation. An example: accelerating test and development by enabling developers to instantly create database copies. The focus here should be on providing high levels of automation and self-service.

3) Be ready to invest in New Opportunities

Companies are looking to exploit data and information – whether social media feeds like the Twitter Firehose (500,000,000 tweets streamed directly to you every day!), new analytics tools such as Hadoop to mine vast quantities of information for trends and patterns, or BYOD (Bring Your Own Device) strategies to better enable your mobile workforce. For example, within NetApp IT we recently deployed a Hadoop solution, which has reduced queries on 24 billion records from 4 weeks to less than 10.5 hours, accelerating our team’s ability to respond to customer needs. It enabled a previously impossible query on 240 billion records in less than 18 hours, further enhancing our proactive service capabilities. A recent survey by Vanson Bourne showed that 69% of C-level executives cite technology as one of the main reasons why business decisions are not being made quickly enough.
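To put those query times in perspective, the arithmetic behind the improvement is worth spelling out – 4 weeks versus 10.5 hours is roughly a 64x speed-up:

```python
# Quantifying the speed-up quoted above: 4 weeks down to 10.5 hours.
before_hours = 4 * 7 * 24   # 4 weeks expressed in hours = 672
after_hours = 10.5
speedup = before_hours / after_hours
print(f"{before_hours} hours -> {after_hours} hours: {speedup:.0f}x faster")
```

A query cycle that shrinks from a monthly cadence to an overnight one changes what questions a team can even afford to ask, which is the real business point of the example.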


The CIO challenge

The opportunity for CIOs to add value and competitive differentiation to the business increases exponentially as investment shifts from Commodity IT to Business Value IT and New Opportunities. According to Gartner, 63% of IT budget is spent running current IT infrastructure, 21% on ‘Grow’ to meet the natural growth in application performance requirements and data, and only 16% on New Opportunities, where there is the potential to create the most value. This is the challenge for a modern CIO. How do you maintain operational excellence whilst increasing investment in ways to add business value and exploit your unique data? How do you move from being a builder of IT to a broker of services?
