


It has certainly been an extremely busy couple of weeks since I last blogged, so for this one I thought I would share some of the experiences. The first was at the Cloud Expo Europe industry event at the ExCeL Centre in London, where I chaired a panel session covering cloud adoption.


[Image: The Cloud Triangulum]


Running a panel session is rather like an apprenticeship to becoming a chat show host, or a radio presenter, so you have to do your homework. You can see that I (on the right) am an amateur, and the purple headphones weren't quite the look I was after.


[Image: the Cloud Expo panel]

But this one was a delight to run. The panel consisted of Jon Smith, Head of IT at WDS; Leigh Morgan, Infrastructure Manager, WDS; and Andy Barrow, Technical Director, ANS. It very soon became apparent that WDS were very advanced in their approach to, and adoption of, Cloud models, infrastructure and technologies. Firstly, who are WDS? Their background is helping service providers and end customers get the most out of their wireless products. By wireless, think smart devices, tablets and so on. WDS are a worldwide operation and their goal is improving the end customer experience. This means quickly turning the gathered intelligence into a powerful business tool. Big Data in action! In other words, if you have a problem or challenge using your handheld device, WDS will have the answer.


The challenge WDS, and so many others, faced was consolidating a dispersed infrastructure and finding a model which delivered a stable and resilient platform, moving away from the organic growth which had driven complexity and was restricting the company from responding at speed. At the same time, WDS needed to change its method of backing up data.


This is where ANS's expertise in converged infrastructure and backup services delivers. ANS used that expertise to design and implement their converged FlexPod i3 platform, which combines leading cloud compute, network and storage technologies from Cisco, NetApp and VMware. The ANS implementation of FlexPod i3 also provided the growth and scale capacity required by WDS, while reducing the number of racks from 16 to 5. WDS also commissioned ANS to move them to a new, secure, fully managed Backup as a Service (BaaS) system, and additionally make use of Amazon AWS for developer services.

[Image: M33 in Triangulum]


WDS are clearly an organisation with great vision, who are making use of converged on-premise platforms, off-premise BaaS and hyperscaler developer services, and are seeing deliverable business value from a mix of Cloud models.

(I give in: Triangulum is Latin for triangle, and also the constellation that is home to the famous galaxy M33!)


If you want to know more then you can view the panel session at Cloud Expo here:

You can find the success story here.


No surprise that the desktop, as we knew it, is dead! No longer am I shackled to a big old clunky PC taking up valuable space on my desk (not that I have, or want, a desk!). The expectation is that we can display and use our workspaces on a plethora of different devices: devices of our choice, driven by the global nature of business. Collaboration is now the keyword driving an increasing number of mobile working styles, where the requirement to be online anytime, anyplace, anywhere applies (even in the bar with a Martini in your hand; if you don't understand the connection you're too young, and YouTube will help!).




Following this continuing trend, the talk is about enterprise mobility, where the applications, data and workspace follow the consumer, regardless of the device they happen to have with them. The device is quietly becoming less important. Without really giving it much thought, I now use up to five devices to access my business apps and data. This sort of crept up on me and I just accepted it as normal. It is not just the new normal: it's expected, even assumed.





The demise of the old legacy desktop infrastructure is no surprise, as it came with a long list of reasons why it was so expensive to maintain and manage. Notably, protecting the locally stored data was a pain. Controlling the leakage of data required patches, software and glue. Yes, glue. One organisation I spoke with regularly resorted to gluing up the PC USB ports to prevent corporate data leaking onto USB memory sticks. It worked, and was cheap, if a little messy.





Moving to the mobile workspace model does have its own headaches, but many of the data protection and security issues come back into the domain of the datacenter, and therefore back under central scrutiny and control. Other issues, such as delivering the contracted performance and response times to mobile workspace users, remain more of a challenge.











This week I was reading about the increasing number of FAS storage solutions shipping with Flash storage capacity alone, such as the All-SSD FAS. This is no surprise, as NetApp have shipped over 60PB of Flash storage, and well over 60% of the systems sold today include Flash storage in the mix. The announcement of the new FAS8000 range (FAS8020, FAS8040 and FAS8060) on the 19th February demonstrates the increasing momentum, with a tripling of the Flash capacity supported.



This makes perfect sense, especially for workforce mobility (VDI) solutions. NetApp FAS storage coupling Flash with ONTAP's market-leading data management in an All-SSD FAS configuration is a no-brainer for this type of workload.






The Flash component delivers high performance and consistently low latency, while Data ONTAP delivers the industry-leading functionality to secure and manage data. Critically, FAS and ONTAP are validated and integrated with the key Virtual Desktop Infrastructure (VDI) solutions from vendors such as Citrix and VMware. From a business risk perspective this takes many unknowns off the project risk register. If you need more green lights, then there are also plenty of success stories that demonstrate the integration in production.


NetApp Customer Showcase Link


Introduce All-SSD FAS into the workspace/VDI mix and you have the perfect combination of low latency and high performance, coupled with robust, proven data management through Data ONTAP. This immediately gives you the tools to deliver the performance VDI workloads demand.

If you ever travel through the Oxfordshire countryside in the UK you may come across the Harwell Science and Innovation Campus and the Rutherford Appleton Laboratory, named after two exceptional scientists: Ernest Rutherford, one of the great experimentalists and the first to model the atom and nuclear reactions, and Edward Appleton, winner of the Nobel Prize for Physics. The campus has seen some dramatic changes with the appearance of something that looks like a giant silver flying saucer (UFO) in the fields near the facility. Frankly it looks like something out of a science fiction movie, but it is very real, and has a purpose that delivers amazing benefits to mankind.


This giant silver flying saucer is in fact a synchrotron run by Diamond Light Source Ltd, which I know probably doesn't mean anything, but a synchrotron is another type of particle accelerator, designed to produce brilliant flashes of light. These flashes of light are 10 billion times brighter than our sun, and are produced by accelerating and bending beams of electrons in magnetic fields, like we used to do with the cathode ray tubes in our old TV sets and computer monitors. Remember them?!




It is the beams of light, delivered down beamlines, that the scientists are after. So brilliant is the light produced that it can be used to analyse materials in ultra-fine detail, from infra-red right up to X-rays, revealing structure right down to the molecular level. The outcome is a device that is 10,000 times more sensitive than a standard microscope. The synchrotron is therefore an incredibly powerful research tool in areas such as diseases, new vaccines and the analysis of aircraft engine parts.



As with all particle accelerators, the amounts of data generated are huge. Have a look at our previous success story from the LHC at CERN for an example of the data volumes produced by these beasts.




Critical to the operation is the time required to post-process and analyse the data from the beamline research and produce the results. To accelerate this process, Diamond Light Source were looking for a solution that would provide fast caching for data coming from the detectors on the beamlines, destined for their central storage. They selected two NetApp EF540 All Flash Arrays to do the job, as they needed a solution that could service the performance-driven applications with very low latency and a high level of resiliency.




They found that the EF540 provided the extreme performance they were looking for, delivering over 300,000 IOPS and 6GBps throughput. Space, power and cooling were cut by up to 95%. Another key factor in the selection process was the ease with which the EF540 can scale to meet future expansion requirements.


In summary, the EF540 All Flash Array benefits checked in as:


  • Reliable
  • Amazingly quick
  • Easy to scale
  • Easy to manage
  • Great price point
  • Very competitive


There is a clear trend here, as NetApp EF-Series All Flash Arrays help scientists with extreme performance and latency requirements get to results quicker. Another example can be found in my previous blog, where the NetApp EF540 is helping the pharmaceutical scientists at UCB rapidly turn around clinical trial results.


UCB Success Story: https://communities.netapp.com/community/netapp-blogs/flashman/blog/2014/02/27/clinical-trials-at-the-speed-of-flash

EF540 All Flash Array: http://www.netapp.com/uk/products/storage-systems/flash-ef540/index.aspx

EF550 All Flash Array: http://www.netapp.com/uk/products/storage-systems/flash-ef550/index.aspx

Diamond Light Source Success Story: See attached pdf


Follow me on Twitter @lozdjames

This week I was asked to comment on what can only be described as a Flash Flood of EF540/550 All Flash Array success stories. This one was a bit different, for reasons that will hopefully become clear. The organisation is a worldwide biopharmaceutical company, based in Brussels, called UCB.




Their focus is developing new medicines to treat debilitating conditions and diseases such as epilepsy, arthritis and Parkinson's disease. The challenge that companies working in this area face is data growth. Data volumes that double every 18 months are a real test, but more of a challenge is processing this increasing volume of data. The time taken to analyse it and get to the results is of the utmost importance, not only to the company, but also to the patients.
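The 18-month doubling figure is worth pausing on, because it compounds brutally. Here is a quick back-of-the-envelope sketch; the doubling period comes from the text above, while the 100TB starting point is an invented placeholder:

```python
def projected_size_tb(start_tb: float, years: float,
                      doubling_period_years: float = 1.5) -> float:
    """Exponential growth: size after `years` if data doubles every period."""
    return start_tb * 2 ** (years / doubling_period_years)

# A hypothetical 100 TB clinical data store:
after_3y = projected_size_tb(100, 3)   # two doublings -> 400 TB
after_5y = projected_size_tb(100, 5)   # roughly tenfold in five years
```

Whatever the starting point, the acceptable analysis window stays the same, which is why processing speed, rather than raw capacity, becomes the binding constraint.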




With the challenge of turning around clinical trial results in an acceptable timescale, UCB reached the decision to evaluate Flash technologies as a potential way to accelerate the analysis process. UCB use specialised SAS database software for the analysis of clinical trial data, and it was this database that proved so sensitive to performance and the timely turnaround of results. Working with SAS and NetApp, UCB put the clinical studies database on a NetApp EF540 All Flash Array. The results proved stunning: a 20-fold improvement in analysis and results-processing speed on the Flash Array. The SAS engineers were amazed! Not only was the performance good, but the solution achieved a maximum reliability score.


While the EF540 solution clearly gave UCB the competitive edge, for me the real winners are the patients. I know this because my son suffered with epilepsy when very young. The drugs he was prescribed stopped the epilepsy dead in its tracks. He then continued to develop, and today he has recently passed all his school exams.




Without those drugs he would not have developed, and it would be a very different story today. There is no doubt that Flash technologies enable businesses to move into a higher gear, and that this has tangible business benefits for companies such as UCB. It also has a massive impact on humanity when the ability to analyse data faster results in pharmaceutical products that change and enable people's lives.





Rip Wilson's Tech Blog on the EF550: https://communities.netapp.com/docs/DOC-31477

My Blog on the EF550 Launch: https://communities.netapp.com/community/netapp-blogs/flashman/blog/2013/11/19/performance-storage-landscape-in-overdrive


Please take the time to read how UCB took the decision to implement the EF540 All Flash Array, and the results they are experiencing:

UCB EF540 Story English Version attached below

UCB EF540 Story Dutch Version http://www.netapp.com/nl/system/pdf-reader.aspx?m=css-ucb-ef540-flash-nl.pdf


Predictions of phenomenal growth in data stored continue to abound, with one prominent analyst predicting 800% growth in data stored by 2015. It is clear that initiatives such as Cloud drive the need to implement new applications and to monitor and manage more stuff. The net/net is that these initiatives are also likely to cause an explosion in the size and complexity of the data stores that organisations need to manage.


Okay, so data growth predictions are a perennial thing, but the question is how to respond? Often it's by entering reactive mode to solve the immediate challenge, but this leaves you with a solution that is in no way optimal, and unlikely to be able to fulfil future requirements. Indeed, silos of tech are often the result, with all the management headaches that go with them. What is clear is that reactive modes of operation are not going to go away anytime soon; reaction to competition and to new and changing markets is a constant in business. How do you deal with this reality, stay true to the strategic direction, and not end up with silos of stuff that doesn't integrate?

There is a clear desire amongst many CIOs to move away from the 'building and operating IT' model, where IT staff spend their time responding to business requirements by provisioning new compute, network and storage resources, to an IT world that is services driven. What does that mean? A world that enables the consumer of IT to dynamically provision the resources they require, when they require them. Stuff happens in minutes, with no outages, in an environment that is consolidated, clustered and shared.





The FAS8000 heralds a new generation of shared storage innovation, designed to ready businesses for Cloud adoption and to enable the new Software Defined World that empowers the consumer of IT. Importantly, it allows you to move away from silos of compute, networking and storage infrastructure, and frees you to execute a clearly defined storage policy and strategy.







This launch combines three important threads of continuous innovation and development at NetApp.

FAS8000 Storage Systems Deliver a Unified Scale-Out Platform to Adapt to Business Needs

Firstly, the FAS8000 flags the consolidation of the FAS platform, simplifying the portfolio and making selection easier. It also delivers a doubling in performance and a tripling of the Virtual Storage Tier (VST) Flash capacity.


FlexArray Virtualization Software Enables Easy Storage Virtualisation and Management

Secondly, the capability to manage third-party storage arrays is incorporated into the FAS8000. What was called V-Series becomes FlexArray on the FAS8000, and is now an easy-to-activate feature. FlexArray extends the FAS8000's capability, enabling it to take on the role of both integrated storage array and Virtual Storage Controller (VSC), the only Virtual Storage Controller that natively supports both SAN and NAS.


Clustered Data ONTAP - The Universal Data Platform

Finally, a new release of Data ONTAP powers the FAS8000. This release brings enhancements to the Non-Disruptive Operations capabilities for shelf removal, continuous availability for SQL, virus protection, and Flash Cache sizing through the new Automated Workflow Analyzer.

Here are the links to the new FAS8000 content for those of you who want to know more.


The Press Release - http://www.netapp.com/us/company/news/press-releases/news-rel-20140219-652685.aspx

John Rollason Blog - https://communities.netapp.com/community/netapp-blogs/jr/blog/2014/02/19/flexarray-more-flexible-storage-decisions

FAS8000 Product Overview - http://www.netapp.com/us/products/storage-systems/fas8000/

FlexArray Software Overview - http://www.netapp.com/us/products/storage-systems/flexarray/

Data ONTAP Overview - http://www.netapp.com/us/products/platform-os/data-ontap-8/

Last week I found myself at a RedMonk developer conference. I was a developer a long time ago and still dabble a bit with Arduino and the Raspberry Pi, along with a little C and Python coding, but the days of FORTRAN, COBOL and various assembly languages are well behind me! So going back into the developer world was a little strange.


But what I found was a very positive group of people devoted to improving the craft and creative reward of developing code. It was interesting that the collective process of design is now being referred to as craft, with much more focus on collaboration and community.


As for the language barrier, I fared better than expected once I'd got my head around the latest development platforms and tools, such as Puppet, Chef, Jenkins and the Chaos Monkey. Confused? Just follow the links for further information. It is worth spending a little time discovering what each of these does, as they feature heavily in Cloud management and development.


The Chaos Monkey is an interesting development. It is designed to randomly kill instances and services in an AWS Auto Scaling Group (ASG) architecture. It was originally developed by the Netflix team when porting their applications to AWS: their engineers recognised that each system has to work no matter what it is hit with, and has to tolerate failure, even from external systems outside its control. This enables a regime of constant testing that ensures the optimum level of availability is achieved in the Cloud. It helps you bullet-proof your operation. There are a number of other simians available (Janitor Monkey, Conformity Monkey; you can see where this is going: The Simian Army). You can also see how this will be necessary in a cloud environment where safety, security and ultimate availability of service are king.
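The principle is simple enough to sketch in a few lines. The toy "monkey" below kills one randomly chosen instance from a simulated fleet; the fleet names and kill policy are invented for illustration, and the real Chaos Monkey of course terminates actual instances inside an AWS Auto Scaling Group rather than entries in a list:

```python
import random

def chaos_monkey(fleet, rng):
    """Pick one instance at random, terminate it, and report the victim."""
    victim = rng.choice(fleet)
    fleet.remove(victim)
    return victim

# A service survives the monkey only while it keeps redundant instances.
fleet = ["web-1", "web-2", "web-3"]
victim = chaos_monkey(fleet, random.Random())
service_up = len(fleet) > 0    # two instances remain to carry the load
```

Run something like this continuously in production and any hidden single-instance dependency is flushed out long before a real failure finds it.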


One of the session tag lines at the event was "Software is eating the world, the developer is king - disrupt or be disrupted". Perhaps predictable, but the developer is playing an increasingly important role in delivering competitive differentiation in applications through their craft and creativity. I think it is likely that the significance of their role will grow, particularly as the industry moves towards Software Defined Storage and Data Centre infrastructure.


At NetApp, it is the designers' and developers' craft and creativity that has gone into making Data ONTAP what it is today, and it is this craft and creativity that differentiates NetApp storage in the marketplace. An area of continuous focus in Data ONTAP design is availability. In the world of 'storage in the cloud' you can envisage many disruptive scenarios, such as data stores going offline, connectivity disappearing, hardware failures and so on. NetApp designers and developers have introduced key Data Management and Non-Disruptive Operations features in Data ONTAP that ensure data remains accessible, even amidst the chaos of disruption. This aligns well with the disruptive intentions of the Chaos Monkey. You can see that the mantra of 'disrupt or be disrupted' holds true in the new Cloud world: disrupt your operation regularly and often, and make sure you know how to fix it, or that it fixes itself. In the old world we called this BC/DR testing.


Coupled with craft beer, world foods, fantastic coffee and humour, as IT events go, this developer event was a breath of fresh air!


Links to related Blogs

Forecasting the Future – IT at the Speed of the Business

NetApp Directions - London 51 31'N-0 7'W

Twitter Handle @lozdjames

As the weather in the UK deteriorates and a deluge of winter water threatens to come in through the front door, many of us are spending the time with the heating turned up and something to read. There's no point going out in it, as my boots leak and my umbrella is broken. My reading this week has focused on the 2014 predictions, and there are plenty to choose from. As is traditional in my blog posts, I have found the band and song that fit the moment: the Kaiser Chiefs and 'I Predict a Riot'. A riot in the IT storage market, that is, with Flash storage being one of the technologies leading the charge in 2014.


One white paper I really enjoyed came from the pens of Paul Feresten and Rip Wilson (see attachment). They have done a great job of exposing the common misconceptions surrounding the adoption of Flash. A good example is the quote 'I hear great things about it, but Flash solutions are really expensive', the conclusion being that this is an incorrect assumption: evaluators need to take the longer-term, holistic view and study the TCO for Flash over time. This may sound like common sense when building the business case, and it is, but it's easy to focus on the acquisition costs alone. The whitepaper provides real-world insight from customers using NetApp Flash solutions in production, who are experiencing power consumption reductions of between 60 and 75% and footprint savings of 75%. As they say in the paper, look past the sticker price on the box, and remember to put the 'soft costs' of power, cooling and space into the equation.
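To make the "sticker price" argument concrete, here is a minimal five-year TCO sketch. The acquisition prices, power draws, electricity tariff and cooling overhead are all invented placeholders; only the shape of the comparison, with flash drawing roughly 70% less power in line with the reductions quoted above, reflects the paper's point:

```python
def five_year_tco(acquisition, power_kw,
                  kwh_price=0.12, years=5, cooling_overhead=0.5):
    """Acquisition cost plus energy for power and cooling over `years`."""
    hours = years * 365 * 24
    return acquisition + power_kw * hours * kwh_price * (1 + cooling_overhead)

# Hypothetical configs: flash costs more up front but draws ~70% less power.
disk_tco = five_year_tco(acquisition=200_000, power_kw=10.0)
flash_tco = five_year_tco(acquisition=240_000, power_kw=3.0)
# With these placeholder numbers the energy saving outweighs the premium.
```

Swap in your own tariff and wattages; the point is simply that the comparison changes once the 'soft costs' enter the equation.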


Not long ago, Data Centre efficiency was at the forefront of thinking, and Power Usage Effectiveness (PUE) was the hot (pun intended) topic. The Flash proposition certainly revives this discussion, as traditional spinning-disk storage can account for a significant proportion of the power draw and footprint within the Data Centre. Flash technologies promise to change this dynamic and potentially extend the life of the Data Centre. That proposition, for many, has measurable benefit of considerable value.
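As a reminder, PUE is simply total facility power divided by the power that actually reaches the IT equipment, so an ideal facility scores 1.0. A minimal sketch, with invented example wattages:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total draw / IT draw; 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1500 kW overall to power 1000 kW of IT gear:
score = pue(1500, 1000)    # 1.5, i.e. 0.5 W of overhead per watt of IT
```

Cutting storage power draw helps twice over: the IT term shrinks, and so does the cooling load riding on top of it.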


You can find the whitepaper at the link below and please review my other blogs where you will find more discussion on Flash technologies.

Reducing Total Cost of Ownership with Flash Storage

Analysis and Results in a Flash


I Can't Get no e-Satisfaction

Posted by laurenc1 Dec 12, 2013

Over the years there have been many scholarly articles written on the dynamics of online e-Commerce businesses, customer satisfaction and the mechanisms by which people switch their loyalty, and so the terms e-Satisfaction and e-Loyalty were born. Much of the early work went into the analysis of customer behaviour, focusing on the key factors that influenced customer retention online. This forms a marketing science in its own right; witness the analysis of web click streams and traffic statistics.


For many organisations, the beginning of their e-Commerce endeavours relied on simple website design and ease of use. Over time, in the face of increasing competition, sites became more complex as new features and functions were added in order to deliver differentiation, retain customers and encourage loyalty: Web 2.0 was born. The laggards in this race soon found their profitability and market share easily depleted, while the fleet of foot quickly discovered that they could understand, and influence, site visitors' intention to switch and go elsewhere. What everyone is looking for is the creation and retention of stable, long-term customer relationships.


Other interesting observations were that the lowest price was not the key e-Commerce metric; elements such as ease of use and payment convenience are often the high-priority e-Satisfaction points. Fast completion was also a key factor in achieving customer loyalty and retention, the rationale being that the capability to provide a reliable, prompt response drives customers' perceived value and service quality in the direction of your brand.




This is certainly one area where Flash storage technology is making a significant difference, which prompted me to take a look at the information on the many NetApp EF540/EF550 All Flash Array wins. A good proportion of those deployed today address performance deficiencies in database environments, such as Oracle. Improving application responsiveness, reducing wait times, shortening processing cycles and delivering more operations in a given time are common requirements addressed by Flash.
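The link between latency and operations delivered in a given time is captured neatly by Little's Law (concurrency = throughput x latency). The sketch below rearranges it to show why cutting latency by an order of magnitude lifts throughput by the same factor at a fixed queue depth; the queue depth and latencies are illustrative, not benchmark figures:

```python
def max_iops(outstanding_ios, latency_seconds):
    """Little's Law rearranged: throughput = concurrency / latency."""
    return outstanding_ios / latency_seconds

# With 32 I/Os in flight per host:
disk_iops = max_iops(32, 0.005)     # 5 ms spinning disk -> ~6,400 IOPS
flash_iops = max_iops(32, 0.0005)   # 0.5 ms flash array -> ~64,000 IOPS
```

The same arithmetic works in reverse: at a target transaction rate, lower latency means fewer requests queued and a more consistent user experience.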







If you are looking to regain control of your performance management, improve revenue, productivity, your e-Satisfaction and customer experience index, then the All Flash Array is a quick and easy way to move forward. But do take into account that you will need to assess the array's enterprise features. Features such as:


  • Redundant components
  • Automated failover
  • Snapshots, clones and remote replication
  • Advanced monitoring and management
  • Advanced data protection features
  • Non-disruptive maintenance
  • Remote support




NetApp EF-Series All Flash Arrays tick all these boxes and, as they are based on the E-Series architecture, they inherit the experience from over 650,000 systems shipped worldwide, along with Flash deployments of over 1PB per month. Credibility and confidence are important factors when deploying critical customer-facing infrastructure, so do aim to achieve the maximum e-Satisfaction rating from your customers.









Finally, caveat emptor applies when it comes to brochure performance numbers for Flash Arrays, such as wild claims about millions of IOPS. Look for numbers that are indicative of real-world performance.


For more information and comment please follow these links:




John Rollason's Blog



Success Stories





Another first was the excellent NetApp Directions event held at the IndigO2 (O2 Arena) last week. For some reason I had it in my mind that this was going to be a difficult place to get to. How wrong I was. It's easy, and definitely on my 'Great Places to Go to Work' list.








[Image: cable car across the Thames]



As for those who had to cross the river to get to the event on time, they had the best commute to work ever: by cable car across the Thames, on a sunny day, which is unusual for London at this time of year.






The theme for the event was NetApp Directions: The Power of Data ONTAP, Any Cloud, Endless Possibilities. A new-format, multi-EMEA-city event, and the IndigO2 was perfect: big stage, loads of seating, lots of breakout area possibilities, all on one level, and a coffee bar! For the presenters it was a new format too. We decided to dump the PowerPoint slides, electing to use just one graphic to present the big themes of the day: themes such as Hybrid Cloud, Software Defined Storage, drivers for business and technology, the challenge of virtualisation, and non-disruptive operations.


I chose non-disruptive operations. Non-Disruptive Operations describes a set of features in Data ONTAP 8.2 which introduce the construct of the Storage Virtual Machine, or SVM. The SVM allows the physical resources it owns to change, and the key thing is that they can change without any client or host disruption.


It was unsurprising that the response from the audience was that these days it is almost impossible to negotiate any form of planned outage time, due to the changing operational nature of business. I discussed this with seven groups on the day, and in four of the groups there were companies experiencing six-month planning cycles in order to schedule and plan outages, while many others experienced months of similar 'planning grief', as one person put it. The common mantra was that this was unacceptable to both IT and business functions. The whole issue of both planned and unplanned downtime was doing nothing to bring IT and the business together. After all, IT should be in a position to enable the business, and six months of planning does not enable this!


Many described the issue of reducing the risk and impact of managing planned

downtime. This is where the features available in the SVM environment play a key

role in eliminating maintenance planning and downtime.


We discussed features such as DataMotion for Volumes, which enables the movement of data volumes from one aggregate to another in the cluster, either on the same or different cluster nodes. The other key difference is the virtualisation of interfaces through the Logical Interface, or LIF. The LIF virtualises the physical interface, while the LIF migration feature allows changes in connectivity to the data on the same or other nodes in the cluster. Finally, there is the relocation of entire aggregates between high-availability controller pairs.


These features under the SVM in Clustered Data ONTAP are, without doubt, a business enabler, allowing IT to make necessary changes to data location and connectivity without affecting application availability. The audience offered many instances where non-disruptive operations features would apply, such as lifecycle management, load balancing/rebalancing and many hardware break/fix scenarios. The view was also that in cloud environments where service level management is a priority, such features are becoming a mandatory requirement.


The business benefits derived from the elimination and control of unplanned outages are tangible and measurable in hard currency, and, perhaps more importantly, the ability to intelligently manage risk changes the relationship with the business.


My summary: a great event, so look out for NetApp Directions in your area. Check out these links for more information on the NetApp Directions roadshow:



It's been nine months to the day since the launch of the NetApp All Flash Array, the EF540, and what a nine months it has been. I've seen the installed NetApp Flash totals go from 36PB at the end of February 2013 to 62PB today, across cached and All Flash solutions. The attach rate is also very interesting, with over 25,000 systems benefiting from Flash technology and 80% Flash attach in high-end enterprise solutions. Great adoption rates, I think you will agree. In EMEA we have seen the EF540 installed in a wide range of verticals, from finance, education and telco/media to high-energy physics and HPC.


What are NetApp doing that's different? Well, no wild claims around IOPS numbers, but rather real-world numbers driven by real business application workloads. Proof of concept is a good thing, plus advice based on years of deploying Flash technologies! One size or type of technology does not fit all, and so it is with Flash. For many customers, introducing a Flash caching tier protects their existing investment and provides a usable performance improvement with no management overhead. For others, with specific workloads that are highly sensitive to any delay or fluctuation in response time, All Flash Arrays are very popular. Indeed, many of our existing customers are using the EF Series to accelerate database workloads such as those based on Oracle. Others are deploying the EF Series to address the inefficiencies of over-provisioning and to offload the I/O from intensive workloads.


Today speed is everything, and I am amazed at my own lack of patience when waiting for websites to respond or transactions to complete. My two teenage boys are worse than me. It's an age thing I guess, but it says a lot about our modern-day expectations: if it takes too long, I'm gone! And so this is one of the main reasons to invest in All Flash Arrays: to reduce delay to sub-millisecond values, introduce a consistent customer experience, and hopefully retain impatient potential customers like me. Here is a link to an excellent IDC whitepaper which investigates delivering the customer value proposition with Enterprise Flash.




As I said earlier, it has been 9 months since the EF540 launch, and so it's appropriate that today NetApp launch the next generation
EF Series All Flash Array - the EF550. Alongside the EF550, NetApp have also announced updates and new products in the E-Series range.

The E-Series is designed for superior price / performance. Upwards of 650,000 E-Series systems have been sold worldwide, and the
excellent management software, SANTricity, is in its eleventh generation, so it wasn't born yesterday and is mature and proven
for managing and reporting on Flash and Hybrid Storage environments. If you are looking for further insight on Flash then I can recommend @JohnRollason's
latest blog, in which he investigates the confusion surrounding the flash storage array market: http://t.co/jFs5Cyovxf #NetAppFlash


The new EF550 extends the performance envelope to over 400,000 real world IOPS, while the maximum capacity expands to 96TB.

SANTricity 11.10 also brings a set of new premium features. Dynamic Disk Pools (DDP) is a next-generation architecture that

minimizes the impact of a drive failure, returning the system to optimal condition up to eight times faster than traditional RAID.

You can find all the latest information on the new EF550 here: http://www.netapp.com/us/products/storage-systems/flash-ef550/index.aspx


New in this release, Thin Provisioning drives storage efficiency by separating the internal and external allocation of storage.
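To illustrate the idea (a minimal conceptual sketch only, not how SANTricity actually implements the feature, and with an assumed 4KB block size), a thin-provisioned volume advertises a large logical size externally while only committing physical space internally as blocks are written:

```python
# Conceptual sketch of thin provisioning: the volume's external (logical)
# size is decoupled from its internal (physical) allocation, which only
# grows when data is actually written.

BLOCK_SIZE = 4096  # bytes per block (assumed for this sketch)

class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # external, advertised size
        self.allocated = {}                   # internal allocation: block_no -> data

    def write(self, block_no, data):
        if not 0 <= block_no < self.logical_blocks:
            raise IndexError("write beyond logical size")
        self.allocated[block_no] = data       # physical space consumed on first write

    def read(self, block_no):
        # Unwritten blocks read back as zeros and consume no physical space
        return self.allocated.get(block_no, b"\x00" * BLOCK_SIZE)

    @property
    def logical_bytes(self):
        return self.logical_blocks * BLOCK_SIZE

    @property
    def physical_bytes(self):
        return len(self.allocated) * BLOCK_SIZE

vol = ThinVolume(logical_blocks=1_000_000)    # ~4 GB advertised to the host
vol.write(0, b"x" * BLOCK_SIZE)
vol.write(42, b"y" * BLOCK_SIZE)
print(vol.logical_bytes, vol.physical_bytes)  # 4096000000 8192
```

The efficiency comes from the gap between the two numbers: the host sees ~4 GB, but only two blocks of real capacity have been consumed.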

Other existing features, rare within the All Flash Array category, include Asynchronous and Synchronous Remote Replication,
which delivers Enterprise protection for business critical workloads. Here is the link to the SANTricity OS software info:



New to the E-Series is the E2700. This solution is perfect for SME or branch applications. For the
latest information follow this link: http://www.netapp.com/us/products/storage-systems/e2700/index.aspx



Finally, the updated E-Series E5500 also launches today. The E5500 delivers the ultimate
in modularity and flexibility and is proven to meet the demands of high IOPS mixed
workloads, for example databases, high performance file systems, and intense
streaming applications such as those found in media and video. It also offers a wide
range of connectivity options including InfiniBand, FC, iSCSI and SAS.
The E5500 can easily be configured for hybrid array operation, while the SSD Cache
feature in SANTricity manages and automates caching. For more information on the
E5500 follow this link: http://www.netapp.com/us/products/storage-systems/e5500/



Enough from me. If you are looking for storage solutions that drive greater speed and
responsiveness for a range of application driven workloads, then the new NetApp E and EF Series are quite simply proven in the
field for high and extreme performance workload requirements.


Follow me on Twitter: @lozdjames 

I've just returned from a great event in London hosted by one of our #PerfectPartners, SoftCat. The venue was The Ritz Hotel. I have
presented in most of the venues in London over the years but this one evaded me. Now I can tick it off the list. And what a great venue to
discuss NetApp Flash Technologies with customers and prospects. In fact, #PerfectPartner, #PerfectSubject in the #PerfectVenue!



One learning for me from the discussions at the event was that many customers

are reaching an inflection point where they are looking closely at formulating their

future storage strategy. In one case for the next 5 Years. Flash featured highly

in the discussion with the promise of improving performance and storage efficiency.

Avoiding doing what we've always done also featured highly along with the changing

role of IT and how IT is measured within organisations.


The conversations centred around the inclusion of  Flash in their storage technology

strategies. What the options are, where it fits, how is it being adopted and what the

market looks like.


There was some discussion on the caution needed when adopting a new technology, and understanding Flash is certainly a sensible direction in which to proceed.
I, and others, have said in past posts that Flash will be a game changer for the storage industry, and if the ramp in the amount of Flash
Storage NetApp have shipped is anything to go by, this prediction is one I am safe with. The latest figure is a staggering 51PB shipped
to date. It's fair to say that NetApp have solid knowledge and experience when it comes to advising on Flash technologies and strategy.


NetApp brought the first Flash products to market on the FAS platform as long ago as 2009.


FlashCache was the first product. It resides in the Storage Controller, caching hot data. This improves performance for workloads such as those
that consist of random repeat reads, increasing I/O throughput, reducing latency by up to a factor of 10, and taking significant load off the
backend storage. This can also help prevent over provisioning (short stroking) and reduce costs. One key point is that FlashCache does not
require administration, making it easy to deploy with zero overhead.
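As a rough illustration of the hot-data caching idea (a deliberately simplified sketch using LRU eviction, not FlashCache's actual algorithm), repeat reads of the same blocks are served from the fast cache rather than the slower backend storage:

```python
# Minimal sketch of a controller-side read cache for "hot" data: repeat
# random reads hit the cache; cold blocks are evicted least-recently-used.

from collections import OrderedDict

class HotReadCache:
    def __init__(self, capacity, backend_read):
        self.capacity = capacity          # number of blocks the cache can hold
        self.backend_read = backend_read  # function block_no -> data (the slow tier)
        self.cache = OrderedDict()        # insertion/usage order: oldest first
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_no)   # mark block as recently used
            return self.cache[block_no]
        self.misses += 1
        data = self.backend_read(block_no)     # slow path: backend storage
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the coldest block
        return data

backend = lambda n: f"block-{n}".encode()
cache = HotReadCache(capacity=2, backend_read=backend)
for n in [1, 2, 1, 1, 3, 1]:        # repeat reads of block 1 are "hot"
    cache.read(n)
print(cache.hits, cache.misses)     # 3 3
```

Half the reads never touch the backend, which is the effect described above: lower latency for repeat reads and less load on the disks behind the cache.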


Next up was Flash Pools. Flash Pools is an ONTAP feature combining Solid State Disk (SSD) with Hard Disk Drive (HDD) technology in the
drive tray. The beauty here is that the SSDs act as a cache for workloads provisioned on the HDDs. Again this is great for
workloads that exhibit random read and overwrite behaviour, and the SSD data stays hot across all types of failover.


Lastly Flash Accel inter-operates with ONTAP and provides server based caching to improve application response times, working with 3rd party

Flash Cards in the Server tier. Other benefits include increasing server utilisation, and this is available to NetApp customers at no cost.




These are the key components in  the NetApp Virtual Storage Tier (VST), which allows  optimization of  performance and reduction of costs

without increasing complexity.


We also discussed the All Flash Array (see previous Blogs for more detail) space and the EF540 for accelerating performance and delivering

consistent low latency for dedicated workloads, such as Oracle or Web & Online  Retail applications where delay has tangible impact on

the business.


The goal for the sessions was to demonstrate how NetApp deliver the most comprehensive and complete portfolio of Flash products, whether
the requirement is for a low operational impact caching approach or dedicated All Flash Arrays for extreme performance. I think we achieved
this, and at such a prestigious location. Flashy is one description for The Ritz, but Class is another, and lots of it!



If you ever do get the opportunity to go to The Ritz, take it! I think the audience can certainly recommend the breakfast - I was presenting at the time :-(


If you are looking for more info on NetApp Flash products please follow this link:



The Song Remains the Same ?

Posted by laurenc1 Jul 25, 2013

Last week I found myself at Mercedes-Benz World at Brooklands in the UK. I say 'found myself' as I was
pre-booked, but it had been a long, long time since I had been anywhere near the old Brooklands racetrack.
What a change: businesses had moved in and there were modern, futuristic buildings. Where once there had
been an old runway and scrub land, now stood a new race track and skid pan, with drivers having
lots of fun spinning various Mercedes models in circles.

However, I was there for a serious reason – to help present at the Ideal Data Centre event with my colleagues
on behalf of our partner Arrow ECS. My topic was All things Flash. An exciting topic I am getting
very familiar with these days!

This time I wanted to do something a little different. The previous week the team had been in Manchester in the

UK and I had remembered that Manchester University had been instrumental in the development of  modern

computing. A little investigation threw up some very interesting documents telling stories of computer history from

the pioneering early days going back over 50 years. The early research references the development of the Atlas

Computer with great pictures of the Atlas Data Centre in 1960. Interesting to see how  data centres have changed

since then. Some things are similar, for example the racks, other things are very different such as cabling and

power standards, and you can even see someone smoking a pipe. But this data centre, and the people that worked

in it at the time were instrumental in the development of virtualisation.


Atlas.jpg

Bearing in mind that this was over 50 years ago, they identified the need for memory
virtualisation in order to make the Atlas computer more agile, and to enable it to do
more work. Today, most computers on the planet implement memory virtualisation.
It's invisible and we take it for granted, just as it should be. Other areas that focused
the minds of those pioneers were the need to automate the transfer of data between
primary and secondary memory, and exploiting the latest developments in RAM and
ROM technology to match the speed of the processor.


Hold on a minute, both of these sound very familiar. Have we moved on 50 years and yet the challenges are similar?
1) Exploiting new technologies and 2) Feeding the processor quickly enough. It would appear so.
Looking at how Gordon Moore's Law was met over time, processor development and performance kept pace, but mechanical
hard disk technologies could not, leaving a significant performance gap between processor and storage.

Today the challenges for many IT folk are similar. Meeting the demands of the business, on time and in budget. But the

landscape has changed. Gone are the days when you could negotiate with the business to take the system down for

planned maintenance at the weekend or overnight.

The new focus is on non-disruptive operations, efficiency and seamless scaling of performance and capacity.

Clustered Data ONTAP 8.2 addresses these challenges by integrating new technologies that enable you to scale performance

and manage physical growth of storage. You can now meet these challenges while reducing risk and cost, adding

additional performance and capacity as you grow without the business suffering from the impact of disruptive upgrades.

These new features are fundamental to the effective management of the lifecycle of IT -  introduction of new, retirement of

old and upgrade of existing.

Finally, it is good to know that the pioneering virtualisation work started in Manchester is still being advanced by NetApp,
and today adding more storage capacity and performance is no longer disruptive.


If you want to know more about Clustered Data ONTAP 8.2 then the following links will help you:




Follow me:




Complex.jpg

Have you noticed increased IT conversations referencing the term Software
Defined? I certainly have, and it's also a rising topic of conversation in the
IT press. I was wondering, 'is this a new phenomenon?' A quick search
on the phrase throws up some interesting topics where 'Software Defined'
made a significant difference to the production and maintenance process
in everyday appliances over the last 10 years.


One early example I found was Software Defined Radio (SDR) in 2001. This piece of common electronic gadgetry
was rigidly based on hardware and the use of discrete electronic components, resulting in a solution that could
not easily be modified or changed to fix faults in design, or to add new functions or features. This inflexibility seems
to have been the catalyst for the change, as new technology came onto the market which enabled features
that were previously implemented in hardware to be programmed in software. This made the new software design
flexible, portable and lower cost.


Challenges.jpg

And so it is with Software Defined Storage (SDS). Features
that in the past used to be delivered through hardware
are now delivered by software, abstracting functions away from
the storage hardware. Software Defined Storage
is often quoted as one of the four critical elements of the
Software Defined Data Centre (SDDC) initiative. SDDC
is another emerging topic, defining data centre architecture that spans compute, network, storage and security.

Software Defined Storage is not a new initiative and the evidence is clear. NetApp Data ONTAP has for many
years virtualised storage services, delivering on the SDS goals and vision by abstracting functions such as
storage efficiency, storage management and data protection.

On June 11th 2013 NetApp launched Clustered Data ONTAP 8.2. Clustered Data ONTAP 8.2 remains the only
shipping storage system delivering on the promise of Software Defined Storage, building on existing cloud and
virtualisation models. In this release customers have the capability to abstract data access and services from pooled
SAN and NAS hardware resources. Clustered Data ONTAP also offers the broadest storage hardware support in




the market, from NetApp optimised storage to third-party storage, commodity
storage, and cloud. This release also includes comprehensive integration of
open programmable APIs for workflow automation, and introduces the capability
to apply storage based Non-Disruptive Operations (NDO) and Quality of Service
(QoS) to ensure availability, efficiency, access, security, protection and
performance of data assets.



If you want to know more about how Clustered Data ONTAP delivers on the goals of Software Defined Storage and

provides the foundation for the Software Defined Data Centre please review the following links


Clustered Data ONTAP



Software Defined Storage



Follow me @lozdjames

If only I could, well the next race at Royal Ascot would see me sorted! Predicting and forecasting are key activities

for many organisations looking to gain competitive advantage. Finding the next big thing, or responding to new

market forces drives a world of constant change, and organisations need to be responsive. These days, business

waits for no person; procrastination and delay only lead to one result – failure.

There is little doubt that the process of uncovering a piece of critical insight can define the future of an organisation.
Getting there relies heavily on IT. IT agility requires infrastructure aligned to the business and responsive to changing demands.



IT and agility are terms that rarely appear in the same sentence; however Agile IT that responds at the speed of the
business can be an effective change enabler, similar to the impact of virtualisation technologies on business
versatility. There is no doubt that virtualisation has made a significant difference to the flexibility enjoyed by many
organisations today, but how far can you go?


The benefits of virtualisation apply to all layers in the infrastructure: server, network and storage.
For many, however, infrastructure has often grown in a piecemeal fashion, resulting in islands or
silos of servers, network and storage, and subsequent islands of virtualisation, leading to the
well documented concept of sprawl. In my experience silos have always been inflexible and
expensive to manage, while efficiencies and economies of scale are difficult to achieve.

The ever increasing requirement for the quick response to business opportunity implies assets delivered in minutes

and hours, not days and weeks. It has been a very interesting couple of weeks listening to the news on the release

of NetApp Clustered Data ONTAP 8.2. This new release builds on the solid foundations of storage virtualisation,

scalability, unified storage, efficiency and data protection.


It offers massive scale up & out (69 Petabytes, clusters to 24 nodes), Non-Disruptive Operations (planned maintenance slots -
a thing of the past), Quality of Service controls (manage those problem workloads) and integration
with industry standard connectivity and APIs (SMB 3.0, ODX, and VMware vSphere VAAI).


In summary Clustered Data ONTAP 8.2 delivers agility and addresses the challenges that constrained

organisations in the past. Challenges such as responding quickly to business change and new opportunities. Activities

such as scaling and adding capacity as you grow, load balancing or the introduction of new storage technologies including  the

retirement of old. These challenges used to require significant planning time and were inevitably disruptive to normal

business operations.


One interesting use of Clustered Data ONTAP is at CERN, probably one of the most data intensive organisations in the
world today. When the Large Hadron Collider (LHC) is running it can produce 1 Petabyte of new data in one second. After
filtering, Terabytes of data are stored for scientists, and this data requires moving for load balancing and for technology upgrades to
more cost effective storage. In the past a 100TB Oracle™ Database took 28 days to restore; under Clustered Data
ONTAP it takes 15 minutes - without stopping the application.


If you want to find out more then links to all the resources can be found here:



Follow me @lozdjames

Read more at #ClusteredONTAP

In my years spent in Data Centre Management the perennial challenges were the efficient use of resources, the maintenance of assets, and the
ever present lifecycle of moving from old to new infrastructure. The common management messages were: Where is the plan? What is the business
impact on customers? What are the risks? What is the contingency? Who is accountable and responsible? There is no doubt in my mind that in the old world,
IT did place constraints on the business.


CDOT2.bmp

Back then, users and owners of compute and storage resources had little time to address the efficient use of
resources. Keeping the lights on was the main priority - an important job, and it still is! Multiple copies of the same data existed
far and wide, while utilisation rates for disk and the dreaded tape often languished in single figures.
With little time to properly address these issues, delivering efficiency through smart home-grown software was a non-starter.
Early attempts at quota systems for storage and compute worked, but often led to a queue of angry customers banging on
the sys-admin's door once they had exhausted their budget for the month, usually in the first week. Although it did focus minds
that IT was not an infinite resource. However, it was not good for the soul first thing on a Monday morning when you are trying to design the next iteration of the Data Centre Infrastructure.

CDOT3.bmp

Upgrade and efficient migration from old to new infrastructure is one of the most challenging and exciting activities for IT professionals. The goal is always to deliver the cut over from old to new with minimal business impact. This process required meticulous planning and testing, often taking many months, with no margin for error. Today many organisations have elevated their IT function from one that was focused on the IT Design, Build and Operate process to one that concentrates on Service Delivery and Service Levels.


CDOT.bmp

Many of these challenges were changed for the better with the launch of Data ONTAP, and today I am looking forward to the announcement from NetApp next week. I am sure that we will see a host of new features that will further improve efficiency, management
and scale, ultimately offering new facilities for driving IT Agility.

Definitely One for the diary - Register here - Virtual Event - Liberate your Business from IT Downtime - http://www.netapp.com/us/forms/cdot-launch.aspx?ref_source=bnrcdot-hp


Follow me on Twitter at @lozdjames