
First, let me hit you with a couple of new videos from my storage efficiency compadres.  Chris "Stanford beat the hell out of the Irish and it felt good" Cummings gives you a great executive overview of how businesses are accelerating their performance AND conserving on storage, servers, power and cooling.  You'll hear not only from Chris, but also from actual customers and partners such as AT&T, AISO, Joyent and VMware.  Next, Larry "Dr. Dedupe" Freeman gives you the technical breakdown.

Now, I'm sure we can expect the usual Chris and Larry groupies to be drawn to the videos by their movie star good looks, but if you can get past that, you'll hear a great message on why business agility and reducing data center expenses don't have to be conflicting ideas.

Allow me to give you an example: dedupe (or cloning) in a highly random, high-demand I/O environment.  Performance or efficiency OR... performance AND efficiency?

 

Now, speaking of movie star good looks, I'm often told that I have a great face for blogging.  I'd appreciate it more if it weren't my mom saying it, but when I do get a chance to go out in public to meet with customers, it doesn't take long for the conversation to zero in on dedupe and cloning of primary data.  Sure, we can talk about the other storage efficiency features - practical snapshots, practical RAID-6, practical thin provisioning - but everyone intuitively "gets" the idea around dedupe or cloning, and that's what we end up talking about: how it works, the benefits, what the customer gets out of the deal.  And I'm not talking about the BS EMC attempts to put into the market about "dedupe at the source."  That's backup.  Stevie Wonder can see through that fog machine.  I'm talking about deduping and cloning primary data while it's in use.
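For anyone who wants a feel for the mechanics, here's a rough sketch of the idea behind block-level dedupe of primary data.  This is illustrative Python of my own, not NetApp's actual implementation: every block gets fingerprinted, identical blocks collapse to a single physical copy, and each logical block is just a pointer to that copy.

```python
import hashlib

class DedupedStore:
    """Toy block store: identical blocks share one physical copy."""

    def __init__(self):
        self.physical = {}   # fingerprint -> block contents (one copy per unique block)
        self.logical = {}    # (volume, block_number) -> fingerprint

    def write(self, volume, block_number, data):
        # Fingerprint the block; a production system would also verify byte-for-byte
        # before sharing, to guard against hash collisions.
        fp = hashlib.sha256(data).hexdigest()
        self.physical.setdefault(fp, data)          # store the content only once
        self.logical[(volume, block_number)] = fp   # the logical block just points at it

    def read(self, volume, block_number):
        return self.physical[self.logical[(volume, block_number)]]

    def savings(self):
        """Fraction of logical blocks that needed no new physical space."""
        return 1 - len(self.physical) / len(self.logical)

# 100 cloned volumes writing the same OS image block: 100 logical blocks, 1 physical block.
store = DedupedStore()
for vm in range(100):
    store.write(f"vm{vm}", 0, b"\x00" * 4096)
print(f"space saved: {store.savings():.0%}")        # -> space saved: 99%
```

The point of the toy is simply that clones and duplicate data cost you pointers, not spindles.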

 

Inevitably, that conversation will turn to trade-offs, particularly questions on performance such as: "What's the impact to performance if you dedupe data?  Won't the disks take a beating if everyone is banging away against the same dataset?"  DBAs - well, any application owner, but DBAs in particular - get very particular about the size, speed and number of disks; stripe widths; plexes; RAID groups; inner tracks; outer tracks and so on.  Throw a message of big, fat, slow drives - and using fewer of them - at a DBA and you're lucky if the pitchforks and torches don't come out.

 

It's a valid concern, so the best way that I've found to talk about this is to describe the use case again - multiple clients, high I/O demand beating against a single set of unique (deduped) data.  Sounds tough, right?  But it just so happens that cache loves this type of thing.  That's where NetApp has placed a big bet with its Performance Acceleration Modules (PAM).  I'm oversimplifying the engineering work, but all you have to do is make sure that these large (512GB), solid-state PAM cards are "dedupe aware" and, bam!, you have a solution that shows how dedupe can improve performance even with a smaller storage footprint.
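To see why cache loves this workload, here's a back-of-the-envelope model in Python.  It's my own illustration with made-up numbers, not the PAM design: the trick is simply that the cache is keyed on the physical (post-dedupe) block, so a hundred clients hammering a hundred logical copies of the same data all land on one cached block, and the disks see almost nothing.

```python
import random

# Assumed toy workload: 100 cloned volumes of 64 blocks each, which dedupe
# down to just 8 unique physical blocks (think near-identical VM images).
logical_map = {(f"vm{v}", b): f"phys{b % 8}" for v in range(100) for b in range(64)}

cache = set()        # cache keyed by *physical* block id, i.e. dedupe aware
hits = disk_reads = 0

for _ in range(10_000):                          # highly random reads across every clone
    vol, blk = f"vm{random.randrange(100)}", random.randrange(64)
    fp = logical_map[(vol, blk)]
    if fp in cache:
        hits += 1                                # any clone reading this content is a hit
    else:
        disk_reads += 1                          # miss: fetch from disk, then cache it
        cache.add(fp)                            # only 8 unique blocks ever need caching

print(f"cache hit rate: {hits / 10_000:.1%}, disk reads: {disk_reads}")
# -> cache hit rate: 99.9%, disk reads: 8
```

Key the same cache on logical blocks instead (dedupe unaware) and this toy workload needs 6,400 cache slots to hit the same rate; that's the whole argument in two lines of accounting.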

 

You can follow up with sizers that model the customer's specific environment, but I like to show them actual customer results that demonstrate the impact of adding PAM to a deduped environment:

 

Guarantees and SLAs are great, but nothing carries the weight of what actual customers are experiencing in their production environments.  It makes a big difference when you take out the "ORs" and substitute in the "ANDs."

 

For more info check out:

http://www.Netapp.com/us/campaigns/brand/and/strategic.html

http://www.Netapp.com/us/campaigns/brand/and/technical.html
