
How Data De-Duplication Fits into our Master Plan

Posted by hitz in Dave's Blog on May 23, 2007 10:26:00 AM

Let me explain how our data de-duplication announcement this week fits into our long-term strategy. One blogger described our goal as making Data Domain the "next entrée on NetApp's dinner plate". Actually, de-dupe is part of a much higher-level strategy.


To summarize the announcement, we now support data de-duplication on all of our storage systems. (It requires a license.) If the same block of data is present in two different LUNs or files, the storage system spots this and saves space by keeping just one copy. For the past two years this functionality has been available for backups through SnapVault for NetBackup, but now customers can enable de-dupe for any data on any NetApp storage system.
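To make the mechanics concrete, here is a toy sketch of block-level de-duplication in Python. It is purely illustrative, my own simplification rather than how our storage systems actually implement the feature: it fingerprints fixed-size blocks with a hash and keeps each unique block exactly once, so a second copy of the same data costs only references.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative block size

    class DedupStore:
        """Toy block store that keeps each unique block exactly once."""

        def __init__(self):
            self.blocks = {}  # fingerprint -> block contents
            self.files = {}   # file name -> list of block fingerprints

        def write_file(self, name, data):
            refs = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                fp = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(fp, block)  # store only unseen blocks
                refs.append(fp)
            self.files[name] = refs

        def read_file(self, name):
            return b"".join(self.blocks[fp] for fp in self.files[name])

    store = DedupStore()
    payload = b"x" * (BLOCK_SIZE * 10)   # ten identical blocks
    store.write_file("lun0", payload)
    store.write_file("lun1", payload)    # a duplicate copy
    assert store.read_file("lun1") == payload
    print(len(store.blocks))             # 1: twenty logical blocks, one on disk

A production system has to be more careful than this toy: hashes can collide, so shared blocks should be verified byte-for-byte, and the duplicate scan is better run in the background than on the write path.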


In some cases, like nightly backups of the same data, de-dupe can yield compression ratios as high as 50-to-1, although 10-to-1 or 20-to-1 are more common. Other cases, like user home directories, may save 40% or less. It all depends on how redundant the data is. De-dupe helps customers buy less storage, consume less power, cooling, and floor space in their data centers, and, in the end, save money. (See here to understand why helping customers buy less storage is a good strategy for NetApp.)
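To put those ratios in perspective, here is the back-of-the-envelope arithmetic (the 100 TB figure is just a made-up example):

    def physical_tb(logical_tb, ratio):
        """Physical capacity needed to hold logical_tb at a given de-dupe ratio."""
        return logical_tb / ratio

    for ratio in (1, 10, 20, 50):
        print(f"{ratio:2d}-to-1: 100 TB logical needs {physical_tb(100, ratio):5.1f} TB physical")

    # A 40% saving, as on home directories, works out to roughly a 1.7-to-1 ratio.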


Buying less storage is the small picture. The big picture is that we want to help customers create a disk-based copy of all of their primary storage.


Many customers already create disk-based copies of mission-critical data to ensure business continuity in case of disaster, but we believe the trend is to create disk-based copies of everything. Tape-based backup just isn't keeping pace with improvements in disk drives. Plus, compliance and discovery for litigation are creating new requirements that tape drives could never meet.


Interesting things start to happen when you create a disk-based copy of everything. Instead of doing searches on primary storage, which could hurt performance, why not search the secondary copy? If the people running decision support systems want their own copy of a critical database, why not clone the secondary instead of paying for a whole new copy? Why not create lots of cloned copies for the test and development team preparing to upgrade to the next version of Oracle or SAP? When you create a copy of everything, and add functionality like snapshots and clones, what you end up with is a smart copy infrastructure that can completely change the way you think about data management.
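Clones are cheap for the same underlying reason de-dupe works: blocks can be shared. Here is a rough copy-on-write sketch, again my own illustration rather than how the real storage system implements clones, showing why a writable clone costs almost nothing until it starts to diverge:

    class Volume:
        """Toy volume: a list of references to (possibly shared) data blocks."""

        def __init__(self, blocks):
            self.blocks = list(blocks)   # copy the references, not the data

        def clone(self):
            # No data is copied; the clone starts out sharing every block.
            return Volume(self.blocks)

        def write(self, index, data):
            # Copy-on-write: only the overwritten block gets new storage.
            self.blocks[index] = data

    base = Volume([b"A" * 4096 for _ in range(1000)])
    dev_copy = base.clone()              # instant, near-zero extra space
    dev_copy.write(0, b"B" * 4096)       # one new block allocated
    shared = sum(1 for a, b in zip(base.blocks, dev_copy.blocks) if a is b)
    print(shared)                        # 999 of 1000 blocks still shared

The decision support copy and the test and development clones described above are exactly this: references at first, with new blocks allocated only where each team writes.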


This won't happen overnight. We understand that. But anything that helps people reduce the cost of creating copies helps us achieve our vision more quickly. In the short run, data de-duplication helps customers save space and save money, but what's more important is that by reducing the cost of copies, it helps us achieve our master plan.
