How NetApp Deduplication Works - A Primer

Posted by parks in Ask Dr Dedupe on Apr 7, 2010 5:13:52 PM

This month's Storage Magazine has a write-up by Curtis Preston that discusses various flavors of data reduction for primary storage.  Curtis does a good job in explaining the inner workings of NetApp dedupe as well as solutions from other vendors. I'd recommend you add this article to this month's reading list.

 

For the past several years, I've described the design of NetApp deduplication to hundreds of customers, prospects, NetApp resellers, and anybody else who would listen -  to the point where I distilled my summary down to about 10 minutes in front of a whiteboard.  For the benefit of those who haven't heard this mini-lecture, and in support of Curtis' article, I'll now give you my typical description of exactly how NetApp dedupe works.

 

The Building Blocks of NetApp Deduplication

 

If you think of a NetApp storage system in terms of three main modules, it becomes easier to understand how we designed deduplication.  As the diagram below shows, the three modules of a NetApp FAS or V-Series system are 1) an operating system (Data ONTAP), 2) a filesystem (Write Anywhere File Layout, or WAFL), and 3) the actual stored data objects (WAFL Blocks).

 

http://media.netapp.com/images/blogs-6a00d8341ca27e53ef0133ec867e10970b-800wi.jpg

 

As with any other modern filesystem, WAFL consists of a hierarchy of superblocks, inode pointers, and associated metadata.  It is important to understand that WAFL does not know or care what application wrote the data or what protocol sent it.  Whether a data block comes from a database, a Word doc, or a medical image is irrelevant, as is the protocol that delivered it (CIFS, NFS, iSCSI, FC-SAN).  None of that matters to WAFL; it just knows it has received a 4K chunk of data, which it will store within its file and directory structure.
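
If it helps to see that idea in code, here's a minimal Python sketch (my own illustration, not WAFL internals) of what "everything becomes a 4K block" means; the function name and the zero-padding of the last chunk are assumptions made just for this sketch.

```python
# Illustrative sketch only: models "everything becomes 4 KB blocks",
# not actual WAFL internals.
WAFL_BLOCK_SIZE = 4096  # 4 KB

def to_wafl_blocks(payload: bytes) -> list[bytes]:
    """Split an incoming payload into fixed 4 KB chunks.

    The application that produced the payload and the protocol that
    delivered it are irrelevant here; only the raw bytes matter.  The final
    chunk is zero-padded purely to keep this sketch simple.
    """
    blocks = []
    for offset in range(0, len(payload), WAFL_BLOCK_SIZE):
        chunk = payload[offset:offset + WAFL_BLOCK_SIZE]
        blocks.append(chunk.ljust(WAFL_BLOCK_SIZE, b"\x00"))
    return blocks
```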

 

Designing Deduplication into Data ONTAP

 

The first step in designing deduplication is to create a method of comparing data objects and figuring out which are unique and which are duplicates.  This generally involves creating a hash, or fingerprint (a small digital representation of a larger data object), and NetApp deduplication is no exception.  Fortunately, this fingerprint already exists in Data ONTAP.  Each time a WAFL block is created, a checksum is generated for the purpose of consistency checking.  NetApp deduplication simply "borrows" a copy of this checksum and stores it in a catalog of all fingerprints, as shown in the diagram below.

 

http://media.netapp.com/images/blogs-6a00d8341ca27e53ef0133ec86a73e970b-800wi.jpg

 

As the diagram above illustrates, each time a system write occurs, the deduplication process interrupts the I/O stream, requests a copy of the checksum, and stores it in its catalog as a fingerprint.  Although customers tell me they don't see any measurable performance impact during this process, we've measured it in our labs at approximately 7% write performance overhead.

 

The other thing the diagram shows is that it is possible to scan existing data and pull those fingerprints into the catalog.  In fact, the first time you run dedupe, you'll get a message reminding you to do this.
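
For those who like pseudocode, here's a rough Python sketch of the fingerprint catalog idea described above; the names and the CRC stand-in are my own assumptions, not Data ONTAP internals, but it captures both the write-path capture and the scan of existing data.

```python
# Conceptual sketch of the fingerprint catalog; names and the checksum
# function are assumptions, not Data ONTAP internals.
import zlib
from collections import defaultdict

# fingerprint -> addresses of the WAFL blocks that produced it
fingerprint_catalog = defaultdict(list)

def fingerprint(block: bytes) -> int:
    # Stand-in for the per-block checksum Data ONTAP already generates
    # for consistency checking; dedupe just "borrows" a copy of it.
    return zlib.crc32(block)

def record_on_write(block_addr: int, block: bytes) -> None:
    """Write path: copy the block's checksum into the catalog."""
    fingerprint_catalog[fingerprint(block)].append(block_addr)

def scan_existing(existing_blocks: dict[int, bytes]) -> None:
    """One-time scan that pulls fingerprints for data written
    before deduplication was enabled."""
    for block_addr, block in existing_blocks.items():
        record_on_write(block_addr, block)
```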

 

We Ain't Deduping Yet

 

At this point we have enabled deduplication and gathered information in the form of digital fingerprints.  No deduplication has actually occurred yet, however.  NetApp deduplication uses a "post-processing" routine, which means that deduplication runs at intervals after the data is stored.

 

In NetApp's case, the actual deduplication process is triggered in one of three ways:

 

1) It can be started manually from the CLI or GUI

 

2) It can be scheduled to run at predetermined times and intervals

 

3) It can run automatically based on a data growth threshold being crossed
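
To make the three triggers concrete, here's a small, purely illustrative Python sketch; the function names, the schedule check, and the 20% growth threshold are assumptions of mine, not documented Data ONTAP behavior.

```python
# Purely illustrative trigger logic; function names, the schedule check,
# and the 20% growth threshold are assumptions, not Data ONTAP behavior.
import datetime

def start_dedupe(volume: str) -> None:
    print(f"starting dedupe pass on {volume}")  # placeholder for the real pass

# 1) Manual: an administrator kicks it off directly (CLI/GUI equivalent).
def run_manual(volume: str) -> None:
    start_dedupe(volume)

# 2) Scheduled: run at a predetermined hour.
def run_if_scheduled(volume: str, scheduled_hour: int) -> None:
    if datetime.datetime.now().hour == scheduled_hour:
        start_dedupe(volume)

# 3) Threshold: run once enough new data has accumulated since the last pass.
def run_if_grown(volume: str, new_bytes: int, volume_bytes: int,
                 threshold: float = 0.20) -> None:
    if volume_bytes and new_bytes / volume_bytes >= threshold:
        start_dedupe(volume)
```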

 

Regardless of how deduplication is started, the figure below describes the process.

 

http://media.netapp.com/images/blogs-6a00d8341ca27e53ef01347fb6b3de970c-800wi.jpg

The NetApp Deduplication Process

 

There is a lot going on in this diagram, so let's break it down step-by-step:

 

1) The fingerprint catalog is sorted and searched for identical fingerprints

 

2) When a fingerprint "match" is made, the associated data blocks are retrieved and compared byte-for-byte (as shown by the green boxes in the diagram above)

 

3) Assuming successful validation, the inode pointer metadata of the duplicate block is redirected to the original block (as shown by the two arrows pointing to the same WAFL block)

 

4) The duplicate block is marked as "Free" and returned to the system, eligible for re-use

 

Note: The inode redirect process in step #3 is known to NetApp as "multiple block referencing" and is the longstanding cornerstone of NetApp snapshot technology and its spin-offs (SnapMirror, SnapVault, FlexClone, etc.).  Multiple block referencing simply refers to using a single physical data block to represent many logical data blocks.
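
Putting the four steps together, here's a minimal Python sketch of the whole post-process pass; the data structures and the checksum stand-in are my own assumptions rather than Data ONTAP code, but the flow mirrors the steps above, including the multiple block referencing in step 3.

```python
# Conceptual sketch of the post-process pass; data structures and the
# checksum stand-in are assumptions, not Data ONTAP internals.
import zlib

def fingerprint(block: bytes) -> int:
    return zlib.crc32(block)  # stand-in for the borrowed block checksum

def dedupe_pass(blocks: dict[int, bytes], pointers: dict[int, int],
                free_list: set[int]) -> None:
    """blocks:    physical block address -> 4 KB of data
    pointers:  logical block reference -> physical block address
    free_list: physical addresses returned to the system for re-use"""
    # 1) Sort the catalog so identical fingerprints land next to each other.
    catalog = sorted((fingerprint(data), addr) for addr, data in blocks.items())

    survivors = {}  # fingerprint -> first physical block seen with it
    for fp, addr in catalog:
        if fp not in survivors:
            survivors[fp] = addr
            continue
        keeper = survivors[fp]
        # 2) Fingerprint match: validate the blocks byte-for-byte.
        if blocks[addr] != blocks[keeper]:
            continue  # rare checksum collision; leave the block alone
        # 3) Redirect references from the duplicate to the surviving block
        #    ("multiple block referencing").
        for logical, physical in pointers.items():
            if physical == addr:
                pointers[logical] = keeper
        # 4) Mark the duplicate block free and eligible for re-use.
        del blocks[addr]
        free_list.add(addr)
```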

 

More Questions (and answers) About NetApp Deduplication

 

Hopefully, this short tutorial gives you a better understanding of how NetApp deduplication works.  At this point in my discussion, a few hands usually go up:

 

Question: Doesn't deduplication fragment the data and therefore slow down my reads?

 

Answer: Although WAFL tries to write data contiguously, the fact is that WAFL is by nature a random-layout filesystem.  The vast majority of users tell us they don't notice any difference in read performance before or after deduplication occurs.  For this reason, NetApp paved the way for the use of dedupe on primary storage application data.

 

Question: Do I need to stop doing anything while the system is running the dedupe process?

 

Answer: No.  The storage system remains fully operational before, during, and after the dedupe process.

 

Question: How do I know if my datasets have any duplicate data?

 

Answer: If it's a common application, NetApp has documented best practices.  If it's an uncommon or home-grown application, NetApp has a tool that will crawl through the data and predict dedupe savings.
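
If you're curious what such a prediction looks like in principle, here's a rough Python approximation of the idea; this is emphatically not NetApp's assessment tool, just a back-of-the-envelope duplicate-block counter over fixed 4K chunks.

```python
# Rough, illustrative duplicate-block estimate; this is NOT NetApp's
# assessment tool, just a back-of-the-envelope approximation of the idea.
import hashlib
import os

BLOCK = 4096  # 4 KB, to mirror the WAFL block size

def estimate_duplicate_fraction(root: str) -> float:
    """Walk a directory tree, hash every 4 KB chunk, and report the
    fraction of chunks that duplicate one already seen."""
    seen: set[bytes] = set()
    total = dupes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while chunk := f.read(BLOCK):
                        total += 1
                        digest = hashlib.sha256(chunk).digest()
                        if digest in seen:
                            dupes += 1
                        else:
                            seen.add(digest)
            except OSError:
                continue  # skip files we can't read
    return dupes / total if total else 0.0

# Example: print(f"{estimate_duplicate_fraction('/data'):.1%} duplicate blocks")
```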

 

Question: How do I know if my system can tolerate the performance overhead of dedupe?

 

Answer: This one is a little more difficult to answer, since we don't know what other processing your system is doing while it's deduplicating, and how critical that processing is.  As a general rule, deduplication runs as a low-priority background process and should not place significant load on the system.  However, if this is a concern, we recommend a phased deduplication approach.  Start by implementing dedupe on a single volume or LUN and observe system behavior.  Repeat this step on other volumes and LUNs and observe the results, remembering that you can stop or undo the deduplication process at any time.

 

That's It - NetApp Dedupe In A Nutshell

 

Hope my little tour through dedupe design was informative.  If you are interested in a deeper dive, here is a more complete technical reference.

 

DrDedupe
