
In the first post, I discussed some of the challenges faced by general data protection solutions (including Backup/Recovery and Disaster Recovery).  In part 1, I discussed how NetApp can make Backup and Recovery storage efficient by leveraging the NetApp SnapVault and Open Systems SnapVault solutions.  Now I would like to discuss how NetApp SnapMirror technology improves your storage efficiency, and how that efficiency is further improved by a recent announcement.

 


Before discussing how SnapMirror's newly added compression feature further improves your efficiency, I would like to discuss the benefits of SnapMirror and how it is storage efficient.  Much like the other data protection solutions NetApp offers, SnapMirror is built on NetApp Snapshot technology.  After the initial baseline (level 0) transfer, SnapMirror replicates only the blocks that have changed over the wire.  There are two levels of replication with SnapMirror: Volume and Qtree. 
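
To make the changed-block idea concrete, here is a minimal Python sketch of incremental replication between two Snapshot-style block maps.  It is purely illustrative: the block maps and the blocks_to_replicate helper are my own assumptions for the example and are not SnapMirror internals.

def blocks_to_replicate(prev_snapshot, curr_snapshot):
    """Return only the blocks that are new or changed since the previous Snapshot copy."""
    return {block_id: data
            for block_id, data in curr_snapshot.items()
            if prev_snapshot.get(block_id) != data}

baseline = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}              # level 0: everything is sent once
update   = {0: b"AAAA", 1: b"BETA", 2: b"CCCC", 3: b"DDDD"}  # later update: one changed block, one new block

delta = blocks_to_replicate(baseline, update)
print(sorted(delta))   # [1, 3] -- only the changed and new blocks go over the wire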

 

Qtree SnapMirror (QSM) functions very similarly to SnapVault in that it replicates data at the qtree level.  This gives the user the ability to manage replication at a very granular level and avoid replicating the entire volume when not all of the data needs to be protected. 

The other option is to leverage Volume SnapMirror (VSM), which replicates all the data in the volume (including the Snapshot copies).  VSM is a physical block-level replication, so in addition to replicating only the changed blocks, it also replicates the data in its deduplicated state.  By implementing deduplication on the primary storage system, the space savings are automatically carried over to the secondary storage system.  For example, if a volume has 500GB of data and is deduplicated to 250GB, only 250GB worth of data is replicated (over the wire) and stored on the DR storage system.  This not only saves storage space on the secondary system (without doing anything extra), it also saves valuable bandwidth. 
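
As a purely illustrative sketch of why that matters, the following Python snippet dedupes a set of toy blocks by hash and compares the logical size with what would actually need to be replicated.  The blocks, sizes, and hashing scheme are made up for the example and are not how ONTAP deduplication is implemented.

import hashlib

def replication_footprint(blocks):
    """Return (logical_bytes, unique_bytes) for a list of toy 4KB blocks."""
    logical = sum(len(b) for b in blocks)
    unique = sum(len(b) for b in {hashlib.sha256(b).digest(): b for b in blocks}.values())
    return logical, unique

# 500 identical blocks plus 500 distinct blocks: roughly the "500GB dedupes to 250GB" case.
blocks = [b"\x00" * 4096] * 500 + [i.to_bytes(2, "big") * 2048 for i in range(500)]
logical, unique = replication_footprint(blocks)
print(f"logical data: {logical} bytes, replicated after dedup: {unique} bytes")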

 

In addition to the benefits of replication, SnapMirror also adds the ability to easily perform failover and failback operations.  Failover can be further streamlined with the use of NetApp Protection Manager or VMware Site Recovery Manager (for VMware environments). 

 

Now, to focus a little on the recent announcement.  Today, it was announced that Volume SnapMirror now supports compression over the wire.  With increasing network bandwidth costs coupled with data growth, customers are having to do more with less.  As the amount of data to be protected increases, more network bandwidth is needed to maintain the recovery point objective (RPO) or the replication window; otherwise, replication times increase as the amount of data sent over the network to the DR site grows.  Put differently, if you do not want to or cannot increase the network bandwidth, you have to lower the replication frequency, which leads to larger RPO values and thus increases your exposure to data loss.
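
As a rough back-of-the-envelope illustration of that trade-off (all numbers and the transfer_hours helper below are hypothetical examples, not NetApp sizing guidance), you can estimate how many hours per day a given link needs to move the daily change rate, and how an assumed on-wire compression ratio changes the picture, which is where the announcement comes in:

def transfer_hours(changed_gb_per_day, link_mbps, compression_ratio=1.0):
    """Hours per day needed to replicate the daily change rate over the link."""
    effective_gb = changed_gb_per_day / compression_ratio
    gb_per_hour = link_mbps / 8 / 1024 * 3600   # Mbps -> GB per hour
    return effective_gb / gb_per_hour

daily_change_gb = 200   # hypothetical daily change rate at the primary site
link_mbps = 45          # hypothetical T3-class WAN link

print(f"uncompressed:           {transfer_hours(daily_change_gb, link_mbps):.1f} hours/day")
print(f"with 2:1 on-wire ratio: {transfer_hours(daily_change_gb, link_mbps, 2.0):.1f} hours/day")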

The SnapMirror native network compression feature can cut down on the amount of data replicated over the network by compressing SnapMirror transfers.  It is a native feature built into the SnapMirror software; it does not require any license and there is no additional cost.  SnapMirror network compression is not the same as WAFL compression, and it does not compress data at rest.  At a high level, this is how SnapMirror compression works:

 

[Figure SM_Compress: SnapMirror network compression data flow]

On the source system, the data blocks that need to be sent to the destination system are handed off to the compression engine, which compresses them.  The compression engine on the source system creates multiple threads, depending on the number of CPUs available on the storage system, so that data can be compressed in parallel.  The compressed blocks are then sent over the network.  On the destination system, the compressed blocks are received over the network and decompressed.  The destination compression engine also uses multiple threads to decompress the data in parallel.  The decompressed data is reordered and saved to disk on the appropriate volume.
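
Conceptually, the flow resembles the Python sketch below.  It is purely illustrative: the block size, thread pool, and zlib calls are my own assumptions for the example, not SnapMirror internals.  Blocks are compressed in parallel on the sender, shipped in order, then decompressed in parallel and reassembled on the receiver.

import os
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 64 * 1024   # assumed block size for the example only

def source_side(payload, threads=os.cpu_count() or 4):
    """Split the payload into blocks and compress them in parallel; map() keeps block order."""
    blocks = [payload[i:i + BLOCK_SIZE] for i in range(0, len(payload), BLOCK_SIZE)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(zlib.compress, blocks))

def destination_side(wire_blocks, threads=os.cpu_count() or 4):
    """Decompress received blocks in parallel and reassemble them in the original order."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return b"".join(pool.map(zlib.decompress, wire_blocks))

data = b"highly compressible log data " * 20000
on_the_wire = source_side(data)
assert destination_side(on_the_wire) == data
ratio = len(data) / sum(len(b) for b in on_the_wire)
print(f"on-wire compression ratio: {ratio:.1f}:1")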

 

In other words, when SnapMirror network compression is enabled, two additional steps are performed: compression processing on the source system before data is sent over the network, and decompression processing on the destination system before the data is written to disk.  SnapMirror network compression uses the standard gzip compression algorithm to compress the data blocks. 

 

As you can see, SnapMirror is not only efficient at the storage layer, but also very efficient at the network layer.  By adding compression, SnapMirror can continue to help drive down the costs associated with your Data Protection Solution.

 

I would like to thank my co-worker, Srinath Alapati, Technical Marketing Engineer focusing on SnapMirror, for his contribution to this post around SnapMirror compression. 

 

Jeremy Merrill
Technical Marketing Engineer - Data Protection Solutions
NetApp Storage Efficiency Community
Twitter: JeremyMerrill
