I know there are several blog posts on SnapMirror and SnapVault technologies, but many of my colleagues and friends still have a lot of questions about how this actually works under the hood. So I decided to write a post that might answer at least a few of those questions. This post is inspired by the NetApp University Protection Suite class I attended last year.

 

SnapMirror Basics:

==============

1. Snapshot technology is the foundation for SnapMirror

2. SnapMirror is a fast and highly flexible solution for replicating data over LAN, WAN and FC networks. It can also be used for data migration, data distribution for load balancing and remote access.

3. No interruption to data access while migrating data from one storage system to another.

4. There are three types of SM:

          a. Asynchronous (Incremental updates based on schedules)

                    i. Volume SnapMirror (VSM)

                    ii. Qtree SnapMirror (QSM)

          b. Synchronous (fully synchronous; writes are replicated to the destination as they occur. See the snapmirror.conf note after this list.)

                    i. Only applies to VSM

                    ii. Replicates data with little or no lag; depending on network latency, write performance on the source can suffer.

          c. Semi-synchronous (replication is almost instantaneous; writes are acknowledged on the source without waiting for the destination, so the replica can lag slightly behind)
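
Side note on how the mode is chosen: the replication mode is driven by the "snapmirror.conf" entry itself. A cron-style schedule gives asynchronous updates, while (on recent 7-Mode releases, as far as I remember) replacing the schedule field with the keyword sync or semi-sync makes the relationship synchronous or semi-synchronous. A rough sketch with placeholder filer and volume names:

# asynchronous: update at 15 minutes past every hour
filerA:volA filerB:volA - 15 * * *
# synchronous
filerA:volA filerB:volA - sync
# semi-synchronous
filerA:volA filerB:volA - semi-sync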

 

VSM key points:

============

1. Block-for-block replication

2. Supports async, sync and semi-sync modes

3. No performance effect even with large or deep directories

4. Source volumes are online writable, destination volumes are online read-only

5. Quotas cannot be enabled on destination volumes

6. Transfers between traditional and flexible volumes are not supported

7. From ONTAP 8.1 7-mode, transfers between 32-bit and 64-bit aggregates are supported

 

 

VSM Mechanism:

==============

1. Initial (baseline) transfer to a "restricted" destination volume, during which the source storage system creates a snapshot copy of the volume. All data blocks referenced by that snapshot copy, including volume metadata such as language translation settings and all snapshot copies of the volume, are transferred and written to the destination volume.

2. Scheduled updates: these are configured in the "snapmirror.conf" file. After initialization is complete, the source and destination file systems share one snapshot copy; thereafter, updates occur as scheduled in the "snapmirror.conf" file.

 

QSM key points:

============

1. QSM is a logical replication: all files and directories in the source file system are created in the destination qtree

2. Source and destination volumes can be traditional or flexible, volume size and disk geometry are irrelevant

3. Qtrees from multiple sources can be replicated to one destination

4. This is a little bit tricky: with QSM, both the source and destination volumes are online and writable, but the destination qtree itself is read-only.

5. Supports only Async mode

6. Cannot initialize to tape and does not support SM cascades; deep directory structures and large numbers of files might affect performance

 

QSM Mechanism:

=============

1. Initial (baseline) transfer, which creates the destination qtree automatically. QSM creates a snapshot copy of the source volume, including all of its data and metadata.

2. Scheduled updates: after the baseline transfer is completed, the source and destination file systems share a snapshot copy, and thereafter updates occur as scheduled in the "snapmirror.conf" file.

3. QSM does not transfer the volume or the snapshot copy to the destination, rather it transfers only the qtree data that is new or has changed.

 

Now that we have gone over some key concepts, let's walk through the configuration process.

 

SnapMirror Configuration:

===================

 

Step:1 Add SnapMirror license

 

filerA > license add <license>

filerB > license add <license>

 

Step:2 Set SnapMirror access

 

filerA > options snapmirror.access host=filerB

filerB > options snapmirror.access host=filerA

 

Step:3 Set configuration file

 

Syntax: "Source-storage-system name:source-path" "destination-storage-system:destination-path" "arguments" "schedule"

 

Note:  1. Please don't put quotes; follow the example format shown below.

          2. Arguments can be left at the default (a dash) or set to something like a transfer speed, e.g. kbs=2000

          3. The schedule follows this format: 10 22 * 1,3,4 (minute, hour, day of month, day of week); a full example follows the field breakdown below

                    10 - Update 10 minutes past hour

                    22 - Update at 10 pm

                    * - Update on all applicable days of the month

                    1,3,4 - Update on Monday, Wednesday, Thursday
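
Putting the fields together, a complete snapmirror.conf entry using that schedule might look like this (the filer names, qtree path and kbs value are placeholders for illustration):

filerA:/vol/volA/q1 filerB:/vol/volA/q1 kbs=2000 10 22 * 1,3,4

This would update the mirror at 10:10 pm on Mondays, Wednesdays and Thursdays, limited to 2,000 kilobytes per second.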

 

QSM

====

filerB > wrfile /etc/snapmirror.conf

 

filerA:/vol/volA/q1 filerB:/vol/volA/q1 - 10 * * *

 

In the above example, qtree q1 inside volume volA on the source filer (filerA) is replicated to qtree q1 inside volume volA on the destination filer (filerB). No arguments are specified (the dash), and updates are scheduled to run every day, 10 minutes past every hour.

 

VSM:

====

 

filerB > wrfile /etc/snapmirror.conf

 

filerA:/vol/volA filerB:/vol/volA kbs=3000 15 7,19 * *

 

In the above example, volume volA on the source filer (filerA) is replicated to volume volA on the destination filer (filerB), using a maximum of 3,000 kilobytes per second for the transfer. Updates are scheduled for 15 minutes past 7 am and 7 pm (that is, 7:15 am and 7:15 pm, every day of the month and every day of the week).

 

Step:4 Perform Baseline transfer

 

Note: The destination volume must be restricted

 

filerB > vol restrict volA

 

filerB > snapmirror initialize -S filerA:volA filerB:volA

 

Step:5 Monitor

 

filerB > snapmirror status <options>

 

options: -l (displays the long format of the output)

            -q (displays the volumes or qtrees that are quiesced or quiescing)

 

Note: Never delete the snapshot copies that are created by SnapMirror. If SnapMirror cannot find its copy on the source, it cannot transfer incremental changes to the destination, and the affected relationship must be reinitialized.

 

Log files:

=======

 

Check whether SnapMirror logging is enabled with the "snapmirror.log.enable" option. If it is enabled, you should find the logs at "/etc/log/snapmirror".
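
For example (a quick sketch from memory, so double-check the option name and log path on your release), you could verify the setting and read the log like this:

filerB > options snapmirror.log.enable
filerB > rdfile /etc/log/snapmirror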

 

Managing Transfers:

================

 

One can reserve resources for a set number of volume SnapMirror transfers. Resources reserved for VSM transfers are then not available to other replication types such as QSM or SnapVault transfers.

 

options replication.volume.reserved_transfers <n>

 

n - number of VSM transfers for which you want to reserve resources, default is 0
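
For example, to reserve resources for four concurrent VSM transfers (the value 4 here is purely illustrative):

filerB > options replication.volume.reserved_transfers 4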

 

The transfer stream-count limits can be switched between the current and previous release values by using

 

options replication.volume.transfer_limits [current|previous]

 

Network Throttling:

===============

 

1. One can use per-transfer throttling by specifying arguments in the "snapmirror.conf" file as shown above with "kbs"

2. Dynamic throttling enables you to change the throttle value while the transfer is active (see the example at the end of this section)

 

          snapmirror throttle <n> destination-hostname:destination-path

 

3. System-wide throttling limits the amount of bandwidth that can be used for transfers

 

          options replication.throttle.enable on

          options replication.throttle.incoming.max_kbs

          options replication.throttle.outgoing.max_kbs

 

The default is unlimited; this can be changed to a value between 1 and 125,000 kilobytes per second.
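
As a concrete illustration (the values and names are placeholders), you could throttle an active transfer to 2,000 KB/s and then cap all incoming replication traffic on the destination at 10,000 KB/s:

filerB > snapmirror throttle 2000 filerB:volA
filerB > options replication.throttle.enable on
filerB > options replication.throttle.incoming.max_kbs 10000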

 

Snapmirror Network Compression:

============================

 

1. SnapMirror has a built-in feature that enables data compression over the network for SM transfers. This is different from WAFL compression, which compresses data at rest.

2. Compressed data is sent over the network and received by the destination filer; the destination filer then decompresses the data and writes it to the appropriate volume.

 

Note: Supported only in the asynchronous mode of VSM
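
If I remember correctly, in 7-Mode this is enabled per relationship in the "snapmirror.conf" file: you define a named connection line and add compression=enable to the arguments field of the entry that uses it. A rough sketch with placeholder names (please verify the exact syntax against the documentation for your release):

# named connection between the two systems
conn1=multi(filerA,filerB)
# async VSM relationship over that connection, with network compression enabled
conn1:volA filerB:volA compression=enable 15 7,19 * *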

 

Breaking SnapMirror relationship:

===========================

 

One can break a SnapMirror relationship to convert the replica to a writable file system; do this from the destination filer

 

filerB > snapmirror break <volume name>

 

To resume the SM relationship, you can use the following command from the destination filer

 

filerB > snapmirror resync filerB:volume-name

 

Note: In the case of a "real" disaster the process is quite different; please follow the scenario in Example 2

 

To break a relationship permanently, execute the following from the source filer

 

filerA > snapmirror release source_vol filerB:dest_vol

 

Example:1

=========

 

Scenario: Create and schedule an asynchronous VSM relationship between filerA (source), volA and filerB (destination), volA_mirror.

 

Step:1 Configure access on both systems

 

filerA > options snapmirror.access host=filerB

 

filerB > options snapmirror.access host=filerA

 

Step:2 Create a 100-MB flexible volume named volA_mirror in aggr1

 

filerB > vol create volA_mirror aggr1 100m

 

Step:3 Set Security style

 

filerB > qtree security /vol/volA_mirror ntfs

 

Step:4 Create CIFS share

 

filerB > cifs shares -add volA_mirror /vol/volA_mirror

 

Step:5 Initiate baseline transfer (Remember to restrict the destination volume)

 

filerB > vol restrict volA_mirror

 

filerB > snapmirror initialize -S filerA:volA filerB:volA_mirror

 

Step:6 Monitor and verify if the baseline transfer is completed

 

filerB > snapmirror status

 

filerA > snap list volA

filerB > snap list volA_mirror

 

Step:7 If you want to manually update the destination volume, you can always run

 

filerB > snapmirror update -S filerA:volA filerB:volA_mirror

 

Step:8 Schedule snapmirror by setting the "snapmirror.conf" file

 

filerB > wrfile /etc/snapmirror.conf

 

filerA:volA filerB:volA_mirror - 10 * * *
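
To double-check what ended up in the file, you can read it back (rdfile simply prints a file's contents to the console):

filerB > rdfile /etc/snapmirror.conf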

 

 

Example:2

==========

 

 

Scenario: To simulate a disaster recovery and failover to the mirror

 

Step:1 To simulate disaster, bring the source volume offline

 

filerA > vol offline volA

 

Step:2 Break relationship from destination

 

filerB > snapmirror quiesce volA_mirror

filerB > snapmirror break volA_mirror

 

Resynchronize the SM relationship

 

Because new data was written to the destination during the disaster, the destination storage system now acts as the source. Let's resynchronize that new data back to the original volume.

 

Step:3 Bring volA back online

 

filerA > vol online volA

 

Step:4 From filerA replicate the new data that was written to filerB:/volA_mirror during the disaster back to filerA:/volA

 

filerA > snapmirror resync -S filerB:volA_mirror filerA:volA

 

Step:5 After the resync is completed break the reverse relationship

 

filerA > snapmirror quiesce volA

 

filerA > snapmirror break volA

 

Reinstate the original filerA-to-filerB SM relationship

 

Step:6 Now resync from filerB to return to the original source-to-destination relationship

 

filerB > snapmirror resync filerB:volA_mirror

 

Step:7 Test the operation by creating a file on filerA:/volA and then running the snapmirror update command from filerB

 

filerB > snapmirror update volA_mirror

 

Hope this post is useful. I will try to write a new post on SnapVault as soon as I can.

 

Best Regards,

Ravi Paladugu
