Continuing from my previous post on SnapMirror: SnapVault also uses NetApp Snapshot technology. From here on, I will use "SV" for SnapVault.


SV can back up multiple primary storage systems to one secondary storage system. It also reduces storage requirements by using thin-replication technology and by interoperating with deduplication, and it can be scheduled at multiple intervals to improve the RPO.


One can provide read-only access to the data stored on SV secondary storage by exporting the SV secondary volume to UNIX clients or sharing it with Windows clients. Users can then restore data themselves with simple copy-and-paste procedures.



1. Install a license on each primary (sv_ontap_pri) and secondary (sv_ontap_sec) storage system.

2. A SnapMirror license is required for failover to a SV secondary qtree or volume.


Qtrees are the basic unit of SV backup. Primary qtrees, non-qtree data, and even entire volumes are backed up to qtrees on the SV secondary system.


Initial Backup (baseline): The first time SV backs up a qtree or volume, it backs up all of the data blocks on primary storage, writes the data to the secondary volume, and then creates a Snapshot on the secondary volume.


Scheduled Updates: After the initial backup, SV performs updates, transferring and storing only the data blocks that have changed since the last backup. For every update, SV creates a snapshot copy of the relevant volume.
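
To picture what "only the changed blocks" means, here is a tiny Python sketch of my own (a toy model, not ONTAP internals): each volume is treated as a map of block numbers to checksums, and an update ships only the blocks whose checksums differ from the last backup.

```python
def changed_blocks(prev_snapshot, current):
    """Return the block numbers whose contents differ since the last backup.

    Both arguments are dicts mapping block number -> checksum.
    This is a toy model of incremental transfer, not ONTAP's actual logic.
    """
    return sorted(
        blk for blk, csum in current.items()
        if prev_snapshot.get(blk) != csum
    )

# Baseline backed up blocks 0-3; since then block 2 changed and block 4 is new.
baseline = {0: "aa", 1: "bb", 2: "cc", 3: "dd"}
now      = {0: "aa", 1: "bb", 2: "ce", 3: "dd", 4: "ee"}
print(changed_blocks(baseline, now))  # -> [2, 4]
```

Only blocks 2 and 4 would go over the wire; everything else is already on the secondary.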


Archive: SV is used mainly for archival purposes; because multiple Snapshot copies are retained on the secondary, you effectively keep an archive of every backup that was performed.



1. For SV backup and restore operations, port 10566 must be open in both directions.

2. For NDMP management, port 10000 must be open on both primary and secondary systems.
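
If you want to verify those firewall rules before starting the baseline, a quick check like this from any host with Python works (the port numbers come from the list above; the hostname "filerB" is a placeholder for your own system):

```python
import socket

# SnapVault data traffic uses TCP 10566; NDMP management uses TCP 10000.
REQUIRED_PORTS = {"snapvault": 10566, "ndmp": 10000}

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in REQUIRED_PORTS.items():
    # Replace "filerB" with your secondary system's hostname or IP.
    print(name, port, port_open("filerB", port))
```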


Log file: One can find the SV logs in /etc/log/snapmirror.


Throttling: One can enable system-wide throttling with "options replication.throttle.enable on|off"



1. When dedup is enabled on the SV secondary volume, the dedup process starts automatically after each SV transfer completes.

2. Dedup of blocks is initiated when the number of changed blocks is at least 20% of the number of blocks in the volume.

3. Because dedup synchronizes with the SV schedule, you cannot schedule dedup of a SV secondary volume separately. One can start the process manually from the GUI or CLI, with a maximum of eight concurrent dedup operations.
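
The 20% trigger in point 2 boils down to a simple ratio check; here is my own sketch of the rule (illustrative only, not ONTAP code):

```python
def dedup_triggers(changed_blocks, total_blocks, threshold=0.20):
    """Return True when the changed-block fraction reaches the trigger.

    Per the rule above, dedup of blocks starts once changed blocks are
    at least 20% of the blocks in the volume.
    """
    if total_blocks == 0:
        return False
    return changed_blocks / total_blocks >= threshold

print(dedup_triggers(150, 1000))  # 15% changed -> False
print(dedup_triggers(200, 1000))  # 20% changed -> True
```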


Scenario:1 Set up SV by adding licenses, enabling access, creating volumes, scheduling, and running the initial and update transfers.


Step:1 Add license

filerA> license add <sv_ontap_pri>

filerB> license add <sv_ontap_sec>


Step:2 Enable SV access

filerA> options snapvault.enable on

filerA> options snapvault.access all


filerB> options snapvault.enable on

filerB> options snapvault.access all


Step:3 Create thin-provisioned primary and secondary volumes, setting several options that I generally use

filerA> vol create vol_primary -s none aggr1 500g

filerA> vol options vol_primary fractional_reserve 0

filerA> snap reserve vol_primary 0

filerA> snap sched vol_primary 0 0 0

filerA> snap autodelete vol_primary on

filerA> snap autodelete vol_primary target_free_space 5

filerA> sis on /vol/vol_primary


filerB> vol create vol_secondary -s none aggr1 500g

filerB> vol options vol_secondary fractional_reserve 0

filerB> vol options vol_secondary nosnap on

filerB> snap reserve vol_secondary 0

filerB> snap sched vol_secondary 0 0 0

filerB> sis on /vol/vol_secondary


Step:4 Create a qtree on the primary side (the qtree will be created automatically on the secondary)

filerA> qtree create /vol/vol_primary/qt


Step:5 Schedule SnapVault

filerA> snapvault snap sched vol_primary sv_hourly 6@0-23

filerB> snapvault snap sched -x vol_secondary sv_hourly 24@0-23
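
The schedule argument uses the form count@hours: retain that many Snapshot copies with this basename, created at the listed hours. A small parser of my own shows how I read the syntax (an illustration, not ONTAP's parser):

```python
def parse_sv_schedule(spec):
    """Parse a SnapVault schedule like '24@0-23' into (retain, hours).

    retain: how many Snapshot copies with this basename to keep.
    hours:  the hours of the day at which copies are created.
    Handles only the comma/range forms used in this post.
    """
    count, _, hours_part = spec.partition("@")
    hours = []
    for piece in hours_part.split(","):
        if "-" in piece:
            lo, hi = piece.split("-")
            hours.extend(range(int(lo), int(hi) + 1))
        else:
            hours.append(int(piece))
    return int(count), hours

retain, hours = parse_sv_schedule("6@0-23")
print(retain, len(hours))  # keep 6 copies, one created every hour
```

So sv_hourly 24@0-23 on the secondary keeps 24 hourly copies, while 6@0-23 on the primary keeps only 6.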


Step:6 Initialize baseline transfer

filerB> snapvault start -S filerA:/vol/vol_primary/qt /vol/vol_secondary/qt


When you run the above command, qtree "qt" will be created on the secondary volume automatically.


Step:7 Check the status of the transfer on the primary or secondary storage system

snapvault status -l filerA:/vol/vol_primary/qt


snapvault status


Step:8 Update SV secondary

filerB> snapvault update /vol/vol_secondary/qt


Scenario:2 Restore data to the original qtree on the primary storage system


filerA> snapvault restore -S filerB:/vol/vol_secondary/qt /vol/vol_primary/qt


Scenario:3 Clean up obsolete SV relationships


Step:1 Identify the relationships that need to be cleaned up

filerA> snapvault destinations


Step:2 Release secondary destinations.

filerA> snapvault release /vol/vol_primary/qt filerB:/vol/vol_secondary/qt


Step:3 Stop the snapvault services

filerB> snapvault stop -f /vol/vol_secondary/qt


Step:4 Unschedule updates

filerA> snapvault snap unsched -f vol_primary sv_hourly

filerB> snapvault snap unsched -f vol_secondary sv_hourly


Step:5 Delete SV snapshot copies

filerA> snap list vol_primary

filerA> snap delete vol_primary <snapshot name>

filerB> snap list vol_secondary

filerB> snap delete vol_secondary <snapshot name>


Scenario:4 Restart SV by resynchronizing the relationship between primary and secondary.


filerB> snapvault start -r -S filerA:/vol/vol_primary/qt filerB:/vol/vol_secondary/qt


Hope it is helpful.


Best Regards,

Ravi Paladugu


SnapMirror deep dive

Posted by RAVI.PALADUGU May 4, 2013

I know there are several blog posts on SnapMirror and SnapVault technologies, but many of my colleagues and friends still have questions on how this actually works under the hood. So I decided to write a post that might answer at least a few of those questions. This post is inspired by the NetApp University protection suite class I attended last year.


SnapMirror Basics:


1. Snapshot technology is the foundation for SnapMirror

2. SnapMirror is a fast and highly flexible solution for replicating data over LAN, WAN and FC networks. It can also be used for data migration, data distribution for load balancing and remote access.

3. No interruption to data access while migrating data from one storage system to another.

4. There are three types of SM:

          a. Asynchronous (Incremental updates based on schedules)

                    i. Volume SnapMirror (VSM)

                    ii. Qtree SnapMirror (QSM)

          b. Synchronous (fully synchronous; updates are replicated to the destination as they occur)

                    i. Only applies to VSM

                    ii. Replicates data with little or no lag; depending on network latency, performance issues are possible

          c. Semi-synchronous (replication is almost instantaneous upon receipt of a write request on the source)


VSM key points:


1. Block-for-block replication

2. Supports async, sync and semi-sync modes

3. No performance effect even with large or deep directories

4. Source volumes are online writable, destination volumes are online read-only

5. Quotas cannot be enabled on destination volumes

6. Transfers between traditional and flexible volumes are not supported

7. From ONTAP 8.1 7-mode, transfers between 32-bit and 64-bit aggregates are supported



VSM Mechanism:


1. Initial (baseline) transfer to a "restricted" destination volume, during which the source storage system creates a snapshot copy of the volume. All data blocks that are referenced by the snapshot copy, including volume metadata such as language translation settings and all snapshot copies of the volume, are transferred and written to destination volume.

2. Scheduled updates are configured in the "snapmirror.conf" file. After initialization is completed, the source and destination file systems share one Snapshot copy; thereafter, updates occur as scheduled in "snapmirror.conf".


QSM key points:


1. QSM is logical replication: all files and directories in the source file system are re-created in the destination qtree

2. Source and destination volumes can be traditional or flexible, volume size and disk geometry are irrelevant

3. Qtrees from multiple sources can be replicated to one destination

4. This is a little bit tricky: with QSM, the source and destination volumes can be online and writable, but the destination qtree itself is read-only.

5. Supports only Async mode

6. Cannot initialize to tape; does not support SM cascades; deep directory structures and large numbers of files might affect performance


QSM Mechanism:


1. Initial (baseline) transfer, which creates the destination qtree automatically. QSM creates a Snapshot copy of the source volume, including all its data and metadata.

2. Scheduled updates: after the baseline transfer is completed, the source and destination file systems share a Snapshot copy, and thereafter updates occur as scheduled in the "snapmirror.conf" file.

3. QSM does not transfer the volume or the snapshot copy to the destination, rather it transfers only the qtree data that is new or has changed.


Now that we have gone over some key concepts, I will now go through the configuration process.


SnapMirror Configuration:



Step:1 Add SnapMirror license


filerA > license add <license>

filerB > license add <license>


Step:2 Set SnapMirror access


filerA > options snapmirror.access host=filerB

filerB > options snapmirror.access host=filerA


Step:3 Set configuration file


Syntax: "Source-storage-system name:source-path" "destination-storage-system:destination-path" "arguments" "schedule"


Note:  1. Do not actually include the quotes; follow the example format shown below.

          2. Arguments can be left at the default (-) or set to something like kbs=2000 to limit transfer speed

          3. The schedule follows a cron-like format, for example: 10 22 * 1,3,4

                    10 - Update at 10 minutes past the hour

                    22 - Update at 10 pm

                    * - Update on all applicable days of the month

                    1,3,4 - Update on Monday, Wednesday, Thursday
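
To make the four fields concrete, here is a small parser of my own for the minute/hour/day-of-month/day-of-week layout (it handles only the "*", "-" and comma-list forms shown in this post; the real ONTAP parser accepts more):

```python
def parse_sm_schedule(spec):
    """Parse a snapmirror.conf schedule like '10 22 * 1,3,4'.

    Fields: minute, hour, day-of-month, day-of-week. '*' means
    every applicable value; '-' means never for that field.
    """
    def field(tok):
        if tok in ("*", "-"):
            return tok
        return [int(x) for x in tok.split(",")]

    minute, hour, dom, dow = (field(t) for t in spec.split())
    return {"minute": minute, "hour": hour,
            "day_of_month": dom, "day_of_week": dow}

sched = parse_sm_schedule("10 22 * 1,3,4")
print(sched)
# minute 10 past hour 22, any day of month, on days 1, 3, 4 (Mon/Wed/Thu)
```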




filerB > wrfile /etc/snapmirror.conf


filerA:/vol/volA/q1 filerB:/vol/volA/q1 - 10 * * *


In the above example, qtree q1 in volume volA on the source filer (filerA) replicates to qtree q1 in volume volA on the destination filer (filerB); no arguments are specified (-), and updates are scheduled to run every day at 10 minutes past every hour.





filerB > wrfile /etc/snapmirror.conf


filerA:/vol/volA filerB:/vol/volA kbs=3000 15 7,19 * *


In the above example, volume volA on the source filer (filerA) is replicated to volume volA on the destination filer (filerB), using a maximum of 3,000 kilobytes per second, with updates scheduled at 15 minutes past 7 am and 15 minutes past 7 pm (that is, 7:15 am and 7:15 pm every day of the month and every day of the week).


Step:4 Perform Baseline transfer


Note: The destination volume must be restricted


filerB > vol restrict volA


filerB > snapmirror initialize -S filerA:volA filerB:volA


Step:5 Monitor


filerB > snapmirror status <options>


options: -l ( displays the long format of the output)

            -q ( displays the volumes or qtrees that are quiesced or quiescing)


Note: Never delete the Snapshot copies that are created by SnapMirror. If SnapMirror cannot find the copy on the source, it cannot perform incremental updates to the destination, and the affected relationship must be reinitialized.


Log files:



Check whether SnapMirror logging is enabled with "options snapmirror.log". If enabled, you should find the logs at "/etc/log/snapmirror".


Managing Transfers:



One can reserve resources for a specified number of volume SnapMirror transfers. Resources reserved for VSM transfers are then not available for other replication types such as QSM or SnapVault transfers.


options replication.volume.reserved_transfers <n>


n - number of VSM transfers for which you want to reserve resources, default is 0


The stream-count setting can be reverted to the previous value by using


options replication.volume.transfer_limits [current|previous]


Network Throttling:



1. One can use per-transfer throttling by specifying arguments in the "snapmirror.conf" file as shown above with "kbs"

2. Dynamic throttling enables you to change the throttle value while the transfer is active


          snapmirror throttle <n> destination-hostname:destination-path


3. System-wide throttling limits the amount of bandwidth that can be used for transfers


          options replication.throttle.enable on

          options replication.throttle.incoming.max_kbs <kbs>

          options replication.throttle.outgoing.max_kbs <kbs>


The default is unlimited; this can be set between 1 and 125,000 KB/s
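
That rule can be captured in a one-line guard; my own sketch, assuming a value of 0 stands for the unlimited default:

```python
def valid_throttle_kbs(value):
    """True if value is an acceptable replication throttle in KB/s.

    Assumption: 0 means unlimited (the default); explicit limits
    must fall in the documented range 1..125000.
    """
    return value == 0 or 1 <= value <= 125000

print(valid_throttle_kbs(0))       # unlimited -> True
print(valid_throttle_kbs(125000))  # maximum -> True
print(valid_throttle_kbs(200000))  # out of range -> False
```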


Snapmirror Network Compression:



1. SnapMirror has a built-in feature that enables data compression over the network for SM transfers. This is different from WAFL compression, which compresses data at rest.

2. Compressed data is sent over the network to the destination filer, which then decompresses it and writes it to the appropriate volume.


Note: Supported only on asynchronous mode of VSM


Breaking SnapMirror relationship:



One can break a SnapMirror relationship to convert the replica to a writable file system; do this from the destination filer


filerB > snapmirror break <volume name>


To resume the SM relationship you can use the following command from destination filer


filerB > snapmirror resync filerB:volume-name


Note: In the case of a "real" disaster the process is quite different; follow the disaster-recovery scenario later in this post


To break a relationship permanently execute the following from source filer


filerA > snapmirror release source_vol filerB:dest_vol





Scenario: To create and schedule an asynchronous VSM relationship between filerA (source) volA and filerB (destination) volA_mirror.


Step:1 Configure access on both systems


filerA > options snapmirror.access host=filerB


filerB > options snapmirror.access host=filerA


Step:2 Create a 100 MB flexible volume named volA_mirror in aggr1


filerB > vol create volA_mirror aggr1 100m


Step:3 Set Security style


filerB > qtree security /vol/volA_mirror ntfs


Step:4 Create CIFS share


filerB > cifs shares -add volA_mirror /vol/volA_mirror


Step:5 Initiate baseline transfer (Remember to restrict the destination volume)


filerB > vol restrict volA_mirror


filerB > snapmirror initialize -S filerA:volA filerB:volA_mirror


Step:6 Monitor and verify if the baseline transfer is completed


filerB > snapmirror status


filerA > snap list volA

filerB > snap list volA_mirror


Step:7 If you want to update the destination volume manually, you can always run


filerB > snapmirror update -S filerA:volA filerB:volA_mirror


Step:8 Schedule snapmirror by setting the "snapmirror.conf" file


filerB > wrfile /etc/snapmirror.conf


filerA:volA filerB:volA_mirror - 10 * * *







Scenario: To simulate a disaster recovery and failover to the mirror


Step:1 To simulate disaster, bring the source volume offline


filerA > vol offline volA


Step:2 Break relationship from destination


filerB > snapmirror quiesce volA_mirror

filerB > snapmirror break volA_mirror


Resynchronize the SM relationship


Because new data was written to the destination during the disaster, the destination storage system now acts as the source. Let's resynchronize the new data back to the original volume


Step:3 Bring the volA back online


filerA > vol online volA


Step:4 From filerA replicate the new data that was written to filerB:/volA_mirror during the disaster back to filerA:/volA


filerA > snapmirror resync -S filerB:volA_mirror filerA:volA


Step:5 After the resync is completed break the reverse relationship


filerA > snapmirror quiesce volA


filerA > snapmirror break volA


Reinstate the original filerA-to-filerB SM relationship


Step:6 Now resync from filerB to return to the original source-to-destination relationship


filerB > snapmirror resync filerB:volA_mirror


Step:7 Test the operation by creating a file on filerA:/volA and then running the snapmirror update command from filerB


filerB > snapmirror update volA_mirror


Hope this post is useful, I will try to write a new post on SnapVault as soon as I can.


Best Regards,

Ravi Paladugu
