Saturday, 12 October 2019

Q: Can you Resync a Mirror-Vault with a Sync-SnapMirror Destination post Cutover?

A: No

Below is an experiment to see whether resyncing a mirror-vault relationship to a Sync (StrictSync) mirror destination volume is possible post cutover. The result was no.

SnapMirror Synchronous (SM-S) has two modes:
- SnapMirror Synchronous Mode
- SnapMirror Strict Synchronous Mode

Note: SM-S is targeted at relatively short distances, with a round-trip time (RTT) of less than 10 ms.
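The practical difference between the two modes shows up when replication to the secondary fails. The toy sketch below (plain Python, not a NetApp API) models the documented semantics: in Sync mode a client write still succeeds when the secondary is unreachable and the relationship simply falls out of sync, while in StrictSync mode the client write itself fails to preserve zero RPO.

```python
# Toy model of the two SnapMirror Synchronous modes (illustration only;
# these classes are invented for this sketch, not ONTAP internals).

class SyncMirror:
    """Sync mode: on replication failure, primary I/O continues and the
    relationship drops OutOfSync (to be resynced when the link returns)."""
    def __init__(self):
        self.in_sync = True

    def write(self, data, secondary_up):
        if secondary_up:
            self.in_sync = True
            return "ack"        # replicated to secondary, then acknowledged
        self.in_sync = False    # zero-RPO protection is lost...
        return "ack"            # ...but the client write still succeeds

class StrictSyncMirror:
    """StrictSync mode: client I/O fails rather than let the primary
    and secondary diverge."""
    def write(self, data, secondary_up):
        if not secondary_up:
            raise IOError("secondary inaccessible: client write failed")
        return "ack"
```

This mirrors the warning ONTAP prints when creating a "strict-sync-mirror" relationship: client I/O fails in order to maintain strict synchronization when the secondary is inaccessible.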

1) Prerequisites for SnapMirror Synchronous (SM-S)

- Source and destination clusters must be running ONTAP 9.5 or later
- Clusters must be peered
- SVMs must be peered


version
cluster peer show
vserver peer show


2) Setup the Strict Synchronous Mirror Relationships


cluster2::> vol create -volume OCF_dest_sm_s -aggregate aggr_dp -size 32.99G -type DP
cluster2::> vol create -volume ODATA_dest_sm_s -aggregate aggr_dp -size 40G -type DP
cluster2::> vol create -volume OFRA_dest_sm_s -aggregate aggr_dp -size 30G -type DP

cluster2::> snapmirror create -source-path svm_src1:OCF -destination-path svm_dst1:OCF_dest_sm_s -policy StrictSync
cluster2::> snapmirror create -source-path svm_src1:ODATA -destination-path svm_dst1:ODATA_dest_sm_s -policy StrictSync
cluster2::> snapmirror create -source-path svm_src1:OFRA -destination-path svm_dst1:OFRA_dest_sm_s -policy StrictSync

Warning: You are creating a SnapMirror relationship with a policy of type "strict-sync-mirror" that only supports all LUN based applications with FCP and iSCSI protocols, as well as NFSv3 protocol for enterprise applications such as databases, VMWare, etc.
Warning: For a SnapMirror relationship with a policy of type "strict-sync-mirror", client I/O will fail in order to maintain strict synchronization when the secondary is inaccessible.
Do you want to continue? y

cluster2::> snapmirror initialize -destination-path svm_dst1:OCF_dest_sm_s
cluster2::> snapmirror initialize -destination-path svm_dst1:ODATA_dest_sm_s
cluster2::> snapmirror initialize -destination-path svm_dst1:OFRA_dest_sm_s

cluster2::> snapmirror show -policy StrictSync -fields status,policy,lag-time
source-path    destination-path         policy     status lag-time
-------------- ------------------------ ---------- ------ --------
svm_src1:OCF   svm_dst1:OCF_dest_sm_s   StrictSync InSync 0:0:0
svm_src1:ODATA svm_dst1:ODATA_dest_sm_s StrictSync InSync 0:0:0
svm_src1:OFRA  svm_dst1:OFRA_dest_sm_s  StrictSync InSync 0:0:0
3 entries were displayed.


3) Mirror-Vault the Source Volumes

cluster1::> snapshot policy create -vserver cluster1 -policy 24_hourly -enabled true -schedule1 hourly -count1 24 -snapmirror-label1 hourly -prefix1 hourly
cluster1::> volume modify -volume OCF,ODATA,OFRA -snapshot-policy 24_hourly -vserver svm_src1

cluster2::> vserver create -vserver svm_mirror_vault -subtype default -rootvolume svm_root -rootvolume-security-style unix -language C.UTF-8 -snapshot-policy none -aggregate aggr_dp
cluster2::> vserver peer create -vserver svm_mirror_vault -peer-vserver svm_src1 -peer-cluster cluster1 -applications snapmirror

cluster1::> vserver peer accept -vserver svm_src1 -peer-vserver svm_mirror_vault

cluster2::> snapmirror policy create -vserver cluster2 -policy 96_hourly -tries 8 -transfer-priority normal -ignore-atime false -restart always -type mirror-vault
cluster2::> snapmirror policy add-rule -vserver cluster2 -policy 96_hourly -snapmirror-label hourly -keep 96
cluster2::> vol create -volume OCF_mv -aggregate aggr_dp -size 32.99G -type DP -vserver svm_mirror_vault -language en_US.UTF-8
cluster2::> vol create -volume ODATA_mv -aggregate aggr_dp -size 40G -type DP -vserver svm_mirror_vault -language en_US.UTF-8
cluster2::> vol create -volume OFRA_mv -aggregate aggr_dp -size 30G -type DP -vserver svm_mirror_vault -language en_US.UTF-8
cluster2::> snapmirror create -source-path svm_src1:OCF -destination-path svm_mirror_vault:OCF_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror create -source-path svm_src1:ODATA -destination-path svm_mirror_vault:ODATA_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror create -source-path svm_src1:OFRA -destination-path svm_mirror_vault:OFRA_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror initialize -destination-path svm_mirror_vault:*
cluster2::> snapmirror show -destination-path svm_mirror_vault:* -fields state,status
source-path    destination-path          state        status
-------------- ------------------------- ------------ ------
svm_src1:OCF   svm_mirror_vault:OCF_mv   Snapmirrored Idle
svm_src1:ODATA svm_mirror_vault:ODATA_mv Snapmirrored Idle
svm_src1:OFRA  svm_mirror_vault:OFRA_mv  Snapmirrored Idle
3 entries were displayed.
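The 96_hourly policy above keeps 96 hourly snapshots on the vault destination (four days of hourlies, versus 24 on the source). The sketch below is an illustrative model of label-based vault retention (my own simplification, not ONTAP internals): snapshots carrying the matching label are pruned down to the keep count, oldest first.

```python
# Illustrative label-based vault retention (simplified model, not ONTAP
# code): keep the newest `keep` snapshots whose name carries the label.

def prune(snapshots, label, keep):
    """snapshots: names like 'hourly.2019-10-12_1205', oldest first.
    Returns (retained, deleted) after applying the keep count to the
    snapshots matching the given label prefix."""
    labeled = [s for s in snapshots if s.startswith(label + ".")]
    deleted = labeled[:-keep] if len(labeled) > keep else []
    retained = [s for s in snapshots if s not in deleted]
    return retained, deleted

# Example: with keep=2, the oldest matching hourly snapshot is pruned;
# snapshots with other labels are untouched.
retained, deleted = prune(
    ["hourly.a", "weekly.a", "hourly.b", "hourly.c"], "hourly", 2)
```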


4) Failover to Secondary Site on Primary Site Failure


cluster1::> vol offline -volume OCF -vserver svm_src1
cluster1::> vol offline -volume ODATA -vserver svm_src1
cluster1::> vol offline -volume OFRA -vserver svm_src1

Volume "OFRA" in Vserver "svm_src1" has LUNs associated with it. Taking this volume offline will disable the LUNs until the volume is brought online again.
The LUNs will not appear in the output of "lun show" and any clients using the LUNs will experience a data service outage. Are you sure you want to continue? y

Warning: Volume "OFRA" in Vserver "svm_src1" is currently part of a SnapMirror Synchronous relationship. Taking the volume offline operation will disrupt the zero RPO protection. Do you want to continue? y

cluster2::> snapmirror quiesce -destination-path svm_dst1:OCF_dest_sm_s
cluster2::> snapmirror quiesce -destination-path svm_dst1:ODATA_dest_sm_s
cluster2::> snapmirror quiesce -destination-path svm_dst1:OFRA_dest_sm_s
cluster2::> snapmirror break -destination-path svm_dst1:OCF_dest_sm_s
cluster2::> snapmirror break -destination-path svm_dst1:ODATA_dest_sm_s
cluster2::> snapmirror break -destination-path svm_dst1:OFRA_dest_sm_s


5) See if we can resync our Mirror-Vaults


cluster2::> snapshot policy create -vserver cluster2 -policy 24_hourly -enabled true -schedule1 hourly -count1 24 -snapmirror-label1 hourly -prefix1 hourly
cluster2::> volume modify -vserver svm_dst1 -volume OCF_dest_sm_s -snapshot-policy 24_hourly
cluster2::> volume modify -vserver svm_dst1 -volume ODATA_dest_sm_s -snapshot-policy 24_hourly
cluster2::> volume modify -vserver svm_dst1 -volume OFRA_dest_sm_s -snapshot-policy 24_hourly

cluster2::> vserver peer create -vserver svm_dst1 -peer-vserver svm_mirror_vault -applications snapmirror
cluster2::> snapmirror delete -destination-path svm_mirror_vault:OCF_mv
cluster2::> snapmirror delete -destination-path svm_mirror_vault:ODATA_mv
cluster2::> snapmirror delete -destination-path svm_mirror_vault:OFRA_mv
cluster2::> snapmirror create -source-path svm_dst1:OCF_dest_sm_s -destination-path svm_mirror_vault:OCF_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror create -source-path svm_dst1:ODATA_dest_sm_s -destination-path svm_mirror_vault:ODATA_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror create -source-path svm_dst1:OFRA_dest_sm_s -destination-path svm_mirror_vault:OFRA_mv -policy 96_hourly -schedule hourly
cluster2::> snapmirror show -destination-vserver svm_mirror_vault
                                                              Progress
Source                   Destination               Mirror     Relationship Total            Last
Path                Type Path                      State      Status       Progress Healthy Updated
------------------- ---- ------------------------- ---------- ------------ -------- ------- --------
svm_dst1:OCF_dest_sm_s
                    XDP  svm_mirror_vault:OCF_mv   Broken-off Idle         -        true    -
svm_dst1:ODATA_dest_sm_s
                    XDP  svm_mirror_vault:ODATA_mv Broken-off Idle         -        true    -
svm_dst1:OFRA_dest_sm_s
                    XDP  svm_mirror_vault:OFRA_mv  Broken-off Idle         -        true    -
3 entries were displayed.

cluster2::> snapmirror delete -destination-path svm_dst1:OCF_dest_sm_s
cluster2::> snapmirror delete -destination-path svm_dst1:ODATA_dest_sm_s
cluster2::> snapmirror delete -destination-path svm_dst1:OFRA_dest_sm_s

cluster2::> snapmirror resync -destination-path  svm_mirror_vault:OCF_mv
Error: command failed: No common Snapshot copy found between svm_dst1:OCF_dest_sm_s and svm_mirror_vault:OCF_mv.

cluster2::> snapmirror resync -destination-path  svm_mirror_vault:ODATA_mv
Error: command failed: No common Snapshot copy found between svm_dst1:ODATA_dest_sm_s and svm_mirror_vault:ODATA_mv.

cluster2::> snapmirror resync -destination-path  svm_mirror_vault:OFRA_mv
Error: command failed: No common Snapshot copy found between svm_dst1:OFRA_dest_sm_s and svm_mirror_vault:OFRA_mv.

cluster2::> snapshot show -volume OCF_mv
                                                                 ---Blocks---
Vserver          Volume  Snapshot                            Size Total% Used%
---------------- ------- -------------------------------- ------- ------ -----
svm_mirror_vault OCF_mv
                         hourly.2019-10-12_1205              348KB    0%    0%
                         snapmirror.022e075b-ecdf-11e9-9600-005056b01916_2160175154.2019-10-12_120500  249.6MB  1%  2%
                         hourly.2019-10-12_1305              308KB    0%    0%
                         snapmirror.022e075b-ecdf-11e9-9600-005056b01916_2160175154.2019-10-12_130500  147.1MB  0%  1%

cluster2::> snapshot show -volume OCF_dest_sm_s
                                                                 ---Blocks---
Vserver  Volume        Snapshot                              Size Total% Used%
-------- ------------- -------------------------------- -------- ------ -----
svm_dst1 OCF_dest_sm_s
                       snapmirror.12ceb7f0-b078-11e8-baec-005056b013db_2160175147.2019-10-12_120504  272.3MB  1%  2%
                       snapmirror.12ceb7f0-b078-11e8-baec-005056b013db_2160175147.2019-10-12_130504  30.17MB  0%  0%
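The error makes sense given the snapshot listings above: a SnapMirror resync needs a common Snapshot copy on both volumes, but the snapmirror snapshots on each volume embed their own relationship's UUID, and the hourly snapshots exist only on the mirror-vault destination, so the two sets never overlap. A minimal sketch of that check, using the snapshot names from the listings above (a name-based simplification of what ONTAP does, for illustration only):

```python
# Illustrative sketch of why the resync fails: resync requires a Snapshot
# copy present on both the new source and the destination. Names are taken
# from the listings above; the snapmirror.* names embed each relationship's
# own UUID, so the intersection is empty.

def common_snapshots(source_snaps, dest_snaps):
    """Return snapshot names present on both volumes (order-preserving).
    Real ONTAP matches snapshot instances, not just names; this is a
    simplified model."""
    dest = set(dest_snaps)
    return [s for s in source_snaps if s in dest]

ocf_dest_sm_s = [  # intended resync source: svm_dst1:OCF_dest_sm_s
    "snapmirror.12ceb7f0-b078-11e8-baec-005056b013db_2160175147.2019-10-12_120504",
    "snapmirror.12ceb7f0-b078-11e8-baec-005056b013db_2160175147.2019-10-12_130504",
]
ocf_mv = [  # mirror-vault destination: svm_mirror_vault:OCF_mv
    "hourly.2019-10-12_1205",
    "snapmirror.022e075b-ecdf-11e9-9600-005056b01916_2160175154.2019-10-12_120500",
    "hourly.2019-10-12_1305",
    "snapmirror.022e075b-ecdf-11e9-9600-005056b01916_2160175154.2019-10-12_130500",
]

print(common_snapshots(ocf_dest_sm_s, ocf_mv))  # [] -> "No common Snapshot copy"
```

An empty intersection is exactly the "No common Snapshot copy found" condition reported by the resync attempts above.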


6) Performing the Reverse Resync

After failing to resync the mirror-vault to the Sync-SnapMirror destination, I found I also could not reverse resync the Sync-SnapMirror relationship back to cluster1; deleting the original Sync-SnapMirror relationship was likely the cause. OnCommand System Manager has a Reverse Resync button that makes performing the reverse resync, to restore services back to the primary cluster, easy.

Image: Reverse Resync button in OnCommand System Manager
