Scenario
I want to do a DR test of production data, without using my production SVM-DR SVM.
Testing done on ONTAP 9.13.1.
Lab Walkthrough 01
In this walkthrough I'll go through all the steps: setting up a production SVM-DR relationship and a dev SVM-DR relationship, creating a test volume and CIFS share, FlexCloning from the production SVM to dev, running the DR test, and tidying up.
Note: Intercluster LIFs are already set up, and the lab is a flat network, so no routes are required.
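If you want to double-check the intercluster LIFs first, something like this should list them (on 9.13.1 they should be on the default-intercluster service policy):
cluster1::> network interface show -service-policy default-intercluster
cluster2::> network interface show -service-policy default-intercluster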
#####################
# Initial Lab Setup #
#####################
# Create Prod(uction) and Dev SVMs #
cluster1::> vserver create -vserver SVMPROD
cluster2::> vserver create -vserver SVMPROD-DR -subtype dp-destination
cluster1::> vserver create -vserver SVMDEV
cluster2::> vserver create -vserver SVMDEV-DR -subtype dp-destination
# Cluster and Vserver Peering #
cluster1::> cluster peer create -address-family ipv4 -peer-addrs 192.168.0.123,192.168.0.124
cluster2::> cluster peer create -address-family ipv4 -peer-addrs 192.168.0.121,192.168.0.122
cluster1::> cluster peer show
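# Optionally, check the peer relationship is healthy before carrying on: #
cluster1::> cluster peer health show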
cluster1::> vserver peer create -vserver SVMPROD -peer-vserver SVMPROD-DR -peer-cluster cluster2 -applications snapmirror
cluster1::> vserver peer create -vserver SVMDEV -peer-vserver SVMDEV-DR -peer-cluster cluster2 -applications snapmirror
cluster2::> vserver peer show
cluster2::> vserver peer accept -vserver SVMPROD-DR -peer-vserver SVMPROD
cluster2::> vserver peer accept -vserver SVMDEV-DR -peer-vserver SVMDEV
cluster1::> vserver peer show
# Create Data LIFs, DNS and CIFS Server #
cluster1::> net int create -vserver SVMPROD -lif data -data-protocol cifs -address 192.168.0.135 -netmask 255.255.255.0 -home-node cluster1-01 -home-port e0c
cluster1::> net int create -vserver SVMDEV -lif data -data-protocol cifs -address 192.168.0.136 -netmask 255.255.255.0 -home-node cluster1-01 -home-port e0d
cluster1::> dns create -vserver SVMPROD -domains acompany.com -name-servers 192.168.0.253
cluster1::> dns create -vserver SVMDEV -domains acompany.com -name-servers 192.168.0.253
cluster1::> cifs server create -vserver SVMPROD -cifs-server SVMPROD -domain acompany.com
cluster1::> cifs server create -vserver SVMDEV -cifs-server SVMDEV -domain acompany.com
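# Optionally, verify both CIFS servers joined the domain: #
cluster1::> vserver cifs show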
# Create DNS entry svmprod.acompany.com to 192.168.0.135 #
# Create DNS entry svmdev.acompany.com to 192.168.0.136 #
# Create test volume and share in production SVM #
cluster1::> volume create -vserver SVMPROD -volume CIFSTEST01 -aggregate cluster1_02_SSD_1 -size 10G -junction-path /CIFSTEST01
cluster1::> cifs share create -vserver SVMPROD -share-name CIFSTEST01 -path /CIFSTEST01
# Because we're using CIFS, I modify all volumes' security style to NTFS #
cluster1::> volume modify -vserver SVMPROD -security-style ntfs -volume *
cluster1::> volume modify -vserver SVMDEV -security-style ntfs -volume *
# Add some data to your volume (CIFSTEST01) which we can use for testing later. #
# Create the SVM-DR relationships with identity-preserve true #
cluster2::> snapmirror create -source-path SVMPROD: -destination-path SVMPROD-DR: -identity-preserve true -schedule 10min
cluster2::> snapmirror create -source-path SVMDEV: -destination-path SVMDEV-DR: -identity-preserve true -schedule 10min
cluster2::> snapmirror initialize -destination-path SVMPROD-DR:
cluster2::> snapmirror initialize -destination-path SVMDEV-DR:
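# The initializes can take a while with real data; something like this keeps an eye on progress: #
cluster2::> snapmirror show -destination-path SVMPROD-DR: -fields state,status,lag-time
cluster2::> snapmirror show -destination-path SVMDEV-DR: -fields state,status,lag-time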
#############
# FlexClone #
#############
# FlexClone from Production SVM to DEV #
cluster1::> volume clone create -vserver SVMDEV -flexclone CIFSTEST01 -parent-vserver SVMPROD -parent-volume CIFSTEST01 -junction-path /CIFSTEST01
cluster1::> cifs share create -vserver SVMDEV -share-name CIFSTEST01 -path /CIFSTEST01
# And test the data on the DEV SVM #
# Note: If you're doing this with a large dataset, you might want to delete data in the cloned volume before the SnapMirror updates, otherwise a lot of data could be replicated over the WAN (the FlexClone won't be a FlexClone on cluster2, since its parent volume is in another SVM). Also, if you have deleted a lot of data, delete the snapshots containing the deletions. You may also want to clone split, but we don't do that here. #
cluster2::> snapmirror update -destination-path SVMDEV-DR:
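# If you're curious, the volume shows up as a clone on cluster1 but not on cluster2 (as per the image further down): #
cluster1::> volume clone show -vserver SVMDEV
cluster2::> volume clone show -vserver SVMDEV-DR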
# Stop Client Access #
cluster2::> snapmirror update -destination-path SVMDEV-DR:
cluster1::> vserver stop SVMDEV
cluster2::> snapmirror break -destination-path SVMDEV-DR:
cluster2::> vserver start SVMDEV-DR
# Test Availability in DR #
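# With identity-preserve true the DR SVM comes up with the same LIFs and shares, so a quick check along these lines should be enough before testing from a client: #
cluster2::> net int show -vserver SVMDEV-DR
cluster2::> cifs share show -vserver SVMDEV-DR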
###########
# Tidy Up #
###########
cluster2::> vserver stop SVMDEV-DR
cluster1::> vserver start SVMDEV
cluster2::> snapmirror resync -destination-path SVMDEV-DR:
cluster1::> cifs share delete -vserver SVMDEV -share-name CIFSTEST01
cluster1::> volume clone show
cluster1::> volume offline -vserver SVMDEV -volume CIFSTEST01
cluster1::> volume delete -vserver SVMDEV -volume CIFSTEST01
cluster1::> volume clone show
cluster1::> set adv
cluster1::*> volume recovery-queue purge -volume CIFSTEST01_1033 -vserver SVMDEV
cluster1::*> volume recovery-queue show
cluster1::*> vol clone show
# Note: The volume will have a slightly different name in the recovery-queue. #
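# Note: Deleted volumes sit in the recovery queue for 12 hours by default; we purge manually here rather than wait, not least because the queued clone keeps its parent snapshot in SVMPROD busy. #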
Image: Our test volume is a clone in the source SVMDEV but not SVMDEV-DR. Notice the parent vserver is SVMPROD.
Lab Walkthrough 02 (Using Clone Split)
This walkthrough is not much different from the above, except we use a clone split, so the volume we are working on is a FlexVol in both SVMDEV and SVMDEV-DR. This time we want to work with a smaller subset of the production data, and don't want to replicate the whole dataset over the WAN.
This continues from the FlexClone section above.
#############
# FlexClone #
#############
# First we quiesce the snapmirror for SVMDEV because we don't want a big snapmirror transfer.
cluster2::> snapmirror quiesce -destination-path SVMDEV-DR:
cluster2::> snapmirror show
cluster1::> volume clone create -vserver SVMDEV -flexclone CIFSTEST01 -parent-vserver SVMPROD -parent-volume CIFSTEST01 -junction-path /CIFSTEST01
cluster1::> cifs share create -vserver SVMDEV -share-name CIFSTEST01 -path /CIFSTEST01
# Delete the stuff you don't want to replicate from \\SVMDEV\CIFSTEST01
cluster1::> volume clone split estimate -vserver SVMDEV -flexclone CIFSTEST01
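# The estimate shows roughly how much free space the containing aggregate needs for the split. #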
# We need to modify the vserver-dr protection settings so we can split, otherwise we get an error: #
# Error: command failed: This volume cannot be split as it is configured for protection using Vserver level SnapMirror. Change the protection type using "volume modify" command to make it unprotected. #
cluster1::> volume modify -vserver SVMDEV -volume CIFSTEST01 -vserver-dr-protection unprotected
cluster1::> volume clone split start -vserver SVMDEV -flexclone CIFSTEST01
# Note the warning: #
# Warning: Vserver "SVMDEV" is the source of a Vserver SnapMirror relationship. Splitting a clone associated with this Vserver will result in re-creating the volume at the destination by a possibly lengthy "snapmirror initialize" operation. Also, all Snapshot copies will be deleted. To avoid this, use the "volume move" command instead of "volume clone split". Volume move is faster than a split operation and will preserve the Snapshot copies. #
# For this test we don't care about snapshots
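# If you did care about snapshots, the alternative the warning suggests is a volume move to another aggregate, which splits the clone as a side effect and preserves the snapshots; a rough sketch (cluster1_01_SSD_1 is just an example aggregate name): #
cluster1::> volume move start -vserver SVMDEV -volume CIFSTEST01 -destination-aggregate cluster1_01_SSD_1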
# These next two outputs will both be empty once the split is complete:
cluster1::> vol clone split show
cluster1::> vol clone show
# Delete the big snapshot as we don't want to replicate this. Once snapshots are deleted we should just have a relatively small used volume:
cluster1::> volume snapshot show -volume CIFSTEST01 -vserver SVMDEV
cluster1::> volume snapshot delete -vserver SVMDEV -volume CIFSTEST01 -snapshot *
cluster1::> volume snapshot show -volume CIFSTEST01 -vserver SVMDEV
cluster1::> vol show -volume CIFSTEST01 -fields used
# Re-enable protection, resume the snapmirror, and run an update: #
cluster1::> volume modify -vserver SVMDEV -volume CIFSTEST01 -vserver-dr-protection protected
cluster2::> snapmirror resume -destination-path SVMDEV-DR:
cluster2::> snapmirror show
cluster2::> snapmirror update -destination-path SVMDEV-DR:
cluster2::> snapmirror show
# And verify our reduced subset volume is in DR:
cluster2::> vol show -volume CIFSTEST01 -fields used
# Now we continue with the DR test! #
# Stop Client Access #
cluster1::> vserver stop SVMDEV
cluster2::> snapmirror update -destination-path SVMDEV-DR:
cluster2::> snapmirror break -destination-path SVMDEV-DR:
cluster2::> vserver start SVMDEV-DR
# Test Availability #
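# As before, a quick sanity check that the SVM is up and sharing before testing from a client: #
cluster2::> vserver show -vserver SVMDEV-DR
cluster2::> cifs share show -vserver SVMDEV-DR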
############
# RollBack #
############
# Note: This is not a full switchover/switchback; it's just switchover, test, we're happy, forget about DR. #
cluster2::> vserver stop SVMDEV-DR
cluster1::> vserver start SVMDEV
cluster2::> snapmirror resync -destination-path SVMDEV-DR:
cluster1::> cifs share delete -vserver SVMDEV -share-name CIFSTEST01
cluster1::> volume offline -vserver SVMDEV -volume CIFSTEST01
cluster1::> volume delete -vserver SVMDEV -volume CIFSTEST01
cluster1::> set adv
cluster1::*> volume recovery-queue show -volume CIFSTEST01_*
cluster1::*> volume recovery-queue purge -volume CIFSTEST01_* -vserver SVMDEV
cluster1::*> volume recovery-queue show
Further Reading
- FAQ - FlexClone split - NetApp Knowledge Base, including:
  - Why does a FlexVol clone split take a long time?
  - Clone-splitting operations in general can take considerable time to complete.