In a previous post I talked about setting up a SnapVault relationship (XDP SnapMirror) and, post-initialization, creating a load of Snapshots to simulate a decently long SnapVault retention. This is necessary because CDOT SnapVault only starts bringing across Snapshots with the desired label after the time of initialization. But what if we've got genuine historical Snapshots we want to bring across?
The answer is to SnapMirror first and then convert the SnapMirror to a SnapVault!
A question arises: will our carried-across labelled Snapshots get deleted as per the SnapVault retention? The answer is YES (as we want), and there's proof below!
Note: Below we also include 'How to Convert a CDOT SnapVault to SnapMirror Relationship', which is perfectly possible!
The Lab
Clusters      SVMs
============  =========
PRICLU1       PRICLU1V1
PRICLU2 (SV)  PRICLU2V1
Walkthrough
Create Schedules
For the experiment, we create a 1-minutely schedule on both clusters:

::> job schedule cron create 1minutely -minute 00,01,02,03,04,05,06,07,08,09,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59
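An optional verification step (not in the original walkthrough): confirm the schedule exists on each cluster. Exact output columns vary by ONTAP version.

PRICLU1::> job schedule cron show -name 1minutely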
Snapshot Policy
The schedule is used to create a 1minutely Snapshot Policy on the Primary Cluster:

PRICLU1::> snapshot policy create -policy 1minutely -vserver PRICLU1 -schedule1 1minutely -count1 90 -snapmirror-label1 1minutely -enabled true
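Optionally, confirm the policy's schedule, count and SnapMirror label before attaching it to a volume (an extra check, not in the original steps):

PRICLU1::> snapshot policy show -policy 1minutely -instance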
Create Volumes
A test volume is created on the Primary Cluster with the 1minutely Snapshot Policy:

PRICLU1::> vol create -vserver PRICLU1V1 -volume v_TEST01 -aggregate aggr1 -size 10g -junction-path /v_TEST01 -space-guarantee none -snapshot-policy 1minutely

And a DP volume is created on the SnapVault Cluster:

PRICLU2::> vol create -vserver PRICLU2V1 -volume v_TEST01 -aggregate aggr1 -size 10g -space-guarantee none -snapshot-policy none -type DP
SnapMirror
On the SnapVault Cluster, a SnapMirror relationship is created and initialized:

PRICLU2::> snapmirror create -source-path PRICLU1V1:v_TEST01 -destination-path PRICLU2V1:v_TEST01 -type DP -schedule 1minutely -policy DPDefault

PRICLU2::> snapmirror initialize -destination-path PRICLU2V1:v_TEST01
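Initialization runs as a baseline transfer. If you want to watch its progress (an optional check, not part of the original steps):

PRICLU2::> snapmirror show -destination-path PRICLU2V1:v_TEST01 -fields state,status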
A snapshot-count shows the volume already has 15 Snapshots!

PRICLU2::> vol show -volume v_TEST01 -fields snapshot-count
vserver   volume   snapshot-count
--------- -------- --------------
PRICLU2V1 v_TEST01 15
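Those Snapshots came across with the DP mirror regardless of label. To see which of them carry the 1minutely label (an optional check I'd add here):

PRICLU2::> snapshot show -vserver PRICLU2V1 -volume v_TEST01 -fields snapmirror-label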
Create a SnapVault Policy

We create a SnapMirror policy that keeps only 30 of the 1minutely Snapshots!

PRICLU2::> snapmirror policy create -vserver PRICLU2 -policy 1minutelyX30

PRICLU2::> snapmirror policy add-rule -vserver PRICLU2 -policy 1minutelyX30 -snapmirror-label 1minutely -keep 30
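Optionally, confirm the rule was attached to the policy (a quick sanity check; output layout varies by version):

PRICLU2::> snapmirror policy show -vserver PRICLU2 -policy 1minutelyX30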
Convert SnapMirror to SnapVault

We convert the SnapMirror to a SnapVault using the 30 x 1minutely Snapshot retention policy. The following commands are run on PRICLU2:

::> snapshot show -volume v_TEST01 -snapmirror-label 1minutely
::> snapmirror break -destination-path PRICLU2V1:v_TEST01
::> snapmirror delete -destination-path PRICLU2V1:v_TEST01
::> snapmirror create -source-path PRICLU1V1:v_TEST01 -destination-path PRICLU2V1:v_TEST01 -type XDP -schedule 1minutely -policy 1minutelyX30
::> snapmirror resync -destination-path PRICLU2V1:v_TEST01
::> snapshot show -volume v_TEST01 -snapmirror-label 1minutely
Output:

See that before breaking the SnapMirror we have 34 Snapshots. After recreating and resyncing as an XDP (SnapVault) relationship, we have only 30 Snapshots, as per our SnapVault policy! And checking later reveals it maintains only 30 Snapshots, as desired!
PRICLU2::> snapshot show -volume v_TEST01 -snapmirror-label 1minutely
                                                            ---Blocks---
Vserver  Volume   Snapshot                        State   Size Total% Used%
-------- -------- ------------------------------- ------- ---- ------ -----
PRICLU2V1
         v_TEST01
                  1minutely.2014-07-03_1958       valid   68KB     0%   29%
                  ...
                  1minutely.2014-07-03_2031       valid   76KB     0%   31%
34 entries were displayed.
PRICLU2::> snapmirror break -destination-path PRICLU2V1:v_TEST01

PRICLU2::> snapmirror delete -destination-path PRICLU2V1:v_TEST01

PRICLU2::> snapmirror create -source-path PRICLU1V1:v_TEST01 -destination-path PRICLU2V1:v_TEST01 -type XDP -schedule 1minutely -policy 1minutelyX30

PRICLU2::> snapmirror resync -destination-path PRICLU2V1:v_TEST01

Warning: All data newer than Snapshot copy snapmirror.1bf1a3bf-f990-11e3-9873-123478563412_2147484686.2014-07-03_203100 on volume PRICLU2V1:v_TEST01 will be deleted. Verify there is no XDP relationship whose source volume is "PRICLU2V1:v_TEST01". If such a relationship exists then you are creating an unsupported XDP to XDP cascade.
Do you want to continue? {y|n}: y
PRICLU2::> snapshot show -volume v_TEST01 -snapmirror-label 1minutely
                                                            ---Blocks---
Vserver  Volume   Snapshot                        State   Size Total% Used%
-------- -------- ------------------------------- ------- ---- ------ -----
PRICLU2V1
         v_TEST01
                  1minutely.2014-07-03_2002       valid   72KB     0%   26%
                  ...
                  1minutely.2014-07-03_2031       valid   76KB     0%   27%
30 entries were displayed.
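Optionally, confirm the relationship is now XDP and carrying the new policy (an extra check, not in the original steps):

PRICLU2::> snapmirror show -destination-path PRICLU2V1:v_TEST01 -fields type,policy,schedule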
How to Convert a CDOT SnapVault to SnapMirror Relationship?

This is illustrated by an example working from the above:

PRICLU2::> snapmirror break -destination-path PRICLU2V1:v_TEST01

Error: command failed: "snapmirror break" not supported for "XDP" relationships.

PRICLU2::> snapmirror delete -destination-path PRICLU2V1:v_TEST01

PRICLU2::> snapmirror create -source-path PRICLU1V1:v_TEST01 -destination-path PRICLU2V1:v_TEST01 -type DP -schedule 1minutely -policy DPDefault

PRICLU2::> snapmirror resync -destination-path PRICLU2V1:v_TEST01

Warning: All data newer than Snapshot copy 1minutely.2014-07-03_2031 on volume PRICLU2V1:v_TEST01 will be deleted.
Do you want to continue? {y|n}: y
PRICLU2::> snapshot show -volume v_TEST01 -snapmirror-label 1minutely
                                                            ---Blocks---
Vserver  Volume   Snapshot                        State   Size Total% Used%
-------- -------- ------------------------------- ------- ---- ------ -----
PRICLU2V1
         v_TEST01
                  1minutely.2014-07-03_1958       valid   68KB     0%   29%
                  ...
                  1minutely.2014-07-03_2051       valid   76KB     0%   31%
54 entries were displayed.
See that we've now got 54 Snapshots, way more than the previous SnapVault's retention of 30!
Something Interesting

You can't break a SnapVault destination volume and make it read/write.

PRICLU2::> snapmirror break -destination-path PRICLU2V1:v_PKD

Error: command failed: [Job 118] Job failed: Cannot break volume "PRICLU2V1:v_PKD". Breaking a vault destination volume is not supported.

PRICLU2::> snapmirror delete -destination-path PRICLU2V1:v_PKD

PRICLU2::> snapmirror break -destination-path PRICLU2V1:v_PKD

[Job 118] Job is queued: snapmirror break for destination "PRICLU2V1:v_PKD".

Error: command failed: [Job 118] Job failed: Cannot break volume "PRICLU2V1:v_PKD". Breaking a vault destination volume is not supported.
But what if we converted it back to a SnapMirror? YES, that will work, as long as you've not done a SnapMirror release on the source cluster (which, I found, deletes the snapmirror.* Snapshots and makes a resync impossible). Then a break, and it's read/write!
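Sketching that final step (the conversion commands are the same as in the vault-to-mirror section above; volume names are from this lab): once the relationship is back to a DP mirror, a break flips the destination read/write, which you can confirm from the volume type.

PRICLU2::> snapmirror break -destination-path PRICLU2V1:v_TEST01

PRICLU2::> vol show -vserver PRICLU2V1 -volume v_TEST01 -fields type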
THE END
Just an FYI: the only reason you were able to convert the SnapVault back to the mirror is that this was initially a SnapMirror relationship and you did not delete the common Snapshot. Unfortunately, it seems you cannot convert a vault to a mirror when the initial configuration was a vault. So, if you started with a SnapVault and want to convert to a SnapMirror, you need to re-seed the relationship from the source, basically starting from scratch.
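As that comment notes, the conversion hinges on a common Snapshot surviving on both ends. A simple way to check before attempting the resync (snapshot names will differ in your environment) is to list the Snapshots on both sides and look for one present in both lists, typically a snapmirror.* base Snapshot:

PRICLU1::> snapshot show -vserver PRICLU1V1 -volume v_TEST01

PRICLU2::> snapshot show -vserver PRICLU2V1 -volume v_TEST01

A snapmirror release on the source removes its copy of that common Snapshot, after which resync is no longer possible.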