How to SnapMirror a Large File from an Enormous Volume While Only Needing the Large File's Size Available on the Destination
Scenario: You have a database file in a volume on cluster1, and you want to get that file to cluster2. The problem is that the volume is too big to be SnapMirrored to cluster2 (there is not enough space on cluster2's aggregates). So, how do you get just the file you want over to cluster2?
The answer is to use FlexClone. We can:
- FlexClone the volume
- Delete everything but the "Large File" from the clone
- Split the clone
- Delete the clone snapshots
- SnapMirror the "Large File" to where it needs to go (see the condensed command sketch below)
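If you just want the commands, the whole flow condenses to roughly the following sketch (names taken from the walkthrough below; adapt them to your environment):

cluster1::> volume clone create -flexclone tvol_001_clone -vserver svm1 -junction-path /tvol_001_clone -parent-volume tvol_001
(delete the unwanted files from the clone via a client)
cluster1::> volume clone split start -vserver svm1 -flexclone tvol_001_clone
cluster1::> volume snapshot delete -vserver svm1 -volume tvol_001* -snapshot *
cluster2::> snapmirror protect -path-list svm1:tvol_001_clone -destination-vserver svm_dst1 -policy MirrorAllSnapshots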
The following example illustrates this with fairly small files, but hopefully you get the gist.
Example
Our volume has 7821MB used, and the aggregate utilization on cluster1's aggr_dp is 8037MB. The volume 'tvol_001' is made up of 6 *.db files.
cluster1::> vol show -volume tvol_001 -fields used,snapshot-policy
vserver volume   used   snapshot-policy
------- -------- ------ ---------------
svm1    tvol_001 7821MB none

cluster1::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   8037MB
Image: tvol_001 with 6 database files
We create our FlexClone volume tvol_001_clone and see it makes little difference to the aggregate usage.
cluster1::> volume clone create -flexclone tvol_001_clone -vserver svm1 -junction-path /tvol_001_clone -parent-volume tvol_001

cluster1::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   8094MB

cluster1::> vol show -volume tvol_001_clone -fields used,snapshot-policy
vserver volume         used   snapshot-policy
------- -------------- ------ ---------------
svm1    tvol_001_clone 7799MB none
Then we create a CIFS share to get to the FlexClone, and delete all the *.db files except database6.db. Notice the aggregate usage barely changes.
cluster1::> cifs share create -share-name tvol_001_clone -vserver svm1 -path /tvol_001_clone
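With the share in place, the unwanted files can be deleted from a Windows client. A minimal sketch (the CIFS server name svm1 and the database1-5 file names are assumptions for illustration; only database6.db is named in this walkthrough):

C:\> del \\svm1\tvol_001_clone\database1.db
C:\> del \\svm1\tvol_001_clone\database2.db
C:\> del \\svm1\tvol_001_clone\database3.db
C:\> del \\svm1\tvol_001_clone\database4.db
C:\> del \\svm1\tvol_001_clone\database5.db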
Image: Deleted everything except database6
cluster1::> vol show -volume tvol_001_clone -fields used,snapshot-policy
vserver volume         used   snapshot-policy
------- -------------- ------ ---------------
svm1    tvol_001_clone 6798MB none

cluster1::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   8095MB

cluster1::> vol snapshot show -volume tvol_001* -fields owners,size
vserver volume         snapshot               owners         size
------- -------------- ---------------------- -------------- ------
svm1    tvol_001       clone_tvol_001_clone.1 "volume clone" 0MB
svm1    tvol_001_clone clone_tvol_001_clone.1 "volume clone" 5545MB
Then we split the clone and see that the used size of the aggregate increases by roughly the size of the database6.db file.
cluster1::> vol clone split start -vserver svm1 -flexclone tvol_001_clone

Warning: Are you sure you want to split clone volume tvol_001_clone in Vserver svm1? {y|n}: y
[Job 107] Job is queued: Split tvol_001_clone.

cluster1::> vol clone split show -fields block-percentage-complete
vserver flexclone      block-percentage-complete
------- -------------- -------------------------
svm1    tvol_001_clone 65

cluster1::> vol clone split show -fields block-percentage-complete
There are no entries matching your query.

cluster1::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   11122MB
Now we delete the clone snapshots. If we didn't, we'd end up replicating the full size of tvol_001 when we replicate tvol_001_clone, since the previous deletions were captured in the clone snapshot, which would get mirrored too.
cluster1::> vol snapshot show -volume tvol_001* -fields owners,size
vserver volume         snapshot               owners size
------- -------------- ---------------------- ------ ------
svm1    tvol_001       clone_tvol_001_clone.1 -      0MB
svm1    tvol_001_clone clone_tvol_001_clone.1 -      5545MB

cluster1::> vol snapshot delete -vserver svm1 -volume tvol_001* -snapshot *

cluster1::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   8150MB
I'm not totally sure why the aggregate space went down after deleting the snapshots; could it be to do with aggregate deduplication (this lab is running ONTAP 9.4)? Answers on a postcard...
cluster1::> aggr show -fields sis-space-saved
aggregate         sis-space-saved
----------------- ---------------
aggr0_cluster1_01 0MB
aggr_dp           5337MB
So, now we are left with our original volume (untouched) and the clone volume containing the database6.db file.
cluster1::> volume show -volume tvol_001* -fields used
vserver volume         used
------- -------------- ------
svm1    tvol_001       7822MB
svm1    tvol_001_clone 2277MB
Now we SnapMirror the volume containing the file to cluster2. Essentially one command and the volume is SnapMirrored!
cluster2::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   40MB
cluster2::> snapmirror protect -path-list svm1:tvol_001_clone -destination-vserver svm_dst1 -policy MirrorAllSnapshots
[Job 103] Job is queued: snapmirror protect for list of volumes beginning with "svm1:tvol_001_clone".
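Note: snapmirror protect arrived in ONTAP 9.3. On earlier versions, the classic three-step setup achieves the same result; a sketch, assuming cluster and vserver peering is already configured (the 5g destination volume size is an assumption for this example):

cluster2::> volume create -vserver svm_dst1 -volume tvol_001_clone_dst -aggregate aggr_dp -size 5g -type DP
cluster2::> snapmirror create -source-path svm1:tvol_001_clone -destination-path svm_dst1:tvol_001_clone_dst -policy MirrorAllSnapshots
cluster2::> snapmirror initialize -destination-path svm_dst1:tvol_001_clone_dst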
cluster2::> snapmirror show -destination-path svm_dst1:tvol_001_clone_dst -fields state,status
source-path         destination-path            state        status
------------------- --------------------------- ------------ ------
svm1:tvol_001_clone svm_dst1:tvol_001_clone_dst Snapmirrored Idle
cluster2::> aggr show -aggr aggr_dp -fields usedsize
aggregate usedsize
--------- --------
aggr_dp   2325MB
cluster2::> volume show -volume tvol_001* -fields used
vserver  volume             used
-------- ------------------ ------
svm_dst1 tvol_001_clone_dst 2229MB
Finally, we can tidy up the SnapMirror relationship: break (on the destination), delete (on the destination), and release (on the source).
cluster2::> snapmirror break -destination-path svm_dst1:tvol_001_clone_dst
Operation succeeded: snapmirror break for destination "svm_dst1:tvol_001_clone_dst".

cluster2::> snapmirror delete -destination-path svm_dst1:tvol_001_clone_dst
Operation succeeded: snapmirror delete for the relationship with destination "svm_dst1:tvol_001_clone_dst".

cluster1::> snapmirror release -destination-path svm_dst1:tvol_001_clone_dst
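Optionally, once the data is verified on cluster2, the leftover share and clone volume on cluster1 can be removed too (a sketch; check the destination volume first):

cluster1::> cifs share delete -vserver svm1 -share-name tvol_001_clone
cluster1::> volume unmount -vserver svm1 -volume tvol_001_clone
cluster1::> volume offline -vserver svm1 -volume tvol_001_clone
cluster1::> volume delete -vserver svm1 -volume tvol_001_clone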
Job done!