Monday, 12 August 2019

Tech Roundup – 11th August 2019

Content collated/new since Tech Roundup – 23 June 2019, under the following headings:
FlexPod, Kubernetes, Microsoft, NetApp, NetApp Cloud, NetApp E-Series, NetApp HCI, NetApp.io, NetApp Tech ONTAP Podcast, NetApp TRs, Python, Security, Storage Industry News, Ubuntu, Veeam, Miscellaneous

FlexPod

Introducing Memory-Accelerated Data for FlexPod
- Optimized integration of Cisco UCS B200 M5 with Intel Optane DCPMM into the FlexPod design
- MAX Data on FlexPod is capable of up to 5 times more I/O operations at 1/25th the latency

Image: AFF A300 v AFF A300 + MAX Data on FlexPod B200 M5

Kubernetes

Kubernetes Cheat Sheet

Image: Kubernetes Cheat Sheet 1/2

Image: Kubernetes Cheat Sheet 2/2

Also, from Linux Academy see:

AWS Developer Tools Overview and CodeCommit Cheat Sheet

Ansible Roles Explained | Cheat Sheet

Your AWS Terminology Cheat Sheet

Microsoft

Azure migration center

About the Azure Site Recovery Deployment Planner for VMware to Azure

Microsoft’s new Windows Terminal now available to download for Windows 10

The new Windows Terminal

Windows Terminal (Preview)

The system registry is no longer backed up to the RegBack folder starting in Windows 10 version 1803
“This change is by design and is intended to help reduce the overall disk footprint size of Windows. To recover a system with a corrupt registry hive, Microsoft recommends that you use a system restore point.”

Microsoft Teams usage passes Slack in new survey
IT pros expect its presence to double by 2020

NetApp (General)

NetApp NVMe for your database

AFF A320: NVMe Building Block for the Modern SAN

MAX Data
Turbo-charge your applications
Rocket Fuel for Your Enterprise Apps

How to add storage capacity to a NetApp ONTAP Select 9.6 cluster

Protecting Your Data: Perfect Forward Secrecy (PFS) with NetApp ONTAP

Updated MetroCluster resources page

NetApp SnapCenter Plug-in for VMware vSphere 4.2 (NetApp Data Broker 1.0):

NetApp Data Broker 1.0

NetApp Data Broker 1.0: Release Notes

NetApp Data Broker 1.0: Deployment Guide for SnapCenter Plug-in for VMware vSphere

NetApp Data Broker 1.0: Data Protection Guide for VMs, Datastores, and VMDKs using the SnapCenter Plug-in for VMware vSphere

NetApp Data Broker 1.0 Documentation

NetApp SnapCenter 4.2:

Key Value Proposition of SnapCenter 4.2 – Simplicity:
- Simplified Installation
- Simplified Operations
- Continued quality enhancements

New Features in Version 4.2:
- SnapCenter Plug-in for VMware vSphere is part of NetApp Data Broker
- Simplified Storage management (Cluster Management LIF support)
- Simplified Host Management
- Enhanced Dashboard and Monitoring
- Simplified RBAC
- SnapCenter Custom plugin Integration with Linux File system
- Configuration Checker Integration

SnapCenter 4.2 Documentation

NetApp Cloud

Deploy SQL Server Over SMB with Azure NetApp Files

Azure Migration: The Keys to a Successful Enterprise Migration to Azure

Get the Most Out of Your Oracle Databases in Cloud Volumes Service for AWS

Cloud OnAir: New high-performance storage with NetApp and Google Cloud

What's New in the Beta Release of Cloud Volumes Service for GCP?

Cloud Volumes for GCP Technical Architecture and Automated Access

Global User Accessible API with Cloud Volumes for GCP

Cloud Volumes Service for Google Cloud – bringing high-performance file storage as a service to you

Lift and DON’T shift
Free up high-performance AFF storage space by automatically tiering infrequently used data to the cloud.

Any Cloud. One Experience.

The Route to Data is Now a Multi-Lane Super-Highway with ONTAP

Manage Your Data on the World's Biggest Clouds (NetApp Cloud Volumes Service (CVS))

A CEO Speaks: Why Azure NetApp Files Delivers Better Cloud Transformation

Cloud File Sharing: Backup and Archiving

Monitoring the Costs of Underutilized EBS Volumes

Get a First Look at Cloud Volumes ONTAP for Google Cloud (Webinar)

A Tour of NetApp Cloud Insights

Microsoft Announces Azure NetApp Files is Available

NetApp E-Series

Introducing Powerful New Analytics and Orchestration for E-Series:

Solution Brief: NetApp E-Series + Grafana: Performance Monitoring

eseries-perf-analyzer

NetApp E-Series Performance Analyzer

Grafana Handout

Solution Brief: Improve IT Automation with NetApp E-Series & Ansible

Ansible Gateway: nar_santricity_host

TR-4574: Deploying NetApp E-Series with Ansible

NetApp HCI

Disaggregated HCI Becomes a Thing
“IDC has announced a new ‘disaggregated’ subcategory of the HCI market in its most recent Worldwide Quarterly Converged Systems Tracker.  IDC is expanding the definition of HCI to include a disaggregated category with products that allow customers to scale in a non-linear fashion”

NetApp HCI Reference Architecture with Veeam Backup and Replication 9.5 Update 4

Image: NetApp HCI Reference Architecture with Veeam B&R

Element 11.3 and HCI 1.6 available on NSS (11 July):

- For Element 11.3 upgrades, use mNode 11.1 with the latest HealthTools from NSS.
- mNode 11.3 and management services require Element 11.3 on the storage cluster (refer to the Management Node User Guide).
- The NetApp HCI 1.6 compute node image now updates the firmware and the Bootstrap OS, leaving ESXi and configuration data intact. Use the Factory Reset option from the Compute TUI for reimaging.

Download Links:

Element Plug-in for vCenter Server 4.3: https://mysupport.netapp.com/products/p/epvcenter.html
Element 11.3 Postman collection on GitHub:  https://github.com/solidfire/postman

Documentation Links:

HCI Documentation Center: http://docs.netapp.com/hci/index.jsp
SolidFire Documentation Center: http://docs.netapp.com/sfe-113/index.jsp
Management Node User Guide (also available from the Doc center links above): https://library.netapp.com/ecm/ecm_download_file/ECMLP2858123
Firmware and driver versions for NetApp HCI and NetApp Element software:  https://kb.netapp.com/app/answers/answer_view/a_id/1088658
Detailed procedure with screenshots to update NetApp HCI compute node firmware and driver: https://kb.netapp.com/app/answers/answer_view/a_id/1088186
In-place upgrade procedure for existing management node 11.0 or 11.1 to management node 11.3 (without requiring a new VM deployment): https://kb.netapp.com/app/answers/answer_view/a_id/1088660

NetApp.io

From June to now:

Dealing with the Unexpected

Trident 19.07 is now GA

Running a Playbook Against Multiple ONTAP Clusters

Using On-Demand Snapshots with CSI Trident

All New CSI Trident!

Welcome to Trident 19.07 Alpha!

Extending Kubernetes to Manage Trident’s State

Simple Made Simpler – Ansible Roles for ONTAP Select

NetApp Tech ONTAP Podcast

From Episode 196 to now:

Episode 203: Intel and NetApp - Edge to Core to Cloud to VMworld 2019

Episode 202: TCP Performance Enhancements in ONTAP 9.6 - CUBIC TCP

Episode 201: NVIDIA, NetApp and AI

Episode 200 - Cloud, Mentorship and Comics with Kaslin Fields

VMworld 2019 Tech ONTAP Podcast Playlist

Episode 198: NetApp A-Team ETL 2019

Episode 197: NetApp Accelerates Genomics

Episode 196: Intel, NetApp and High-Performance Computing

NetApp TRs + NVAs + Solution Briefs + White Papers

New NetApp TRs since Tech Roundup – 23 June 2019:

TR-4793: NetApp ONTAP AI and OmniSci GPU-Accelerated Analytics Platform: ONTAP in an OmniSci Environment    

TR-4790: FlexPod Solution Delivery Guide                                                                    

TR-4789: VMware Configuration Guide for E-Series SANtricity iSCSI Integration with ESXi 6.X: Solution Design

TR-4788: Architecting I/O-Intensive MongoDB Databases on NetApp                                             

TR-4785: AI Deployment with NetApp E-Series and BeeGFS                                                      

TR-4760: NetApp for Oracle Database 18c: Solution Delivery Guide                                            

TR-4758: Microsoft SQL Server 2017 on NetApp ONTAP: Solution Delivery Guide                                 

NVAs:

NetApp HCI for Multicloud Data Protection with Cloud Volumes ONTAP

Solution Briefs:

SB-3997: AI on E-Series and BeeGFS

SB-3996: NetApp E-Series + Grafana: Performance Monitoring

SB-3995: Improve IT Automation with NetApp E-Series & Ansible

SB-3994: NetApp SANtricity Cloud Connector

SB-3831: NetApp SaaS Backup for Salesforce

White Papers:

NetApp HCI for DevOps with NetApp Kubernetes Service

Python

Recommended Python course:
Prerequisites: Python 3 and a decent editor.

Security

Using Windows FSRM to build a Killswitch for Ransomware
“I wanted to share a solution that uses resources already built into Windows”

Florida city gives in to $600,000 bitcoin ransomware demand

Storage Industry News

The Digitization of the World: From Edge to Core

Image: Tape is making a comeback!

The LTO Program Announces Fujifilm and Sony Are Now Both Licensees of Generation 8 Technology
“LTO Seeing Continued Relevance for Archive and Offline Long-Term storage”

Boeing’s 737 Max Software Outsourced to $9-an-Hour Engineers

Ubuntu

Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS
“Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.”

Veeam

IT Guide to Build Converged and Hyper-Converged Infrastructures (White Paper)

Veeam Agent for Microsoft Windows FREE

Hyper-V Virtual lab Tips and Tricks

A few articles that allow you to fully unleash the power of Hyper-V:

What’s new in Hyper-V 2016

How to configure Hyper-V virtual switch

Resource Metering in Hyper-V

Setting up Hyper-V Failover Cluster in Windows Server 2012 R2

12 things you should know about Hyper-V snapshots

Miscellaneous

NetApp ActiveIQ PAS:

You can deploy PAS on premises.

There are a few KBs on https://kb.netapp.com/; do a keyword search for PAS (log in first).

There’s also a 3rd-party site, and if you have questions you can post to the AIQ PAS Yammer or e-mail the NG on the download site.

Saturday, 27 July 2019

How to Restore SnapCenter database - SnapCenter Repository Backup Restore

Carrying on from the previous post - Disaster Recovery of SnapCenter Server with IP Address Change - restoring SnapCenter from a repository backup is straightforward.
Note: Using a SnapCenter 4.1 lab.

Restoring with MySQL and SnapManagerCoreService Up and Running


PS> Open-SmConnection -SMSbaseUrl https://SnapCtr.demo.corp.com:8146

cmdlet Open-SmConnection at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
Credential

PS> (Get-SmHost).HostName
DAG1.demo.corp.com
mb1.demo.corp.com
mb2.demo.corp.com
mb3.demo.corp.com
snapctr.demo.corp.com

PS> Get-SmRepositoryBackups

Backup Name
-----------
MYSQL_DS_SC_Repository_snapctr_07-26-2019_09.34.20.3512
MYSQL_DS_SC_Repository_snapctr_07-26-2019_10.35.02.3271
MYSQL_DS_SC_Repository_snapctr_07-26-2019_11.33.04.0570
MYSQL_DS_SC_Repository_snapctr_07-26-2019_12.33.03.8936

PS> Restore-SmRepositoryBackup -BackupName MYSQL_DS_SC_Repository_snapctr_07-26-2019_12.33.03.8936 -HostName SnapCtr.demo.corp.com

SnapCenter respository restored successfully!


Note: You don’t need to run Open-SmConnection and Get-SmHost before the repository backup cmdlets; I just did this to get the correct hostname.
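If you ever script this (say, feeding Restore-SmRepositoryBackup the newest name returned by Get-SmRepositoryBackups), the timestamp embedded in each backup name sorts chronologically once parsed. A minimal Python sketch; latest_backup is a hypothetical helper, not a SnapCenter cmdlet:

```python
# Hypothetical helper: pick the newest repository backup by parsing the
# "MM-dd-yyyy_HH.mm.ss.ffff" timestamp embedded at the end of each name.
from datetime import datetime

def latest_backup(names):
    def ts(name):
        stamp = "_".join(name.split("_")[-2:])  # e.g. "07-26-2019_12.33.03.8936"
        return datetime.strptime(stamp, "%m-%d-%Y_%H.%M.%S.%f")
    return max(names, key=ts)

backups = [
    "MYSQL_DS_SC_Repository_snapctr_07-26-2019_09.34.20.3512",
    "MYSQL_DS_SC_Repository_snapctr_07-26-2019_10.35.02.3271",
    "MYSQL_DS_SC_Repository_snapctr_07-26-2019_11.33.04.0570",
    "MYSQL_DS_SC_Repository_snapctr_07-26-2019_12.33.03.8936",
]
print(latest_backup(backups))  # -> the 12.33.03 backup
```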

One Slightly Odd Thing

One slightly odd thing is that when you restore, you lose the MySQL dmp from your backup location (perhaps this is to be expected).


PS> Restore-SmRepositoryBackup -BackupName MYSQL_DS_SC_Repository_snapctr_07-26-2019_12.33.03.8936 -HostName SnapCtr.demo.corp.com

Restore-SmRepositoryBackup: Backup files doesn't exist. Please provide a valid location that contains the SnapCenter backup files or specify RestoreFileSystem option to restore from the snapshot.

PS> Get-SmRepositoryBackups

Backup Name
-----------
MYSQL_DS_SC_Repository_snapctr_07-26-2019_09.34.20.3512
MYSQL_DS_SC_Repository_snapctr_07-26-2019_10.35.02.3271
MYSQL_DS_SC_Repository_snapctr_07-26-2019_11.33.04.0570
MYSQL_DS_SC_Repository_snapctr_07-26-2019_12.33.03.8936

PS> Restore-SmRepositoryBackup -BackupName MYSQL_DS_SC_Repository_snapctr_07-26-2019_11.33.04.0570 -HostName SnapCtr.demo.corp.com
SnapCenter respository restored successfully.


Image: After 2 restores, I’ve lost 2 dmps!

Q: How to Restore if the Database is Completely Gone?

If the database is completely gone and MySQL won’t start, what to do?

I’ve tried a few things but have not yet found the magic trick using DB restore CLI commands (MySQL or SnapCenter). The easiest way is to revert the VM and database to a known good date.

This link looked promising (though not for the SnapCenter database):

Or simply, with SnapManagerCoreService and MySQL57 stopped, replace these files with files from a known good backup -

C:\ProgramData\NetApp\SnapCenter\MySQL Data\Data

ib_buffer_pool
ib_logfile0
ib_logfile1
ibdata1

- restart MySQL57 and SnapManagerCoreService and, hey presto, it should all be back up and running again.
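The copy step above can be sketched in a few lines of Python. The paths assume a default SnapCenter install, and the helper name is mine; stop SnapManagerCoreService and MySQL57 first, then start them again afterwards:

```python
# Sketch of the file-level restore described above: copy the InnoDB system
# files from a known-good backup into MySQL's live data directory.
# Run only with SnapManagerCoreService and MySQL57 stopped.
import shutil
from pathlib import Path

IB_FILES = ["ib_buffer_pool", "ib_logfile0", "ib_logfile1", "ibdata1"]

def restore_ib_files(backup_dir, data_dir):
    """Copy each InnoDB system file from backup_dir into data_dir."""
    for name in IB_FILES:
        shutil.copy2(Path(backup_dir) / name, Path(data_dir) / name)

# Example (paths assumed, adjust to your environment):
# restore_ib_files(r"E:\KnownGoodBackup",
#                  r"C:\ProgramData\NetApp\SnapCenter\MySQL Data\Data")
```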

Image: Need to restore these ib files

Disaster Recovery of SnapCenter Server with IP Address Change

Assuming your SnapCenter server is replicated to a DR site, then recovery of SnapCenter including an IP change is very straightforward! (Even more straightforward if you have stretched VLAN and don’t need to re-ip.)

The SnapCenter IP address is not hardcoded anywhere, so once you’ve done the below, everything should just work!
- brought up the replicated SnapCenter in the DR site (not in this blog post)
- re-IP-ed SnapCenter
- updated DNS (make sure all the plugin hosts resolve SnapCenter to the correct IP)
- rebooted the SnapCenter server after re-IP

You would need to check that your backup jobs are not trying to back up servers that aren’t available (because of the DR situation). We’d only need to recover from a repository backup if there were something wrong with the database.

In the walkthrough below, we run through the steps and show that DR of SnapCenter with an IP change is nothing to be afraid of. We back up the repository, as it is best practice to do so, but don’t use the repository backup (something I’ll blog about later).

Note: SnapCenter version here is 4.1.

i: A little bit of setup:

Creating the SnapCenter repository backup LUN:


cluster1::>

volume create -vserver svm1 -volume SC_REPO_BKUP -size 10G -space-guarantee none -aggregate aggr1_cluster1_01

lun create -vserver svm1 -volume SC_REPO_BKUP -lun LUN101 -size 50G -ostype windows_2008 -space-reserve disabled -space-allocation enabled

igroup create -vserver svm1 -igroup snapctr -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:snapctr.demo.corp.com

lun map -vserver svm1 -volume SC_REPO_BKUP -lun LUN101 -igroup snapctr -lun-id 101


ii: Replicating the SnapCenter repository backup LUN:


cluster2::>

volume create -vserver svm2 -volume SC_REPO_BKUP_DR -size 10G -type DP -aggregate aggr1_cluster2_01

snapmirror create -source-path svm1:SC_REPO_BKUP -destination-path svm2:SC_REPO_BKUP_DR -type XDP -policy MirrorAllSnapshots -schedule hourly

snapmirror initialize -destination-path svm2:SC_REPO_BKUP_DR


iii: Configuring SnapCenter Repository Backup:


PS> Open-SmConnection -SMSbaseURL https://snapctr.demo.corp.com:8146

PS> Add-SmHost -HostName snapctr.demo.corp.com -OSType Windows -DiscoverPlugins -RunAsName SCAdmin

PS> Get-SmResources -HostName snapctr.demo.corp.com -PlugInCode SCW

Completed Discovering Resouces: Job Id [193]

HostResource: R:\
StorageSystemResource: svm1:/vol/SC_REPO_BKUP/LUN101

PS> Protect-SmRepository -HostName snapctr.demo.corp.com -Path R:\BACKUP -RetentionCount 24 -Schedule @{"ScheduleType"="hourly";"StartTime"="7/26/2019 8:33 AM"}

Successfully protected the SnapCenter respository.


0: Proof we have hosts with good status:

Image: SnapCenter host and DAG hosts with green statuses

1: Shut down SnapCenter services prior to Re-IP:


net stop SnapManagerCoreService

net stop MySQL57


2: Get the IP, change it (here we change from 192.168.0.75 to .175) and update DNS as required:


PS> (Get-NetIPAddress | Where-Object {$_.AddressFamily -eq "IPv4"}) | FT IPAddress,InterfaceAlias,InterfaceIndex -AutoSize

IPAddress    InterfaceAlias   InterfaceIndex
---------    --------------   --------------
192.168.0.75 DEMO             12

PS> New-NetIPAddress -InterfaceIndex 12 -IPAddress 192.168.0.175 -PrefixLength 24

PS> Remove-NetIPAddress -InterfaceIndex 12 -IPAddress 192.168.0.75 -PrefixLength 24


3: Restart SnapCenter server after re-IP

4: Verify that SnapCenter Plug-In Hosts DNS has updated:


Pinging snapctr.demo.corp.com [192.168.0.75] with 32 bytes of data:

Reply from 192.168.0.175: bytes=32 time<1ms TTL=128


5: Verify SnapCenter Plug-In hosts connectivity is good, and verify that backup jobs are running as expected (except for hosts caught up in the DR situation, which can’t be backed up)

APPENDIX: SnapCenter Config Files

The IP Address is not hardcoded into any of the below.

C:\Program Files\NetApp\SMCore
SMCoreServiceHost.exe.Config

C:\Program Files\NetApp\SnapCenter\SnapCenter Plug-in for Microsoft Exchange Server
SCEServiceHost.exe.config

C:\Program Files\NetApp\SnapCenter\SnapCenter Plug-in for Microsoft Windows
SnapDriveServices.exe.config
SnapDrive.Nsf.Common.Logging.dll.config
IConfiguration.config

C:\Program Files\NetApp\SnapCenter WebApp
packages.config
Web.config
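One illustrative way to double-check that claim yourself is to scan the config files for IPv4 literals before relying on a re-IP "just working". A minimal Python sketch (the function name and example paths are mine):

```python
# Illustrative check: flag any hardcoded IPv4 literal in a config file's text.
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def hardcoded_ips(text):
    """Return any IPv4 literals found in a config file's text."""
    return IPV4.findall(text)

# e.g. walk the install tree (path assumed from a default install):
# from pathlib import Path
# for cfg in Path(r"C:\Program Files\NetApp").rglob("*.config"):
#     print(cfg, hardcoded_ips(cfg.read_text(errors="ignore")))
print(hardcoded_ips('<add key="Uri" value="https://snapctr.demo.corp.com:8146"/>'))  # -> []
```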

Thursday, 25 July 2019

Making a SnapMirror Vault Tertiary Copy, Read-Write and Primary: Part 2 of 2: Clustershell Demonstration

Image: START and FINISH

How do we make Prod New the new production volume?

STEPS:

1) Modify Live Snapshot Policy with SnapMirror Labels for All Snapshots
2) Create new Live Snapshot Policy with extended retention
3) Modify Vault Policy with required labels for all SnapShots and Pre_Cutover label
4) Create new MirrorVault SnapMirror policy
5) Terminate client access: CIFS share delete, CIFS session check, volume unmount
6) Create Pre_Cutover Snapshot with Pre_Cutover label
7) Update DR, Break DR, verify DR has Snapshot Policy of none, Delete DR, Release DR
8) Update Vault, Delete Vault, Release Vault
9) Rename original Prod volume and offline
10) Update Mirror of Vault, break Mirror of Vault, set volume Snapshot policy, delete mirror of Vault, release mirror of Vault
11) Rename new Prod volume to original Prod volume name (remove _NEW)
12) Volume mount, CIFS share create, CIFS session check
13) Create DR with MirrorVault policy, resync DR
14) Verify SnapShot retention is working correctly
15) Delete Pre_Cutover Snapshot
16) Delete original Prod volume and SV volume

Clustershell Output:

This is going to be a bit long and my time is limited, so the Clustershell output comes with little explanation.

Volumes in play (their Snapshot Policy and Snapshot Count), the Live Snapshot Policy, the SnapMirrors (their type and policy), and the Vault SnapMirror policy:

cluster1::> volume show -volume VOL001* -fields snapshot-policy,snapshot-count
vserver volume snapshot-policy snapshot-count
------- ------ --------------- --------------
SVMA    VOL001 7_1min_8_7min   16
SVMA    VOL001_NEW none        104
SVMB    VOL001_DR none         17
SVMC    VOL001_SV none         104

cluster1::> snapshot policy show -policy  7_1min_8_7min
Vserver: cluster1
              Number of Is
Policy Name   Schedules Enabled Comment
------------- --------- ------- -------
7_1min_8_7min         2 true    -
    Schedule Count Prefix SnapMirror Label
    -------- ----- ------ ----------------
    1min         7 1min   1min
    7min         8 7min   7min

cluster1::> snapmirror show -fields type,policy,schedule
source-path destination-path type schedule policy
----------- ---------------- ---- -------- ------------------
SVMA:VOL001 SVMB:VOL001_DR   XDP  1min     MirrorAllSnapshots
SVMA:VOL001 SVMC:VOL001_SV   XDP  7min     104_7min
SVMC:VOL001_SV SVMA:VOL001_NEW XDP 5min    MirrorAllSnapshots

cluster1::> snapmirror policy show -policy 104_7min
Vserver  Policy   Policy Number         Transfer
Name     Name     Type   Of Rules Tries Priority Comment
-------  -------- ------ -------- ----- -------- -------
cluster1 104_7min vault         1     8  normal  -
  SnapMirror Label: 7min       Keep:     104
                         Total Keep:     104


1) Modify Live Snapshot Policy with SnapMirror Labels for All Snapshots:
(Don’t need to do as this is already so!)

2) Create new Live Snapshot Policy with extended retention:


cluster1::> snapshot policy create -policy 7_1min_104_7min -enabled true -schedule1 1min -count1 7 -snapmirror-label1 1min -schedule2 7min -count2 104 -snapmirror-label2 7min

cluster1::> snapshot policy show -policy 7_1min_104_7min
Vserver: cluster1
                Number of Is
Policy Name     Schedules Enabled Comment
--------------- --------- ------- -------
7_1min_104_7min         2 true    -
    Schedule Count Prefix SnapMirror Label
    -------- ----- ------ ----------------
    1min         7 1min   1min
    7min       104 7min   7min


3) Modify Vault Policy with required labels for all SnapShots and Pre_Cutover label:


cluster1::> snapmirror policy add-rule -policy 104_7min -snapmirror-label 1min -keep 7 -vserver cluster1

cluster1::> snapmirror policy add-rule -policy 104_7min -snapmirror-label pre_cutover -keep 1 -vserver cluster1

cluster1::> snapmirror policy show -policy 104_7min
Vserver Policy    Policy Number         Transfer
Name    Name      Type   Of Rules Tries Priority Comment
------- --------- ------ -------- ----- -------- -------
cluster1 104_7min vault         3     8  normal  -
  SnapMirror Label: 7min              Keep:     104
                    1min                          7
                    pre_cutover                   1
                                Total Keep:     112
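The Total Keep shown is just the sum of the per-label keep counts, and bounds how many snapshots the vault destination retains. Trivial arithmetic, but worth sanity-checking when you add rules:

```python
# Sketch: Total Keep for a vault policy = sum of per-label keep counts.
rules = {"7min": 104, "1min": 7, "pre_cutover": 1}
print(sum(rules.values()))  # 112
```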


4) Create new MirrorVault SnapMirror policy:


cluster1::> snapmirror policy create -policy 7_1min_8_7min -vserver cluster1 -type mirror-vault

cluster1::> snapmirror policy add-rule -policy 7_1min_8_7min -vserver cluster1 -snapmirror-label 1min -keep 7

cluster1::> snapmirror policy add-rule -policy 7_1min_8_7min -vserver cluster1 -snapmirror-label 7min -keep 8

cluster1::> snapmirror policy show -policy 7_1min_8_7min
Vserver Policy         Policy Number         Transfer
Name    Name           Type   Of Rules Tries Priority Comment
------- -------------- ------ -------- ----- -------- -------
cluster1 7_1min_8_7min mirror-vault  3     8  normal  -
  SnapMirror Label: sm_created       Keep:       1
                    1min                         7
                    7min                         8
                               Total Keep:      16


5) Terminate client access: CIFS share delete, CIFS session check, volume unmount:


cluster1::> cifs share show -volume VOL001 -instance

              Vserver: SVMA
                Share: VOL001
                 Path: /VOL001_CIFS_volume
     Share Properties: oplocks
                       browsable
                       changenotify
                       show-previous-versions
   Symlink Properties: symlinks
            Share ACL: Everyone / Full Control
          Volume Name: VOL001
        Offline Files: manual
Vscan File-Op Profile: standard

cluster1::> cifs share delete -vserver SVMA -share-name VOL001

cluster1::> vol unmount -vserver SVMA -volume VOL001


Image: Access to CIFS share is terminated!

6) Create Pre_Cutover Snapshot with Pre_Cutover label:

cluster1::> vol snapshot create -vserver SVMA -volume VOL001 -snapshot pre_cutover -snapmirror-label pre_cutover


7) Update DR, Break DR, verify DR has Snapshot Policy of none, Delete DR, Release DR:


cluster1::> snapmirror update -destination-path SVMB:VOL001_DR
Operation is queued: snapmirror update of destination "SVMB:VOL001_DR".

cluster1::> snapmirror show -destination-path SVMB:VOL001_DR -fields status
source-path destination-path status
----------- ---------------- ------
SVMA:VOL001 SVMB:VOL001_DR   Idle

cluster1::> snapmirror break -destination-path SVMB:VOL001_DR
Operation succeeded: snapmirror break for destination "SVMB:VOL001_DR".

cluster1::> snapmirror delete -destination-path SVMB:VOL001_DR
Operation succeeded: snapmirror delete for the relationship with destination "SVMB:VOL001_DR".

cluster1::> snapmirror release -source-path SVMA:VOL001 -destination-path SVMB:VOL001_DR
[Job 168] Job succeeded: SnapMirror Release Succeeded


8) Update Vault, Delete Vault, Release Vault:

cluster1::> snapmirror update -destination-path SVMC:VOL001_SV
Operation is queued: snapmirror update of destination "SVMC:VOL001_SV".

cluster1::> snapmirror show -destination-path SVMC:VOL001_SV -fields status
source-path destination-path status
----------- ---------------- ------
SVMA:VOL001 SVMC:VOL001_SV   Idle

cluster1::> snapmirror delete -destination-path SVMC:VOL001_SV
Operation succeeded: snapmirror delete for the relationship with destination "SVMC:VOL001_SV".

cluster1::> snapmirror release -source-path SVMA:VOL001 -destination-path SVMC:VOL001_SV
[Job 169] Job succeeded: SnapMirror Release Succeeded


9) Rename original Prod volume and offline:


cluster1::> volume rename -vserver SVMA -volume VOL001 -newname VOL001_OLD
[Job 170] Job succeeded: Successful
cluster1::> vol offline  -vserver SVMA -volume VOL001_OLD
Volume "SVMA:VOL001_OLD" is now offline.


10) Update Mirror of Vault, break Mirror of Vault, set volume Snapshot policy, delete mirror of Vault, release mirror of Vault:


cluster1::> snapmirror update -destination-path SVMA:VOL001_NEW
Operation is queued: snapmirror update of destination "SVMA:VOL001_NEW".

cluster1::> snapmirror show -destination-path SVMA:VOL001_NEW -fields status
source-path    destination-path status
-------------- ---------------- ------
SVMC:VOL001_SV SVMA:VOL001_NEW  Idle

cluster1::> snapmirror break -destination-path SVMA:VOL001_NEW
Operation succeeded: snapmirror break for destination "SVMA:VOL001_NEW".

cluster1::> volume modify -vserver SVMA -volume VOL001_NEW -snapshot-policy 7_1min_104_7min
Volume modify successful on volume VOL001_NEW of Vserver SVMA.

cluster1::> snapmirror delete -destination-path SVMA:VOL001_NEW
Operation succeeded: snapmirror delete for the relationship with destination "SVMA:VOL001_NEW".

cluster1::> snapmirror release -source-path SVMC:VOL001_SV -destination-path SVMA:VOL001_NEW
[Job 171] Job succeeded: SnapMirror Release Succeeded


11) Rename new Prod volume to original Prod volume name (remove _NEW)


cluster1::> vol rename -vserver SVMA -volume VOL001_NEW -newname VOL001
[Job 172] Job succeeded: Successful


12) Volume mount, CIFS share create, CIFS session check:


cluster1::> vol mount -vserver SVMA -volume VOL001 -junction-path /VOL001

cluster1::> cifs share create -vserver SVMA -share-name VOL001 -path /VOL001 -share-properties oplocks,browsable,changenotify,show-previous-versions -symlink-properties symlinks -offline-files manual -vscan-fileop-profile standard


Image: Access to CIFS share is restored!

13) Create DR with MirrorVault policy, resync DR:


cluster1::> snapmirror create -source-path SVMA:VOL001 -destination-path SVMB:VOL001_DR -type XDP -policy 7_1min_8_7min -schedule 1min
Operation succeeded: snapmirror create for the relationship with destination "SVMB:VOL001_DR".

cluster1::> snapmirror resync -destination-path SVMB:VOL001_DR

Warning: All data newer than Snapshot copy 7min.2019-07-25_1021 on volume SVMB:VOL001_DR will be deleted.
Do you want to continue? {y|n}: y
Operation is queued: initiate snapmirror resync to destination "SVMB:VOL001_DR".

cluster1::> snapmirror show -destination-path SVMB:VOL001_DR -field state,status,health
source-path destination-path state        status healthy
----------- ---------------- ------------ ------ -------
SVMA:VOL001 SVMB:VOL001_DR   Snapmirrored Idle   true


14) Verify SnapShot retention is working correctly:


cluster1::> snapshot show -volume VOL001 -snapshot 1min.*
...
7 entries were displayed.

cluster1::> snapshot show -volume VOL001 -snapshot 7min.*
...
104 entries were displayed.

cluster1::> snapshot show -volume VOL001_DR -snapshot 1min.*
...
7 entries were displayed.

cluster1::> snapshot show -volume VOL001_DR -snapshot 7min.*
...
8 entries were displayed.
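The retention behaviour being verified here can be modelled simply: each label keeps only its newest N snapshots. A rough Python sketch of that pruning logic (my own model, not how ONTAP implements it):

```python
# Model per-label snapshot retention: keep only the newest N snapshots
# per SnapMirror label, as the 7_1min_104_7min policy does on the live volume.
from collections import defaultdict

def prune(snapshots, keep):
    """snapshots: list of (label, timestamp); keep: {label: newest N to retain}."""
    by_label = defaultdict(list)
    for label, ts in snapshots:
        by_label[label].append(ts)
    kept = {}
    for label, stamps in by_label.items():
        n = keep.get(label, 0)
        kept[label] = sorted(stamps)[-n:] if n else []
    return {label: len(v) for label, v in kept.items()}

# 200 snapshots of each label exist; the policy trims them to 7 and 104.
snaps = [("1min", t) for t in range(200)] + [("7min", t) for t in range(200)]
print(prune(snaps, {"1min": 7, "7min": 104}))  # {'1min': 7, '7min': 104}
```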


15) Delete Pre_Cutover Snapshot:


cluster1::> snapshot delete -vserver SVMA -volume VOL001 -snapshot pre_cutover

cluster1::> snapshot delete -vserver SVMB -volume VOL001_DR -snapshot pre_cutover


16) Delete original Prod volume and SV volume:


cluster1::> volume delete -vserver SVMA -volume VOL001_OLD
 [Job 174] Job succeeded: Successful

cluster1::> volume offline -vserver SVMC -volume VOL001_SV
Volume "SVMC:VOL001_SV" is now offline.

cluster1::> volume delete -vserver SVMC -volume VOL001_SV
 [Job 176] Job succeeded: Successful


THE END!

Final check:


cluster1::> volume show -volume VOL001* -fields snapshot-policy,snapshot-count
vserver volume snapshot-policy snapshot-count
------- ------ --------------- --------------
SVMA    VOL001 7_1min_104_7min 112
SVMB    VOL001_DR none         17
2 entries were displayed.

cluster1::> snapmirror show -fields type,policy,schedule
source-path destination-path type schedule policy
----------- ---------------- ---- -------- -------------
SVMA:VOL001 SVMB:VOL001_DR   XDP  1min     7_1min_8_7min