Wednesday, 9 September 2015

Ping All LIFs and SPs on my Cluster Tool

An update to the Ping All LIFs on My Cluster Tool, now including Service Processors too (it will still work against SIMs that don’t have SPs)!

Image: Example run against a test physical 2-node cluster
Copy and paste the script below into a text file and save it as, say, PingAll.ps1. Then run it as:

PS C:\> .\PingAll.ps1 -Cluster {CLUSTERNAME} -Username {USERNAME}

The Script

#########################
# Ping All LIFs and SPs #
#########################

Param(
  [Parameter(Mandatory=$true)][String]$Cluster,
  [Parameter(Mandatory=$true)][String]$Username,
  [Int]$w = 80
)

FUNCTION Wr {
  Param([String]$ToDisplay,[String]$ForegroundColor,[String]$BackgroundColor)
  If(!$ToDisplay){ Write-Host; RETURN    }
  If($BackgroundColor){ Write-Host $ToDisplay -ForegroundColor $ForegroundColor -BackgroundColor $BackgroundColor -NoNewLine; RETURN }
  If($ForegroundColor){ Write-Host $ToDisplay -ForegroundColor $ForegroundColor -NoNewLine; RETURN }
  Write-Host $ToDisplay -ForegroundColor White 
}

## LOAD THE DATA ONTAP POWERSHELL TOOLKIT ##

[Void](Import-Module DataONTAP -ErrorAction SilentlyContinue)
If(!(Get-Module DataONTAP)){ Wr "Unable to load the DataONTAP PowerShell Toolkit - exiting!" Red; Wr; EXIT }
Wr "Loaded the Data ONTAP PowerShell Toolkit!" Green; Wr

## GET CREDENTIALS ##

Wr "Password: " Cyan; $Password = Read-Host -AsSecureString
$SecureString = $Password | ConvertFrom-SecureString
$Credential = New-Object System.Management.Automation.PsCredential($Username,$Password)

## TEST CREDENTIALS AND ACQUIRE LIFS ##

Wr "Checking connection to " Cyan; Wr $Cluster Yellow; Wr " ..." Cyan; Wr
$Connect = Connect-NcController -Name $Cluster -Credential $Credential -Timeout 20000 -ErrorAction SilentlyContinue
If($Connect){ Wr "Successfully connected to " Green; Wr $Cluster Yellow; Wr }
else { Wr "Unable to connect to " Red; Wr $Cluster Yellow; Wr " with provided credentials - exiting!" Red; Wr; EXIT }
      
$Attrs = Get-NcNetInterface -Template
$Attrs.role = ""
$Attrs.address = ""
$Attrs.interfacename = ""
$LIFs = Get-NcNetInterface -Attributes $Attrs

[System.Object]$targets = @{}
[Int]$count = 0
$LIFs | Foreach{
  If ($_.role -ne "cluster"){
    $count++
    [System.Object]$targets.$count = @{}
    [String]$targets.$count.address = $_.address
    [String]$targets.$count.interfacename = $_.interfacename
  }
}     

## >> START OF GET SP SECTION << ##

Function GetNcSpIPs {
  $AttrsNode = Get-NcNode -Template
  $Nodes = (Get-NcNode -Attributes $AttrsNode).Node   
  $AttrsSP = Get-NcServiceProcessorNetwork -Template
  $AttrsSP.IpAddress = "" 
  [System.Array]$NodesAndSpIPs = @()
  $Nodes | Foreach {
    $GetNcSp = Get-NcServiceProcessorNetwork -Node $_ -AddressType ipv4 -Attributes $AttrsSP -ErrorAction SilentlyContinue
    If($GetNcSp.IpAddress){
      $NodesAndSpIPs += $_
      $NodesAndSpIPs += $GetNcSp.IpAddress     
    }
  }
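  # The leading comma below returns the array as a single object
  # (it stops PowerShell unrolling the array when the function returns)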
  ,$NodesAndSpIPs
}

[System.Array]$TargetsWithSPs = GetNcSpIPs
For($i = 0; $i -lt $TargetsWithSPs.Count; $i += 2){
  $count++
  [System.Object]$targets.$count = @{}
  [String]$targets.$count.interfacename = ($TargetsWithSPs[$i] + " (SP)")
  [String]$targets.$count.address = $TargetsWithSPs[$i+1]
}

## >> END OF GET SP SECTION << ##

If($count -eq 0){ Wr "No targets to ping - exiting!" Red; Wr; EXIT }

## PING ALL NON-CLUSTER LIFS ##

[System.Array]$status = "" # $status[0] is unused
For($i = 1; $i -le $count; $i++){ $status += "Unitialized" }

Function PrintStatus{
  Param([Int16]$pointer)
  If ($pointer -eq $count){ $pointer = 1}
  else { $pointer ++ }    
  cls
  For($j = 1; $j -le $count; $j++){
    If ($pointer -eq $j){ [String]$Display = " * " } else { [String]$Display = "   " }
    $Display += $Cluster + " : " + $targets.$j.interfacename + " : " + $targets.$j.address
    If ($status[$j] -eq "UP"){ Wr ($Display.PadRight($w).Substring(0,$w)) BLACK GREEN; Wr }
    elseif ($status[$j] -eq "DOWN"){ Wr ($Display.PadRight($w).Substring(0,$w)) WHITE RED; Wr }
    else { Wr ($Display.PadRight($w).Substring(0,$w)) BLACK GRAY ; Wr }
  }
}

Function GetResult{
  For($i = 1; $i -le $count; $i++){
    [Boolean]$Result = Test-Connection -ComputerName $targets.$i.address -Count 1 -Quiet
    If($Result){
      Start-Sleep -Milliseconds 100
      $status[$i] = "UP"
    } else {
      $status[$i] = "DOWN"
    }
    [Void](PrintStatus $i)
  }
}

While ($true){ [Void](GetResult) }

Sunday, 6 September 2015

cDOT Datacenter Relocation Experiment: Part 2/2 “The Clustershell”

The following takes us step by step through the entire scenario detailed in Part 1, focusing on one volume called AVOL01. For this lab I’m essentially interested in verifying that the replication strategy is sound, so no attention is paid to data-serving protocols here. Note that my Vservers are called CLUSTERNAME-V1. Cluster peers and Vserver peers for the initial setup already exist.

1) Setup

First, we create our test volume, AVOL01, replicate it across the four clusters, and create a few test Snapshots to verify that everything is working as intended.

CLUA1::> vol create -vserver CLUA1-V1 -volume AVOL01 -aggregate N1_aggr1 -size 5g -junction-path /AVOL01 -space-guarantee none -snapshot-policy default
CLUAV::> vol create -vserver CLUAV-V1 -volume AVOL01 -aggregate N1_aggr1 -size 5g -space-guarantee none -snapshot-policy none -type DP
CLUB1::> vol create -vserver CLUB1-V1 -volume AVOL01 -aggregate N1_aggr1 -size 5g -space-guarantee none -snapshot-policy none -type DP
CLUBV::> vol create -vserver CLUBV-V1 -volume AVOL01 -aggregate N1_aggr1 -size 5g -space-guarantee none -snapshot-policy none -type DP

CLUAV::> snapmirror create -source-path CLUA1-V1:AVOL01 -destination-path CLUAV-V1:AVOL01 -type XDP -schedule daily -policy XDPDefault
CLUAV::> snapmirror initialize -destination-path CLUAV-V1:AVOL01
CLUB1::> snapmirror create -source-path CLUA1-V1:AVOL01 -destination-path CLUB1-V1:AVOL01 -type DP -schedule hourly
CLUB1::> snapmirror initialize -destination-path CLUB1-V1:AVOL01
CLUBV::> snapmirror create -source-path CLUAV-V1:AVOL01 -destination-path CLUBV-V1:AVOL01 -type DP -schedule daily
CLUBV::> snapmirror initialize -destination-path CLUBV-V1:AVOL01

CLUA1::> vol snapshot create -vserver CLUA1-V1 -volume AVOL01 -snapshot daily.2015-08-01 -snapmirror-label daily

... create a month's worth of dailies ...

CLUA1::> vol snapshot create -vserver CLUA1-V1 -volume AVOL01 -snapshot weekly.2015-08-01 -snapmirror-label weekly

... create a month's worth of weeklies ...
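Rather than typing a month of these out by hand, a quick PowerShell loop can generate the commands to paste into the clustershell. This is just a rough sketch - the dates and labels are purely illustrative:

# Generate a month's worth of daily snapshot create commands (August 2015 is
# used only as an example). Paste the output into the CLUA1 clustershell, or
# run it via the toolkit's Invoke-NcSsh if connected with Connect-NcController.
1..31 | ForEach-Object {
  $Day = "2015-08-{0:D2}" -f $_
  "vol snapshot create -vserver CLUA1-V1 -volume AVOL01 -snapshot daily.$Day -snapmirror-label daily"
}
# The weeklies follow the same pattern with a weekly. prefix and -snapmirror-label weekly.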

CLUB1::> snapmirror update -destination-path CLUB1-V1:AVOL01
CLUAV::> snapmirror update -destination-path CLUAV-V1:AVOL01
CLUBV::> snapmirror update -destination-path CLUBV-V1:AVOL01

CLUB1::> snapshot show -vserver CLUB1-V1 -volume AVOL01
CLUAV::> snapshot show -vserver CLUAV-V1 -volume AVOL01
CLUBV::> snapshot show -vserver CLUBV-V1 -volume AVOL01

2) Replicating to the New Site

Here we create cluster peers and Vserver peers to the new site, and set up the SnapMirror relationship.

CLUA1::> cluster peer create -peer-addrs 10.10.10.243
CLUC1::> cluster peer create -peer-addrs 10.10.10.121
CLUA1::> vserver peer create -vserver CLUA1-V1 -peer-vserver CLUC1-V1 -peer-cluster CLUC1 -applications snapmirror
CLUC1::> vserver peer accept -vserver CLUC1-V1 -peer-vserver CLUA1-V1
CLUC1::> vol create -vserver CLUC1-V1 -volume AVOL01 -aggregate N1_aggr1 -size 5g -space-guarantee none -snapshot-policy none -type DP
CLUC1::> snapmirror create -source-path CLUA1-V1:AVOL01 -destination-path CLUC1-V1:AVOL01 -type DP -schedule hourly
CLUC1::> snapmirror initialize -destination-path CLUC1-V1:AVOL01

3) Cutting Over to the New Primary Site and Removing the Original Primary from Peers

Create cluster and Vserver peers between the DR cluster (CLUB1) and the SnapVault cluster (CLUAV) on one side, and the new primary cluster (CLUC1) and its Vserver (CLUC1-V1) on the other.

CLUB1::> cluster peer create -peer-addrs 10.10.10.243
CLUC1::> cluster peer create -peer-addrs 10.10.10.221
CLUB1::> vserver peer create -vserver CLUB1-V1 -peer-vserver CLUC1-V1 -peer-cluster CLUC1 -applications snapmirror
CLUC1::> vserver peer accept -vserver CLUC1-V1 -peer-vserver CLUB1-V1

CLUAV::> cluster peer create -peer-addrs 10.10.10.243
CLUC1::> cluster peer create -peer-addrs 10.10.10.126
CLUAV::> vserver peer create -vserver CLUAV-V1 -peer-vserver CLUC1-V1 -peer-cluster CLUC1 -applications snapmirror
CLUC1::> vserver peer accept -vserver CLUC1-V1 -peer-vserver CLUAV-V1

Stop access to data on CLUA1!!!

Update the DR mirror, then break and delete it.
Update the SnapMirror to the new Primary/Production volume, then break and delete it.

CLUB1::> snapmirror update -destination-path CLUB1-V1:AVOL01
CLUB1::> snapmirror show -destination-path CLUB1-V1:AVOL01 -fields status,healthy
CLUB1::> snapmirror break  -destination-path CLUB1-V1:AVOL01
CLUB1::> snapmirror delete -destination-path CLUB1-V1:AVOL01

CLUC1::> snapmirror update -destination-path CLUC1-V1:AVOL01
CLUC1::> snapmirror show -destination-path CLUC1-V1:AVOL01 -fields status,healthy
CLUC1::> snapmirror break  -destination-path CLUC1-V1:AVOL01
CLUC1::> snapmirror delete -destination-path CLUC1-V1:AVOL01

Create a new DP Mirror to DR and resync.
Create a new XDP Mirror to the Vault and resync.

CLUB1::> snapmirror create -source-path CLUC1-V1:AVOL01 -destination-path CLUB1-V1:AVOL01 -type DP -schedule hourly
CLUB1::> snapmirror resync -destination-path CLUB1-V1:AVOL01 -type DP
CLUB1::> snapmirror show -destination-path CLUB1-V1:AVOL01 -fields status,healthy

CLUAV::> snapmirror delete -destination-path CLUAV-V1:AVOL01
CLUAV::> snapmirror create -source-path CLUC1-V1:AVOL01 -destination-path CLUAV-V1:AVOL01 -type XDP -schedule daily -policy XDPDefault
CLUAV::> snapmirror resync -destination-path CLUAV-V1:AVOL01
CLUAV::> snapmirror show -destination-path CLUAV-V1:AVOL01 -fields status,healthy

Create a test snapshot on the new primary volume and replicate it to verify that the replications are good.

CLUC1::> vol snapshot create -vserver CLUC1-V1 -volume AVOL01 -snapshot daily.TEST -snapmirror-label daily

CLUB1::> snapmirror update -destination-path CLUB1-V1:AVOL01
CLUB1::> snapmirror show -destination-path CLUB1-V1:AVOL01 -fields status,healthy
CLUB1::> vol snapshot show -vserver CLUB1-V1 -volume AVOL01 -snapshot daily.TEST -fields snapshot

CLUAV::> snapmirror update -destination-path CLUAV-V1:AVOL01
CLUAV::> snapmirror show -destination-path CLUAV-V1:AVOL01 -fields status,healthy
CLUAV::> vol snapshot show -vserver CLUAV-V1 -volume AVOL01 -snapshot daily.TEST -fields snapshot

CLUBV::> snapmirror update -destination-path CLUBV-V1:AVOL01
CLUBV::> snapmirror show -destination-path CLUBV-V1:AVOL01 -fields status,healthy
CLUBV::> vol snapshot show -vserver CLUBV-V1 -volume AVOL01 -snapshot daily.TEST -fields snapshot

Release mirrors from the old primary, and delete peers from the old primary.

CLUA1::> snapmirror release -source-path CLUA1-V1:AVOL01 -destination-path CLUAV-V1:AVOL01
CLUA1::> snapmirror release -source-path CLUA1-V1:AVOL01 -destination-path CLUB1-V1:AVOL01
CLUA1::> snapmirror release -source-path CLUA1-V1:AVOL01 -destination-path CLUC1-V1:AVOL01

CLUA1::> vserver peer delete -vserver CLUA1-V1 -peer-vserver CLUAV-V1
CLUA1::> vserver peer delete -vserver CLUA1-V1 -peer-vserver CLUB1-V1
CLUA1::> vserver peer delete -vserver CLUA1-V1 -peer-vserver CLUC1-V1

CLUA1::> cluster peer delete -cluster CLUAV
CLUA1::> cluster peer delete -cluster CLUB1
CLUA1::> cluster peer delete -cluster CLUC1

CLUAV::> cluster peer delete -cluster CLUA1
CLUB1::> cluster peer delete -cluster CLUA1
CLUC1::> cluster peer delete -cluster CLUA1

4) Bringing the SnapVault Across and Renaming the SnapVault Cluster

Since the SnapVault cluster and Vserver are going to be renamed to follow the new site's naming convention, we must delete and release all mirrors to/from the vault, and delete all peers to/from the vault. Then we rename the cluster and Vserver.

CLUAV::> snapmirror delete -destination-path CLUAV-V1:AVOL01
CLUC1::> snapmirror release -source-path CLUC1-V1:AVOL01 -destination-path CLUAV-V1:AVOL01
CLUBV::> snapmirror delete -destination-path CLUBV-V1:AVOL01
CLUAV::> snapmirror release -source-path CLUAV-V1:AVOL01 -destination-path CLUBV-V1:AVOL01

CLUC1::> vserver peer delete -vserver CLUC1-V1 -peer-vserver CLUAV-V1
CLUAV::> vserver peer delete -vserver CLUAV-V1 -peer-vserver CLUBV-V1

CLUC1::> cluster peer delete -cluster CLUAV
CLUAV::> cluster peer delete -cluster CLUC1
CLUAV::> cluster peer delete -cluster CLUBV
CLUBV::> cluster peer delete -cluster CLUAV

CLUAV::> hostname CLUCV

N.B.: The prompt changes from CLUAV to CLUCV.

CLUCV::> vserver rename -vserver CLUAV-V1 -newname CLUCV-V1

Create new peers.

CLUC1::> cluster peer create -peer-addrs 10.10.10.126
CLUCV::> cluster peer create -peer-addrs 10.10.10.243
CLUCV::> cluster peer create -peer-addrs 10.10.10.227
CLUBV::> cluster peer create -peer-addrs 10.10.10.126

CLUC1::> vserver peer create -vserver CLUC1-V1 -peer-vserver CLUCV-V1 -peer-cluster CLUCV -applications snapmirror
CLUCV::> vserver peer accept -vserver CLUCV-V1 -peer-vserver CLUC1-V1
CLUCV::> vserver peer create -vserver CLUCV-V1 -peer-vserver CLUBV-V1 -peer-cluster CLUBV -applications snapmirror
CLUBV::> vserver peer accept -vserver CLUBV-V1 -peer-vserver CLUCV-V1

Create new SnapMirrors/SnapVaults.

CLUCV::> snapmirror create -source-path CLUC1-V1:AVOL01 -destination-path CLUCV-V1:AVOL01 -type XDP -schedule daily -policy XDPDefault
CLUCV::> snapmirror resync -destination-path CLUCV-V1:AVOL01
CLUCV::> snapmirror show -destination-path CLUCV-V1:AVOL01 -fields status,healthy

CLUBV::> snapmirror create -source-path CLUCV-V1:AVOL01 -destination-path CLUBV-V1:AVOL01 -type DP -schedule daily
CLUBV::> snapmirror resync -destination-path CLUBV-V1:AVOL01
CLUBV::> snapmirror show -destination-path CLUBV-V1:AVOL01 -fields status,healthy

And test.

CLUC1::> vol snapshot create -vserver CLUC1-V1 -volume AVOL01 -snapshot daily.TEST2 -snapmirror-label daily

CLUB1::> snapmirror update -destination-path CLUB1-V1:AVOL01
CLUB1::> snapmirror show -destination-path CLUB1-V1:AVOL01 -fields status,healthy
CLUB1::> vol snapshot show -vserver CLUB1-V1 -volume AVOL01 -snapshot daily.TEST2 -fields snapshot

CLUCV::> snapmirror update -destination-path CLUCV-V1:AVOL01
CLUCV::> snapmirror show -destination-path CLUCV-V1:AVOL01 -fields status,healthy
CLUCV::> vol snapshot show -vserver CLUCV-V1 -volume AVOL01 -snapshot daily.TEST2 -fields snapshot

CLUBV::> snapmirror update -destination-path CLUBV-V1:AVOL01
CLUBV::> snapmirror show -destination-path CLUBV-V1:AVOL01 -fields status,healthy
CLUBV::> vol snapshot show -vserver CLUBV-V1 -volume AVOL01 -snapshot daily.TEST2 -fields snapshot

cDOT Datacenter Relocation Experiment: Part 1/2 “The Scenario”

It’s the weekend, and so an excellent time to play in my lab and test something. The awesome thing about NetApp’s software is how much power and flexibility it gives you. I had every confidence the below was going to work, but it’s nice to get a chance to try it out. The Clustershell commands in Part 2 were run against cDOT 8.2.3.

Scenario

1) The Starting Point

We start with two sites - Site A and Site B - and two cDOT clusters per site, with replication as below.

Image: The Starting Point

2) Setting up Baselines to a New Cluster in a New Site

We have a new site we want to move to. A new cluster is stood up in Site C, and we set up baseline SnapMirrors from the primary cluster (CLUA1 below) to our new cluster (CLUC1).

Image: SnapMirror-ing to a New Site

3) Cutting Over the Production Cluster

On a scheduled day, we break the SnapMirrors, cut over data access to the new site, and manipulate the replications as below.

Image: SnapMirror and SnapVault now from CLUC1

4) Lift and Shift of the SnapVault

Finally, we have to bring the SnapVault cluster to our new site, but to complicate matters the SnapVault has to be renamed to follow the new site's (Site C) naming convention.

Image: Moving the SnapVault to its New Site and Renaming

Wednesday, 2 September 2015

Manually Obtaining System Configuration Backups over the SPI

If you’re about to upgrade a single-node cDOT cluster, or perhaps your “system configuration backup upload” has stopped working for whatever reason (perhaps you need to schedule a mgwd restart or - better - a node reboot), and you want to obtain the latest System Configuration Backup, keep reading!

N.B.: Here we have a single-node cluster (running 8.2.2P1) called CLU01, with a node called CLU01N1.

Via the Clustershell ::>

set adv
system configuration backup create -node CLU01N1 -backup-type cluster -backup-name SCBACKUP
job show -name "CLUSTER BACKUP ONDEMAND"
system configuration backup show -backup SCBACKUP.7z

N.B.: Wait for the job to finish. Once the backup is complete, “system configuration backup show” will display it.
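If you'd rather script the wait, here's a rough sketch using the Data ONTAP PowerShell Toolkit. It assumes you've already connected to the cluster with Connect-NcController, and the JobName/JobState property names are what I see on the toolkit's job objects - check Get-NcJob | Get-Member if your version differs.

# Poll until the on-demand cluster backup job is no longer queued or running.
Do {
  Start-Sleep -Seconds 10
  $BackupJob = Get-NcJob | Where-Object { $_.JobName -eq "CLUSTER BACKUP ONDEMAND" } | Select-Object -Last 1
} While ($BackupJob -and $BackupJob.JobState -notmatch "success|failure")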

Via the Clustershell ::*>

security login password -username diag
security login unlock -username diag
systemshell

Via the Systemshell %

ls /mroot/etc/backups/config
cp /mroot/etc/backups/config/SCBACKUP.7z /mroot/etc/log/SCBACKUP.7z

Connect to the SPI at https://CLU01/spi

Images: Clustered Data ONTAP SPI (Service Processor Infrastructure)
Click on the logs link
Click on the backup file - SCBACKUP.7z - to download it
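If you'd prefer to script the download instead of clicking through, something along these lines should work from a Windows admin host. Note the URL path is an assumption - browse https://CLU01/spi first to confirm where the node's /etc/log directory is exposed - and the certificate workaround is only there because cluster management LIFs typically present a self-signed certificate.

# Relax certificate validation for this PowerShell session only (self-signed cert).
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
# Prompt for cluster admin credentials and pull the backup file over HTTPS.
# The /spi/CLU01N1/etc/log path is an assumption - adjust to what the SPI page shows.
$Cred = Get-Credential -Message "Cluster admin credentials"
Invoke-WebRequest -Uri "https://CLU01/spi/CLU01N1/etc/log/SCBACKUP.7z" -Credential $Cred -OutFile "C:\Temp\SCBACKUP.7z"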

Via the Systemshell %

rm /mroot/etc/log/SCBACKUP.7z
exit

N.B.: This removes the copied file from /mroot/etc/log.

Via the Clustershell ::*>

system configuration backup delete -node CLU01N1 -backup SCBACKUP.7z
system configuration backup show
set adm

N.B.: This deletes the system configuration backup file (from /mroot/etc/backups/config).

THE END!