Sunday, 29 May 2016

Moving Volumes Between SVMs post Transition

Note 1: Version of cDOT is 8.3.2 GA.
Note 2: Version of 7MTT is 2.3.1.

NetApp Library covers this very well in the following article:

There was something I wanted to see though (to be revealed below - and I wanted to have a play anyway), hence I ran it through a simple lab.

Lab Setup

We have a 7-Mode system called FAS101.
We are going to create a volume called VOL001 that we want to transition to cDOT.
The 7-Mode volume is going to be transitioned to cDOT cluster clu1, and SVM1.
Post transition we’ll rehost the volume to SVM2.

SVM1 and SVM2 already exist, and transitions have already been carried out from FAS101 to SVM1, so all we need to do is create a test volume with some test data, and transition it:

On FAS101>


vol create VOL001 -l en -s none aggr0 2g
cifs shares -add VOL001 /vol/VOL001


Using 7MTT CLI>


transition credentials add -h 10.0.1.50 -u root
transition credentials add -h 10.0.1.100 -u admin
transition create -s SESSION1 -t standalone -n 10.0.1.50 -c 10.0.1.50 -h 10.0.1.100 -v SVM1
transition volumepair add -s SESSION1 -v VOL001 -c VOL001 -g clu1n2_sata_aggr1
transition schedule add -s SESSION1 -n SCHED1 -d 0-6 -b 00:00 -e 24:00 -t 25 -u 16:00
transition show -s SESSION1 -r no
transition precheck -s SESSION1 -r no
transition start -s SESSION1 -n -r no
transition precutover -s SESSION1 -m no_test -r no
transition cutover -s SESSION1 -r no
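
Once the cutover completes, a quick sanity check from the clustershell that the volume landed where expected (the fields below are just the ones I care about here):

On clu1::>

vol show -vserver SVM1 -volume VOL001 -fields state,junction-path,aggregate
vserver cifs share show -vserver SVM1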


Re-Hosting the Volume - Full Output

Note: Since this was a NAS volume, I need to unmount first, otherwise the rehost will fail.

On clu1::>


set advanced
volume unmount -volume VOL001 -vserver SVM1
volume rehost -vserver SVM1 -volume VOL001 -destination-vserver SVM2

Warning: Rehosting a volume from one Vserver to another Vserver does not change the security information on that volume. If the security domains of the Vservers are not identical, unwanted access might be permitted, and desired access might be denied. An attempt to rehost a volume will disassociate the volume from all volume policies and policy rules. The volume must be reconfigured after a successful or unsuccessful rehost operation. See the "7-Mode Transition Tool 2.2 Copy-Free Transition Guide" for reconfiguration details.
Do you want to continue? y
[Job 124] Job succeeded: Successful

Info: Volume is successfully rehosted on the target Vserver.
Set the desired volume configuration - such as the export policy and QoS policy - on the target Vserver.
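
Since the volume was unmounted before the rehost, and shares and export policies do not follow the volume to the new SVM, it needs remounting and re-sharing on SVM2. A minimal sketch, assuming SVM2 already has a CIFS server and we want the same junction path and share name as before:

volume mount -vserver SVM2 -volume VOL001 -junction-path /VOL001
vserver cifs share create -vserver SVM2 -share-name VOL001 -path /VOL001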


What I was Looking For

What I haven't shown above is the output of running the command below (in diag mode) -


::> vol show -volume VOL001 -instance


- before and after the rehost operation. I was looking to see if any of these details changed post rehost (basically anything with the word “transition” in it):

Transition Operation State: none
Transition Behaviour: none
Copied for Transition: true
Transition: true

The answer is that nothing changed, which got me thinking...

Q: Can I move the volume back again? YES


volume rehost -vserver SVM2 -volume VOL001 -destination-vserver SVM1


Q: Can I move it to another SVM (say I picked the wrong SVM in the first place)? YES


volume rehost -vserver SVM1 -volume VOL001 -destination-vserver SVM2
volume rehost -vserver SVM2 -volume VOL001 -destination-vserver clu1-svm1


Q: Can I rehost transitioned volumes ad infinitum? No idea - I’ll have to ask someone.

Saturday, 28 May 2016

Maintaining NETBIOS Names in 7 to C Transitions

This is very simple!

Note 1: It’s not the place of this blog to question why one may want to keep NETBIOS names.
Note 2: Version of Clustered Data ONTAP is 8.3.2.

Scenario 1: Maintaining NETBIOS in a 1-1 Transition

We’re transitioning one 7-Mode pFiler/vFiler to one cDOT SVM. Post cutover of the 7-Mode filer, we want to maintain the old NETBIOS name. The 7-Mode system is called FAS101 and has a CIFS server called FAS101. The cDOT cluster is clu1, and an SVM called FAS101 already exists with a CIFS server called FAS101-NEW.

Post cutover, terminate CIFS on the 7-Mode FAS101.


FAS101> cifs terminate


Terminate CIFS on the cDOT SVM. Enter diag mode and modify the CIFS server’s NETBIOS name (this requires a domain administrative account).


clu1::> cifs server modify -vserver FAS101 -status-admin down
clu1::> set d
clu1::*> cifs server modify -vserver FAS101 -cifs-server FAS101

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the "LAB.PRIV" domain.

Enter the user name: administrator
Enter the password:

Warning: An account by this name already exists in Active Directory at CN=FAS101,CN=Computers,DC=lab,DC=priv
Ok to reuse this account? {y|n}: y

That’s it!

Note: The CIFS server is automatically set status-admin up after the rename.
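
To confirm the rename took (and that the CIFS server came back up), something like:

clu1::*> cifs server show -vserver FAS101 -fields cifs-server,domain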

Roll back?

If we had to roll back, on the cluster we rename the CIFS server back to what it was:


clu1::*> cifs server modify -vserver FAS101 -status-admin down
clu1::*> cifs server modify -vserver FAS101 -cifs-server FAS101-NEW


Then we re-run cifs setup on the 7-Mode system to get it back on the domain (the output below has been abridged):


FAS101> cifs setup

Do you want to continue and change the current filer account information? [n]: y
(1) Keep the current WINS configuration
Selection (1-3)? 1
This filer is currently configured as a multiprotocol filer. Would you like to reconfigure this filer to be an NTFS-only filer? n
The default name for this CIFS server is 'FAS101'.
Would you like to change this name? n
(1) Active Directory domain authentication (Active Directory domains only)
Selection (1-4)?  1
What is the name of the Active Directory domain? LAB.PRIV
Would you like to configure time services? y
Enter the time server host(s) and/or address(es)? LAB.PRIV
Would you like to specify additional time servers? n
Enter the name of the Windows user [Administrator@LAB.PRIV]:
Password for Administrator@LAB.PRIV:
CIFS - Logged in as Administrator@LAB.PRIV.
An account that matches the name 'FAS101' already exists in Active Directory: 'cn=fas101,cn=computers,dc=lab,dc=priv'. This is normal if you are re-running CIFS Setup. You may continue by using this account or changing the name of this CIFS server.
Do you want to re-use this machine account? y
CIFS - Starting SMB protocol...


Scenario 2: Maintaining NETBIOS in Multi-1 Transitions

If we’re consolidating pFilers/vFilers to one SVM, but still want to keep the NETBIOS names, this is where add-netbios-aliases comes in:


clu1::> cifs server add-netbios-aliases -vserver FAS101 -netbios-aliases FAS102,FAS103,FAS104,FAS105
clu1::> cifs server show -vserver FAS101 -fields netbios-aliases,cifs-server
vserver cifs-server netbios-aliases
------- ----------- ---------------------------
FAS101  FAS101      FAS102,FAS103,FAS104,FAS105
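
And if an alias needs retiring later, there is a matching remove command:

clu1::> cifs server remove-netbios-aliases -vserver FAS101 -netbios-aliases FAS105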



Thursday, 26 May 2016

How to get all 32-bit Snapshots on a 7-Mode System

Credit for this post goes to my esteemed colleague Aved Chammt.

The following script can be used to obtain a list - output to both the PowerShell console and a CSV file - of all the 32-bit snapshots on a 7-Mode system, along with additional information such as how many days old each snapshot is.

Note: You need Data ONTAP 8.1.4P4 or better to see FsBlockFormat information.

How to Use?

Run in PowerShell like in the example below>


.\Get-32bitSnaps.ps1 -UserName root -Password YOURPASS -Controller 10.0.0.50


The Script (formatted for blogger with tabs replaced by two spaces)

Copy and paste into a text editor (Notepad, Notepad++ etcetera), and save as Get-32bitSnaps.ps1


Param(
  [Parameter(Mandatory=$true)][String]$UserName,
  [Parameter(Mandatory=$true)][String]$Password,
  [Parameter(Mandatory=$true)][String]$Controller
)

Import-Module DataONTAP
$SecPass = $Password | ConvertTo-SecureString -asPlainText -Force
$Cred = New-Object System.Management.Automation.PsCredential($UserName,$SecPass)
[Void](Connect-NaController $Controller -Credential $Cred)

$GetNaSystemInfo = Get-NaSystemInfo
$SerialNumber    = $GetNaSystemInfo.SystemSerialNumber
$SystemId        = $GetNaSystemInfo.SystemId
$SystemName      = $GetNaSystemInfo.SystemName

$Objects = @()
$Today = Get-Date
Get-NaVol | foreach {
  $VolumeName = $_.Name
  $Snapshots = Get-NaSnapshot -TargetName $_ | Where{ $_.FsBlockFormat -eq "32_bit" }
  $Snapshots | Foreach{
    $Object = New-Object PSObject
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "System Name" -Value $SystemName
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "System S/N" -Value $SerialNumber
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "System ID" -Value $SystemId
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "Volume" -Value $VolumeName
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "Snapshot" -Value $_.Name
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "BlockFormat" -Value $_.FsBlockFormat
    Add-Member -InputObject $Object -MemberType NoteProperty -Name "Days Old" -Value ($Today - $_.AccessTimeDT).Days
    $Objects += $Object
  }
}

$Objects | FT
$Objects | Export-CSV "$Controller.CSV" -NoTypeInformation


Example Output

Image: Get-32bitSnaps PowerShell output
Example CSV Output

Image: Get-32bitSnaps CSV Output

Tuesday, 24 May 2016

SnapDrive for Windows 7.1.3P1 with cDOT 8.3.2, iSCSI, and Server 2008 R2

Part 1) Configuring cDOT

For a two-node cluster, the following commands:
- create the SVM
- configure a management LIF
- remove nfs and cifs
- create the iSCSI service
- create iSCSI-A and iSCSI-B SAN LIFs per node
- set a password for the vsadmin user and unlock the account.

vserver create -vserver clu1-san1 -aggregate clu1n2_sata_aggr1 -rootvolume rootvol -rootvolume-security-style UNIX
net int create -vserver clu1-san1 -lif clu1-san1 -role data -data-protocol none -address 10.0.1.106 -netmask 255.255.252.0 -home-node clu1n1 -home-port e0g
vserver show -vserver clu1-san1 -fields allowed-protocols
vserver remove-protocols -vserver clu1-san1 -protocols nfs,cifs
iscsi create -vserver clu1-san1 -target-alias clu1-san1 -status-admin up
net int create -vserver clu1-san1 -lif clu1-san1-n1iscsiA -role data -data-protocol iscsi -address 10.1.1.51 -netmask 255.255.255.0 -home-node clu1n1 -home-port e0c
net int create -vserver clu1-san1 -lif clu1-san1-n1iscsiB -role data -data-protocol iscsi -address 10.2.1.51 -netmask 255.255.255.0 -home-node clu1n1 -home-port e0d
net int create -vserver clu1-san1 -lif clu1-san1-n2iscsiA -role data -data-protocol iscsi -address 10.1.1.52 -netmask 255.255.255.0 -home-node clu1n2 -home-port e0c
net int create -vserver clu1-san1 -lif clu1-san1-n2iscsiB -role data -data-protocol iscsi -address 10.2.1.52 -netmask 255.255.255.0 -home-node clu1n2 -home-port e0d
security login password -vserver clu1-san1 -username vsadmin
security login unlock -vserver clu1-san1 -username vsadmin
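
Before moving to the host side, a couple of optional checks that the SAN LIFs and the iSCSI target are up:

net int show -vserver clu1-san1
iscsi show -vserver clu1-san1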

Part 2) Configuring the Windows server

The server has two NICs to access the iSCSI SAN -
iSCSI-A on 10.1.1.21
iSCSI-B on 10.2.1.21
- both are configured to not register in DNS.

Via an elevated command prompt, first enable iSCSI (if not already)>

sc config MSiSCSI start= auto
net start MSiSCSI

Then Install the Multipath I/O feature>

dism /online /enable-feature:MultipathIo

Image: Installing MPIO via CLI on Server 2008 R2
Part 3) IMT Checks @ http://mysupport.netapp.com/matrix

The following is a supported configuration:
iSCSI + SnapDrive 7.1.3 + Windows Server 2008 R2 EE SP1 + Clustered Data ONTAP 8.3.2 + Windows Host Utilities 7.0 + Microsoft MS DSM

Part 4) Installing Windows Host Utilities 7.0

From the “Windows Unified Host Utilities 7.0 Installation Guide”, the minimum required hotfixes for Windows Server 2008 R2 SP1 are -


A reboot is required after installing these.

*Two of the listed hotfixes did not install: “The update is not applicable to your computer”.

Double-click the MSI and follow the prompts or...
... to install Host Utilities from an elevated DOS command prompt>

Z:
cd NETAPP
cd "Windows Host Utilities 7.0"
msiexec /i netapp_windows_host_utilities_7.0_x64.msi /quiet MULTIPATHING=1

The install will reboot the server.

Part 5) Installing SnapDrive for Windows

Note: SnapDrive 7.1.3P1 requires .Net 4.0 and Windows Management Framework 3.0 (for PowerShell 3.0) pre-installed.

The SnapDrive domain user account needs to be a local admin on the server.

Double-click the exe and follow the prompts or...
... via an elevated command prompt:

SnapDrive7.1.3P1_x64.exe /s /v"/qn SILENT_MODE=1 /Li SDInstall.log SVCUSERNAME=LAB\SVC_SDW SVCUSERPASSWORD=******** SVCCONFIRMUSERPASSWORD=******** TRANSPORT_SETTING_ENABLE=0 SDW_ESXSVR_ENABLE=0 ADD_WINDOWS_FIREWALL=1"

The above example:
- Generates an installation log (SDInstall.log)
- Leaves the transport setting at its default (HTTP)
- Runs the SnapDrive service as LAB\SVC_SDW
- Disables the ESX server settings
- Adds the Windows Firewall rules

For more examples see the Installation Guide from:

Default ports:
808 = SnapDrive Web Service Tcp/Ip Endpoint
4094 = SnapDrive Web Service HTTP Endpoint
4095 = SnapDrive Web Service HTTPS Endpoint

Part 6) Configuring SnapDrive and Attaching a LUN

SnapDrive cannot create volumes (that’s a storage admin task), so we first do this::>


vol create -volume testvol001 -vserver clu1-san1 -aggregate clu1n1_sata_aggr1 -size 10g -space-guarantee none


In SnapDrive, add the storage system and credentials.

Image: Adding Storage System in SDW
Note 1: I have a DNS A record for clu1-san1 that points to the management LIF on my SVM which is also called clu1-san1.
Note 2: If you want AD authentication for an SVM-scoped AD user, then the SVM needs a CIFS server set up (or a vserver active-directory create).

Then follow the straightforward GUI to Create Disk and perform iSCSI Management.

Image: iSCSI Management in SDW

Note: There are 4 sessions above because the host has 2 iSCSI addresses (one per iSCSI fabric), and each node has one iSCSI SAN LIF per iSCSI fabric.
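
For reference, the clustershell view of what SnapDrive has just built can be seen with the commands below (the igroup and LUN names are whatever SnapDrive chose to create):

lun show -vserver clu1-san1
lun igroup show -vserver clu1-san1
iscsi session show -vserver clu1-san1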

Clustershell Script to Build Two 2-Node 8.3.2 Clusters

I needed to produce the commands to build a couple of HA-pairs (prod and DR), so I set about testing my commands with two 2-node SIM clusters...

Basic Cluster Setup

1) The SIM nodes were constructed along the lines here: Pre-Cluster Setup Image Build Recipe for 8.3.2 SIM. Each SIM has 8 virtual network adapters to simulate connectivity requirements.

2) The node setup and cluster setup were completed as below:

2.1) clu1n1

Welcome to node setup.
Enter the node management interface ...
.............. port: e0g
........ IP address: 10.0.1.101
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? create
Do you intend for this node to be used as a single node cluster? no
Will the cluster network be configured to use network switches? no
Do you want to use these defaults? yes

Enter the cluster ...
............... name: clu1
... base license key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX

Enter an additional license key:

Enter the cluster management interface ...
.............. port: e0g
........ IP address: 10.0.1.100
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Enter the DNS domain names: lab.priv
Enter the name server IP addresses: 10.0.1.10,10.0.2.10
Where is the controller located: Site A

2.2) clu1n2

Welcome to node setup.
Enter the node management interface ...
.............. port: e0g
........ IP address: 10.0.1.102
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? join
Do you want to use these defaults? yes
Enter the name of the cluster you would like to join: clu1

2.3) clu2n1

Welcome to node setup.
Enter the node management interface ...
.............. port: e0g
........ IP address: 10.0.2.101
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? create
Do you intend for this node to be used as a single node cluster? no
Will the cluster network be configured to use network switches? no
Do you want to use these defaults? yes

Enter the cluster ...
............... name: clu2
... base license key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX

Enter an additional license key:

Enter the cluster management interface ...
.............. port: e0g
........ IP address: 10.0.2.100
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Enter the DNS domain names: lab.priv
Enter the name server IP addresses: 10.0.2.10,10.0.1.10
Where is the controller located: Site B

2.4) clu2n2

Welcome to node setup.
Enter the node management interface ...
.............. port: e0g
........ IP address: 10.0.2.102
........... netmask: 255.255.252.0
... default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? join
Do you want to use these defaults? yes
Enter the name of the cluster you would like to join: clu2

3) Simulating a real-world build:

3.1) Node feature licenses have been installed (factory delivered kit usually comes with pre-loaded licenses) with ::> license add [LICENSE_CODE]

3.2) We pretend the service-processors were configured by the installation engineer (of course these are SIMs with no SP) ::> service-processor network modify -node [NODENAME] -address-family IPv4 -enable true -dhcp none -ip-address [ADDRESS] -netmask [NETMASK] -gateway [GATEWAY]

3.3) Disk assignment, make sure the root aggregate is on the right disks, etcetera ... would have been done by the installation engineer.

Extended (Scripted) Clustered Setup

Note: Most of this can be done in OnCommand System Manager, but it’s a lot easier to document as clustershell commands.

1) Configuring clu1::>

The commands below do the following:

i) Set the CLI inactivity timeout value (so we don’t get disconnected whilst doing our work)
ii) Rename the nodes per naming convention
iii) Rename the LIFs per naming convention/with their DNS name (not FQDN here)
iv) Rename the root aggregates per naming convention (in this scenario we have only SATA disks)
v) Set the timezone
vi) Configure NTP
vii) Autosupport - configure, check settings, trigger, and check history
viii) Disks - verify disk ownership, show spare disks, verify disk options (in this scenario we don’t have dedicated stacks, hence the assign on shelf)
ix) Create data aggregates
x) Disable aggregate snap schedule and delete aggregate snapshots
xi) Rename root volumes per naming convention, and set to 75% root aggr size
xii) Configure broadcast domains and any ifgrps
Note 1: In this scenario we have two ifgrps per node - one for routed traffic (public), one for non-routed traffic (datacenter).
Note 2: Since this is a SIM, I actually destroy the ifgrps later since they don’t work.
Note 3: e0h is our substitute for e0M which will only be used for the SP here (there is an iLO only network).
Note 4: Network ports are access ports - no VLANs required here.
Note 5: Ideally the mgmt broadcast domain would have more ports - there’s no failover for node mgmt here!
xiii) Set flowcontrol to off
xiv) A little bit of tuning (not for SSD)
xv) Verify licensing
xvi) Create a basic multi-protocol SVM for testing purposes
Note 1: There’s just one LIF here for General NAS and SVM mgmt traffic. Ideally, we’d have a NAS LIF per node, and a separate mgmt LIF.
Note 2: SAN LIFs must exist on each node in a HA-pair for failover to work.
Note 3: We also do CIFS and domain-tunnel for AD authentication to the cluster
xvii) Create various security logins
xviii) Generate CSRs
xix) MOTD

Clustershell commands::>

system timeout modify -timeout 0
node rename -node clu1-01 -newname clu1n1
node rename -node clu1-02 -newname clu1n2
net int rename -vserver Cluster -lif clu1-01_clus1 -newname clu1n1-clus1
net int rename -vserver Cluster -lif clu1-01_clus2 -newname clu1n1-clus2
net int rename -vserver Cluster -lif clu1-02_clus1 -newname clu1n2-clus1
net int rename -vserver Cluster -lif clu1-02_clus2 -newname clu1n2-clus2
net int rename -vserver clu1 -lif cluster_mgmt -newname clu1
net int rename -vserver clu1 -lif clu1-01_mgmt1 -newname clu1n1
net int rename -vserver clu1 -lif clu1-02_mgmt1 -newname clu1n2
aggr rename -aggregate aggr0 -newname clu1n1_sata_aggr_root
aggr rename -aggregate aggr0_clu1_02_0 -newname clu1n2_sata_aggr_root
timezone -timezone Asia/Ulan_Bator
cluster time-service ntp server create -server 10.0.1.10

system autosupport modify -node * -state enable -transport HTTP -proxy-url 10.0.0.1 -mail-hosts 10.0.1.25,10.0.2.25 -from clu1@lab.priv -noteto itadmin@lab.priv
system autosupport show -instance
system autosupport invoke -node * -type all -message WEEKLY_LOG
system autosupport history show -node clu1n1
system autosupport history show -node clu1n2

rows 0
disk show -owner clu1n1
disk show -owner clu1n2
disk show -owner clu1n1 -container-type spare
disk show -owner clu1n2 -container-type spare
disk option show
disk option modify -node * -autoassign-policy shelf

storage aggregate show
storage aggregate create -aggregate clu1n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n1 -simulate true
storage aggregate create -aggregate clu1n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n1
storage aggregate create -aggregate clu1n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n2 -simulate true
storage aggregate create -aggregate clu1n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n2

node run -node * snap sched -A
node run -node clu1n1 snap sched -A clu1n1_sata_aggr1 0 0 0
node run -node clu1n1 snap delete -A -a -f clu1n1_sata_aggr1
node run -node clu1n2 snap sched -A clu1n2_sata_aggr1 0 0 0
node run -node clu1n2 snap delete -A -a -f clu1n2_sata_aggr1

vol show
vol rename -vserver clu1n1 -volume vol0 -new clu1n1_vol0
vol rename -vserver clu1n2 -volume vol0 -new clu1n2_vol0
set -units MB
aggr show *root -fields size
vol size -vserver clu1n1 -volume clu1n1_vol0 -new-size 8100m
vol size -vserver clu1n2 -volume clu1n2_vol0 -new-size 8100m

net int revert *
net int show
net port show
broadcast-domain show
broadcast-domain split -broadcast-domain Default -new-broadcast-domain mgmt -ports clu1n1:e0g,clu1n2:e0g
broadcast-domain split -broadcast-domain Default -new-broadcast-domain ilo -ports clu1n1:e0h,clu1n2:e0h
broadcast-domain remove-ports -broadcast-domain Default -ports clu1n1:e0c,clu1n1:e0d,clu1n1:e0e,clu1n1:e0f,clu1n2:e0c,clu1n2:e0d,clu1n2:e0e,clu1n2:e0f
ifgrp create -node clu1n1 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n2 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n1 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n2 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp add-port -node clu1n1 -ifgrp a0a -port e0c
ifgrp add-port -node clu1n1 -ifgrp a0a -port e0e
ifgrp add-port -node clu1n1 -ifgrp a0b -port e0d
ifgrp add-port -node clu1n1 -ifgrp a0b -port e0f
ifgrp add-port -node clu1n2 -ifgrp a0a -port e0c
ifgrp add-port -node clu1n2 -ifgrp a0a -port e0e
ifgrp add-port -node clu1n2 -ifgrp a0b -port e0d
ifgrp add-port -node clu1n2 -ifgrp a0b -port e0f
broadcast-domain create -broadcast-domain public_routed -ipspace Default -mtu 1500 -ports clu1n1:a0a,clu1n2:a0a
broadcast-domain create -broadcast-domain private_dc -ipspace Default -mtu 1500 -ports clu1n1:a0b,clu1n2:a0b

net port show -fields flowcontrol-admin
net int show -role cluster-mgmt -fields home-node,home-port
net int migrate -lif clu1 -destination-node clu1n2 -destination-port e0g -vserver clu1
set -confirmations off
net port modify -node clu1n1 -port !a* -flowcontrol-admin none
net int revert *
set -confirmations off
net port modify -node clu1n2 -port !a* -flowcontrol-admin none
net int revert *
net port show -fields flowcontrol-admin

aggr modify {-has-mroot false} -free-space-realloc on
node run -node * options wafl.optimize_write_once off

license show

vserver create -vserver clu1-svm1 -aggregate clu1n1_sata_aggr1 -rootvolume rootvol -rootvolume-security-style UNIX
net int create -vserver clu1-svm1 -lif clu1-svm1 -role data -data-protocol cifs,nfs -address 10.0.1.105 -netmask 255.255.252.0 -home-node clu1n1 -home-port a0a
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu1n1 -home-port a0b -address 10.0.1.121 -netmask 255.255.252.0
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu1n2 -home-port a0b -address 10.0.1.122 -netmask 255.255.252.0
vserver services dns create -vserver clu1-svm1 -domains lab.priv -name-servers 10.0.1.10,10.0.2.10
cifs server create -vserver clu1-svm1 -cifs-server clu1-svm1 -domain lab.priv
domain-tunnel create -vserver clu1-svm1

Note: Because ifgrps (singlemode, multimode, and especially multimode_lacp) do not work at all in the SIM, the following deconstruction and reconstruction allows the cifs server create to work.
net int delete -vserver clu1-svm1 -lif clu1-svm1
net int delete -vserver clu1-svm1 -lif clu1-svm1-iscsi1
net int delete -vserver clu1-svm1 -lif clu1-svm1-iscsi2
ifgrp delete -node clu1n1 -ifgrp a0a
ifgrp delete -node clu1n1 -ifgrp a0b
ifgrp delete -node clu1n2 -ifgrp a0a
ifgrp delete -node clu1n2 -ifgrp a0b
broadcast-domain add-ports -broadcast-domain public_routed -ports clu1n1:e0c,clu1n1:e0e,clu1n2:e0c,clu1n2:e0e
broadcast-domain add-ports -broadcast-domain private_dc -ports clu1n1:e0d,clu1n1:e0f,clu1n2:e0d,clu1n2:e0f
net int create -vserver clu1-svm1 -lif clu1-svm1 -role data -data-protocol cifs,nfs -address 10.0.1.105 -netmask 255.255.252.0 -home-node clu1n1 -home-port e0c
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu1n1 -home-port e0d -address 10.0.1.121 -netmask 255.255.252.0
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu1n2 -home-port e0d -address 10.0.1.122 -netmask 255.255.252.0
cifs server create -vserver clu1-svm1 -cifs-server clu1-svm1 -domain lab.priv
domain-tunnel create -vserver clu1-svm1

security login create -vserver clu1 -user-or-group-name LAB\ocum -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\ocum -application ssh -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\ocm -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application http -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application ontapi -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application ssh -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application http -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application ssh -authmethod domain -role admin
security login create -vserver clu1-svm1 -user-or-group-name LAB\svc_sdw -application ontapi -authmethod domain -role vsadmin

security certificate generate-csr -common-name clu1.lab.priv -size 2048 -country MN -state Ulaanbaatar  -locality Ulaanbaatar -organization LAB -unit R&D -email-addr itadmin@lab.priv -hash-function SHA256

security login banner modify -vserver clu1
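
At this point I like to run a few quick checks before moving on to the second cluster - nothing scientific, just eyeballing the build:

cluster show
net int show
aggr show
vserver show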

2) Configuring clu2::>*
*This is of course nearly identical to clu1; just the names and IPs change.

The commands below do the following:

i) Set the CLI inactivity timeout value (so we don’t get disconnected whilst doing our work)
ii) Rename the nodes per naming convention
iii) Rename the LIFs per naming convention/with their DNS name (not FQDN here)
iv) Rename the root aggregates per naming convention (in this scenario we have only SATA disks)
v) Set the timezone
vi) Configure NTP
vii) Autosupport - configure, check settings, trigger, and check history
viii) Disks - verify disk ownership, show spare disks, verify disk options (in this scenario we don’t have dedicated stacks, hence the assign on shelf)
ix) Create data aggregates
x) Disable aggregate snap schedule and delete aggregate snapshots
xi) Rename root volumes per naming convention, and set to 75% root aggr size
xii) Configure broadcast domains and any ifgrps
Note 1: In this scenario we have two ifgrps per node - one for routed traffic (public), one for non-routed traffic (datacenter).
Note 2: Since this is a SIM, I actually destroy the ifgrps later since they don’t work.
Note 3: e0h is our substitute for e0M which will only be used for the SP here (there is an iLO only network).
Note 4: Network ports are access ports - no VLANs required here.
Note 5: Ideally the mgmt broadcast domain would have more ports - there’s no failover for node mgmt here!
xiii) Set flowcontrol to off
xiv) A little bit of tuning (not for SSD)
xv) Verify licensing
xvi) Create a basic multi-protocol SVM
Note 1: There’s just one LIF here for General NAS and SVM mgmt traffic. Ideally, we’d have a NAS LIF per node, and a separate mgmt LIF.
Note 2: SAN LIFs must exist on each node in a HA-pair for failover to work.
Note 3: We also do CIFS and domain-tunnel for AD authentication to the cluster
xvii) Create various security logins
xviii) Generate CSRs
xix) MOTD

Clustershell commands::>

system timeout modify -timeout 0
node rename -node clu2-01 -newname clu2n1
node rename -node clu2-02 -newname clu2n2
net int rename -vserver Cluster -lif clu2-01_clus1 -newname clu2n1-clus1
net int rename -vserver Cluster -lif clu2-01_clus2 -newname clu2n1-clus2
net int rename -vserver Cluster -lif clu2-02_clus1 -newname clu2n2-clus1
net int rename -vserver Cluster -lif clu2-02_clus2 -newname clu2n2-clus2
net int rename -vserver clu2 -lif cluster_mgmt -newname clu2
net int rename -vserver clu2 -lif clu2-01_mgmt1 -newname clu2n1
net int rename -vserver clu2 -lif clu2-02_mgmt1 -newname clu2n2
aggr rename -aggregate aggr0 -newname clu2n1_sata_aggr_root
aggr rename -aggregate aggr0_clu2_02_0 -newname clu2n2_sata_aggr_root
timezone -timezone Asia/Ulan_Bator
cluster time-service ntp server create -server 10.0.1.10

system autosupport modify -node * -state enable -transport HTTP -proxy-url 10.0.0.1 -mail-hosts 10.0.1.25,10.0.2.25 -from clu2@lab.priv -noteto itadmin@lab.priv
system autosupport show -instance
system autosupport invoke -node * -type all -message WEEKLY_LOG
system autosupport history show -node clu2n1
system autosupport history show -node clu2n2

rows 0
disk show -owner clu2n1
disk show -owner clu2n2
disk show -owner clu2n1 -container-type spare
disk show -owner clu2n2 -container-type spare
disk option show
disk option modify -node * -autoassign-policy shelf

storage aggregate show
storage aggregate create -aggregate clu2n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n1 -simulate true
storage aggregate create -aggregate clu2n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n1
storage aggregate create -aggregate clu2n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n2 -simulate true
storage aggregate create -aggregate clu2n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n2

node run -node * snap sched -A
node run -node clu2n1 snap sched -A clu2n1_sata_aggr1 0 0 0
node run -node clu2n1 snap delete -A -a -f clu2n1_sata_aggr1
node run -node clu2n2 snap sched -A clu2n2_sata_aggr1 0 0 0
node run -node clu2n2 snap delete -A -a -f clu2n2_sata_aggr1

vol show
vol rename -vserver clu2n1 -volume vol0 -new clu2n1_vol0
vol rename -vserver clu2n2 -volume vol0 -new clu2n2_vol0
set -units MB
aggr show *root -fields size
vol size -vserver clu2n1 -volume clu2n1_vol0 -new-size 8100m
vol size -vserver clu2n2 -volume clu2n2_vol0 -new-size 8100m

net int revert *
net int show
net port show
broadcast-domain show
broadcast-domain split -broadcast-domain Default -new-broadcast-domain mgmt -ports clu2n1:e0g,clu2n2:e0g
broadcast-domain split -broadcast-domain Default -new-broadcast-domain ilo -ports clu2n1:e0h,clu2n2:e0h
broadcast-domain remove-ports -broadcast-domain Default -ports clu2n1:e0c,clu2n1:e0d,clu2n1:e0e,clu2n1:e0f,clu2n2:e0c,clu2n2:e0d,clu2n2:e0e,clu2n2:e0f
ifgrp create -node clu2n1 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n2 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n1 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n2 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp add-port -node clu2n1 -ifgrp a0a -port e0c
ifgrp add-port -node clu2n1 -ifgrp a0a -port e0e
ifgrp add-port -node clu2n1 -ifgrp a0b -port e0d
ifgrp add-port -node clu2n1 -ifgrp a0b -port e0f
ifgrp add-port -node clu2n2 -ifgrp a0a -port e0c
ifgrp add-port -node clu2n2 -ifgrp a0a -port e0e
ifgrp add-port -node clu2n2 -ifgrp a0b -port e0d
ifgrp add-port -node clu2n2 -ifgrp a0b -port e0f
broadcast-domain create -broadcast-domain public_routed -ipspace Default -mtu 1500 -ports clu2n1:a0a,clu2n2:a0a
broadcast-domain create -broadcast-domain private_dc -ipspace Default -mtu 1500 -ports clu2n1:a0b,clu2n2:a0b

net port show -fields flowcontrol-admin
net int show -role cluster-mgmt -fields home-node,home-port
net int migrate -lif clu2 -destination-node clu2n2 -destination-port e0g -vserver clu2
set -confirmations off
net port modify -node clu2n1 -port !a* -flowcontrol-admin none
net int revert *
set -confirmations off
net port modify -node clu2n2 -port !a* -flowcontrol-admin none
net int revert *
net port show -fields flowcontrol-admin

aggr modify {-has-mroot false} -free-space-realloc on
node run -node * options wafl.optimize_write_once off

license show

vserver create -vserver clu2-svm1 -aggregate clu2n1_sata_aggr1 -rootvolume rootvol -rootvolume-security-style UNIX
net int create -vserver clu2-svm1 -lif clu2-svm1 -role data -data-protocol cifs,nfs -address 10.0.2.105 -netmask 255.255.252.0 -home-node clu2n1 -home-port a0a
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu2n1 -home-port a0b -address 10.0.2.121 -netmask 255.255.252.0
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu2n2 -home-port a0b -address 10.0.2.122 -netmask 255.255.252.0
vserver services dns create -vserver clu2-svm1 -domains lab.priv -name-servers 10.0.2.10,10.0.1.10
cifs server create -vserver clu2-svm1 -cifs-server clu2-svm1 -domain lab.priv
domain-tunnel create -vserver clu2-svm1

Note: Because ifgrps (singlemode, multimode, and especially multimode_lacp) do not work at all in the SIM, the following deconstruction and reconstruction allows the cifs server create to work.
net int delete -vserver clu2-svm1 -lif clu2-svm1
net int delete -vserver clu2-svm1 -lif clu2-svm1-iscsi1
net int delete -vserver clu2-svm1 -lif clu2-svm1-iscsi2
ifgrp delete -node clu2n1 -ifgrp a0a
ifgrp delete -node clu2n1 -ifgrp a0b
ifgrp delete -node clu2n2 -ifgrp a0a
ifgrp delete -node clu2n2 -ifgrp a0b
broadcast-domain add-ports -broadcast-domain public_routed -ports clu2n1:e0c,clu2n1:e0e,clu2n2:e0c,clu2n2:e0e
broadcast-domain add-ports -broadcast-domain private_dc -ports clu2n1:e0d,clu2n1:e0f,clu2n2:e0d,clu2n2:e0f
net int create -vserver clu2-svm1 -lif clu2-svm1 -role data -data-protocol cifs,nfs -address 10.0.2.105 -netmask 255.255.252.0 -home-node clu2n1 -home-port e0c
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu2n1 -home-port e0d -address 10.0.2.121 -netmask 255.255.252.0
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu2n2 -home-port e0d -address 10.0.2.122 -netmask 255.255.252.0
cifs server create -vserver clu2-svm1 -cifs-server clu2-svm1 -domain lab.priv
domain-tunnel create -vserver clu2-svm1

security login create -vserver clu2 -user-or-group-name LAB\ocum -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\ocum -application ssh -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\ocm -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application http -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application ontapi -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application ssh -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application http -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application ssh -authmethod domain -role admin
security login create -vserver clu2-svm1 -user-or-group-name LAB\svc_sdw -application ontapi -authmethod domain -role vsadmin

security certificate generate-csr -common-name clu2.lab.priv -size 2048 -country MN -state Ulaanbaatar  -locality Ulaanbaatar -organization LAB -unit R&D -email-addr itadmin@lab.priv -hash-function SHA256

security login banner modify -vserver clu2

3) Setting up Cluster and Vserver Peering

Note: Not using the destroyed ifgrps here.

clu1::>

net int create -vserver clu1 -lif clu1-icl1 -role intercluster -home-node clu1n1 -home-port e0e -address 10.0.1.111 -netmask 255.255.252.0
net int create -vserver clu1 -lif clu1-icl2 -role intercluster -home-node clu1n2 -home-port e0e -address 10.0.1.112 -netmask 255.255.252.0

clu2::>

net int create -vserver clu2 -lif clu2-icl1 -role intercluster -home-node clu2n1 -home-port e0e -address 10.0.2.111 -netmask 255.255.252.0
net int create -vserver clu2 -lif clu2-icl2 -role intercluster -home-node clu2n2 -home-port e0e -address 10.0.2.112 -netmask 255.255.252.0

clu1::>

cluster peer create -peer-addrs 10.0.2.111 -address-family ipv4

clu2::>

cluster peer create -peer-addrs 10.0.1.111 -address-family ipv4
cluster peer show

clu1::>

vserver peer create -vserver clu1-svm1 -peer-cluster clu2 -peer-vserver clu2-svm1 -applications snapmirror

clu2::>

vserver peer accept -vserver clu2-svm1 -peer-vserver clu1-svm1
vserver peer show
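
With the vserver peer in place, a SnapMirror relationship can be created. A rough sketch only - the volume names below are made up, and the destination volume has to be created as type DP first.

clu2::>

vol create -vserver clu2-svm1 -volume testvol001_dr -aggregate clu2n1_sata_aggr1 -size 10g -type DP
snapmirror create -source-path clu1-svm1:testvol001 -destination-path clu2-svm1:testvol001_dr -type DP -schedule daily
snapmirror initialize -destination-path clu2-svm1:testvol001_dr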

4) Returning to do the SSL Cert

clu1::>

security certificate install -type server -vserver clu1
ssl show
ssl modify -vserver clu1 -common-name clu1.lab.priv -ca ??? -serial ??? -server-enabled true -client-enabled false
security certificate delete [DELETE THE OLD CLUSTER CERT]

clu2::>

security certificate install -type server -vserver clu2
ssl show
ssl modify -vserver clu2 -common-name clu2.lab.priv -ca ??? -serial ??? -server-enabled true -client-enabled false
security certificate delete [DELETE THE OLD CLUSTER CERT]
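
To confirm which certificate is then being served, on each cluster:

security certificate show -type server
ssl show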

5) Final Thought

I might make a few tweaks to the plan...