I needed to produce the commands to build a couple of HA-pairs (prod and DR), so I set about testing my commands with two 2-node SIM clusters...
Basic Cluster Setup
1) The SIM nodes were constructed along the lines here: Pre-Cluster Setup Image Build Recipe for 8.3.2 SIM. Each SIM has 8 virtual network adapters to simulate connectivity requirements.

2) The node setup and cluster setup were completed as below:
2.1) clu1n1

Welcome to node setup.
Enter the node management interface port: e0g
Enter the node management interface IP address: 10.0.1.101
Enter the node management interface netmask: 255.255.252.0
Enter the node management interface default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? create
Do you intend for this node to be used as a single node cluster? no
Will the cluster network be configured to use network switches? no
Do you want to use these defaults? yes
Enter the cluster name: clu1
Enter the cluster base license key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
Enter an additional license key:
Enter the cluster management interface port: e0g
Enter the cluster management interface IP address: 10.0.1.100
Enter the cluster management interface netmask: 255.255.252.0
Enter the cluster management interface default gateway: 10.0.0.1
Enter the DNS domain names: lab.priv
Enter the name server IP addresses: 10.0.1.10,10.0.2.10
Where is the controller located: Site A
2.2) clu1n2

Welcome to node setup.
Enter the node management interface port: e0g
Enter the node management interface IP address: 10.0.1.102
Enter the node management interface netmask: 255.255.252.0
Enter the node management interface default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? join
Do you want to use these defaults? yes
Enter the name of the cluster you would like to join: clu1
2.3) clu2n1

Welcome to node setup.
Enter the node management interface port: e0g
Enter the node management interface IP address: 10.0.2.101
Enter the node management interface netmask: 255.255.252.0
Enter the node management interface default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? create
Do you intend for this node to be used as a single node cluster? no
Will the cluster network be configured to use network switches? no
Do you want to use these defaults? yes
Enter the cluster name: clu2
Enter the cluster base license key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
Enter an additional license key:
Enter the cluster management interface port: e0g
Enter the cluster management interface IP address: 10.0.2.100
Enter the cluster management interface netmask: 255.255.252.0
Enter the cluster management interface default gateway: 10.0.0.1
Enter the DNS domain names: lab.priv
Enter the name server IP addresses: 10.0.2.10,10.0.1.10
Where is the controller located: Site B
2.4) clu2n2

Welcome to node setup.
Enter the node management interface port: e0g
Enter the node management interface IP address: 10.0.2.102
Enter the node management interface netmask: 255.255.252.0
Enter the node management interface default gateway: 10.0.0.1

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? join
Do you want to use these defaults? yes
Enter the name of the cluster you would like to join: clu2
3) Simulating a real-world build:

3.1) Node feature licenses have been installed (a factory-delivered kit usually comes with pre-loaded licenses) with ::> license add [LICENSE_CODE]

3.2) We pretend the service-processors were configured by the installation engineer (of course these are SIMs with no SP) ::> service-processor network modify -node [NODENAME] -address-family IPv4 -enable true -dhcp none -ip-address [ADDRESS] -netmask [NETMASK] -gateway [GATEWAY]

3.3) Disk assignment - making sure the root aggregate is on the right disks, etcetera - would have been done by the installation engineer.
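For reference, a minimal sketch of what that manual assignment might look like from the clustershell (the disk name NET-1.1 is illustrative - SIM virtual disks won't match a real shelf layout):

disk show -container-type unassigned
disk assign -disk NET-1.1 -owner clu1n1
disk assign -all true -node clu1n2
storage aggregate show-status -aggregate clu1n1_sata_aggr_root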
Extended (Scripted) Cluster Setup

Note: Most of this can be done in OnCommand System Manager, but it is a lot easier to document via the Clustershell.
1) Configuring clu1::>

The commands below do the following:

i) Set the CLI inactivity timeout value (so we don't get disconnected whilst doing our work)
ii) Rename the nodes per naming convention
iii) Rename the LIFs per naming convention/with their DNS name (not FQDN here)
iv) Rename the root aggregates per naming convention (in this scenario we have only SATA disks)
v) Set the timezone
vi) Configure NTP
vii) Autosupport - configure, check settings, trigger, and check history
viii) Disks - verify disk ownership, show spare disks, verify disk options (in this scenario we don't have dedicated stacks, hence the auto-assign on shelf)
ix) Create data aggregates
x) Disable the aggregate snapshot schedule and delete aggregate snapshots
xi) Rename the root volumes per naming convention, and set them to 75% of the root aggregate size
xii) Configure broadcast domains and any ifgrps
Note 1: In this scenario we have two ifgrps per node - one for routed traffic (public), one for non-routed traffic (datacenter).
Note 2: Since this is a SIM, I actually destroy the ifgrps later since they don't work.
Note 3: e0h is our substitute for e0M, which will only be used for the SP here (there is an iLO-only network).
Note 4: Network ports are access ports - no VLANs required here.
Note 5: Ideally the mgmt broadcast domain would have more ports - there's no failover for node mgmt here!
xiii) Set flowcontrol to off
xiv) A little bit of tuning (not for SSD)
xv) Verify licensing
xvi) Create a basic multi-protocol SVM for testing purposes
Note 1: There's just one LIF here for general NAS and SVM mgmt traffic. Ideally, we'd have a NAS LIF per node, and a separate mgmt LIF.
Note 2: SAN LIFs don't fail over, so they must exist on each node in the HA-pair for host MPIO path failover to work.
Note 3: We also do CIFS and domain-tunnel for AD authentication to the cluster.
xvii) Create various security logins
xviii) Generate CSRs
xix) MOTD
Clustershell commands::>
system timeout modify -timeout 0
node rename -node clu1-01 -newname clu1n1
node rename -node clu1-02 -newname clu1n2
net int rename -vserver Cluster -lif clu1-01_clus1 -newname clu1n1-clus1
net int rename -vserver Cluster -lif clu1-01_clus2 -newname clu1n1-clus2
net int rename -vserver Cluster -lif clu1-02_clus1 -newname clu1n2-clus1
net int rename -vserver Cluster -lif clu1-02_clus2 -newname clu1n2-clus2
net int rename -vserver clu1 -lif cluster_mgmt -newname clu1
net int rename -vserver clu1 -lif clu1-01_mgmt1 -newname clu1n1
net int rename -vserver clu1 -lif clu1-02_mgmt1 -newname clu1n2
aggr rename -aggregate aggr0 -newname clu1n1_sata_aggr_root
aggr rename -aggregate aggr0_clu1_02_0 -newname clu1n2_sata_aggr_root
timezone -timezone Asia/Ulan_Bator
cluster time-service ntp server create -server 10.0.1.10
system autosupport modify -node * -state enable -transport HTTP -proxy-url 10.0.0.1 -mail-hosts 10.0.1.25,10.0.2.25 -from clu1@lab.priv -noteto itadmin@lab.priv
system autosupport show -instance
system autosupport invoke -node * -type all -message WEEKLY_LOG
system autosupport history show -node clu1n1
system autosupport history show -node clu1n2
rows 0
disk show -owner clu1n1
disk show -owner clu1n2
disk show -owner clu1n1 -container-type spare
disk show -owner clu1n2 -container-type spare
disk option show
disk option modify -node * -autoassign-policy shelf
storage aggregate show
storage aggregate create -aggregate clu1n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n1 -simulate true
storage aggregate create -aggregate clu1n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n1
storage aggregate create -aggregate clu1n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n2 -simulate true
storage aggregate create -aggregate clu1n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu1n2
node run -node * snap sched -A
node run -node clu1n1 snap sched -A clu1n1_sata_aggr1 0 0 0
node run -node clu1n1 snap delete -A -a -f clu1n1_sata_aggr1
node run -node clu1n2 snap sched -A clu1n2_sata_aggr1 0 0 0
node run -node clu1n2 snap delete -A -a -f clu1n2_sata_aggr1
vol show
vol rename -vserver clu1n1 -volume vol0 -new clu1n1_vol0
vol rename -vserver clu1n2 -volume vol0 -new clu1n2_vol0
set -units MB
aggr show *root -fields size
vol size -vserver clu1n1 -volume clu1n1_vol0 -new-size 8100m
vol size -vserver clu1n2 -volume clu1n2_vol0 -new-size 8100m
net int revert *
net int show
net port show
broadcast-domain show
broadcast-domain split -broadcast-domain Default -new-broadcast-domain mgmt -ports clu1n1:e0g,clu1n2:e0g
broadcast-domain split -broadcast-domain Default -new-broadcast-domain ilo -ports clu1n1:e0h,clu1n2:e0h
broadcast-domain remove-ports -broadcast-domain Default -ports clu1n1:e0c,clu1n1:e0d,clu1n1:e0e,clu1n1:e0f,clu1n2:e0c,clu1n2:e0d,clu1n2:e0e,clu1n2:e0f
ifgrp create -node clu1n1 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n2 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n1 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp create -node clu1n2 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp add-port -node clu1n1 -ifgrp a0a -port e0c
ifgrp add-port -node clu1n1 -ifgrp a0a -port e0e
ifgrp add-port -node clu1n1 -ifgrp a0b -port e0d
ifgrp add-port -node clu1n1 -ifgrp a0b -port e0f
ifgrp add-port -node clu1n2 -ifgrp a0a -port e0c
ifgrp add-port -node clu1n2 -ifgrp a0a -port e0e
ifgrp add-port -node clu1n2 -ifgrp a0b -port e0d
ifgrp add-port -node clu1n2 -ifgrp a0b -port e0f
broadcast-domain create -broadcast-domain public_routed -ipspace Default -mtu 1500 -ports clu1n1:a0a,clu1n2:a0a
broadcast-domain create -broadcast-domain private_dc -ipspace Default -mtu 1500 -ports clu1n1:a0b,clu1n2:a0b
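As an aside, creating a broadcast domain automatically generates a failover group of the same name, which is worth a quick sanity check at this point (both commands are read-only displays):

net int failover-groups show
net int show -failover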
net port show -fields flowcontrol-admin
net int show -role cluster-mgmt -fields home-node,home-port
net int migrate -lif clu1 -destination-node clu1n2 -destination-port e0g -vserver clu1
set -confirmations off
net port modify -node clu1n1 -port !a* -flowcontrol-admin none
net int revert *
set -confirmations off
net port modify -node clu1n2 -port !a* -flowcontrol-admin none
net int revert *
net port show -fields flowcontrol-admin
aggr modify {-has-mroot false} -free-space-realloc on
node run -node * options wafl.optimize_write_once off
license show
vserver create -vserver clu1-svm1 -aggregate clu1n1_sata_aggr1 -rootvolume rootvol -rootvolume-security-style UNIX
net int create -vserver clu1-svm1 -lif clu1-svm1 -role data -data-protocol cifs,nfs -address 10.0.1.105 -netmask 255.255.252.0 -home-node clu1n1 -home-port a0a
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu1n1 -home-port a0b -address 10.0.1.121 -netmask 255.255.252.0
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu1n2 -home-port a0b -address 10.0.1.122 -netmask 255.255.252.0
vserver services dns create -vserver clu1-svm1 -domains lab.priv -name-servers 10.0.1.10,10.0.2.10
cifs server create -vserver clu1-svm1 -cifs-server clu1-svm1 -domain lab.priv
domain-tunnel create -vserver clu1-svm1
Note: Because ifgrps (singlemode, multimode, and definitely not multimode_lacp) do not work at all in the SIM, the following deconstruction and reconstruction allows the cifs server create to work.
net int delete -vserver clu1-svm1 -lif clu1-svm1
net int delete -vserver clu1-svm1 -lif clu1-svm1-iscsi1
net int delete -vserver clu1-svm1 -lif clu1-svm1-iscsi2
ifgrp delete -node clu1n1 -ifgrp a0a
ifgrp delete -node clu1n1 -ifgrp a0b
ifgrp delete -node clu1n2 -ifgrp a0a
ifgrp delete -node clu1n2 -ifgrp a0b
broadcast-domain add-ports -broadcast-domain public_routed -ports clu1n1:e0c,clu1n1:e0e,clu1n2:e0c,clu1n2:e0e
broadcast-domain add-ports -broadcast-domain private_dc -ports clu1n1:e0d,clu1n1:e0f,clu1n2:e0d,clu1n2:e0f
net int create -vserver clu1-svm1 -lif clu1-svm1 -role data -data-protocol cifs,nfs -address 10.0.1.105 -netmask 255.255.252.0 -home-node clu1n1 -home-port e0c
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu1n1 -home-port e0d -address 10.0.1.121 -netmask 255.255.252.0
net int create -vserver clu1-svm1 -lif clu1-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu1n2 -home-port e0d -address 10.0.1.122 -netmask 255.255.252.0
cifs server create -vserver clu1-svm1 -cifs-server clu1-svm1 -domain lab.priv
domain-tunnel create -vserver clu1-svm1
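Since the domain logins below depend on the tunnel, a quick read-only check that the CIFS server and domain-tunnel are up doesn't hurt here:

cifs server show -vserver clu1-svm1
security login domain-tunnel show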
security login create -vserver clu1 -user-or-group-name LAB\ocum -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\ocum -application ssh -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\ocm -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application http -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application ontapi -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageusers -application ssh -authmethod domain -role readonly
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application http -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application ontapi -authmethod domain -role admin
security login create -vserver clu1 -user-or-group-name LAB\storageadmins -application ssh -authmethod domain -role admin
security login create -vserver clu1-svm1 -user-or-group-name LAB\svc_sdw -application ontapi -authmethod domain -role vsadmin
security certificate generate-csr -common-name clu1.lab.priv -size 2048 -country MN -state Ulaanbaatar -locality Ulaanbaatar -organization LAB -unit R&D -email-addr itadmin@lab.priv -hash-function SHA256
security login banner modify -vserver clu1
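A hedged aside: the command above drops into an interactive editor; the banner can also be set non-interactively with the -message parameter, along these lines (the banner text is just an example):

security login banner modify -vserver clu1 -message "Authorized users only. All activity is monitored."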
2) Configuring clu2::>

Note: This is of course nearly identical to clu1 - just the names and IPs change.
The commands below do the following:

i) Set the CLI inactivity timeout value (so we don't get disconnected whilst doing our work)
ii) Rename the nodes per naming convention
iii) Rename the LIFs per naming convention/with their DNS name (not FQDN here)
iv) Rename the root aggregates per naming convention (in this scenario we have only SATA disks)
v) Set the timezone
vi) Configure NTP
vii) Autosupport - configure, check settings, trigger, and check history
viii) Disks - verify disk ownership, show spare disks, verify disk options (in this scenario we don't have dedicated stacks, hence the auto-assign on shelf)
ix) Create data aggregates
x) Disable the aggregate snapshot schedule and delete aggregate snapshots
xi) Rename the root volumes per naming convention, and set them to 75% of the root aggregate size
xii) Configure broadcast domains and any ifgrps
Note 1: In this scenario we have two ifgrps per node - one for routed traffic (public), one for non-routed traffic (datacenter).
Note 2: Since this is a SIM, I actually destroy the ifgrps later since they don't work.
Note 3: e0h is our substitute for e0M, which will only be used for the SP here (there is an iLO-only network).
Note 4: Network ports are access ports - no VLANs required here.
Note 5: Ideally the mgmt broadcast domain would have more ports - there's no failover for node mgmt here!
xiii) Set flowcontrol to off
xiv) A little bit of tuning (not for SSD)
xv) Verify licensing
xvi) Create a basic multi-protocol SVM
Note 1: There's just one LIF here for general NAS and SVM mgmt traffic. Ideally, we'd have a NAS LIF per node, and a separate mgmt LIF.
Note 2: SAN LIFs don't fail over, so they must exist on each node in the HA-pair for host MPIO path failover to work.
Note 3: We also do CIFS and domain-tunnel for AD authentication to the cluster.
xvii) Create various security logins
xviii) Generate CSRs
xix) MOTD
Clustershell commands::>
system timeout modify -timeout 0
node rename -node clu2-01 -newname clu2n1
node rename -node clu2-02 -newname clu2n2
net int rename -vserver Cluster -lif clu2-01_clus1 -newname clu2n1-clus1
net int rename -vserver Cluster -lif clu2-01_clus2 -newname clu2n1-clus2
net int rename -vserver Cluster -lif clu2-02_clus1 -newname clu2n2-clus1
net int rename -vserver Cluster -lif clu2-02_clus2 -newname clu2n2-clus2
net int rename -vserver clu2 -lif cluster_mgmt -newname clu2
net int rename -vserver clu2 -lif clu2-01_mgmt1 -newname clu2n1
net int rename -vserver clu2 -lif clu2-02_mgmt1 -newname clu2n2
aggr rename -aggregate aggr0 -newname clu2n1_sata_aggr_root
aggr rename -aggregate aggr0_clu2_02_0 -newname clu2n2_sata_aggr_root
timezone -timezone Asia/Ulan_Bator
cluster time-service ntp server create -server 10.0.1.10
system autosupport modify -node * -state enable -transport HTTP -proxy-url 10.0.0.1 -mail-hosts 10.0.1.25,10.0.2.25 -from clu2@lab.priv -noteto itadmin@lab.priv
system autosupport show -instance
system autosupport invoke -node * -type all -message WEEKLY_LOG
system autosupport history show -node clu2n1
system autosupport history show -node clu2n2
rows 0
disk show -owner clu2n1
disk show -owner clu2n2
disk show -owner clu2n1 -container-type spare
disk show -owner clu2n2 -container-type spare
disk option show
disk option modify -node * -autoassign-policy shelf
storage aggregate show
storage aggregate create -aggregate clu2n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n1 -simulate true
storage aggregate create -aggregate clu2n1_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n1
storage aggregate create -aggregate clu2n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n2 -simulate true
storage aggregate create -aggregate clu2n2_sata_aggr1 -diskcount 14 -maxraidsize 14 -node clu2n2
node run -node * snap sched -A
node run -node clu2n1 snap sched -A clu2n1_sata_aggr1 0 0 0
node run -node clu2n1 snap delete -A -a -f clu2n1_sata_aggr1
node run -node clu2n2 snap sched -A clu2n2_sata_aggr1 0 0 0
node run -node clu2n2 snap delete -A -a -f clu2n2_sata_aggr1
vol show
vol rename -vserver clu2n1 -volume vol0 -new clu2n1_vol0
vol rename -vserver clu2n2 -volume vol0 -new clu2n2_vol0
set -units MB
aggr show *root -fields size
vol size -vserver clu2n1 -volume clu2n1_vol0 -new-size 8100m
vol size -vserver clu2n2 -volume clu2n2_vol0 -new-size 8100m
net int revert *
net int show
net port show
broadcast-domain show
broadcast-domain split -broadcast-domain Default -new-broadcast-domain mgmt -ports clu2n1:e0g,clu2n2:e0g
broadcast-domain split -broadcast-domain Default -new-broadcast-domain ilo -ports clu2n1:e0h,clu2n2:e0h
broadcast-domain remove-ports -broadcast-domain Default -ports clu2n1:e0c,clu2n1:e0d,clu2n1:e0e,clu2n1:e0f,clu2n2:e0c,clu2n2:e0d,clu2n2:e0e,clu2n2:e0f
ifgrp create -node clu2n1 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n2 -ifgrp a0a -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n1 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp create -node clu2n2 -ifgrp a0b -mode multimode_lacp -distr-func ip
ifgrp add-port -node clu2n1 -ifgrp a0a -port e0c
ifgrp add-port -node clu2n1 -ifgrp a0a -port e0e
ifgrp add-port -node clu2n1 -ifgrp a0b -port e0d
ifgrp add-port -node clu2n1 -ifgrp a0b -port e0f
ifgrp add-port -node clu2n2 -ifgrp a0a -port e0c
ifgrp add-port -node clu2n2 -ifgrp a0a -port e0e
ifgrp add-port -node clu2n2 -ifgrp a0b -port e0d
ifgrp add-port -node clu2n2 -ifgrp a0b -port e0f
broadcast-domain create -broadcast-domain public_routed -ipspace Default -mtu 1500 -ports clu2n1:a0a,clu2n2:a0a
broadcast-domain create -broadcast-domain private_dc -ipspace Default -mtu 1500 -ports clu2n1:a0b,clu2n2:a0b
net port show -fields flowcontrol-admin
net int show -role cluster-mgmt -fields home-node,home-port
net int migrate -lif clu2 -destination-node clu2n2 -destination-port e0g -vserver clu2
set -confirmations off
net port modify -node clu2n1 -port !a* -flowcontrol-admin none
net int revert *
set -confirmations off
net port modify -node clu2n2 -port !a* -flowcontrol-admin none
net int revert *
net port show -fields flowcontrol-admin
aggr modify {-has-mroot false} -free-space-realloc on
node run -node * options wafl.optimize_write_once off
license show
vserver create -vserver clu2-svm1 -aggregate clu2n1_sata_aggr1 -rootvolume rootvol -rootvolume-security-style UNIX
net int create -vserver clu2-svm1 -lif clu2-svm1 -role data -data-protocol cifs,nfs -address 10.0.2.105 -netmask 255.255.252.0 -home-node clu2n1 -home-port a0a
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu2n1 -home-port a0b -address 10.0.2.121 -netmask 255.255.252.0
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu2n2 -home-port a0b -address 10.0.2.122 -netmask 255.255.252.0
vserver services dns create -vserver clu2-svm1 -domains lab.priv -name-servers 10.0.2.10,10.0.1.10
cifs server create -vserver clu2-svm1 -cifs-server clu2-svm1 -domain lab.priv
domain-tunnel create -vserver clu2-svm1
Note: Because ifgrps (singlemode, multimode, and definitely not multimode_lacp) do not work at all in the SIM, the following deconstruction and reconstruction allows the cifs server create to work.
net int delete -vserver clu2-svm1 -lif clu2-svm1
net int delete -vserver clu2-svm1 -lif clu2-svm1-iscsi1
net int delete -vserver clu2-svm1 -lif clu2-svm1-iscsi2
ifgrp delete -node clu2n1 -ifgrp a0a
ifgrp delete -node clu2n1 -ifgrp a0b
ifgrp delete -node clu2n2 -ifgrp a0a
ifgrp delete -node clu2n2 -ifgrp a0b
broadcast-domain add-ports -broadcast-domain public_routed -ports clu2n1:e0c,clu2n1:e0e,clu2n2:e0c,clu2n2:e0e
broadcast-domain add-ports -broadcast-domain private_dc -ports clu2n1:e0d,clu2n1:e0f,clu2n2:e0d,clu2n2:e0f
net int create -vserver clu2-svm1 -lif clu2-svm1 -role data -data-protocol cifs,nfs -address 10.0.2.105 -netmask 255.255.252.0 -home-node clu2n1 -home-port e0c
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi1 -role data -data-protocol iscsi -home-node clu2n1 -home-port e0d -address 10.0.2.121 -netmask 255.255.252.0
net int create -vserver clu2-svm1 -lif clu2-svm1-iscsi2 -role data -data-protocol iscsi -home-node clu2n2 -home-port e0d -address 10.0.2.122 -netmask 255.255.252.0
cifs server create -vserver clu2-svm1 -cifs-server clu2-svm1 -domain lab.priv
domain-tunnel create -vserver clu2-svm1
security login create -vserver clu2 -user-or-group-name LAB\ocum -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\ocum -application ssh -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\ocm -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application http -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application ontapi -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageusers -application ssh -authmethod domain -role readonly
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application http -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application ontapi -authmethod domain -role admin
security login create -vserver clu2 -user-or-group-name LAB\storageadmins -application ssh -authmethod domain -role admin
security login create -vserver clu2-svm1 -user-or-group-name LAB\svc_sdw -application ontapi -authmethod domain -role vsadmin
security certificate generate-csr -common-name clu2.lab.priv -size 2048 -country MN -state Ulaanbaatar -locality Ulaanbaatar -organization LAB -unit R&D -email-addr itadmin@lab.priv -hash-function SHA256
security login banner modify -vserver clu2
3) Setting up Cluster and Vserver Peering

Note: Not using the destroyed ifgrps here.

clu1::>
net int create -vserver clu1 -lif clu1-icl1 -role intercluster -home-node clu1n1 -home-port e0e -address 10.0.1.111 -netmask 255.255.252.0
net int create -vserver clu1 -lif clu1-icl2 -role intercluster -home-node clu1n2 -home-port e0e -address 10.0.1.112 -netmask 255.255.252.0

clu2::>
net int create -vserver clu2 -lif clu2-icl1 -role intercluster -home-node clu2n1 -home-port e0e -address 10.0.2.111 -netmask 255.255.252.0
net int create -vserver clu2 -lif clu2-icl2 -role intercluster -home-node clu2n2 -home-port e0e -address 10.0.2.112 -netmask 255.255.252.0

clu1::>
cluster peer create -peer-addrs 10.0.2.111 -address-family ipv4

clu2::>
cluster peer create -peer-addrs 10.0.1.111 -address-family ipv4
cluster peer show

clu1::>
vserver peer create -vserver clu1-svm1 -peer-cluster clu2 -peer-vserver clu2-svm1 -applications snapmirror

clu2::>
vserver peer accept -vserver clu2-svm1 -peer-vserver clu1-svm1
vserver peer show
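With peering in place, a quick way to exercise it is a throwaway SnapMirror relationship - a minimal sketch, assuming a small source volume called testvol already exists on clu1-svm1 (the volume names here are hypothetical):

clu2::>
vol create -vserver clu2-svm1 -volume testvol_dr -aggregate clu2n1_sata_aggr1 -size 1g -type DP
snapmirror create -source-path clu1-svm1:testvol -destination-path clu2-svm1:testvol_dr -type DP
snapmirror initialize -destination-path clu2-svm1:testvol_dr
snapmirror show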
4) Returning to do the SSL Cert

clu1::>
security certificate install -type server -vserver clu1
ssl show
ssl modify -vserver clu1 -common-name clu1.lab.priv -ca ??? -serial ??? -server-enabled true -client-enabled false
security certificate delete [DELETE THE OLD CLUSTER CERT]

clu2::>
security certificate install -type server -vserver clu2
ssl show
ssl modify -vserver clu2 -common-name clu2.lab.priv -ca ??? -serial ??? -server-enabled true -client-enabled false
security certificate delete [DELETE THE OLD CLUSTER CERT]
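A hedged tip on filling in the ??? values: the CA and serial of the newly installed certificate can be read back with security certificate show before running ssl modify (field names here are an assumption based on the show output):

security certificate show -vserver clu1 -common-name clu1.lab.priv -instance
security certificate show -vserver clu1 -fields ca,serial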
5) Final Thought

I might make a few tweaks to the plan...
Comments

AC: Hey man, can I just ask why the tuning isn't required for SSD? The NetApp PS guys told me to do this for magnetic and hybrid - what's changed for SSD? I'm referring to these commands:
aggr modify {-has-mroot false} -free-space-realloc on
node run -node * options wafl.optimize_write_once off

Reply: Hi AC, yes, those commands are absolutely pointless for SSD. I assume it's because the blocks on the SSDs don't have to be contiguous. If you look at this SPC benchmark report, they bother to configure the latter command on an AFF 8080. Doh.

Reply: Missed link: http://spcresults.org/sites/default/files/results/A00154/a00154_NetApp_FAS8080-EX_AFF_SPC-1_full-disclosure-report.pdf

Reply: In fairness, that's an old document. The free-space-realloc optimizes the layout for spinning disk, and has no benefit for SSD. The wafl.optimize command stops the gradual performance degradation caused when data is initially written to the faster outer edge of a spinning disk, and performance progressively slows as new data has to be written closer to the spindle.