Setting up a 2-Node cDOT 8.3.1 Simulator Cluster: Part 3 (Beyond part 2 - iSCSI SVM)

Continuing from this post here ...

The following Clustershell commands were used to construct a first SVM that serves only iSCSI. Additionally, the SVM is used as an Active Directory Domain Tunnel (to allow AD authentication to the Cluster and SVM), and a DMZ Broadcast Domain has been implemented (for VMs that might sit in a DMZ and need to use SnapDrive, for instance). This is just a SIM lab example though, and could be described as an advanced setup - it is not meant as a guide for how to do things in the real world.

Note: For testing, vsadmin has been allowed SSH login, and a Cluster Management LIF has been created in the DMZ.

i: The Current Network Setup

The SIMs in the 2-node cluster have 7 vNICs each. The current mapping of ports to IPspace and Broadcast Domain, along with the LIFs, is illustrated below.

Note: In the images below, a square represents a physical port, circles are LIFs (logical interfaces).

Image 1: IPspace = Cluster, Broadcast Domain = Cluster; and LIFs
Image 2: IPspace = Default, Broadcast Domain = Default; and LIFs

ii: The New Network Setup

No changes are made to the Cluster broadcast-domain. We split the Default broadcast-domain into Default, ISCSI-1, ISCSI-2, and DMZ.

Image 3: IPspace = Default, Broadcast Domains = Default, ISCSI-1, ISCSI-2, DMZ
Image 4: LIFs on the broadcast-domains Default, ISCSI-1, ISCSI-2, DMZ

iii: The Clustershell Commands

## Data Aggregates ##

# Disable CLI output pagination
rows 0
sto disk assign -all true -node CLU1N1
sto disk assign -all true -node CLU1N2
sto disk show -owner CLU1N1 -container-type spare -fields disk,usable-size
sto disk show -owner CLU1N1 -container-type spare -fields disk -usable-size 3.93g

# 42 disks spare per node (3 * 14 disk virtual shelves)
# Create a 12 disk RAID-DP on nodes 1 & 2

sto aggr create N1_aggr1 -diskcount 12 -maxraidsize 12 -node CLU1N1 -simulate
sto aggr create N1_aggr1 -diskcount 12 -maxraidsize 12 -node CLU1N1
sto aggr create N2_aggr1 -diskcount 12 -maxraidsize 12 -node CLU1N2 -simulate
sto aggr create N2_aggr1 -diskcount 12 -maxraidsize 12 -node CLU1N2
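A quick sanity check after the aggregates are created - a sketch, the field list is just one reasonable selection:

```
sto aggr show -fields node,size,availsize,raidtype
```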

## Creating the iSCSI SVM ##

vserver create -vserver SVM1 -aggregate N1_aggr1 -rootvolume SVM1_root -rootvolume-security-style unix
vserver add-aggregates -vserver SVM1 -aggregates N1_aggr1,N2_aggr1
vserver show-protocols -vserver SVM1
vserver remove-protocols -vserver SVM1 -protocols nfs,cifs,fcp,ndmp
vserver show-protocols -vserver SVM1
iscsi create -target-alias SVM1 -status-admin up -vserver SVM1
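To confirm the iSCSI service is up and note the target name (IQN) for later host-side configuration:

```
iscsi show -vserver SVM1
```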

## iSCSI LIFs Setup ##

broadcast-domain split -broadcast-domain Default -ipspace Default -new-broadcast-domain ISCSI-1 -ports CLU1N1:e0e,CLU1N2:e0e
broadcast-domain split -broadcast-domain Default -ipspace Default -new-broadcast-domain ISCSI-2 -ports CLU1N1:e0f,CLU1N2:e0f
net int create -vserver SVM1 -lif svm1-iscsi-n1-e0e-1 -role data -data-protocol iscsi -home-node CLU1N1 -home-port e0e -address 10.10.101.101 -netmask 255.255.255.0
net int create -vserver SVM1 -lif svm1-iscsi-n1-e0f-2 -role data -data-protocol iscsi -home-node CLU1N1 -home-port e0f -address 10.10.102.101 -netmask 255.255.255.0
net int create -vserver SVM1 -lif svm1-iscsi-n2-e0e-1 -role data -data-protocol iscsi -home-node CLU1N2 -home-port e0e -address 10.10.101.102 -netmask 255.255.255.0
net int create -vserver SVM1 -lif svm1-iscsi-n2-e0f-2 -role data -data-protocol iscsi -home-node CLU1N2 -home-port e0f -address 10.10.102.102 -netmask 255.255.255.0
portset create -vserver SVM1 -portset svm1-iscsi -protocol iscsi -port-name svm1-iscsi-n1-e0e-1,svm1-iscsi-n1-e0f-2,svm1-iscsi-n2-e0e-1,svm1-iscsi-n2-e0f-2
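The portset only takes effect once it is bound to an igroup, so for completeness here is a hedged sketch of mapping a first LUN through it. The volume name, igroup name, initiator IQN, sizes, and ostype are all hypothetical - substitute your own host's values:

```
vol create -vserver SVM1 -volume svm1_vol1 -aggregate N1_aggr1 -size 20g
lun igroup create -vserver SVM1 -igroup svm1-host1 -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:host1.lab.priv
lun igroup bind -vserver SVM1 -igroup svm1-host1 -portset svm1-iscsi
lun create -vserver SVM1 -path /vol/svm1_vol1/lun1 -size 10g -ostype windows
lun map -vserver SVM1 -path /vol/svm1_vol1/lun1 -igroup svm1-host1
```

With the portset bound, the host only sees the LUN over the four iSCSI LIFs in svm1-iscsi, keeping paths off the management network.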

## AD Domain Tunnel Setup ##

cluster time-service ntp create -server 10.10.10.10
net int create -vserver SVM1 -lif svm1-adds -role data -data-protocol none -firewall-policy mgmt -home-node CLU1N1 -home-port e0c -address 10.10.10.103 -netmask 255.255.255.0
vserver services dns create -domains lab.priv -name-servers 10.10.10.10,10.10.10.9 -vserver SVM1
vserver active-directory create -account-name SVM1 -domain lab.priv -vserver SVM1
security login domain-tunnel create -vserver SVM1
security login create -user-or-group-name LAB\svm1_vsadmin -vserver SVM1 -application ontapi -authmethod domain -role vsadmin
security login create -user-or-group-name LAB\svm1_vsadmin -vserver SVM1 -application ssh -authmethod domain -role vsadmin
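To verify the tunnel and the AD login entries before testing an actual SSH login with the domain account:

```
security login domain-tunnel show
security login show -vserver SVM1
```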

## DMZ Setup ##

broadcast-domain split -broadcast-domain Default -ipspace Default -new-broadcast-domain DMZ -ports CLU1N1:e0g,CLU1N2:e0g
net int create -vserver SVM1 -lif svm1-dmz -role data -data-protocol none -firewall-policy mgmt -home-node CLU1N2 -home-port e0g -address 10.10.20.101 -netmask 255.255.255.0
net int create -vserver CLU1 -lif cluster_mgmt_dmz -role cluster-mgmt -home-node CLU1N2 -home-port e0g -address 10.10.20.100 -netmask 255.255.255.0
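At this point the new layout can be checked against Images 3 and 4 - the first command lists port membership per broadcast-domain, the second the SVM's LIFs:

```
broadcast-domain show -ipspace Default
net int show -vserver SVM1
```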

## Intercluster LIFs (for later SnapMirror) ##

net int create -lif N1-icl -vserver CLU1 -role intercluster -home-node CLU1N1 -home-port e0d -address 10.10.10.111 -netmask 255.255.255.0
net int create -lif N2-icl -vserver CLU1 -role intercluster -home-node CLU1N2 -home-port e0d -address 10.10.10.112 -netmask 255.255.255.0
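For the later SnapMirror post, the intercluster LIFs get used by cluster peering. A hedged sketch only - the peer addresses below are hypothetical intercluster LIF IPs on an assumed second cluster, and the create command prompts for a passphrase that must match on both sides:

```
cluster peer create -peer-addrs 10.10.10.121,10.10.10.122
cluster peer show
```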

