We’ve run through the setup and have our 2-node cluster (CLUSA) up and running. What elements make up this 2-node cluster?
The Elements
2 Cluster Nodes:
CLUSA-01, CLUSA-02
3 Vservers:
CLUSA, CLUSA-01, CLUSA-02
7 LIFs (Logical Interfaces):
- cluster_mgmt on CLUSA (e0c on either node CLUSA-01 or CLUSA-02)
- clus1, clus2, mgmt1 on CLUSA-01 (ports e0a, e0b, e0f)
- clus1, clus2, mgmt1 on CLUSA-02 (ports e0a, e0b, e0f)
1 Failover Group:
clusterwide (containing 6 ports: e0c, e0d, e0e on CLUSA-01 & e0c, e0d, e0e on CLUSA-02)
2 Aggregates:
aggr0 on node CLUSA-01 (900MB, 96% used, raid_dp)
aggr0_CLUSA_02_0 on node CLUSA-02 (900MB, 96% used, raid_dp)
112 Disks (after adding the optional SIM shelves):
- 4 shelves of 14 disks (56 disks) assigned to each node
- 3 disks per node are contained in an aggr0 which holds the root volume
- 53 spare disks per node, of which there are:
  - 25 spare 1GB disks per node, and
  - 28 spare 4GB disks per node
2 Volumes:
vol0 belonging to Vserver CLUSA-01 (851.5MB, 25% used)
vol0 belonging to Vserver CLUSA-02 (851.5MB, 25% used)
0 LUNs
0 igroups
A good grasp of the elements that make up a cluster helps make the whole thing relatively easy to understand.
How to Find the Information?
i. Use OnCommand System Manager and click on Advanced, which brings up the Data ONTAP Element Manager, or simply point your web browser at:
http://MANAGEMENT_IP_ADDRESS_OF_CLUSTER or
http://MANAGEMENT_IP_ADDRESS_OF_A_CLUSTER_NODE
Image: Cluster Element Manager
ii. CLI
cluster show
vserver show
network interface show
network interface failover-groups show
aggr show
storage disk show
storage disk show -aggr aggr*
storage disk show -container-type spare
storage disk show -physical-size 3.93GB
volume show
df -h
lun show
lun igroup show
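Tip: most of these show commands also accept a -fields parameter to trim the output down to just the columns you care about. A couple of illustrative variations (the field lists here are my own choice, not from the lab guide):

network interface show -fields home-node,home-port,address
volume show -fields volume,vserver,size,percent-used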
Three Re-Configurations
i. Removing e0d and e0e from the clusterwide failover group (used by the cluster management IP):
network interface failover-groups delete -failover-group clusterwide -node CLUSA-01 -port e0d,e0e
network interface failover-groups delete -failover-group clusterwide -node CLUSA-02 -port e0d,e0e
network interface migrate -vserver CLUSA -lif cluster_mgmt -dest-node CLUSA-02 -dest-port e0c
The lines above remove ports e0d and e0e from the failover group clusterwide, so that the cluster management IP can exist only on e0c of CLUSA-01 or CLUSA-02, and then migrate the LIF cluster_mgmt to CLUSA-02’s port e0c.
Note: When the node homing the cluster_mgmt LIF is rebooted, the LIF fails over to a port on the other node.
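To verify the change, and to send the LIF back to its home port after a failover, something along these lines should do (the -fields list is just one convenient view):

network interface failover-groups show -failover-group clusterwide
network interface show -vserver CLUSA -lif cluster_mgmt -fields home-node,home-port,curr-node,curr-port
network interface revert -vserver CLUSA -lif cluster_mgmt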
ii. Renaming the aggregates containing the root volumes to have a matching naming convention:
storage aggregate rename aggr0 aggr_CLUSA1_00
storage aggregate rename aggr0_CLUSA_02_0 aggr_CLUSA2_00
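A quick check that the renames took, showing just aggregate name against owning node (the -fields filter is my own choice, and field names can vary slightly between versions):

storage aggregate show -fields aggregate,node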
iii. Modifying disk options to turn autoassign off:
storage disk option show
storage disk option modify -node CLUSA* -autoassign off
Note: By default on the 8.1.2 C-Mode SIM, autoassign is turned on. In this lab all disks are already assigned anyway; it’s useful practice, though, to check that this is off.
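With autoassign off, any disks added later have to be assigned by hand. A minimal sketch, assuming a hypothetical unassigned disk named in the style the SIM uses:

storage disk show -container-type unassigned
storage disk assign -disk CLUSA-01:v5.19 -owner CLUSA-01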