Scenario:

You have a NetApp AFF with 36 disks (24 in shelf 0, 12 in shelf 1). The system has been partitioned with ADPv2. Currently each node has 18 disks assigned. Each node has 18 P3 partitions for root and 18 P1 & P2 partitions for data (not a completely standard configuration; often you'll see node 1 with the P1s and node 2 with the P2s, but it's fine).
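Tip: You can see how the P1/P2/P3 partitions are currently owned with the partition-ownership view of disk show (a standard clustershell command; the output will of course reflect your own disk and node names):

disk show -partition-ownership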
Existing aggregate layout:
Node 1: Root Aggr: rg0 of 16 * P3 partitions
Node 1: Data Aggr: rg0 of 17 * P1 partitions & rg1 of 17 * P2 partitions
Node 2: Root Aggr: rg0 of 16 * P3 partitions
Node 2: Data Aggr: rg0 of 17 * P1 partitions & rg1 of 17 * P2 partitions
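Tip: You can confirm this layout from the clustershell with aggr show-status, which lists each raid group and the partitions in it (aggr1 is Node 1's data aggregate in this example; substitute your own aggregate names):

aggr show-status -aggregate aggr1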
You've bought 12 new disks (of the same size as the originals) and want to assign 6 disks to Node 1 and 6 disks to Node 2, then simply expand the existing raid groups, so your aggregate layout will be:
New aggregate layout:
Node 1: Root Aggr: rg0 of 16 * P3 partitions
Node 1: Data Aggr: rg0 of 23 * P1 partitions & rg1 of 23 * P2 partitions
Node 2: Root Aggr: rg0 of 16 * P3 partitions
Node 2: Data Aggr: rg0 of 23 * P1 partitions & rg1 of 23 * P2 partitions
Image: Existing Aggregate Partitions Layout and New Aggregate Layout
Note: As per hwu.netapp.com, 48 partitioned disks is the maximum you can go to; beyond that you have to start using full (unpartitioned) disks.
How to do it?
Note 1: We use the clustershell to achieve our objective.
Note 2: I'm using a vSIM here, so disk names will look different to what's seen in reality.
1) Disable disk auto assign:

disk option show
disk option modify -node * -autoassign off
disk option show
2) Physically insert the new disks.
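Tip: At this point you can check that the new disks are visible and still unassigned (they should be, with auto assign off), using the standard container-type filter on disk show:

disk show -container-type unassigned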
3) Assign 6 disks to Node 1:

disk show
disk assign -node CLU01-01 -disklist VMw-1.1,VMw-1.2,VMw-1.3,VMw-1.4,VMw-1.5,VMw-1.6
disk show
4) Make sure your data aggr maxraidsize is sufficient, since it needs to be at least 23 here (each data raid group grows from 17 to 23 partitions):

aggr show -aggregate aggr1 -fields maxraidsize
aggr modify -aggregate aggr1 -maxraidsize 24
aggr show -aggregate aggr1 -fields maxraidsize
5) Add the 6 newly assigned (spare but unpartitioned) disks to your aggregate and see that they get partitioned!

Note: Key things in the output below: A) the disks are being added to the existing raid groups rg0 & rg1; B) it actually says "The following disks would be partitioned".
::> aggr add-disks -aggregate aggr1 -disklist VMw-1.1,VMw-1.2,VMw-1.3,VMw-1.4,VMw-1.5,VMw-1.6 -simulate

Disks would be added to aggregate "aggr1" on node "CLU01-01" in the following manner:

First Plex

  RAID Group rg0, 6 disks (block checksum, raid_dp)
                                                     Usable Physical
    Position   Disk                      Type           Size     Size
    ---------- ------------------------- ---------- -------- --------
    shared     VMw-1.1                   SSD         12.39GB  12.42GB
    shared     VMw-1.2                   SSD         12.39GB  12.42GB
    shared     VMw-1.3                   SSD         12.39GB  12.42GB
    shared     VMw-1.4                   SSD         12.39GB  12.42GB
    shared     VMw-1.5                   SSD         12.39GB  12.42GB
    shared     VMw-1.6                   SSD         12.39GB  12.42GB

  RAID Group rg1, 6 disks (block checksum, raid_dp)
                                                     Usable Physical
    Position   Disk                      Type           Size     Size
    ---------- ------------------------- ---------- -------- --------
    shared     VMw-1.1                   SSD         12.39GB  12.42GB
    shared     VMw-1.2                   SSD         12.39GB  12.42GB
    shared     VMw-1.3                   SSD         12.39GB  12.42GB
    shared     VMw-1.4                   SSD         12.39GB  12.42GB
    shared     VMw-1.5                   SSD         12.39GB  12.42GB
    shared     VMw-1.6                   SSD         12.39GB  12.42GB

Aggregate capacity available for volume use would be increased by 133.8GB.

The following disks would be partitioned: VMw-1.1, VMw-1.2, VMw-1.3, VMw-1.4, VMw-1.5, VMw-1.6.

::> aggr add-disks -aggregate aggr1 -disklist VMw-1.1,VMw-1.2,VMw-1.3,VMw-1.4,VMw-1.5,VMw-1.6
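Tip: To verify the expansion, check that rg0 and rg1 now contain 23 partitions each and that the aggregate has grown (standard show commands, using this example's aggregate name):

aggr show-status -aggregate aggr1
aggr show -aggregate aggr1 -fields size,availsize,maxraidsize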
6) Repeat steps 3 to 5 for Node 2.
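For reference, the Node 2 run is the same set of commands with Node 2's names. Purely as a sketch, assuming the partner node is CLU01-02, its data aggregate is aggr2, and the remaining new disks are VMw-1.7 to VMw-1.12 (all assumptions; substitute your real node, aggregate, and disk names):

disk assign -node CLU01-02 -disklist VMw-1.7,VMw-1.8,VMw-1.9,VMw-1.10,VMw-1.11,VMw-1.12
aggr show -aggregate aggr2 -fields maxraidsize
aggr modify -aggregate aggr2 -maxraidsize 24
aggr add-disks -aggregate aggr2 -disklist VMw-1.7,VMw-1.8,VMw-1.9,VMw-1.10,VMw-1.11,VMw-1.12 -simulate
aggr add-disks -aggregate aggr2 -disklist VMw-1.7,VMw-1.8,VMw-1.9,VMw-1.10,VMw-1.11,VMw-1.12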
7) Finally, re-enable disk auto assign:

disk option modify -node * -autoassign on
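And optionally confirm it's back on:

disk option show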
THE END!