The DS4486 is unique amongst NetApp disk shelves in having dual disk carriers - it has 24 bays of dual disk carriers, allowing 48 disks in a 6U enclosure (currently 6 to 10 TB MSATA drives are available - allowing up to 447 TiB physical in one enclosure). When a disk in a dual disk carrier fails, the other disk in that carrier - which is necessarily part of another RAID group, since you can't have two disks from the same carrier in the same RAID group - is still good, so its contents are copied (evacuated) to another disk so that the whole carrier can eventually be replaced. The disk failure isn't flagged until the evacuation is complete, and both disks in the dual disk carrier are replaced at the same time - including the disk that was good.
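To see which disks share a carrier, you can list the carrier fields on the disk objects - a sketch only; the exact field names (carrier-id, carrier-serial-number) are from memory and may vary by ONTAP version:

cluster::> storage disk show -fields carrier-id,carrier-serial-number,owner

Disks that report the same carrier serial number are the pair sharing one dual disk carrier.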
One best practice I learnt recently, and struggled to find documented anywhere except in the Syslog Translator, is that:
All disks within a multidisk carrier should belong to one owner.
If you see dual disk carriers with disk 1 assigned to, say, node 1 and disk 2 assigned to node 2, it is technically fine (it will work and it is supported), but it's not best practice. Personally, I'm always keen to get disk autoassign working where possible, and disk autoassign will not work with disk 1 owned by node 1 and disk 2 owned by node 2. Also, you can't actually assign disks within a multidisk carrier to different owners without forcing it:
cluster::> disk assign -disk 188.8.131.52 -owner cluster-01
cluster::> disk assign -disk 184.108.40.206 -owner cluster-02
Error: command failed: Failed to assign disks. Reason: Unable to assign disk 220.127.116.11. Another disk enclosed in the same disk carrier is assigned to another node or is in a failed state. All disks in one disk carrier should be assigned to the same node. Override is not recommended but is possible via the -force option.
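The supported approach is simply to give both disks in the carrier the same owner, in which case no force is needed - the disk names below are hypothetical examples, not from a real system:

cluster::> disk assign -disk 1.11.0 -owner cluster-01
cluster::> disk assign -disk 1.11.12 -owner cluster-01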
Image: DS4486 Dual Disk Carrier
Some Other Things of Note:
1) You can only have a maximum of 5 x DS4486 in a stack. The limitation is actually the 240 disks per stack (5 x 48 = 240), so you could, for example, have 1 x DS4486 and 8 x DS4246 in a stack (48 + 8 x 24 = 240 disks).
2) The recommended minimum is 4 spares per node that has DS4486 disks (since when one disk fails, it effectively takes two disks out of action).
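You can check the spare count per node with the standard spare-disk listing command (the node name below is a placeholder):

cluster::> storage aggregate show-spare-disks -original-owner cluster-01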