ACME Guide to 4a-ing a Factory Fresh NetApp System

 

There are circumstances when you might want to 4a (clean configuration and initialize all disks) a brand-new, out-of-the-factory NetApp FAS Clustered Data ONTAP/7-Mode system. This post is not going to go into the circumstances, just the how-to.
                          
Note 1: Sample output from one head is contained in Appendix B below
Note 2: ‘To 4a a system’ comes from pre-Data ONTAP 8 days when there used to be option (4) ‘Clean configuration’ and option (4a) ‘Clean configuration and initialize all disks’ in the boot menu.

Walkthrough

1. Boot both controllers
2. Press Ctrl-C for Boot Menu when prompted
3. Select option 5 ‘Maintenance mode boot’
4. Answer ‘y’ to ‘Continue with boot?’
5. Run the following command to get the system ID:
disk show -n
6. Run the following command to find the existing root aggregate disks:
aggr status -r
7. Offline and destroy aggregates as required, except the partner’s root aggregate (see the worked example after this list):
aggr offline aggrname
aggr destroy aggrname
8. Unassign the node’s disks, using the local system ID from step 5:
disk remove_ownership -s 1234567890
9. On the partner node, repeat steps 2 to 8
10. Run the following command to check what disk ownership remains:
disk show -a
11. Remove ownership of any disks assigned to foreign controllers (here 2345678901 is the foreign controller’s system ID):
disk remove_ownership -s 2345678901
12. Reassign the original three root aggregate disks so these disks will be zeroed in the wipe filer procedure (the disk names below are examples; use the disks found in step 6):
disk assign 0a.00.1
disk assign 0a.00.2
disk assign 0a.00.3
13. Halt the controller with:
halt
14. From the LOADER prompt run:
boot_ontap
15. Press Ctrl-C for Boot Menu when prompted
16. Select option 4 ‘Clean configuration and initialize all disks’
17. Answer ‘y’ to ‘Zero disks, reset config and install a new file system?’
18. Answer ‘y’ to ‘This will erase all the data on the disks, are you sure?’
19. On the partner node, repeat steps 12 to 18
20. The End!
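
Worked example: on a factory-fresh system there are typically no data aggregates, so step 7 can be a no-op (as in Appendix B, where only the root aggregate aggr0 exists). If the node did have a data aggregate, say one named aggr1 (a hypothetical name for illustration), step 7 would be:

aggr offline aggr1
aggr destroy aggr1

And steps 8 and 12, using the example values from Appendix B (local system ID 1234567890; root aggregate disks 5b.18.0, 5b.19.0 and 5d.04.0), would be:

disk remove_ownership -s 1234567890
disk assign 5b.18.0
disk assign 5b.19.0
disk assign 5d.04.0

Substitute the system ID and disk names found in steps 5 and 6 on your own system.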

Appendix A: Disk Zeroing Times

Image: NetApp Disk Zeroing Times for SSD/FC/SAS/SATA Disks

Appendix B: Example output from one head

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 5

You have selected the maintenance boot option:
The system has booted in maintenance mode allowing the following operations to be performed:

?                       disk
key_manager             fcadmin
fcstat                  sasadmin
sasstat                 acpadmin
halt                    help
ifconfig                raid_config
storage                 sesdiag
sysconfig               vmservices
version                 vol
aggr                    sldiag
dumpblock               environment
systemshell             vol_db
led_on                  led_off
sata                    acorn
stsb                    scsi
nv8                     disk_list
ha-config               fctest
disktest                diskcopy
vsa                     xortest
disk_mung

Type "help " for more details.

In a High Availability configuration, you MUST ensure that the partner node is (and remains) down, or that takeover is manually disabled on the partner node, because High Availability software is not started or fully enabled in Maintenance mode.
FAILURE TO DO SO CAN RESULT IN YOUR FILESYSTEMS BEING DESTROYED
NOTE: It is okay to use 'show/status' sub-commands such as 'disk show' or 'aggr status' in Maintenance mode while the partner is up

Continue with boot? y

*> disk show -n
Local System ID: 1234567890

*> aggr status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   5b.18.0         5b    18  0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688
      parity    5b.19.0         5b    19  0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688
      data      5d.04.0         5d    4   0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688

*> disk remove_ownership -s 1234567890

All disks owned by the system with ID 1234567890 will have their ownership information removed. The system with ID 1234567890 must not be running !!!

Do you want to continue? y

Volumes must be taken offline. Are all impacted volumes offline? y

*> disk show -a
*> disk assign 5b.18.0
*> disk assign 5b.19.0
*> disk assign 5d.04.0
*> disk show
*> halt

LOADER-A> boot_ontap

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 4

Zero disks, reset config and install a new file system? y
This will erase all the data on the disks, are you sure? y

Rebooting to finish wipeconfig request.
System rebooting...

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Wipe filer procedure requested.
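
From here the node zeroes all its disks before laying down the new file system (this can take hours depending on disk type and size; see Appendix A for indicative zeroing times) and then boots up ready for initial setup.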
