I’ve not blogged any lab stuff for ages, so, since I wanted to see what wipeconfig does to my ONTAP 8.3.2P2 SIM, and it’s a little bit interesting and different, here we go...
Some outputs from my single-node SIM cluster:
CLU1::> version
NetApp Release 8.3.2P2: Mon May 23 13:45:25 UTC 2016

CLU1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
CLU1N1                true    true
The first thing we do is take some configuration backups. The set -c off; set adv below is shorthand for set -confirmations off; set -privilege advanced.

CLU1::> set -c off; set adv
CLU1::> system configuration backup create -node CLU1N1 -backup-type node -backup-name 20161128_node
CLU1::> system configuration backup create -node CLU1N1 -backup-type cluster -backup-name 20161128_cluster
CLU1::> job show -description *Backup*
Wait for the backup jobs to complete.

Note: We don’t actually use the cluster backup; it’s just taken as an extra safeguard. It is important that the cluster backup name is different from the node backup name, otherwise it will overwrite the node backup.
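Optionally, before wiping anything, you can sanity-check that both backups exist on the node, and copy the node backup off-box so a lost root volume can’t take your only copy with it. A minimal sketch (the FTP destination URL is purely illustrative, not from my lab):

CLU1::> system configuration backup show -node CLU1N1
CLU1::> system configuration backup upload -node CLU1N1 -backup 20161128_node.7z -destination ftp://user@backupserver/configs/

The upload just pushes the .7z backup file to a URL you control; you can pull it back later with system configuration backup download.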
Halt the node:

CLU1::> halt -node local -inhi -igno -skip

(The abbreviated parameters expand to -inhibit-takeover, -ignore-quorum-warnings, and -skip-lif-migration-before-shutdown.)

Boot the node (boot_ontap if you’re at the LOADER> prompt).
At the Boot Menu type “wipeconfig”:

Image: Typing “wipeconfig” at the Boot Menu
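For reference, the whole sequence from the loader looks roughly like this; wipeconfig is a hidden option typed directly at the selection prompt, and the exact menu wording varies by release, so treat this as a sketch:

LOADER> boot_ontap
...
Press Ctrl-C for Boot Menu.
...
Selection (1-8)? wipeconfig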
You will see the warning:

This will delete critical system configuration, including cluster membership.
Warning: do not run this option on a HA node that has been taken over.
Are you sure you want to continue?:
Type “y” at the warning, and the node will report:

Rebooting to finish wipeconfig request

And you should see on screen:

Wipe filer procedure requested
Abandoned in-memory /var file system
Then it will reboot again.
When it comes back up, it will have forgotten its identity (you’ll see localhost in the messages) and the login prompt appears.
The login is now admin with no password.
login: admin

CLU1::> cluster show
Error: “show” is not a recognized command
CLU1::> node show
Node      Health
--------- ------
localhost -
Q: How do we restore this single-node SIM cluster?
Type the following clustershell commands:
CLU1::> set -c off; set adv
CLU1::> system configuration backup show
CLU1::> system configuration recovery node restore -backup 20161128_node.7z
You will see a warning:
Image: Node Restore Warning
Type Y to continue.
And the node will reboot back to the login prompt, with its identity restored!
CLU1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
CLU1N1                true    true
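For a little extra reassurance that the cluster databases came back healthy, the advanced-privilege cluster ring show is worth a glance; a minimal sketch, assuming you’re still at advanced privilege from the set adv earlier:

CLU1::> cluster ring show

You’d expect the single node to show as master of each replication ring (mgmt, vldb, vifmgr, and so on).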
Was there some purpose to this?
Yes! The purpose was reusing an HA pair that had previously been ARL-ed out of another cluster, and making sure it was clean and ready for a disruptive headswap into the DR cluster. The wipeconfig worked fine. For a physical system, after running wipeconfig you will see:
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

The boot device has changed. System configuration information
could be lost. Use option (6) to restore the system configuration, or
option (4) to initialize all disks and setup a new system.

Normal Boot is prohibited.
It’s very good that Normal Boot is prohibited here, since with the ARL and cDOT disruptive headswap, it is essential to run option 6 “Update flash from backup config” after the disks have been reassigned and before fully booting up ONTAP for the first time.
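On the physical system, that step is just a boot menu selection once the disks have been reassigned; a sketch (the selection prompt wording is from memory and may differ by release):

Selection (1-8)? 6

Option 6 rebuilds the boot device’s configuration from the backup held on disk, after which the node can boot normally with its identity intact.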