Monday, 28 November 2016

Wipeconfig my cDOT SIM ... and Restore

I’ve not blogged any lab stuff for ages. Since I wanted to see what wipeconfig does to my ONTAP 8.3.2P2 SIM, and it’s a little bit interesting and different, here we go...

Some outputs from my single-node SIM cluster:


CLU1::> version
NetApp Release 8.3.2P2: Mon May 23 13:45:25 UTC 2016

CLU1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
CLU1N1                true    true


First thing we do is take some configuration backups:


CLU1::> set -c off; set adv
CLU1::> system configuration backup create -node CLU1N1 -backup-type node -backup-name 20161128_node
CLU1::> system configuration backup create -node CLU1N1 -backup-type cluster -backup-name 20161128_cluster
CLU1::> job show -description *Backup*


Wait for the backup job to complete.
Note: We don’t actually use the cluster backup; it’s just taken as an extra safeguard. It is important that the cluster backup name is different from the node backup name, otherwise one will overwrite the other.
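The naming caveat above is easy to get wrong by hand. A minimal Python sketch (a hypothetical helper, not a NetApp tool) that builds the two clustershell backup commands from date-stamped, distinct names:

```python
from datetime import date

def backup_commands(node: str, day: date) -> list[str]:
    """Build the two 'system configuration backup create' commands
    shown above, giving the node and cluster backups distinct names
    so neither overwrites the other. Illustrative helper only."""
    stamp = day.strftime("%Y%m%d")
    return [
        f"system configuration backup create -node {node} "
        f"-backup-type {kind} -backup-name {stamp}_{kind}"
        for kind in ("node", "cluster")
    ]

for cmd in backup_commands("CLU1N1", date(2016, 11, 28)):
    print(cmd)
```

The printed lines match the commands used above; pasting them into the clustershell (or sending them over SSH) keeps the two backup names from ever colliding.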

Halt the node (the abbreviated switches expand to -inhibit-takeover, -ignore-quorum-warnings, and -skip-lif-migration-before-shutdown):


CLU1::> halt -node local -inhi -igno -skip


Boot the node (boot_ontap if you’re at the LOADER> prompt).

At the Boot Menu type “wipeconfig”:

Image: Typing “wipeconfig” at the Boot Menu
You will see the warning:

This will delete critical system configuration, including cluster membership.
Warning: do not run this option on a HA node that has been taken over.
Are you sure you want to continue?:

Type “y” to the warning.

The node will report:


Rebooting to finish wipeconfig request


And you should see on screen:


Wipe filer procedure requested
Abandoned in-memory /var file system


Then it will reboot again.

When it comes back up, it will have forgotten its identity (you’ll see localhost in the messages) and the login prompt appears. The login is now admin with no password.


login: admin
CLU1::> cluster show

Error: “show” is not a recognized command

CLU1::> node show
Node      Health
--------- ------
localhost -


Q: How do we restore this single-node SIM cluster?

Type the following clustershell commands:


CLU1::> set -c off; set adv
CLU1::> system configuration backup show
CLU1::> system configuration recovery node restore -backup 20161128_node.7z


You will see a warning:

Image: Node Restore Warning

Type Y to continue.

And the node will reboot back to the login prompt with its identity restored!


CLU1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
CLU1N1                true    true
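If you script this kind of post-restore check, the tabular output above can be screen-scraped in a pinch. A throwaway Python sketch, assuming the exact column layout shown (real automation would talk to ONTAP’s management APIs rather than parse screen output):

```python
def parse_cluster_show(text: str) -> dict[str, tuple[bool, bool]]:
    """Parse 'cluster show' tabular output into {node: (health, eligibility)}.
    Skips the header and dashed separator rows; keeps only data rows
    whose second column is a true/false value."""
    rows = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1] in ("true", "false"):
            rows[parts[0]] = (parts[1] == "true", parts[2] == "true")
    return rows

sample = """\
Node                  Health  Eligibility
--------------------- ------- ------------
CLU1N1                true    true
"""
print(parse_cluster_show(sample))
```

Running this against the output above yields {'CLU1N1': (True, True)}, i.e. the node is healthy and eligible again after the restore.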


Was there some purpose to this?

Yes! The purpose was reusing an HA pair that had previously been ARL-ed out of another cluster, making sure it was clean and ready for a disruptive headswap into the DR cluster. The wipeconfig worked fine. For a physical system, after running wipeconfig you will see:


*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
The boot device has changed. System configuration information
could be lost. Use option (6) to restore the system configuration, or
option (4) to initialize all disks and setup a new system.
Normal Boot is prohibited.


It’s very good that Normal Boot is prohibited here: with ARL and a cDOT disruptive headswap, it is essential to run option 6, “Update flash from backup config”, after the disks have been reassigned and before fully booting ONTAP for the first time.


Saturday, 19 November 2016

Highlights from the ONTAP 9.1 RC1 Release Notes


Back in June there was Highlights from the ONTAP 9.0 RC1 Release Notes. This post is supplemental to that one, covering the changes in ONTAP 9.1 (and is my way of trying to keep up with all the awesome developments in ONTAP).

Changes in ONTAP 9.1: New and changed features

Data protection enhancements

- Support for volume encryption

- Support for SnapLock technology

- Support for RAID-TEC as the default RAID type

Manageability enhancements

- Enhanced cluster dashboard

- Support for cluster setup
-- “… can use System Manager to set up a new cluster … ”

- Support for most active files or clients functionality
-- “… can track and report the most active instances of a file or client in a cluster using statistical sampling techniques.”

MetroCluster configuration enhancements

- Support for onboard FC-VI ports on AFF A300 and FAS8200 storage systems

New hardware support

- Support for new FAS and AFF platforms
-- “… FAS2600, FAS8200, FAS9000, AFF A300, AFF A700 …”

- Support for increasing the maximum SAN cluster size to 12 nodes

- Support for DS224C and DS212C disk shelves
Note: ONTAP 9 only supported these shelves with the AFF8080; ONTAP 9.1 expands this.

- Support for the X1134A adapter
-- “The X1134A is a 2-port 32 Gb FC target-only adapter”

Storage resource management enhancements

- Support for FlexGroup volumes

SAN enhancements

- Support for Foreign LUN Import (FLI) Interoperability Matrix (IMT)

- Support for using Foreign LUN Import to import LUNs into AFF

- Support for simplified SAN AFF provisioning templates
-- ONTAP 9.1 added the following template: SAN SAP HANA

Upgrade enhancements

- Support for installing ONTAP software and firmware from an external USB mass storage device
-- “The USB device is specified as file://usb0/filename …”
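That file://usb0/filename form reads like a standard file URL with the device name sitting in the host position. A quick illustrative check in Python (the filename ontap_image.tgz is made up; ONTAP itself interprets this string, not Python):

```python
from urllib.parse import urlparse

# Decompose the package location format quoted in the release notes.
u = urlparse("file://usb0/ontap_image.tgz")
print(u.scheme, u.netloc, u.path.lstrip("/"))
```

Here the scheme is file, the device (usb0) lands in the netloc position, and the rest is the path to the software image on the USB stick.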