Sunday, 26 May 2013

Notes for Upgrading a Z-Node Clustered ONTAP Cluster from 8.X or 8.1.X to 8.1.X

*Where Z is any number from 2 to 24!

The following notes were written for an upgrade of a 4-node cluster running CDOT 8.1.2P1 to CDOT 8.1.2P4D2. The methodology below applies to pretty much any upgrade, with any number of nodes, from CDOT 8.X or CDOT 8.1.X to CDOT 8.1.X, and borrows heavily (in highly abridged form) from the manual - ‘Data ONTAP 8.1 Upgrade and Revert/Downgrade Guide for Cluster-Mode’.

Note i: Before doing upgrade work, run Upgrade Advisor from the My AutoSupport site, and read the manual.
Note ii: The latest patch release for 8.1.2 at the time of writing is 8.1.2P4. There are also two D releases which address specific bugs - 8.1.2P4D1 and 8.1.2P4D2. Data ONTAP 8.1.3 and Data ONTAP 8.2 are currently at Release Candidate status.
Note iii: To obtain a P or D release, go to the Data ONTAP download page on the NetApp Support site, scroll down to the bottom, and - under ‘To access a specific release’ - select Data ONTAP from the drop-down and manually type the required version in the ‘enter it here’ box. Here we want to download 812P4D2_q_image.tgz.

Image: OnCommand System Manager - 4-node CDOT Cluster on 8.1.2P1
Image: Obtaining a Data ONTAP patch (and D Release) from the NetApp Support site


10 x Pre-Flight checks
10 x Upgrade steps
2 x Post-Flight checks
Additional Steps

Pre-Flight checks - Part 1: Cluster Health

Verify the nodes are online and eligible to participate in the cluster:
cluster show

Pre-Flight checks - Part 2: RDB Quorum

Verify the nodes are participating in the replicated database (RDB) and that all rings are in quorum:
set -privilege advanced
cluster ring show -unitname vldb
cluster ring show -unitname mgmt
cluster ring show -unitname vifmgr
set -privilege admin

Pre-Flight checks - Part 3: Vserver Health

Verify Vserver health and readiness:
storage aggregate show -state !online
volume show -state !online
vserver nfs show
vserver cifs show
vserver fcp show
vserver iscsi show

Verify LIFs are up and on their home ports:
network interface show -status-oper down
network interface show -is-home false

Pre-Flight checks - Part 4: LIF Failover Configuration

Verify LIF failover configuration for a major NDU (e.g. an upgrade from CDOT 8.X; included here for completeness):
network interface failover show

Verify LIF failover configuration for a minor NDU (e.g. an upgrade within CDOT 8.1.X):
network interface show -role data -failover

Pre-Flight checks - Part 5: Removing Load-Sharing and Data-Protection Mirror Relationships Before Upgrading from 8.X to 8.1.X*

*CDOT 8.1 replicates differently from CDOT 8.0!

View and save information about mirror relationships:
snapmirror show
volume show -type LS,DP -instance

Delete DP and LS mirror relationships:
snapmirror delete destination:volume

Delete each destination volume:
volume delete destination:volume

Pre-Flight checks - Part 6: Update disk and shelf firmware*

*To minimize nondisruptive takeover and giveback periods during the CDOT software upgrade, manually upgrade to current versions of disk and shelf firmware!

Pre-Flight checks - Part 7: Verify Images

Determine the current software version on each node:
system node image show

Pre-Flight checks - Part 8: HTTP Server

In the image below, we’re using the very easy to use HFS.exe (HTTP File Server) to present the image over HTTP.

Image: New software image presented over HTTP
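If you don’t have HFS to hand, Python’s built-in HTTP server (standard library, Python 3.7+) is a quick alternative for presenting the image directory - a minimal sketch, with the port purely illustrative:

```python
# Quick alternative to HFS.exe: serve a directory over HTTP using only
# the Python 3 standard library. From the folder holding
# 812P4D2_q_image.tgz you could equally just run: python3 -m http.server 80
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def serve(directory=".", port=8000):
    """Serve 'directory' over HTTP on 'port' until interrupted."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    httpd = ThreadingHTTPServer(("", port), handler)
    httpd.serve_forever()  # Ctrl+C to stop

if __name__ == "__main__":
    serve()
```

Run it from the folder containing the tgz, then point the image-update URL at http://<host>:<port>/812P4D2_q_image.tgz.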

Pre-Flight checks - Part 9: Jobs

Ensure no jobs are running:
job show
Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot jobs:
job delete *
- or -
job delete -id job_id
Verify no jobs are running:
job show

Pre-Flight checks - Part 10: SnapMirror

To identify any destination SnapMirror volumes:
snapmirror show
To quiesce each SnapMirror destination:
snapmirror quiesce destination:volume

Note: After the upgrade is complete, run:
snapmirror resume destination:volume

Upgrade - Part 1: Image Download to All Nodes

Download the software image:
system node image update -node * -package http://X.X.X.X/812P4D2_q_image.tgz -setdefault true
Verify the image is installed:
system node image show
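The NetApp Support site publishes an MD5 checksum alongside each image; before kicking off the download it’s worth confirming the tgz on your web server isn’t corrupt. A minimal sketch to run on the admin host (the expected checksum below is a placeholder, not the real value):

```python
import hashlib

def md5sum(path, chunk_size=1024 * 1024):
    """Hex MD5 digest of a file, read in chunks to keep memory flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "<checksum from the download page>"   # placeholder
# assert md5sum("812P4D2_q_image.tgz") == expected
```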

Upgrade - Part 2: Verify Storage (and Cluster HA) Failover

Ensure that storage failover is enabled and possible:
storage failover show

If the cluster consists of only 2 nodes (a single HA pair), also ensure that cluster HA is configured:
cluster ha show

Disable automatic giveback on each node of the HA pair if it is enabled:
storage failover modify -node nodename -auto-giveback false
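If you have several HA pairs to work through, you might generate the per-node command strings up front - a trivial sketch (node names are hypothetical examples; the strings are for pasting into the clustershell or feeding to an SSH wrapper of your own):

```python
def auto_giveback_cmds(nodes, enable=False):
    """Build 'storage failover modify' command strings, one per node."""
    state = "true" if enable else "false"
    return ["storage failover modify -node %s -auto-giveback %s" % (n, state)
            for n in nodes]

# e.g. auto_giveback_cmds(["cluster1-01", "cluster1-02"]) to disable,
# and auto_giveback_cmds([...], enable=True) to re-enable afterwards.
```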

Upgrade - Part 3: Migrate LIFs

Migrate LIFs away from the node that will be taken over during the upgrade:
network interface migrate-all -node FirstNodeInPair

Upgrade - Part 4: Takeover

Initiate a takeover:
storage failover takeover -bynode SecondNodeInPair

Here, the node being taken over reboots onto the new image and comes up in the ‘waiting for giveback’ state!
Wait 8 minutes!
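If scripting the wait, a generic polling helper beats a blind timer - the check callable below is an assumption you’d supply yourself (e.g. an SSH wrapper that inspects ‘storage failover show’ for the partner reaching waiting-for-giveback), and the manual’s fixed wait may still be prudent for client recovery:

```python
import time

def wait_for(check, timeout=8 * 60, interval=30):
    """Poll check() until it returns True or 'timeout' seconds elapse.

    Returns True on success, False on timeout. 'check' would typically
    wrap a remote command against the cluster (hypothetical).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```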

Upgrade - Part 5: Giveback

Initiate the giveback:
storage failover giveback -fromnode SecondNodeInPair
- or -
storage failover giveback -fromnode SecondNodeInPair -override-vetoes true

Upgrade - Part 6: Verification

Verify all aggregates and network:
storage failover show
storage aggregate show -node FirstNodeInPair
network interface show
network port show

Upgrade - Part 7: The HA Partner

To upgrade the HA partner in the pair (mostly, just swapping the second node with the first in Upgrade - Parts 3 to 6):
Note: The option allow-version-mismatch is not required for a minor NDU within 8.1.X

network interface migrate-all -node SecondNodeInPair
storage failover takeover -bynode FirstNodeInPair -option allow-version-mismatch

Wait 8 minutes!

storage failover giveback -fromnode FirstNodeInPair

Verify all aggregates and network:
storage failover show
storage aggregate show -node SecondNodeInPair
network interface show
network port show

Upgrade - Part 8: Additional Verification

Confirm that the new Data ONTAP 8.1.x software is running on the HA Pair:
system node image show

Upgrade - Part 9: Re-enable Automatic Giveback

Re-enable automatic giveback on each node of the HA pair (if it was disabled in Upgrade - Part 2):
storage failover modify -node nodename -auto-giveback true

Upgrade - Part 10: Repeat

Repeat parts 2 to 9 for all HA pairs in the cluster!
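Scripting the repetition is straightforward once you know the pairing. Assuming node names are ordered so that HA partners are adjacent (as is conventional, e.g. cluster1-01/cluster1-02), the pairs can be derived as below - a sketch only, not a substitute for confirming partners with ‘storage failover show’:

```python
def ha_pairs(nodes):
    """Group an ordered, even-length node list into (node, partner) pairs."""
    if len(nodes) % 2:
        raise ValueError("clustered ONTAP nodes come in HA pairs")
    return list(zip(nodes[0::2], nodes[1::2]))

# e.g. ha_pairs(["cluster1-01", "cluster1-02", "cluster1-03", "cluster1-04"])
# yields two pairs to loop Upgrade - Parts 2 to 9 over.
```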

Post-Flight checks - Part 1: Verification

Note: After a major NDU from 8.X, you can change the name of your cluster using cluster identity modify -name newname

Ensure upgrade status is complete for each node:
set -privilege advanced
system node upgrade-revert show
set -privilege admin

Verify cluster health:
cluster show

Verify aggregates and volumes:
storage aggregate show -state !online
volume show -state !online

Verify data access protocols:
vserver nfs show
vserver cifs show
vserver fcp show
vserver iscsi show

Verify LIFs:
network interface show -status-oper down
network interface show -is-home false

Post-Flight checks - Part 2: Enabling and Reverting LIFs to Home Ports

Display status, enable, revert, and verify:
network interface show -vserver vservername
network interface modify -vserver vservername -lif * -status-admin up
network interface revert *
network interface show -vserver vservername

Post-Flight Additional Steps

1. (If required) Recreate data-protection mirror and load-sharing mirror relationships
2. (If required) Set the cluster management LIF for Remote Support Agent (RSA) using rsa setup

Image: OnCommand System Manager - 4-node CDOT Cluster on 8.1.2P4D2
