Back in October I posted Cluster Software Update Essentials, which outlined three different upgrade methods (really, methods 1 and 2 were the same, just GUI vs CLI). In this post I’m going to run through the method I’d use - “cluster image update” (for anything other than single-node clusters). Upgrading from 8.3.x to 9.1 is a one-step process.
Pre-Preparation
1) Verify your environment is supported for ONTAP 9.1, and make remediations where required.
Note: This includes FC SAN hosts, boot-from-SAN hosts, OFFTAP products…
2) Verify platform support and root volume sizes.
3) Verify cluster health via AutoSupport.
4) Verify cluster configuration with Config Advisor.
5) Verify cluster health in OnCommand Unified Manager.
6) Thoroughly go over the Upgrade Advisor output and action its recommendations.
7) Download the ONTAP 9.1 image.
8) Serve the 91P1_q_image.tgz file on a web server.
Note: 9.1P1 is the recommended release at the time of writing.
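For step 8, any web server the cluster management network can reach will do - the quickest option is `python -m http.server 8080` run from the directory holding the .tgz. As a minimal sketch of the same thing (port 8080 matches the example URL used later in this post; the filename is the one from step 8):

```python
# Serve the current directory over HTTP using only the Python stdlib.
# Run from the directory containing 91P1_q_image.tgz so the cluster can
# fetch http://<your-ip>:8080/91P1_q_image.tgz
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def make_server(port: int = 8080) -> ThreadingHTTPServer:
    """Build a threaded HTTP file server rooted at the current directory."""
    return ThreadingHTTPServer(("", port), SimpleHTTPRequestHandler)

# To host the image:
#   make_server().serve_forever()
```

Stop the server once “cluster image package get” has completed; the cluster only needs it for the download.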
Cluster Image Package Get
Use the commands below to get the new image onto the cluster and perform validation:
cluster1::> cluster image show
Node        Current Version  Installation Date
----------- ---------------- -------------------
cluster1-01 8.3.1            9/17/2015 16:54:16
cluster1-02 8.3.1            9/17/2015 16:55:04
cluster1::> cluster image package show-repository
There are no packages in the repository.
cluster1::> cluster image package get -url http://192.168.0.5:8080/91P1_q_image.tgz
Software get http://192.168.0.5:8080/91P1_q_image.tgz started on node cluster1-01
Downloading package. This may take up to 10 minutes...
cluster1::> cluster image package show-repository
Package Version    Package Build Time
------------------ ------------------
9.1P1              2/14/2017 13:14:46
cluster1::> cluster image validate -version 9.1P1
It can take several minutes to complete validation...
Pre-update Check                  Status
--------------------------------- ---------
Aggregate plex resync status      OK
Aggregate status                  OK
Autoboot Status                   OK
Broadcast Domain status           OK
CIFS status                       OK
CPU Utilization Status            OK
Cluster health status             OK
Cluster quorum status             OK
Data ONTAP Version Status         OK
Disk status                       OK
High Availability status          OK
Jobs Status                       OK
LIF failover                      OK
LIF load balancing                OK
LIFs not hosted                   OK
LIFs on home node status          OK
Manual checks                     Warning: Manual validation checks need to be performed. Refer to the Upgrade Advisor...
MetroCluster configuration status OK
NDMP status                       OK
NFS netgroup check                OK
Platform status                   OK
Previous Upgrade Status           OK
SAN LIF status                    OK
SAN status                        OK
Security Config SSLv3 check       OK
SnapMirror status                 OK
Snapshot copy count check         OK
Volume move status                OK
Volume status                     OK
Overall Status                    Warning
Update the first HA Pair
Note the following warning from Upgrade Advisor regarding SnapMirror:
“To prevent SnapMirror transfers from failing, you must suspend (Quiesce) SnapMirror operations and upgrade destination nodes before upgrading source nodes.
(i) Suspend SnapMirror transfers for a destination volume
(ii) Upgrade the node that contains the destination volume
(iii) Upgrade the node that contains the source volume
(iv) Resume the SnapMirror transfers for the destination volume”
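In clustershell terms, steps (i) and (iv) map to snapmirror quiesce and snapmirror resume; the svm_dst:vol_dst destination path below is a placeholder for your own relationship. Quiesce lets any in-flight transfer finish and then holds future ones:

```
cluster1::> snapmirror quiesce -destination-path svm_dst:vol_dst
[... upgrade destination node(s), then source node(s) ...]
cluster1::> snapmirror resume -destination-path svm_dst:vol_dst
```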
This is a good point at which to send AutoSupports (and verify they have been sent)::>
autosupport invoke -node * -type all -message "maint=4h upgrading to ONTAP 9.1P1"
autosupport history show -node cluster1-01
autosupport history show -node cluster1-02
Note: If this is a 4-node or greater cluster, you might like to make sure Epsilon is not on the HA-Pair being upgraded (cluster image update will handle this.)
set adv
cluster show
cluster modify -node EPSILON_NODE -epsilon false
cluster modify -node NEW_EPSILON_NODE -epsilon true
Update the HA-Pair:
cluster1::> cluster image update -version 9.1P1 -nodes cluster1-02,cluster1-01
Starting validation for this update. Please wait...
It can take several minutes to complete validation...
Warning: Validation has reported warnings. Do you want to continue? {y|n}: y
Starting update...
Note: You don’t need to do one HA-pair at a time; “cluster image update” is designed to update the entire cluster.
Verifying the Update Process
You should be connected to the Service Processors to observe the update process.
Note: If you have aggregates serving CIFS, you may need to do a “storage failover giveback -ofnode NODE -override-vetoes true”.
cluster image show-update-progress
storage failover show-takeover
storage failover show
storage failover show-giveback
Update Subsequent HA Pairs
Follow the above to update subsequent HA Pairs as required.
Finally
Verify the cluster version has increased (the cluster version only updates when all nodes are upgraded).
If you suspended SnapMirrors, re-enable them. Send AutoSupports::>
version
snapmirror resume -source-path {SRC_PATH} -destination-path {DEST_PATH}
autosupport invoke -node * -type all -message "maint=END finished upgrade to ONTAP 9.1P1"
And test your environment.
Appendix: (Some) Supported things with ONTAP 9.1
Just a personal reference to save some IMT results (correct as of 2017.03.18):
OnCommand Unified Manager 7.0
OnCommand Unified Manager 7.1
OnCommand Performance Manager 7.0
OnCommand Performance Manager 7.1
SnapDrive 7.1.3 for Windows
SnapDrive 7.1.4 for Windows
SnapCenter Host Plug-in 1.1 for Microsoft Windows
SnapCenter Host Plug-in 1.1 for UNIX
SnapCenter Host Plug-in 2.0 for Microsoft Windows
SnapCenter Host Plug-in 2.0 for UNIX
SnapCenter Host Plug-in 2.0 for VMware vSphere
Virtual Storage Console 6.1
Virtual Storage Console 6.2
Virtual Storage Console 6.2.1
SnapDrive 5.2.2 for Unix*
SnapDrive 5.3 for Unix*
SnapDrive 5.3.1 for Unix*
SnapCenter Application Plug-in has dependencies with:
- SnapCenter Host Plug-in (if 1.1 use 1.1 / if 2.0 use 2.0)
- SnapCenter Server
- Host Application
SnapCenter Server has dependencies with:
- SnapCenter Host Plug-in (if 1.1 use 1.1 / if 2.0 use 2.0)
- SnapCenter Application Plug-in
- Virtual Storage Console (VMware)
SnapManager for Exchange has dependencies with:
- SnapDrive (for Windows version)
- Host OS (Windows Server version)
- Host Application (Exchange Server version)
CN1610 Cluster Switch FASTPATH 1.2.0.7 and RCF 1.2
Cisco NX3132V Cluster Switch NX-OS 7.0(3)I4(1) and RCF 1.1
Cisco NX5596 Cluster Switch NX-OS 7.1(1)N1(1) and RCF 1.3
Cisco NX5020 Cluster Switch NX-OS 5.2(1)N1(8b) and RCF 1.3
Cisco NX5010 Cluster Switch NX-OS 5.2(1)N1(8b) and RCF 1.3