Monday, 27 May 2013

Research on IBM N Series (V-Series) Head Swap from Data ONTAP 7 to a New V-Series on Clustered ONTAP 8.1.2

A post to record some research:

From NetApp.com


Product Documentation:

Go to kb.netapp.com and ask a question or check out the How To’s:

In the clustershell

set -priv diag
volume add-other-volumes -node nodename
*Add 7-mode volumes to the cluster database

Note: The "add-other-volumes" switch can only be used with VTW (volume transfer wizard) which uses snapmirror to migrate data volumes from 7-Mode systems.

From this thread
*Posted by Bob McKim and answered by Matt Hallmark
*A bit old - August 2010 - but still interesting

Part 1

Is this the correct procedure for headswap on a V-Series?
1) Gather existing v-series head configuration
2) Zone and LUN mask the new v-series head to the existing LUNs
3) Perform a disk reassign ownership on an aggr by aggr basis
4) Create new CIFS shares on new v-series head (there are only CIFS shares on the existing v-series head)
5) Remove old v-series head from DNS (or swap servers in DFS)
6) Add new v-series head to DNS
7) Verify client connectivity

Part 2

1) The old heads are x86 (V3020) and new heads are x64 (N6040 / V3120), therefore the binaries are different.
2) NetApp ONTAP on old heads, IBM ONTAP on new ones

How about if I do a config dump and a config dump -v on the old heads and do a config restore on the new ones before swinging the aggrs?
They are both running 7.2.6.1 (had to downlevel the N6040 for snapmirror purposes).

Part 3

For 1) you are creating a new aggr and migrating data. The aggregates on a V-Series are raid0 (aggr options aggrname to see it), so you can't fail a drive/lun per se.
For 2) you are correct on most of the steps, but as you will be retaining your root volumes, there's no need to re-create shares or change dns etc.

Procedurally it's:

1. Halt existing HA Pair and power off the nodes
2. Rack and cable the new cluster, power on the nodes and go into maintenance mode.
3. Verify that the ports you want to use are initiators and document the WWPNs (storage show adapter)
4. Update the zoning on the switches for the new initiator ports
5. On the back-end SAN, update your host profile to reflect the new WWPNs
6. From maint mode, reassign the LUNs from the old systemID to the new one, repeat for the other node.
7. Using aggr status from maint mode, validate that you can see your various aggrs and that you have one marked as a root aggr on each controller.
8. Halt both nodes and boot them up normally.
9. Upgrade ONTAP if you are changing CPU architectures.

As a pre-step, you might want to validate that the version of ONTAP that's on the CF of the new controllers is compatible with the version of ONTAP (raid labels etc.) on the old. You may want to upgrade ONTAP before doing the head swap.

IBM N Series Redbooks


IBM N Series to NetApp Models Conversion Table

N3300 Model A10 Type 2859 = FAS2020
N3300 Model A20 Type 2859 = FAS2020C
N3150 Model A15 Type 2857 = FAS2220
N3150 Model A25 Type 2857 = FAS2220
N3220 Model A12 Type 2857 = FAS2240-2
N3220 Model A22 Type 2857 = FAS2240C-2
N3240 Model A14 Type 2857 = FAS2240-4
N3240 Model A24 Type 2857 = FAS2240C-4
N3400 Model A11 Type 2859 = FAS2040
N3400 Model A21 Type 2859 = FAS2040C
N3600 Model A20 Type 2862 = FAS2050C
N6040 Model A10 Type 2858 = FAS3140
N6040 Model A20 Type 2858 = FAS3140C
N6060 Model A22 Type 2858 = FAS3160C
N6070 Model A21 Type 2858 = FAS3170C
N6210 Model C10 Type 2858 = FAS3210
N6210 Model C20 Type 2858 = FAS3210C
N6240 Model C21 Type 2858 = FAS3240
N6240 Model E11 Type 2858 = FAS3240
N6240 Model E21 Type 2858 = FAS3240C
N6270 Model C22 Type 2858 = FAS3270
N6270 Model E12 Type 2858 = FAS3270
N6270 Model E22 Type 2858 = FAS3270C
N7900 Model A21 Type 2867 = FAS6080C
N7950T Model E22 Type 2867 = FAS6280C
EXN1000 Model 001 Type 2861 = DS14mk2
EXN3000 Model 003 Type 2857 = DS4243
EXN3200 Model 306 Type 2857 = DS4486
EXN3500 Model 003 Type 2857 = DS2246
EXN4000 Model 004 Type 2863 = DS14mk4
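
If you script against a mixed IBM/NetApp estate, a lookup table built from the rows above can translate N series models to their FAS equivalents. This is just a hand-rolled sketch (only a few rows are filled in here), not any official mapping API:

```python
# Map IBM N series (model, model code) pairs to NetApp equivalents,
# built by hand from the conversion table above. Extend with the
# remaining rows as needed.
N_SERIES_TO_NETAPP = {
    ("N3300", "A10"): "FAS2020",
    ("N3400", "A11"): "FAS2040",
    ("N6040", "A10"): "FAS3140",
    ("N6240", "E11"): "FAS3240",
    ("N7950T", "E22"): "FAS6280C",
}

def netapp_equivalent(model, model_code):
    """Return the NetApp model for an IBM N series (model, model code) pair."""
    try:
        return N_SERIES_TO_NETAPP[(model, model_code)]
    except KeyError:
        raise ValueError("no mapping recorded for %s model %s" % (model, model_code))

print(netapp_equivalent("N6040", "A10"))  # FAS3140
```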

What’s New in the Clustered ONTAP 8.2 RC Clustershell

With the CDOT 8.2RC Simulator now being out, here’s a brief rundown of things new and missing in the CDOT 8.2RC Clustershell compared to CDOT 8.1.2 (said differently - things that are present only in either CDOT 8.2 or CDOT 8.1.2).

Note i: We’re only investigating down to the 2nd level subsections in the command tree structure so things deeper down - as well as additional command options - are not covered.
Note ii: This information is taken from the deepest privilege level - diagnostic (set -privilege diagnostic). Some things are at slightly different privilege levels across 8.1.2 and 8.2.
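
The version-only listings below are essentially a set difference over the two command trees. If you capture the output of `?` at diag privilege on each version and load the command names into sets, a few lines of Python reproduce the comparison (the sample entries here are a tiny subset of the real trees):

```python
# Diff two captured clustershell command lists to find version-only commands.
cdot_812 = {"cluster show", "volume show", "antivirus", "qos policy"}
cdot_82 = {"cluster show", "volume show", "cluster date", "qos policy-group"}

only_in_812 = sorted(cdot_812 - cdot_82)
only_in_82 = sorted(cdot_82 - cdot_812)

print("CDOT 8.1.2 only:", only_in_812)  # ['antivirus', 'qos policy']
print("CDOT 8.2 only:", only_in_82)     # ['cluster date', 'qos policy-group']
```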

cluster::*> ?

CDOT 8.1.2 only:

antivirus>
Manage antivirus

cluster::cluster*> ?

CDOT 8.2 only:

cluster application-record>
*Manage cluster application-records
cluster application-record create
*Create application-record entries
cluster application-record delete
*Delete application-record entries
cluster application-record modify
*Modify application-record entries
cluster application-record show
*Display application-record entries
 
cluster date>
Manage cluster’s date and time setting
cluster date modify
Modify the current date and time for the nodes in the cluster
cluster date show
Display the current date and time for the nodes in the cluster

cluster::diag*> ?

CDOT 8.2 only:

diag vserver>
*Vserver services diagnostics
diag vserver audit>
*Audit services diagnostics

cluster::logger*> ?

CDOT 8.2 only:

logger servprocd>
*The servprocd directory
logger servprocd log>
*The log directory


cluster::lun*> ?

CDOT 8.2 only:

lun alignment>
*Displays LUN alignment statistics
lun alignment show
*Displays LUN alignment statistics

lun bind>
*The bind directory
lun bind create
 *Bind a VVol LUN to a protocol endpoint
lun bind destroy
*Unbind a VVol LUN from a protocol endpoint
lun bind show
*Show list of Vvol bindings

lun stale-map>
*Manage stale LUN maps
lun stale-map delete
*Delete stale LUN maps
lun stale-map show
*Display a list of stale LUN maps

cluster::network*> ?

CDOT 8.2 only:

network options>
Manage networking options
network options ipv6>
The ipv6 directory
network options switchless-cluster>
*Manage switchless cluster options

cluster::qos*> ?

CDOT 8.2 only:

qos policy-group>
The policy-group directory
qos policy-group create
Create a policy group
qos policy-group delete
Delete a policy group
qos policy-group modify
Modify a policy group
qos policy-group rename
Rename a policy group
qos policy-group show
Display a list of policy groups

qos settings>
*QoS settings
qos settings archived-workload>
*QoS cluster-wide workload archival settings
qos settings control>
*QoS cluster-wide control settings
qos settings read-ahead>
*QoS read-ahead settings

qos statistics>
Qos Statistics
qos statistics characteristics>
Policy Group characterization
qos statistics latency>
Latency breakdown
qos statistics performance>
System performance
qos statistics resource>
Resource utilization
qos statistics workload>
Detail by workload

CDOT 8.1.2 only:

qos policy>
*QoS policy settings

cluster::snapmirror*> ?

CDOT 8.2 only:

snapmirror list-destinations
Display a list of destinations for SnapMirror sources

snapmirror policy>
Manage SnapMirror policies
snapmirror policy add-rule
Add a new rule to SnapMirror policy
snapmirror policy create
Create a new SnapMirror policy
snapmirror policy delete
Delete a SnapMirror policy
snapmirror policy modify
Modify a SnapMirror policy
snapmirror policy modify-rule
Modify an existing rule in SnapMirror policy
snapmirror policy remove-rule
Remove a rule from SnapMirror policy
snapmirror policy show
Show SnapMirror policies

snapmirror release
Release source information for a SnapMirror relationship

snapmirror restore
Restore a Snapshot copy from a source volume to a destination volume

snapmirror set-options
*Display/Set SnapMirror options

snapmirror snapshot-owner>
Manage Snapshot Copy Preservation
snapmirror snapshot-owner create
Add an owner to preserve a Snapshot copy for a SnapMirror mirror-to-vault cascade configuration
snapmirror snapshot-owner delete
Delete an owner used to preserve a Snapshot copy for a SnapMirror mirror-to-vault cascade configuration
snapmirror snapshot-owner show
Display Snapshot copies with owners

cluster::statistics*> ?

CDOT 8.2 only:

statistics archive>
*Performance Archive directory
statistics archive config>
*Performance Archive configuration directory
statistics archive create
*Save new Performance Archive
statistics archive delete
*Destroy existing Performance Archive
statistics archive modify
*Modify properties of Performance Archive
statistics archive show
*Display information about Performance Archives

statistics catalog>
Performance Catalog
statistics catalog counter>
The counter directory
statistics catalog instance>
The instance directory
statistics catalog object>
 The object directory
 
statistics latency>
*The latency directory
statistics latency samples>
*The samples directory

statistics preset>
*Performance Preset directory
statistics preset delete
*Delete an existing Performance Preset
statistics preset detail>
*Performance Preset Detail directory
statistics preset modify
*Modify an existing Performance Preset
statistics preset show
*Display information about Performance Presets

statistics samples>
Manage the statistics samples
statistics samples delete
Delete statistics samples
statistics samples show
Display statistics samples

statistics spinvfs>
*The spinvfs directory
statistics spinvfs rpc>
*The rpc directory
statistics spinvfs show
*Display the mroot spinvfs statistics

statistics start
Start data collection for a sample

statistics stop
Stop data collection for a sample

CDOT 8.1.2 only:

statistics protocol-request-size>
The protocol-request-size directory

statistics reset>
*Reset counter statistics

cluster::storage*> ?

CDOT 8.2 only:

storage import>
*The import directory
storage import abort
*Abort the data import from the foreign array LUN to the specified Data ONTAP(R) LUN
storage import bind
*Bind the specified Data ONTAP(R) LUN to a foreign array LUN for importing data from the foreign array LUN
storage import show
*Display the storage import sessions.
storage import start
*Start importing data to the specified Data ONTAP(R) LUN from the foreign array LUN to which it is bound.
storage import unbind
*Unbind the specified Data ONTAP(R) LUN which is bound to a foreign array LUN for a import session
storage import verify
*Verify the data imported to the specified Data ONTAP(R) LUN from the foreign array LUN.
 
storage library>
The library directory
storage library config>
Manage tape related devices
storage library path>
Manage connectivity of tape and Media Changer devices

cluster::system*> ?

CDOT 8.2 only:

system ontapi>
*The ontapi directory
system ontapi limits>
*The limits directory

system smtape>
Manage SMTape operations
system smtape abort
Abort an active SMTape session
system smtape backup
Backup a volume to tape devices
system smtape break
Make a restored volume read-write
system smtape continue
Continue SMTape session waiting at the end of tape
system smtape restore
Restore a volume from tape devices
system smtape showheader
Display SMTape header
system smtape status>
The status directory

cluster::volume*> ?

CDOT 8.2 only:

volume flexcache>
Manage FlexCache
volume flexcache cache-policy>
*Manage FlexCache cache policies
volume flexcache create
Cache a volume throughout the cluster
volume flexcache delegations>
*The delegations directory
volume flexcache delete
Delete caching for a given volume throughout the cluster
volume flexcache monitor>
*The monitor directory
volume flexcache show
Display cluster-wide caches

volume show-footprint
Display a list of volumes and their data and metadata footprints in their associated aggregate.

volume show-space
Display space usage for volume(s)

volume storage-service>
*The storage-service directory
volume storage-service locate-file
*Returns the storage service a file is stored in.
volume storage-service rename
*Renames a storage service for an Infinite Volume.
volume storage-service show
*Display all the storage services for an Infinite Volume.

volume transition-protect
*Fence access to volume during Transition.

CDOT 8.1.2 only:

volume destroy
Destroy an existing volume

volume member>
Manage constituent volumes of striped volumes

volume probe>
*Display diagnostic information for striped volumes

volume start-check
*Start testing volume for errors

volume stop-check
*Stop testing volume
 
volume transition-recover
*Initiates recovery for a volume which is in an incompletely transitioned state

cluster::vserver*> ?

CDOT 8.2 only:

vserver audit>
Manage auditing of protocol requests that the Vserver services
vserver audit create
Create an audit configuration
vserver audit delete
Delete audit configuration
vserver audit disable
Disable auditing
vserver audit enable
Enable auditing
vserver audit modify
Modify the audit configuration
vserver audit show
Display the audit configuration

vserver copy-offload>
*Manage Copy-Offload Configuration
vserver copy-offload delete-tokens
*Delete token scratch space used by copy-offload.
vserver copy-offload modify
*Modify the copy-offload configuration of a Vserver.
vserver copy-offload show
*Display the copy-offload configuration of a Vserver.

vserver data-policy>
Manage data policy
vserver data-policy export
Display a data policy
vserver data-policy import
Import a data policy
vserver data-policy validate
Validate a data policy without import

vserver fpolicy>
Manage FPolicy
vserver fpolicy disable
Disable a policy
vserver fpolicy enable
Enable a policy
vserver fpolicy engine-connect
Establish a connection to FPolicy server
vserver fpolicy engine-disconnect
Terminate connection to FPolicy server
vserver fpolicy log>
*FPolicy logger
vserver fpolicy policy>
Manage FPolicy policies
vserver fpolicy show
Display all policies with status
vserver fpolicy show-enabled
Display all enabled policies
vserver fpolicy show-engine
Display FPolicy server status

vserver group-mapping>
The group-mapping directory
vserver group-mapping create
Create a group mapping
vserver group-mapping delete
Delete a group mapping
vserver group-mapping insert
Create a group mapping at a specified position
vserver group-mapping modify
Modify a group mapping's pattern, replacement pattern, or both
vserver group-mapping show
Display group mappings
vserver group-mapping swap
Exchange the positions of two group mappings

vserver peer>
Create and manage Vserver peer relationships
vserver peer accept
Accept a pending Vserver peer relationship
vserver peer create
Create a new Vserver peer relationship
vserver peer delete
Delete a Vserver peer relationship
vserver peer modify
Modify a Vserver peer relationship
vserver peer reject
Reject a Vserver peer relationship
vserver peer resume
Resume a Vserver peer relationship
vserver peer show
Display Vserver peer relationships
vserver peer show-all
Display Vserver peer relationships in detail
vserver peer suspend
Suspend a Vserver peer relationship
vserver peer transition>
Create and manage transition peer relationships.

vserver security>
Manage ontap security
vserver security file-directory>
Manage file security
vserver security trace>
Manage security tracing

vserver smtape>
The smtape directory
vserver smtape break
Make a restored volume read-write

NetApp Simulator for Clustered ONTAP 8.2 RC Setup Notes

The NetApp Simulator for Clustered ONTAP 8.2 RC comes in two versions - one for VMware Workstation/Player/Fusion, and one for VMware ESXi (here we cover the VMware Workstation version.)

To get the two-node cluster working, it is important to set the serial numbers correctly as the licensing has changed. There is now a base cluster license, plus protocol licenses per node (hence the serial numbers need setting). Apart from that, there’s not much new to cover beyond what’s been covered previously in these posts for CDOT 8.1.2.


VMware Workstation/Player/Fusion Simulator Serial Numbers

Let the first node boot to the Boot Menu - press Ctrl+C when prompted - and choose option 4 to initialize the disks.

The second node must initially be interrupted at the boot prompt - press any key when the “Hit [Enter] to boot immediately, or any other for command prompt. Booting in 10 seconds…” message is displayed.

From the VLOADER> prompt run

setenv SYS_SERIAL_NUM 4034389-06-2
setenv bootarg.nvram.sysid 4034389062
printenv SYS_SERIAL_NUM
printenv bootarg.nvram.sysid
boot

Then let the second node boot to the Boot Menu - press Ctrl+C when prompted - and choose option 4 to initialize the disks.
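
Note that the sysid used above is simply the serial number with the hyphens stripped out. If you build simulators regularly with different serials, the pair of setenv commands can be derived like this (a trivial sketch based on the 4034389-06-2 / 4034389062 values above):

```python
def vsim_bootargs(serial):
    """Build the VLOADER setenv commands for a simulator serial number.

    Assumes the sysid is simply the serial with hyphens removed, as in
    the 4034389-06-2 / 4034389062 pair above.
    """
    sysid = serial.replace("-", "")
    return [
        "setenv SYS_SERIAL_NUM " + serial,
        "setenv bootarg.nvram.sysid " + sysid,
    ]

for cmd in vsim_bootargs("4034389-06-2"):
    print(cmd)
```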

VMware Workstation/Player/Fusion Simulator Licenses

The 2-node cluster base license is applied in the cluster setup script:

SMKQROWJNQYQSDAAAAAAAAAAAAAA

The protocol licenses are per simulator node; apply these after the 2-node cluster setup has been done by copying and pasting the lines below at the clustershell prompt:

clustername::>

# Licenses for 8.2 CDOT SIM node 1
system license add -license-code APJAYWXCCLPKICXAGAAAAAAAAAAA #CIFS              
system license add -license-code YDFEZWXCCLPKICXAGAAAAAAAAAAA #FCP               
system license add -license-code UHWLBXXCCLPKICXAGAAAAAAAAAAA #FlexClone         
system license add -license-code AVGMFXXCCLPKICXAGAAAAAAAAAAA #Insight_Balance 
system license add -license-code MJHPYWXCCLPKICXAGAAAAAAAAAAA #iSCSI             
system license add -license-code OULLXWXCCLPKICXAGAAAAAAAAAAA #NFS               
system license add -license-code SWRPCXXCCLPKICXAGAAAAAAAAAAA #SnapLock          
system license add -license-code OAJXEXXCCLPKICXAGAAAAAAAAAAA #SnapLock_Enterpri
system license add -license-code ERPEDXXCCLPKICXAGAAAAAAAAAAA #SnapManager     
system license add -license-code INYWAXXCCLPKICXAGAAAAAAAAAAA #SnapMirror   
system license add -license-code QLNTDXXCCLPKICXAGAAAAAAAAAAA #SnapProtect     
system license add -license-code WSAIAXXCCLPKICXAGAAAAAAAAAAA #SnapRestore        
system license add -license-code GCUACXXCCLPKICXAGAAAAAAAAAAA #SnapVault         
# Licenses for 8.2 CDOT SIM node 2
system license add -license-code MHEYKUNFXMSMUCEZFAAAAAAAAAAA #CIFS              
system license add -license-code KWZBMUNFXMSMUCEZFAAAAAAAAAAA #FCP               
system license add -license-code GARJOUNFXMSMUCEZFAAAAAAAAAAA #FlexClone         
system license add -license-code MNBKSUNFXMSMUCEZFAAAAAAAAAAA #Insight_Balance 
system license add -license-code YBCNLUNFXMSMUCEZFAAAAAAAAAAA #iSCSI             
system license add -license-code ANGJKUNFXMSMUCEZFAAAAAAAAAAA #NFS               
system license add -license-code EPMNPUNFXMSMUCEZFAAAAAAAAAAA #SnapLock          
system license add -license-code ATDVRUNFXMSMUCEZFAAAAAAAAAAA #SnapLock_Enterpri
system license add -license-code QJKCQUNFXMSMUCEZFAAAAAAAAAAA #SnapManager     
system license add -license-code UFTUNUNFXMSMUCEZFAAAAAAAAAAA #SnapMirror   
system license add -license-code CEIRQUNFXMSMUCEZFAAAAAAAAAAA #SnapProtect     
system license add -license-code ILVFNUNFXMSMUCEZFAAAAAAAAAAA #SnapRestore        
system license add -license-code SUOYOUNFXMSMUCEZFAAAAAAAAAAA #SnapVault
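
If you maintain these blocks for several simulators, generating the `system license add` lines from a feature/code table keeps them consistent. A small sketch (the two codes shown are the node-1 CIFS and NFS codes from the list above; fill in the rest as needed):

```python
# Generate "system license add" lines from a feature -> license-code table.
node1_licenses = {
    "CIFS": "APJAYWXCCLPKICXAGAAAAAAAAAAA",
    "NFS": "OULLXWXCCLPKICXAGAAAAAAAAAAA",
}

def license_commands(licenses):
    """Build clustershell license-add lines, one per feature."""
    return [
        "system license add -license-code %s #%s" % (code, feature)
        for feature, code in sorted(licenses.items())
    ]

for line in license_commands(node1_licenses):
    print(line)
```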

Sunday, 26 May 2013

Notes for Upgrading a Z Node Clustered ONTAP Cluster from 8.X or 8.1.X to 8.1.X

*Where Z is any number from 2 to 24!

The following notes were written for an upgrade of a 4-node cluster running CDOT 8.1.2P1 to CDOT 8.1.2P4D2. The methodology below applies to pretty much any upgrade and any number of nodes within CDOT 8.X or CDOT 8.1.X to CDOT 8.1.X, and borrows heavily (though highly abridged) from the manual - ‘Data ONTAP 8.1 Upgrade and Revert/Downgrade Guide for Cluster-Mode’.

Note i: Before doing upgrade work, run Upgrade Advisor from the My AutoSupport site, and read the manual.
Note ii: The latest patch release for 8.1.2 at the time of writing is 8.1.2P4. There are also two D releases which address specific bugs - 8.1.2P4D1 and 8.1.2P4D2. Data ONTAP 8.1.3 and Data ONTAP 8.2 are currently in Release Candidate.
Note iii: To obtain a P or D release, go to http://support.netapp.com/NOW/cgi-bin/software/, scroll down to the bottom, and under ‘To access a specific’ select Data ONTAP from the drop-down and manually type the required version into the ‘enter it here’ box. Here we want to download 812P4D2_q_image.tgz.

Image: OnCommand System Manager - 4-node CDOT Cluster on 8.1.2P1
Image: Obtaining a Data ONTAP patch (and D Release) at support.netapp.com

Contents

10 x Pre-Flight checks
10 x Upgrade steps
2 x Post-Flight checks
Additional Steps

Pre-Flight checks - Part 1: Cluster Health

Verify the nodes are online and eligible to participate in the cluster:
cluster show

Pre-Flight checks - Part 2: RDB Quorum

Verify the nodes are participating in the replicated database (RDB) and that all rings are in quorum:
set -privilege advanced
cluster ring show -unitname vldb
cluster ring show -unitname mgmt
cluster ring show -unitname vifmgr
set -privilege admin

Pre-Flight checks - Part 3: Vserver Health

Verify Vserver health and readiness:
storage aggregate show -state !online
volume show -state !online
vserver nfs show
vserver cifs show
vserver fcp show
vserver iscsi show

Verify LIFs are up and on their home ports:
network interface show -status-oper down
network interface show -is-home false

Pre-Flight checks - Part 4: LIF Failover Configuration

Verify LIF failover configuration for a major NDU (e.g. an upgrade from CDOT 8.X; included here for completeness):
network interface failover show

Verify LIF failover configuration for a minor NDU (e.g. an upgrade within CDOT 8.1.X):
network interface show -role data -failover

Pre-Flight checks - Part 5: Removing Load-Sharing and Data-Protection Mirror Relationships Before Upgrading from 8.X to 8.1.X*

*CDOT 8.1 replicates differently from CDOT 8.0!

View and save information about mirror relationships:
snapmirror show
volume show -type LS,DP -instance

Delete DP and LS mirror relationships:
snapmirror delete destination:volume

Delete each destination volume:
volume delete destination:volume

Pre-Flight checks - Part 6: Update disk and shelf firmware*

*To minimize nondisruptive takeover and giveback periods during the CDOT software upgrade, manually upgrade to current versions of disk and shelf firmware!

Pre-Flight checks - Part 7: Verify Images

Determine the current software version on each node:
system node image show

Pre-Flight checks - Part 8: HTTP Server

In the image below, we’re using the very easy-to-use HFS.exe from -
- to present the image over HTTP.

Image: New software image presented over HTTP
Pre-Flight checks - Part 9: Jobs

Ensure no jobs are running:
job show
Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot jobs:
job delete *
- or -
job delete -id job_id
Verify no jobs are running:
job show

Pre-Flight checks - Part 10: SnapMirror

To identify any SnapMirror destination volumes:
snapmirror show
To quiesce each SnapMirror destination:
snapmirror quiesce destination:volume

Note: After the upgrade is complete run:
snapmirror resume destination:volume
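
If there are many relationships, the quiesce (and later resume) commands can be generated from the destination list. The sketch below assumes you have already extracted the destination paths from the snapmirror show output into a plain list (the destination names shown are hypothetical):

```python
def snapmirror_cmds(destinations, action):
    """Build quiesce/resume commands for a list of destination volumes."""
    if action not in ("quiesce", "resume"):
        raise ValueError("action must be 'quiesce' or 'resume'")
    return ["snapmirror %s %s" % (action, dest) for dest in destinations]

# Hypothetical destinations - take yours from the snapmirror show output.
dests = ["vs1:mirror_vol1", "vs1:mirror_vol2"]
for cmd in snapmirror_cmds(dests, "quiesce"):
    print(cmd)
```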

Upgrade - Part 1: Image Download to All Nodes

Download the software image:
system node image update -node * -package http://X.X.X.X/812P4D2_q_image.tgz -setdefault true
Verify the image is installed:
system node image show

Upgrade - Part 2: Verify Storage (and Cluster HA) Failover

Ensure that storage failover is enabled and possible:
storage failover show

(Optional) If the cluster consists of only 2 nodes (a single HA pair)!
Ensure that cluster HA is configured:
cluster ha show

Disable automatic giveback on each node of the HA pair if it is enabled:
storage failover modify -node nodename -auto-giveback false

Upgrade - Part 3: Migrate LIFs

Migrate LIFs away from the node that will be taken over during the upgrade:
network interface migrate-all -node FirstNodeInPair

Upgrade - Part 4: Takeover

Initiate a takeover:
storage failover takeover -bynode SecondNodeInPair

Here, the node that is being taken over reboots onto the new image and enters the waiting-for-giveback state!
Wait 8 minutes!

Upgrade - Part 5: Giveback

Initiate the giveback:
storage failover giveback -fromnode SecondNodeInPair
- or -
storage failover giveback -fromnode SecondNodeInPair -override-vetoes true

Upgrade - Part 6: Verification

Verify all aggregates and network:
storage failover show
storage aggregate show -node FirstNodeInPair
network interface show
network port show

Upgrade - Part 7: The HA Partner

To upgrade the HA partner in the pair (mostly, just swapping the second node with the first in Upgrade - Parts 3 to 6):
Note: The option allow-version-mismatch is not required for a minor NDU within 8.1.X

network interface migrate-all -node SecondNodeInPair
storage failover takeover -bynode FirstNodeInPair -option allow-version-mismatch

Wait 8 minutes!

storage failover giveback -fromnode FirstNodeInPair

Verify all aggregates and network:
storage failover show
storage aggregate show -node SecondNodeInPair
network interface show
network port show

Upgrade - Part 8: Additional Verification

Confirm that the new Data ONTAP 8.1.x software is running on the HA Pair:
system node image show

Upgrade - Part 9: Re-enable Storage Failover

Re-enable storage failover for the HA pair:
storage failover modify -node nodename -auto-giveback true

Upgrade - Part 10: Repeat

Repeat parts 2 to 9 for all HA pairs in the cluster!
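
Since parts 2 to 9 repeat per HA pair, it can help to generate the per-pair command sequence up front as a checklist. A sketch of the core migrate/takeover/giveback cycle (node names are hypothetical placeholders, and the allow-version-mismatch option needed for a major NDU is omitted here):

```python
def ha_pair_upgrade_cmds(first_node, second_node):
    """Checklist of commands for Upgrade Parts 3-5 and 7 above,
    run once per direction within one HA pair."""
    cmds = []
    for target, partner in ((first_node, second_node), (second_node, first_node)):
        cmds += [
            # migrate LIFs off the node about to be taken over
            "network interface migrate-all -node " + target,
            # partner takes over; target reboots onto the new image
            "storage failover takeover -bynode " + partner,
            # give back once the target is in waiting-for-giveback
            "storage failover giveback -fromnode " + partner,
        ]
    return cmds

for cmd in ha_pair_upgrade_cmds("cluster-01", "cluster-02"):
    print(cmd)
```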

Post-Flight checks - Part 1: Verification

Note: After a major NDU from 8.X, you can change the name of your cluster using cluster identity modify -name newname

Ensure upgrade status is complete for each node:
set -privilege advanced
system node upgrade-revert show
set -privilege admin

Verify cluster health:
cluster show

Verify aggregates and volumes:
storage aggregate show -state !online
volume show -state !online

Verify data access protocols:
vserver nfs show
vserver cifs show
vserver fcp show
vserver iscsi show

Verify LIFs:
network interface show -status-oper down
network interface show -is-home false

Post-Flight checks - Part 2: Enabling and Reverting LIFs to Home Ports

Display status, enable, revert, and verify:
network interface show -vserver vservername
network interface modify -vserver vservername -lif * -status-admin up
network interface revert *
network interface show -vserver vservername

Post-Flight Additional Steps

1. (If required) Recreate data-protection mirror and load-sharing mirror relationships
2. (If required) Set the cluster management LIF for Remote Support Agent (RSA) using rsa setup

Image: OnCommand System Manager - 4-node CDOT Cluster on 8.1.2P4D2

Saturday, 25 May 2013

An Overview of some NetApp Data ONTAP Tuning Options

Note: The commands below are Data ONTAP 7-Mode commands. They can be run in Clustered Data ONTAP by simply placing run -node nodename -command in front (the information here actually comes from a Clustered Data ONTAP 8.1.X system.)

Listed in this post:

- maxfiles
- aggr options (all 13)
- vol options (all 41)
- options *.* (97 listed and includes disk.*, raid.*, and wafl.*)

Maxfiles (Inodes)

To query the number of inodes in a FlexVol:
maxfiles volumename
To increase the number of inodes in a FlexVol:
maxfiles volumename 10000000
IMPORTANT: “Increasing the maximum number of files consumes disk space, and the number can never be decreased. Configuring a large number of inodes can also result in less available memory after an upgrade, which means you might not be able to run WAFL_check.”

Aggregate Options (Complete Listing of all 13)

aggr options aggrname free_space_realloc {on | off | no_redirect}
  - enable/disable free space reallocation on aggregate aggrname along with redirect scanner

aggr options atvname fs_size_fixed {on | off}
  - do not grow filesystem for aggregate or traditional volume atvname

aggr options aggrname ha_policy {none | cfo | sfo}
  - set ha_policy on aggregate aggrname

aggr options aggrname [-f] hybrid_enabled {on | off}
  - allow mixing HDD and SSD groups in aggregate aggrname

aggr options aggrname lost_write_protect {on | off}
  - default is to enable lost write protection on the aggregate

aggr options atvname nosnap {on | off}
  - disable snapshots on aggregate or traditional volume atvname

aggr options aggrname percent_snapshot_space percent
  - change aggregate aggrname's percent_snapshot_space setting

aggr options atvname raidsize size
  - set RAID group size on aggregate or traditional volume atvname

aggr options atvname raidtype (new-type = {raid4 | raid_dp})
  - change aggregate or traditional volume atvname's RAID type to new-type

aggr options atvname resyncsnaptime minutes
  - set RAID mirror resync snapshot frequency for aggregate or traditional volume atvname

aggr options atvname root
  - aggregate or traditional volume atvname becomes root on reboot

aggr options atvname snapmirrored off
  - discontinue mirroring on aggregate or traditional volume atvname

aggr options aggrname snapshot_autodelete {on | off}
  - change aggregate aggrname's snapshot_autodelete setting

Volume Options (Complete Listing of all 41)

vol options volname acdirmax seconds
  - set timeout before verifying directories

vol options volname acdisconnected seconds
  - set the timeout to serve items after last verify when in disconnected mode

vol options volname acregmax seconds
  - set timeout before verifying regular files

vol options volname acsymmax seconds
  - set timeout before verifying symbolic links

vol options volname actimeo seconds
  - set timeout to verify items without an explicit timeout

vol options volname convert_ucode {on | off}
  - enable UNICODE conversion on volume volname (default: off)

vol options volname create_ucode {on | off}
  - enable UNICODE creation on volume volname (default: off)

vol options volname disconnected_mode {off | hard | soft}
  - set the disconnected operation mode for caching

vol options volname dlog_hole_reserve {on | off}
  - enable hole reservation of delete log on volume volname

vol options volname extent {on | space_optimized | off}
  - configure extents on volume volname

vol options volname flexcache_autogrow {on | off}
  - enable autogrow on FlexCache volume volname

vol options volname flexcache_min_reserve size[k|m|g|t]
  - set size to be guaranteed for caching by volume

vol options volname fractional_reserve percentage
  - default is to have 100% reserve

vol options volname fs_size_fixed {on | off}
  - do not grow filesystem for volume volname

vol options volname guarantee {none | file | volume}
  - set storage guarantee for volume volname

vol options volname maxdirsize size
  - set maximum directory size on volume volname (unit: Kbytes, default: 1% of total system memory in KB, minimum: 4KB, maximum: 4194303KB)

vol options volname minra {on | off}
  - enable minimum readahead on volume volname

vol options volname nbu_archival_snap {on | off} [-f]
  - enable/disable archival snapshots for SnapVault for NetBackup on volume volname

vol options volname no_atime_update {on | off}
  - disable atime updates on volume volname

vol options volname no_delete_log {on | off}
  - disable delete logging on volume volname

vol options volname no_i2p {on | off}
  - disable inode to path on volume volname

vol options volname nosnap {on | off}
  - disable snapshots on volume volname

vol options volname nosnapdir {on | off}
  - disable '.snapshot' directory on volume volname

vol options volname nvfail {on | off}
  - invalidate file handles on NVRAM failure on volume volname

vol options volname raidsize size
  - set RAID group size on traditional volume volname

vol options volname raidtype (new-type = {raid4 | raid_dp})
  - change traditional volume volname's RAID type to new-type

vol options volname read_realloc {on | space_optimized | off}
  - configure read reallocation on volume volname

vol options volname resyncsnaptime minutes
  - set RAID mirror resync snapshot frequency for traditional volume volname

vol options volname [-f] root
  - volume volname becomes root on reboot

vol options volname schedsnapname {create_time | ordinal}
  - set scheduled snapshot names to include creation time or ordinal on volume volname

vol options volname snaplock_autocommit_period {none|(count){h|d|m|y}}
  - configure SnapLock autocommit period

vol options volname snaplock_default_period {(count){s|h|d|m|y}}|min|max|infinite
  - configure default SnapLock retention period

vol options volname snaplock_maximum_period {(count){s|h|d|m|y}}|infinite
  - configure maximum SnapLock retention period

vol options volname snaplock_minimum_period {(count){s|h|d|m|y}}|infinite
  - configure minimum SnapLock retention period

vol options volname snapmirrored off
  - discontinue mirroring on volume volname

vol options volname snapshot_clone_dependency {on | off}
  - enable option to remove dependency of backing snapshot on other snapshots for volume volname (default: off)

vol options volname svo_allow_rman {on|off}
  - set SnapValidator to correctly check RMAN backup data

vol options volname svo_checksum {on|off}
  - enable SnapValidator data checksumming on volume volname

vol options volname svo_enable {on|off}
  - enable SnapValidator for volume volname

vol options volname svo_reject_errors {on|off}
  - set SnapValidator to prevent all invalid operations (default is to only log the errors)

vol options volname try_first {volume_grow | snap_delete}
  - set volume to autogrow or delete snapshots first
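
The entries above all follow the 7-Mode pattern `vol options volname option value`. As a hedged illustration only (the prompt `fas1>` and volume name `vol1` are hypothetical, not from the original listing):

```
fas1> vol options vol1                        # list the current option settings for vol1
fas1> vol options vol1 no_atime_update on     # e.g. stop atime updates on read-heavy data
fas1> vol options vol1 guarantee none         # e.g. thin-provision the volume
```

Running `vol options volname` with no option name simply prints the volume's current settings, which is a safe way to record the old head's configuration before a swap.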

Options (Listing of 97*)

*This is the complete list from the 8.1.2 C-Mode SIM; it is missing some options you would find on a production FAS/V-Series controller!

options acp.domain
options acp.enabled
options acp.netmask
options acp.port
options autosupport.doit
options cdpd.enable
options cdpd.holdtime
options cdpd.interval
options cf.giveback.auto.after.panic.takeover
options cf.giveback.auto.cancel.on_network_failure
options cf.giveback.auto.cifs.terminate.minutes
options cf.giveback.auto.delay.seconds
options cf.giveback.auto.enable
options cf.giveback.auto.terminate.bigjobs
options cf.giveback.check.partner
options cf.hw_assist.enable
options cf.hw_assist.partner.address
options cf.hw_assist.partner.port
options cf.takeover.change_fsid
options cf.takeover.detection.seconds
options cf.takeover.on_disk_shelf_miscompare
options cf.takeover.on_failure
options cf.takeover.on_network_interface_failure
options cf.takeover.on_network_interface_failure.policy all_nics
options cf.takeover.on_panic
options cf.takeover.on_reboot
options cf.takeover.on_short_uptime
options cf.takeover.use_mcrc_file
options disk.asup_on_mp_loss
options disk.auto_assign
options disk.latency_check.enable
options disk.maint_center.allowed_entries
options disk.maint_center.enable
options disk.maint_center.max_disks
options disk.maint_center.rec_allowed_entries
options disk.maint_center.spares_check
options disk.powercycle.enable
options disk.recovery_needed.count
options ems.autosuppress.enable
options flexcache.access
options flexcache.deleg.high_water
options flexcache.deleg.low_water
options flexcache.enable
options flexcache.per_client_stats
options flexscale.enable
options flexscale.lopri_blocks
options flexscale.normal_data_blocks
options flexscale.pcs_high_res
options flexscale.pcs_size
options flexscale.rewarm
options locking.grace_lease_seconds
options qos.classify.count_all_matches
options raid.background_disk_fw_update.enable
options raid.disk.background_fw_update.raid4.enable
options raid.disk.copy.auto.enable
options raid.disktype.enable
options raid.media_scrub.enable
options raid.media_scrub.rate
options raid.media_scrub.spares.enable
options raid.min_spare_count
options raid.mirror_read_plex_pref
options raid.reconstruct.perf_impact
options raid.reconstruct.wafliron.enable
options raid.resync.perf_impact
options raid.rpm.ata.enable
options raid.rpm.fcal.enable
options raid.scrub.duration
options raid.scrub.enable
options raid.scrub.perf_impact
options raid.scrub.schedule
options raid.timeout
options raid.verify.perf_impact
options shelf.atfcx.auto.reset.enable
options shelf.esh4.auto.reset.enable
options shelf.fw.ndu.enable
options snapmirror.access
options snapmirror.checkip.enable
options snapmirror.delayed_acks.enable
options snapmirror.enable
options snapmirror.log.enable
options snapmirror.vbn_log_enable
options snapmirror.volume.local_nwk_bypass.enable
options snapmirror.vsm.volread.smtape_enable
options stats.archive.enable
options tape.reservations
options wafl.default_nt_user
options wafl.default_qtree_mode
options wafl.default_security_style
options wafl.default_unix_user
options wafl.group_cp
options wafl.inconsistent.asup_frequency.blks
options wafl.inconsistent.asup_frequency.time
options wafl.inconsistent.ems_suppress
options wafl.maxdirsize
options wafl.nt_admin_priv_map_to_root
options wafl.root_only_chown
options wafl.wcc_minutes_valid
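
These system-wide settings are read and set with the 7-Mode `options` command. A hedged sketch of typical usage (the prompt `fas1>` and the values shown are illustrative assumptions, not from the listing above):

```
fas1> options cf.giveback.auto.enable        # print the current value of a single option
fas1> options raid.scrub                     # a prefix lists all matching options (raid.scrub.*)
fas1> options autosupport.doit TESTMSG       # trigger a test AutoSupport message
```

The prefix-match behaviour makes it easy to dump a whole family of settings (e.g. all `cf.*` takeover/giveback options) when documenting a controller ahead of a head swap.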