Saturday, 23 March 2013

NetApp FAS3140A Port Identification

A quick post that may prove useful if you’re trying to map ports on your NetApp FAS3140 controllers to physical connections.

Figure: FAS3140A Logical View with Ports Identified

The figure above shows a FAS3140A: two controllers, each controller with:
SLOTS 1 & 2 populated by Quad-Port FC HBA cards
SLOTS 3 & 4 populated by Quad-Port Ethernet cards

The colour scheme has:
Yellow = Serial Port
Blue = Ethernet Ports
Orange = Fibre Channel Ports

The takeaways from this post are:
- Ethernet port names begin with an e.
- FC port names start with a number.
- The number is the slot number, or 0 for onboard ports.
- The onboard FC ports are, from left to right, 0a, 0b, 0c, 0d.
- The ports in slots 1 to 4 are, from left to right, (e)Xd, (e)Xc, (e)Xb, (e)Xa (where X = the slot number).

We start at d on the left because the HBA cards sit “upside down” when plugged into the riser, which is mid-chassis.
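
To make that concrete, here is how the naming rule maps onto the configuration above (a worked example derived from the rules, not output captured from a live system):

Onboard FC ports, left to right: 0a, 0b, 0c, 0d
Slot 1 FC HBA ports, left to right: 1d, 1c, 1b, 1a
Slot 3 Ethernet ports, left to right: e3d, e3c, e3b, e3a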

Friday, 22 March 2013

Clustered ONTAP Administration Notes CLI Crib Sheet: Part i/v

This 5-part posting is a very rough-and-ready CLI crib sheet for Clustered ONTAP.


The version of CDOT here is 8.1.1 / 8.1.2.

1: INSTALLATION AND CONFIGURATION

Basic Steps for Setting Up a Cluster:
1. Connect controllers, disks, and cables
2. Set up and configure nodes
3. Install software onto nodes
4. Initialize disks
5. Create a cluster
6. Join additional nodes to the cluster
7. Create aggregates and volumes
8. Configure virtual servers (Vservers)
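
A minimal sketch of how these steps map onto clustershell commands covered later in this sheet (the node, aggregate, Vserver and volume names are just examples):

> cluster setup    (steps 2-6)
> storage aggregate create -aggr n01_aggr1 -node clusa-01 -diskcount 3    (step 7)
> vserver create -vserver vs1 -rootvolume vs1root -aggr n01_aggr1 -ns-switch file -rootvolume-security-style unix    (step 8)
> volume create -vserver vs1 -volume volume1 -aggr n01_aggr1 -junction-path /vol1    (step 7)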

LOADER> PROMPT

printenv
setenv AUTOBOOT true
boot_primary
version

To copy flash0a to flash0b:
flash flash0a flash0b
ifconfig auto
ifconfig INTERFACE addr=IP mask=MASK gw=GATEWAY
flash tftp://TFTP_SERVER_IP/PATH_TO_IMAGE flash0a
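
A worked netboot example with placeholder values (the interface name, IP addresses and image path are made up; substitute your own):

ifconfig e0M addr=192.168.0.50 mask=255.255.255.0 gw=192.168.0.1
flash tftp://192.168.0.10/images/image.tgz flash0a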

The environment variables for Cluster-Mode can be set as follows:
set-defaults
setenv ONTAP_NG true
setenv bootarg.init.usebootp false
setenv bootarg.init.boot_clustered true

CREATING A CLUSTER

> cluster setup
> cluster show

Note: The following two commands worked as shown below in 8.1.1 but have reduced functionality in 8.1.2:

> cluster create -license XXX -clustername XXX -mgmt-port XXX -mgmt-ip XXX -mgmt-netmask XXX -mgmt-gateway XXX -ipaddr1 CLUSTERIP1 -ipaddr2 CLUSTERIP2 -netmask CLUSTERMASK -mtu 9000

> cluster join -clusteripaddr REMOTECLUSTERIP -ipaddr1 CLUSTERIP1 -ipaddr2 CLUSTERIP2 -netmask CLUSTERNETMASK -mtu 9000

> system license add ?
> system date modify ?
> system services ntp config show
> system services ntp server show

2: ARCHITECTURE

All nodes in a cluster have these kernel modules:
common_kmod.ko, nvram5.ko, nvram_mgr.ko, nvr.ko, maytag.ko, nbladekmod.ko, scsi_blade, spinvfs.ko

User-Space Processes:
mgwd, vldb, vifmgr, bcomd, ngsh, ndmpd, secd, spmd

> system node modify -node XXX -eligibility false
> cluster ha -enable true

Note: cluster ha is enabled for two-node clusters only!

3: USER INTERFACE

> storage aggregate create
> net int show
> system node run -node clusa-02 hostname
> system node run -node clusa-02
Note: Type ‘exit’ or ‘Ctrl-D’ to return to the CLI.

How to get into the systemshell:
> security login unlock -username diag
> security login password -username diag
> set -privilege advanced
*> system node systemshell local
% exit
*> set -privilege admin

> volume show -vserver * -volume acct_* -used >500gb

4: PHYSICAL DATA STORAGE

> storage failover show
> storage aggregate 64bit-upgrade start
> storage aggregate add-disks
> storage aggregate show

How to Enable Flash Pools:
> storage aggregate modify -aggr aggr3 -hybrid-enabled true
> storage aggregate add-disks -aggr aggr3 -disktype SSD -diskcount 12

EXAMPLE 4.1: CREATING A NEW AGGREGATE

> storage disk show
> disk assign -disk clusa-01:v4* -owner clusa-01
> disk assign -disk clusa-02:v4* -owner clusa-02
> storage aggregate show
> volume show
> storage aggregate create  -aggr n01_aggr1 -node clusa-01 -diskcount 3
> storage aggregate show -aggr n01_aggr1

EXAMPLE 4.2: ADD DISKS TO THE AGGREGATE

> aggr add-disks -aggr n01_aggr1 -diskcount 2
> aggr show -aggr n01_aggr1 -instance

EXAMPLE 4.3: CREATE VIRTUAL SOLID STATE DISKS (SIM)

> security login unlock -username diag -vserver clusa
> security login password -username diag -vserver clusa
> set diag
*> systemshell -node clusa-02
% setenv PATH /sim/bin:$PATH
% cd /sim/dev
% sudo vsim_makedisks -t 35 -a 2 -n 14
% exit
*> reboot -node clusa-02

EXAMPLE 4.4: CREATE A FLASH POOL

> stor aggr create -aggr n02_fp1 -node clusa-02 -diskcount 10
> stor aggr show -aggr n02_fp1
> stor disk assign -disk clusa-02:v6* -owner clusa-02
> stor disk show -type SSD
> stor aggr modify -aggr n02_fp1 -hybrid-enabled true
> stor aggr add-disks -aggr n02_fp1 -diskcount 6 -disktype SSD
> stor aggr show -aggr n02_fp1

5: VIRTUAL DATA STORAGE

> vserver show
> volume show
> volume create
> volume mount

NFS mount command (Linux):
mount DATA_IP:/ /mnt/vserver1

Infinite Volumes:
> aggr create -aggregate aggr1 -diskcount 70
> aggr create -aggregate aggr2 -diskcount 70
> vserver create -vserver vs0 -rootvolume vs0_root -is-repository true
> volume create -vserver vs0 -volume repo_vol -junction-path /NS -size 768GB
> volume show -volume repo_vol
> volume show -is-constituent true

EXAMPLE 5.1 CREATE A CLUSTER VSERVER

> vserver show
> volume show
> vserver create -vserver vs1 -rootvolume vs1root -aggr n01_aggr1 -ns-switch file -rootvolume-security-style unix
> vserver show
> vserver show -vserver vs1
> volume show
> volume show -vserver vs1 -volume vs1root
> stor aggr show

EXAMPLE 5.2: CREATE A FLEXIBLE VOLUME

> volume create -vserver vs1 -volume volume1 -aggr n01_aggr1 -junction-path /vol1
> vol show
> vol show -vserver vs1 -volume volume1

Clustered ONTAP Administration Notes CLI Crib Sheet: Part ii/v

This 5-part posting is a very rough-and-ready CLI crib sheet for Clustered ONTAP.


The version of CDOT here is 8.1.1 / 8.1.2.

6: PHYSICAL NETWORKING

> network port modify ?
Note: two cluster ports are required per node

> network port show
> network fcp adapter show

7: VIRTUAL NETWORKING

> network interface show
> network routing-groups show
> network routing-groups route show
> network interface show
> net int failover-groups create -failover-group customfailover1 -node clusa-02 -port e0d
> net int modify -vserver vs2 -lif data1 -use-failover-group enabled -failover-group customfailover1
> net int show -vserver vs2 -lif vs2_lif1
> net int failover-groups show
> network interface create

EXAMPLE 7.1: CREATE A NAS DATA LIF

> net int create -vserver vs1 -lif data1 -role data -home-node clusa-01 -home-port a0a -address X.X.X.X -netmask X.X.X.X
> net int show
> net int show -vserver vs1 -lif data1

EXAMPLE 7.2: MIGRATE A DATA LIF

> net int migrate -vserver vs1 -lif data1 -dest-node clusa-02 -dest-port e0c
> net int show
> net int show -vserver vs1 -lif data1
> net int revert -vserver vs1 -lif data1

EXAMPLE 7.3: RE-HOME A DATA LIF

> net int modify -vserver vs1 -lif data1 -home-node clusa-02 -home-port e0d
> net int show
> net int revert -vserver vs1 -lif data1
> net int show

EXAMPLE 7.4: FAIL OVER A DATA LIF

> net int show -vserver vs1 -lif data1 -instance
> net int failover show
> system node reboot -node clusa-02
> net int show
> net int revert -vserver vs1 -lif data*

EXAMPLE 7.5: DELETE THE VLANS AND THE INTERFACE GROUP

> network interface show
> network port show
> net port vlan show
> net port vlan delete -node clusa-01 -vlan-name a0a-11
> net port vlan delete -node clusa-01 -vlan-name a0a-22
> net port vlan delete -node clusa-01 -vlan-name a0a-33
> set advanced
*> net port modify -node clusa-01 -port a0a -up-admin false
*> set admin
> net port ifgrp delete -node clusa-01 -ifgrp a0a

Clustered ONTAP Administration Notes CLI Crib Sheet: Part iii/v

This 5-part posting is a very rough-and-ready CLI crib sheet for Clustered ONTAP.


The version of CDOT here is 8.1.1 / 8.1.2.

8: NAS PROTOCOLS

> vserver services nis-domain
> vserver services ldap
> vserver services Kerberos-realm
> vserver export-policy
> vserver export-policy rule
> vserver nfs modify ?
> vserver nfs modify -v4.1-pnfs enabled

> vserver cifs options modify -vserver vs1 -smb2-enabled true
> vserver cifs create -vserver vs1 -domain lab.priv -cifs-server MYCIFS
> vserver cifs share create -vserver vs1 -share-name root -path / -share-properties browsable
> vserver cifs share create -vserver vs1 -share-name root_pw -path /.admin -share-properties browsable
> vserver cifs share create -vserver vs1 -share-name %w -path /user/%w -share-properties browsable,homedirectory

%d - user’s Domain
%w - user’s Windows login name
%% - to represent a literal “%”
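
For example, a hypothetical per-user home-directory share laid out by domain (the /user/%d/%w path is made up; the properties are the same as the share above):

> vserver cifs share create -vserver vs1 -share-name %w -path /user/%d/%w -share-properties browsable,homedirectory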

> vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern "lab\\Administrator" -replacement "root"
> vserver name-mapping create -vserver vs1 -direction unix-win -position 1 -pattern "root" -replacement "lab\\Administrator"
> vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern "lab\\(.+)" -replacement "\1"
> vserver name-mapping create -vserver vs1 -direction unix-win -position 1 -pattern "(.+)" -replacement "lab\\\1"

EXAMPLE 8.1: ACCESSING A CIFS SHARE FROM A WINDOWS CLIENT

net view clusvs2
net use * \\clusvs2\vol1
net use * \\clusvs2\rootdir
dir z:
dir y:

> cifs share access-control show -vserver vs2
> cifs share access-control modify -vserver vs2 -share vol1 -user-or-group Everyone -permission Read
> cifs share access-control show -vserver vs2
> cifs share access-control modify -vserver vs2 -share vol1 -user-or-group Everyone -permission Full_Control

EXAMPLE 8.2: CONFIGURE CIFS HOME DIRECTORIES

> cifs home-directory search-path add -vserver vs2 -path /vs2vol1
> cifs home-directory search-path show
> cifs share create -vserver vs2 -share-name "~%w" -path "%w" -share-properties oplocks,browsable,changenotify,homedirectory
> cifs share show -vserver vs2

net use
z:
dir
md administrator
dir
dir administrator
net view \\clusvs2
net use * \\clusvs2\~administrator

EXAMPLE 8.3: ACCESS YOUR DATA FROM AN NFS CLIENT

> vserver export-policy rule show -vserver vs2

mkdir /mnt/vs2
mkdir /mnt/path01
mount -t nfs DATA_LIF_IP:/ /mnt/vs2
mount -t nfs DATA_LIF_IP:/vs2vol1 /mnt/path01
cd /mnt/vs2/vs2vol1/administrator/
ls

> vol show -volume vs2_vol01 -field used

cd /mnt/path01
cp /usr/include/* .
ls

> vol show -volume vs2_vol01 -field used

cd /mnt/vs2/path01
ls

9: SAN PROTOCOLS

> system license add ISCSICODE
> system license show
> storage aggregate create -aggregate aggr_iscsi_2 -node clusa-02 -diskcount 7
> aggregate show
> vserver create -vserver vsISCSI2 -rootvolume vsISCSI2_root -aggregate aggr_iscsi_2 -ns-switch file -nm-switch file -rootvolume-security-style ntfs
> vserver iscsi create -vserver vsISCSI2 -target-alias vsISCSI2 -status up
> vserver iscsi show
> network interface create -vserver vsISCSI2 -lif i2LIF1 -role data -data-protocol iscsi -home-node clusa-01 -home-port e0c -address 192.168.239.40 -netmask 255.255.255.0 -status-admin up
> net int show -vserver vsISCSI2

Note: Failover groups do not apply to SAN LIFs

> lun portset create -vserver vsISCSI2 -portset portset_iscsi2 -protocol iscsi -port-name i2LIF1 i2LIF2 i2LIF3 i2LIF4
> lun portset show

> lun igroup create -vserver vsISCSI2 -igroup ig_myWin2 -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:win-frtp2qb78mr -portset portset_iscsi2
> igroup show

> vserver iscsi tpgroup show -vserver vsISCSI2
> igroup show -instance ig_myWin2

> vserver iscsi session show -vserver vsISCSI2
> vserver iscsi connection show -vserver vsISCSI2

> vol create -vserver vsISCSI2 -volume vol1 -aggregate aggr_iscsi_2 -size 150MB -state online -type RW -policy default -security-style ntfs
> vol show

> lun create -vserver vsISCSI2 -volume vol1 -lun lun_vsISCSI2_1 -size 50MB -ostype windows_2008 -space-reserve enabled
> lun show -vserver vsISCSI2

> lun map -vserver vsISCSI2 -volume vol1 -lun lun_vsISCSI2_1 -igroup ig_myWin2
> lun show -instance /vol/vol1/lun_vsISCSI2_1

10: STORAGE EFFICIENCY

> volume efficiency on -vserver vs1 -volume vol1
> volume efficiency modify -vserver vs1 -volume vol1 -compression true

> volume clone create -vserver vs1 -flexclone vol1clone -parent-volume vol1
> volume clone split start -vserver vs1 -flexclone vol1clone
> volume clone split show -vserver vs1 -flexclone vol1clone

Clustered ONTAP Administration Notes CLI Crib Sheet: Part iv/v

This 5-part posting is a very rough-and-ready CLI crib sheet for Clustered ONTAP.


The version of CDOT here is 8.1.1 / 8.1.2.

11: DATA PROTECTION

> volume snapshot restore ?
> snapmirror promote ?

> volume snapshot promote ?

> volume snapshot show -vserver vs7 -volume vs7_vol1

> volume move ?
> volume copy ?
> snapmirror ?

Mirror Creation Steps:
1. volume create
2. snapmirror create
3. (DP Mirror) snapmirror initialize
3. (LS Mirror) snapmirror initialize-ls-set
4. (DP Mirror) snapmirror update
4. (LS Mirror) snapmirror update-ls-set

> snapmirror show
> snapmirror show -source-volume vs2root -type ls -instance

Clients are transparently directed to a load-sharing mirror copy for read operations rather than to the read/write volume, unless the special “.admin” path is used.
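
If a client needs read/write access through such a namespace (for example, to see a newly created file immediately), mount the special .admin path instead, as demonstrated in Example 11.3 below (placeholder IP):

mount DATA_LIF_IP:/.admin /mnt/vs2rw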

> vol snap show -vserver vs2 -volume vs2root

SNAPMIRROR DATA PROTECTION

Create Volume for Mirror (on DR Vserver):
> volume create -vserver vserver -volume datavolume_dp -aggr aggrname -size equal_to_datavolume_A_size -type dp

Create Mirror (from DR site):
> snapmirror create -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Initialize Mirror - Baseline Transfer (from DR site):
> snapmirror initialize -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Update Mirror - Incremental Transfers (from DR site):
> snapmirror update -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Break Mirror (from DR) - Make destination writeable:
> snapmirror break -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Resync Mirror (From DR) - Resume relationship:
> snapmirror resync -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Delete Mirror (From DR) - Remove Relationship:
> snapmirror delete -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A

Create a new relationship with DR as source:
> snapmirror create -destination-path PRI://vserver/datavolume_A -source-path DR://vserver/datavolume_dp

Resync the mirror (From DR):
> snapmirror resync -destination-path PRI://vserver/datavolume_A -source-path DR://vserver/datavolume_dp

Steps to Resume Original Mirror:
1. Redirect clients
2. Update mirrored volumes using snapmirror update
3. Break the mirror using snapmirror break
4. Delete the mirror using snapmirror delete
5. Recreate an original relationship using snapmirror create (Dest:DR - Source:Primary)
6. Resync from DR site using snapmirror resync (no re-baseline required)
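
Putting those steps together with the volume paths used above, the failback sequence looks roughly like this (a sketch; run each command from the sites indicated in the steps):

> snapmirror update -destination-path PRI://vserver/datavolume_A -source-path DR://vserver/datavolume_dp
> snapmirror break -destination-path PRI://vserver/datavolume_A -source-path DR://vserver/datavolume_dp
> snapmirror delete -destination-path PRI://vserver/datavolume_A -source-path DR://vserver/datavolume_dp
> snapmirror create -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A
> snapmirror resync -destination-path DR://vserver/datavolume_dp -source-path PRI://vserver/datavolume_A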

JOB SCHEDULES

> job schedule show

SNAPSHOT POLICIES

> volume snapshot policy show

CONFIGURING FOR NDMP

> system services ndmp modify
> system node hardware tape drive show
> system node hardware tape library show

EXAMPLE 11.1: CREATE AND INITIALIZE LS AND DP SNAPMIRROR REPLICATIONS

> volume create -vserver vs2 -volume vs2_root_ls1 -aggr n01_aggr1 -type dp

Note: An LS mirror must be created as a DP mirror and then changed.

> snapmirror create -source-cluster clusa -source-vserver vs2 -source-volume vs2_root -destination-cluster clusa -destination-vserver vs2 -destination-volume vs2_root_ls1 -type ls
> vol create -vserver vs2 -vol vs2_root_ls2 -aggr n02_aggr1 -type dp
> snapmirror create -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_ls2 -type ls
> snapmirror show
> snapmirror show -instance

Perform initial (baseline) replication:

> snapmirror initialize-ls-set -source-path clusa://vs2/vs2_root
> snapmirror show

Create two DP mirrors:

> volume create -vserver vs2 -volume vs2_root_dp1 -aggr n01_aggr1 -type dp
> volume create -vserver vs2 -volume vs2_root_dp2 -aggr n02_aggr1 -type dp

Establish DP mirror relationships:

> snapmirror create -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_dp1 -type dp
> snapmirror create -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_dp2 -type dp

Perform initial (baseline) replication to one of the DP mirrors:

> snapmirror initialize -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_dp1
> volume snapshot show -vserver vs2 -volume vs2_root
> snapmirror show -inst

EXAMPLE 11.2: COMPARE DP MIRROR REPLICATION TIMES

> snapmirror initialize -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_dp2 -foreground true
> snapmirror show -inst
> volume snapshot show -vserver vs2 -volume vs2_root
> snapmirror update -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_dp* -foreground true
> snapmirror show -inst
> volume snapshot show -vserver vs2 -volume vs2_root

EXAMPLE 11.3: ADD VOLUMES AND FILES TO A REPLICATED NAMESPACE

> volume create -vserver vs2 -volume vs2_vol03 -aggr n01_aggr1 -junction-path /vs2vol3

Linux NFS client:
cd /mnt/vs2
ls

Cluster Shell:
> snapmirror update-ls-set -source-path clusa://vs2/vs2_root

Linux NFS client:
ls /mnt/vs2
touch /mnt/vs2/myfile
mkdir /mnt/vs2rw
mount X.X.X.X:/.admin /mnt/vs2rw
touch /mnt/vs2rw/myfile
ls /mnt/vs2rw/myfile
ls /mnt/vs2/myfile

Cluster Shell:
> snapmirror update-ls-set -source-path clusa://vs2/vs2_root -foreground true

Linux NFS client:
ls /mnt/vs2/myfile

EXAMPLE 11.4: SCHEDULE PERIODIC SNAPMIRROR REPLICATIONS

> job schedule show
> snapmirror modify -destination-path clusa://vs2/vs2_root_ls1 -schedule 5min
> snapmirror show -destination-path clusa://vs2/vs2_root_ls* -instance
> snapmirror modify -destination-path clusa://vs2/vs2_root_dp1 -schedule hourly
> snapmirror show -instance
> system date show
> snapmirror show -instance

EXAMPLE 11.5: PROMOTE AN LS MIRROR

> volume show -volume vs2_root*
> snapmirror promote -source-path clusa://vs2/vs2_root -destination-path clusa://vs2/vs2_root_ls1
> volume show -volume vs2_root*
> snapmirror show
> snapmirror update-ls-set -source-path clusa://vs2/vs2_root_ls1 -foreground true

EXAMPLE 11.6: SET UP AN INTERCLUSTER PEER RELATIONSHIP

Create an aggregate on each node:
> aggr create -aggr aggr1 -diskcount 6 -nodes other-01
> aggr create -aggr aggr2 -diskcount 6 -nodes other-02

Change the roles of ports e0e to intercluster:
> net port show
> net port modify -node other-01 -port e0e -role intercluster
> net port modify -node other-02 -port e0e -role intercluster
> net port show

Create intercluster LIFs on the “other” cluster:
> net int create -vserver other-01 -lif o1_ic1 -role intercluster -home-node other-01 -home-port e0e -address X.X.X.X -netmask X.X.X.X
> net int create -vserver other-02 -lif o2_ic1 -role intercluster -home-node other-02 -home-port e0e -address X.X.X.X -netmask X.X.X.X
> net int show

Note: intercluster LIFs are associated with node Vservers rather than cluster Vservers.

On clusa cluster:
> net port show
> net port modify -node clusa-01 -port e0e -role intercluster
> net port modify -node clusa-02 -port e0e -role intercluster
> net port show

Create an intercluster LIF:
> net int create -vserver clusa-01 -lif n1_ic1 -role intercluster -home-node clusa-01 -home-port e0e -address X.X.X.X -netmask X.X.X.X
> net int create -vserver clusa-02 -lif n2_ic1 -role intercluster -home-node clusa-02 -home-port e0e -address X.X.X.X -netmask X.X.X.X
> net int show

Verify routing groups for the intercluster LIFs on both clusters:
> network routing-groups show

Verify failover redundancy for the interclusters LIFs on both clusters:
> net int show -role intercluster -failover

Verify the peer relationship:
> cluster peer show
> cluster peer health show
> cluster peer ping

EXAMPLE 11.7: USE THE CLI TO CONFIGURE A SNAPMIRROR RELATIONSHIP

> vol create -vserver vs9 -volume vs2_vol02 -aggr aggr2 -size 410mb -type dp

Note: The size of the destination volume must be equal to or greater than the source volume.

> snapmirror create -source-path clusa://vs2/vs2_vol01 -destination-path other://vs9/vs2_vol02 -type dp -schedule daily
> snapmirror initialize -destination-path other://vs9/vs2_vol02 -foreground true

Clustered ONTAP Administration Notes CLI Crib Sheet: Part v/v

This 5-part posting is a very rough-and-ready CLI crib sheet for Clustered ONTAP.


The version of CDOT here is 8.1.1 / 8.1.2.

12: BASIC TROUBLESHOOTING AND PERFORMANCE

After vol0 is available, logging goes to /mroot/etc/log/mlog
EMS logs are at /mroot/etc/log/ems*

> debug log

From the systemshell %, to display the log files in /mroot/etc/log/mlog:
% cd /mroot/etc/log/mlog
% pwd
% ls -l

Checking Cluster Health:
> cluster show
*> cluster ring show

For two-node clusters only:
> cluster ha show

Is the cluster network OK?
*> cluster ping-cluster -node clusa-02

Are the aggregates online?
> storage aggregate show
> storage aggregate show -state !online

Are any disks broken or reconstructing?
> storage disk show -state broken
> storage disk show -state reconstructing

Are the volumes online?
> volume show -state !online

Is storage failover (SFO) happy?
> storage failover show
> storage failover show -instance

Are all the ports OK?
> network port show

Are all the logical interfaces (LIFs) OK and home?
> net int show
> net int show -is-home false

The Service Process Manager (spmd):
*> system node systemshell local
% spmctl
% exit
*> set -privilege diagnostic
*> spm show

VLDB and D-Blade Inconsistencies:
> debug vreport show
> debug vreport fix

Statistics:
> statistics show -object ?
> statistics show -object nfs* -instance failure* -counter write* -node clusa-01

USER-GENERATED CORE FILES

If the clustershell is responsive:
> system reboot -node clusa-01 -dump true

If the node is in bad shape, from the RLM/SP:
> system core

If the RLM/SP is not configured:
% sysctl debug.debugger_on_panic=0
% sysctl debug.panic=1

MANAGING CORE FILES

> system coredump show
> system coredump save
> system coredump upload

PERFORMANCE

> dashboard alarm show
> dashboard alarm thresholds show
> dashboard health vserver show
> dashboard health vserver show-aggregate
> dashboard health vserver show-all
> dashboard health vserver show-port
> dashboard health vserver show-protocol
> dashboard health vserver show-lif
> dashboard health vserver show-volume
> dashboard storage show
> dashboard storage show -week
> dashboard performance show

> statistics show -object processor
> statistics show -node clusa-02 -object processor -instance processor1
> statistics periodic

13: CLUSTER MANAGEMENT

> volume move start ?
> volume move trigger-cutover ?
> … -validation-only

> system node image show
> system node image update -node * -package http://FTP_SERVER/IMAGE.tgz -setdefault true

> event config modify -mailfrom admin@lab.priv -mailserver mail.lab.priv
> event destination create -name crits -mail sysadmin@lab.priv
> event route modify -messagename coredump* -destination crits

> system node autosupport invoke -node clusa-02 -type test

To enable security audit logging, use:
> security audit modify -cliset on -httpset on -cliget on -httpget on

Audited commands go to mgwd.log files:
% egrep "console|ssh" /mroot/etc/mlog/mgwd.log*

> security login role show

Note: Onboard Antivirus is disabled in 8.1.2

> antivirus on-demand command show
> antivirus on-access policy show
> antivirus remedy show

EXAMPLE 13.1: MOVE A VOLUME

> volume show
> volume snapshot show -volume vs2_vol01
> vol move start -vserver vs2 -volume vs2_vol01 -destination-aggregate aggr1_node3 -foreground true
> vol show -vserver vs2 -volume vs2_vol01
> vol snap show -volume vs2_vol01
> vol move start -vserver vs2 -destination-aggregate aggr1_node1 -volume vs2_vol01
> vol show
> job show
> job show -id {jobid}

EXAMPLE 13.2: REPLACE AN AGGREGATE

> stor aggr show
> volume show -aggr n01_aggr1
> stor aggr create -aggr n01_aggr1_new -diskcount 6
> volume move start -vserver vs2 -volume vs2* -destination-aggr n01_aggr1_new
> stor aggr show n01_aggr1
> stor aggr delete n01_aggr1
> stor aggr rename -aggr n01_aggr1_new -newname n01_aggr1

EXAMPLE 13.3: USE ROLES TO DELEGATE ADMINISTRATIVE TASKS

> net int modify -vserver vs2 -lif vs2_lif -firewall-policy mgmt
> cluster show
> volume show
> volume modify -volume vs2_vol01 -comment "modified by vs2admin"
> volume show -volume vs2_vol01 -instance
> network port show
> network interface show

etcetera