Sunday, 16 February 2014

Some Researches on HP-UX (with NetApp)

 

A Few Links

kb.netapp.com is a fantastic resource; unfortunately, if you Google stuff, it will never appear in the list of answers (because kb.netapp.com requires a login, Google’s anonymous crawler cannot index its pages - at least that’s my guess). The three links below all point to kb.netapp.com.

If, when you run “igroup show”, your HP-UX initiators do not appear as logged in, remember -
“HP-UX Host Bus Adapters (HBAs) will only appear as logged in if there is I/O activity on the path.”

To check if I/O is on the partner path:
lun stats -z (zeroes the statistics)
lun stats -o (displays extra statistics including whether I/O is from the partner path)
If host multipathing (HP PV-Links, Veritas DMP) is not configured on the HP-UX host, it is possible you will see I/O on the partner path.
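
As a rough illustration only (assuming a 7-Mode controller and a made-up LUN path of /vol/hpux_vol/lun0 - substitute your own), the check might look something like:

lun stats -z /vol/hpux_vol/lun0
(wait a while with host I/O running)
lun stats -o /vol/hpux_vol/lun0

If the partner ops/partner KB counters in the -o output keep climbing, I/O is arriving over the unoptimized (partner) path and the host multipathing configuration is worth a look.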

If you have NetApp’s FC Host Utilities Kit (HUK) for HP-UX installed:
sanlun lun show
sanlun lun show -p

Some things to consider with a HP-UX SAN Migration

Credits: The below is mostly excellent advice given by awesome people who know something about HP-UX (not me)

If you’re migrating LUNs for HP-UX hosts, you need to be aware of whether they are currently using the Agile or Legacy naming standard (legacy DSFs). The Agile naming scheme was introduced in HP-UX 11i v3 (read here HP-UX 11i v3 Mass Storage Device Naming), so anything older is Legacy. The discussion below considers only the Legacy naming scheme.
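
For illustration (device names made up), the same disk device might appear as:

Legacy DSF: /dev/dsk/c4t0d1 (raw /dev/rdsk/c4t0d1) - the name encodes the hardware path, so it changes when the path changes
Agile DSF: /dev/disk/disk14 (raw /dev/rdisk/disk14) - a persistent name, unaffected by path changes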

If you’ve got a migration plan like the below (you could also consider a no-downtime online migration using LV mirroring):

1) Applications get shut down
2) HP-UX hosts get shut down
3) Final SnapMirror update
4) SnapMirror break
5) Re-serial the mirrored LUNs with the old serials (not strictly necessary with HP-UX, but we’ll do it anyway - see the sketch after this list)
6) Map LUNs to igroups using same LUN ID as before
7) HP-UX hosts are patched into a new fabric
8) HP-UX hosts get powered up
9) Applications get restarted
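
Regarding step 5, here is a rough sketch of the kind of re-serialization involved, assuming a 7-Mode destination and a made-up LUN path and serial (the LUN has to be offline to change its serial):

lun offline /vol/hpux_vol/lun0
lun serial /vol/hpux_vol/lun0 OLDSERIAL
lun online /vol/hpux_vol/lun0

Running “lun serial /vol/hpux_vol/lun0” with no new serial (or “lun show -v”) afterwards lets you confirm it matches the source.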

With Legacy naming, at step 7 above, your device IDs are likely to change. This is because, when HP-UX detects a path change such as a new fabric, it will assign new legacy device file names to the disk devices when the ioscan command is run.

Before the Migration

Before the migration you should document the output of:

ioscan -fnkC disk
bdf
vgdisplay -v
lvdisplay -v /dev/vgxx/lvxx  (for each logical volume)
ls -l /dev/*/group
sanlun lun show  (if the HUK is installed)
sanlun lun show -p (if the HUK is installed)

Note: You can also collect all the config data using NetApp’s nSANity tool.

What to Expect During the Migration

The “ioscan -fnkC disk” run on the HP-UX hosts will give you the FC_ID, as shown in the extract from an output below (basically the 3 entries after the first dot).

8/8/1/0.118.23.19.0.14.2

Whether this will change or not depends on how your fabric switches are configured. If your fabric switches are the same make, have the same domain IDs (Fabric A will have a different domain ID to Fabric B), and you’re using the same ports to connect your HP-UX hosts to, then the FC_ID might just stay the same (don’t quote me on this). If not, then your device file names will change from, say (it doesn’t actually matter what the FC_ID changes to, just that it changes):

Path 1:
/dev/rdsk/c4t0d1
/dev/rdsk/c4t0d2
/dev/rdsk/c4t0d3

Path 2:
/dev/rdsk/c6t0d1
/dev/rdsk/c6t0d2
/dev/rdsk/c6t0d3

To say:

Path 1:
/dev/rdsk/c7t0d1
/dev/rdsk/c7t0d2
/dev/rdsk/c7t0d3

Path 2:
/dev/rdsk/c8t0d1
/dev/rdsk/c8t0d2
/dev/rdsk/c8t0d3

Completing the Migration

And this is where you will need a Unix Administrator/Expert on hand who’s comfortable with page 49 and section “5 Disruptive migration” of the following link:
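
For orientation only, here’s a very rough sketch of the kind of vgexport/vgimport sequence that section covers, using a made-up volume group vg01 and legacy DSFs - treat the official document (and your HP-UX administrator) as authoritative, not this sketch.

Before the outage, capture a map file (preview mode, so the VG stays active):

vgexport -p -v -s -m /tmp/vg01.map vg01

After the fabric move, with the filesystems unmounted:

vgchange -a n vg01
vgexport vg01
ioscan -fnC disk
insf -e
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgimport -v -s -m /tmp/vg01.map vg01
vgchange -a y vg01

Then remount the filesystems and check with bdf. The minor number on the group file (0x010000 here) should re-use the unique value recorded earlier with “ls -l /dev/*/group”.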


Additional Reading:

Vserver Name-Mapping versus /etc/usermap.cfg Examples

UPDATE: Some of the mappings below, even though they don’t error and are accepted in the Clustershell, won’t work (essentially, mappings with subnets/IP addresses in them do not work in CDOT!)

On the web there is this article Examples of usermap.cfg entries for Data ONTAP (operating in 7-Mode) and I was curious how they’d look in the world of Clustered ONTAP, which leads onto this post...

Firstly, if you scan the contents of the Clustered Data ONTAP 8.2 Commands: Manual Page Reference for the word “mapping”, you’ll find these sections:

vserver cifs domain name-mapping-search (for trusted domains)
vserver group-mapping (for mapping groups to groups)
vserver name-mapping (for mapping users - including user groups)

Here, we’re only interested in: vserver name-mapping

Note: “Patterns (pattern and replacement field) can be expressed as POSIX regular expressions. For information about regular expressions, see the UNIX reference page for regex” (the following link is a good starting point http://www.unix-manuals.com/refs/regex/regex.htm)

What is a null character in regex? \x00

The /etc/usermap.cfg entries always have the Windows user on the left and UNIX user on the right. No explanation here for what the mappings do - for that there is the original link - I just show how they convert to Clustered ONTAP commands (at least how I think they should - please let me know if you come across any errors).

The Examples

1) "Bob Garg" == bobg

vserver name-mapping create -direction win-unix -pattern "Bob Garg" -replacement bobg -position 1 -vserver SVM
vserver name-mapping create -direction unix-win -pattern bobg -replacement "Bob Garg" -position 2 -vserver SVM

2) mktg\Roy => nobody

vserver name-mapping create -direction win-unix -pattern mktg\\Roy -replacement nobody -position 1 -vserver SVM

3) engr\Tom => ""

vserver name-mapping create -direction win-unix -pattern engr\\Tom -replacement \x00 -position 1 -vserver SVM

4) uguest <= *

vserver name-mapping create -direction unix-win -pattern * -replacement uguest -position 1 -vserver SVM

5) *\root => ""

vserver name-mapping create -direction win-unix -pattern *\\root -replacement \x00 -position 1 -vserver SVM

6) corporate\* == pcuser

vserver name-mapping create -direction win-unix -pattern corporate\\* -replacement pcuser -position 1 -vserver SVM

7) Engineer == *

vserver name-mapping create -direction unix-win -pattern * -replacement Engineer -position 1 -vserver SVM

8) homeusers\* == *

vserver name-mapping create -direction win-unix -pattern homeusers\\(.+) -replacement \1 -position 1 -vserver SVM

9) Engineering\* <= sunbox2:*

vserver name-mapping create -direction unix-win -pattern sunbox2:(.+) -replacement Engineering\\\1 -position 1 -vserver SVM

10) Engineering\* <= 192.9.200.70:*

vserver name-mapping create -direction unix-win -pattern 192.9.200.70:(.+) -replacement Engineering\\\1 -position 1 -vserver SVM

11) "" <= 192.9.200.0/24:*

vserver name-mapping create -direction unix-win -pattern 192.9.200.0/24:* -replacement \x00 -position 1 -vserver SVM

12) 192.9.200.0/24:test-dom\* => ""

vserver name-mapping create -direction win-unix -pattern 192.9.200.0/24:test-dom\\* -replacement \x00 -position 1 -vserver SVM

13) *\* == corpnet/255.255.0.0:*

vserver name-mapping create -direction win-unix -pattern *\\(.+) -replacement corpnet/255.255.0.0:\1 -position 1 -vserver SVM
vserver name-mapping create -direction unix-win -pattern corpnet/255.255.0.0:(.+)  -replacement *\\\1 -position 2 -vserver SVM

PS These have not been tested in anger, just tested in the CDOT CLI and there are no syntax errors from any of the above!
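
If you want to check them beyond syntax, reviewing and testing the mappings is worthwhile. The first command below definitely exists; the diag-level secd one is from memory, so treat its exact syntax as an assumption and verify it on your system:

vserver name-mapping show -vserver SVM
set diag
diag secd name-mapping show -node NODENAME -vserver SVM -direction win-unix -name DOMAIN\\user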

PPS It would be interesting if there was a tool to convert “usermap.cfg” to “vserver name-mapping” - perhaps another post...

Image: Name Mapping in OnCommand System Manager

Sunday, 2 February 2014

Rebuilding a CF Card on a FAS3140A that was Out-of-Sorts

This was quite an interesting experience, so - seeing as I like to share things - here’s the story…

Firstly, you might be wondering why in the title the FAS3140A is Out-of-Sorts; well - without going into too much detail - there’d been a slight problem with an upgrade from 7.3.3 to 8.1.3 the previous weekend and one head was on 8.1.3, and the other (the head I was interested in fixing) was failing to boot - NetBoot-ing to 8.1.3 or back to 7.3.3 had been tried without success, so an engineer was required on site to fix it. And if you’re wondering why a week went by with one head in takeover of the other, well, it was a pre-prod system, and the fix is a little disruptive (and there was some other upgrade work to be done after fixing this little issue…)

I arrived on site armed with my Transcend Compact Flash Card Reader.

Image: Transcend P8 15-in-1 Card Reader
I checked version and partner version (from the head in takeover) and both said 8.1.3. I compared the output of printenv on the 7.3.3 head’s boot loader with the 8.1.3 head (from systemshell % kenv), and the 7.3.3 boot loader still thought it was 7.3.3 and was in a complete mess after the attempts at NetBoot-ing it.

So, I removed the 7.3.3 head’s CF card, copied off all the contents, and followed the instructions:

Replacing the CompactFlash card on a 31xx system running Data ONTAP 8.0

- and the section -

Installing the boot device using a PC or laptop with a card reader/writer on

Image: 1GB Compact Flash Card from a FAS3140
Essentially, the document details how to rebuild the Data ONTAP 8.X CF Card from scratch using the software image file (in this case 8.1.3P3_q_image.tgz) to populate the re-created folder structure:

x86_64/
   freebsd/
      image1/
      image2/
   diag/
   firmware/
common/
   firmware/

There was one part in the procedure that had me stumped though - since the 7.3.3 head had never been on 8.1.3, there was no varfs.tgz to copy from its /etc folder (I investigated this while connected to the in-takeover 8.1.3 controller - which fortunately was licensed for CIFS - by mapping a network drive to \\partner_ip\c$).

I could not use the varfs.tgz from the working, in-takeover 8.1.3 controller, so in went the reconstructed Compact Flash card minus a varfs.tgz, and we booted the head up.

The head boots to 8.1.3 and we get the:

The boot device has changed. Use option (6) to restore the system configuration.

There I was thinking there’d be no backup configuration, since the head had never been on 8.1.3 and no backup config (varfs.tgz) was on the disks; fortunately, Data ONTAP is clever enough to get around this (I’m not quite sure of the mechanism, but it works…)

So, I hit the:

Option 6 ‘Update flash from backup config’

It reboots again, back to:

The boot device has changed. Use option (6) to restore the system configuration.

 So, I hit - for the second time - the:

Option 6 ‘Update flash from backup config’

And finally, we have our head at:

Waiting for giveback…

On the partner we do:

cf giveback

And watch the repaired head boot up (now on 8.1.3). Here’s where the disruptive bit happens - it gives back, but then the controller I was repairing did a reboot whilst not in takeover (the disruption didn’t matter in this instance since all hosts were down anyway).

Finally, test everything is good.

All good and this job done!

Brief Notes on Relocating a Cluster Interconnect Switch

Not something that’s going to be done much, if ever. In theory, simply powering off the switch you want to move should be fine every time (be sure the configuration is backed up first though); after all, having two cluster interconnect switches is designed for resilience, so that if one fails everything continues on its merry way.

Image: Two x Cisco 5010 Cluster Interconnect Switches

If you want to take the cautious approach though (always a good approach to take), here are some notes to follow:

1) General CDOT Health Checks (Use generously) / Obtaining Information

cluster show
cluster ping-cluster -node NODENAME
net int show -role cluster
net port show -role cluster
system node run -node NODENAME -command cdpd show-neighbors -v
system node run -node NODENAME -command ifstat -a -z
system node run -node NODENAME -command ifstat PORTNAME

event log show -severity EMERGENCY
event log show -severity ALERT
event log show -severity CRITICAL
event log show -severity ERROR
event log show -severity WARNING
event log show -time > 10m

2) Downing the Switch (Switch 2 here)

2.1 Migrating clus2 lifs and downing ports

Note i: Here we’re going to down switch 2, hence moving the clus2 LIFs, and we’re considering a FAS62XX with clus2 on e0e.
Note ii: Migration of cluster lifs must be done from the local node otherwise you get the error:
Error: command failed: Migration of cluster lifs must be done from the local node.
Note iii: Similarly, downing of cluster ports must be done from the local node.

Connected to NODE-X via its Node Management LIF / Service Processor (> system console):

set advanced
net int migrate -vserver  NODE-X -lif clus2 -dest-node NODE-X -dest-port e0c
net port modify -node NODE-X -port e0e -up-admin false

Repeat for as many nodes as are in the cluster…

2.2 On the switch to be powered off (Switch 2 here)

copy run start
show interface brief
show port-channel summary

2.3 On the switch that remains powered on (Switch 1 here)

show interface brief
show port-channel summary

configure
interface ethernet 1/13-20
shutdown

Note: Here we’re considering a Cisco 5010 with 8 ISLs on ports 13-20. Only the ISL Port-Channel ports get shut down!

3) Powering Off the Switch (Switch 2 here)

Power off the switch, un-cable, move, re-cable, and power up

4) Upping the Switch (Switch 2 here)

4.1 On the switch that remained powered on (Switch 1 here)

configure
interface Ethernet 1/13-20
no shutdown

show interface brief
show port-channel summary

Note: Confirm the port-channel is fully established.

4.2 Upping ports and reverting clus2 lifs

Note: This can be done from one Clustershell session

set advanced
net port modify -node NODE-X -port e0e -up-admin true
net int revert -vserver NODE-X -lif clus2

Repeat for as many nodes as are in the cluster…

THE END!

Saturday, 1 February 2014

Brief Notes on Advanced Troubleshooting in CDOT

A few rough notes that might save one’s bacon one day! It’s unlikely you’ll ever have a need to use any of the information in this post - if you have a problem you’ll be calling NetApp Global Support and not trying stuff found on a completely unofficial NetApp enthusiast’s blog post, stuff which is not going to get updated, and where the author never really expects anyone to actually read any of it - still, it’s interesting to know these things are there! As always - with this and any other unofficial blog - caveat lector!

The Obligatory Image: Apologies if the sight of juicy succulent bacon offends - no offense intended!
Most of the commands below are available from the clustershell diag privilege level (::> set d); and a lot of the others via the systemshell (%).

::> set d
::*> sec login unlock diag
::*> sec login password diag
::*> systemshell -node NODENAME
%

To get a new clustershell from systemshell

% ngsh
::*> exit
%

Note: If you log into the console as diag it takes you to the systemshell.

Cluster Health Basics

::*> cluster show
::*> cluster ring show

Effect of the following replicated database (RDB) applications not running

mgwd … there is no clustershell
vifmgr … you cannot manage networking
vldb … you cannot create volumes
bcomd … you cannot manage SAN data access

Moving Epsilon

::*> system node modify -node OLDNODEwEPSILON -epsilon false
::*> system node modify -node NEWNODEwEPSILON -epsilon true

You have to set the original to false and the new owner to true. If you try to set the new owner to true without first setting the original to false, you get this error:
Error: command failed: Could not change epsilon of specified node: SL_EPSILON_ERROR (code 36). Epsilon manipulation error: The epsilon must be assigned to at most one eligible node, and it is required for a single node cluster. In two node cluster HA configurations epsilon cannot be assigned.

Note: Also remember the use of the below when a system is being taken down for prolonged maintenance -
::*> system node modify -node NODENAME -eligibility false

Types of Failover

::*> aggr show -fields ha-policy
ha-policy = cfo (cluster failover) for mroot aggregates
ha-policy = sfo (storage failover) for data aggregates
::*> storage failover giveback -ofnode NODENAME -only-cfo-aggregates true
The above only gives back the root aggregate!
::*> sto fail progress-table show -node NODENAME
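A normal giveback (without -only-cfo-aggregates) returns the SFO data aggregates too:
::*> storage failover giveback -ofnode NODENAME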

Some LOADER Environment Stuff

Un-Muting Console Logs:
LOADER> setenv bootarg.init.console_muted false

Setting to boot as clustered:
LOADER> setenv bootarg.init.boot_clustered true

Configuring an interface for netboot:
LOADER> ifconfig e0a -addr=X.X.X.X -mask=X.X.X.X -gw=X.X.X.X -dns=X.X.X.X -domain=DNS_DOMAIN
LOADER> netboot http://X.X.X.X/netboot/kernel

Note: To see the boot loader environment variables in the clustershell or systemshell:
::*> debug kenv show
% kenv

To start a node without job manager (also see “User Space Processes” below):
LOADER> setenv bootarg.init.mgwd_jm_nostart true

For a list of job manager types

::*> job type show

Job Manager Troubleshooting

::*> job initstate show
::*> job show
::*> job schedule show
::*> job history show
::*> job store show -id JOB_UUID

% cat /mroot/etc/log/mlog/command-history.log
% cat /mroot/etc/cluster_config/mdb/mgwd/job_history_table
% cat /mroot/etc/log/mlog/jm-restart.log
% cat /mroot/etc/log/ems

To keep an eye on the “tail” of a log:
% tail -f LOGNAME
% tail -f /var/log/notifyd*

% tail -100 /mroot/etc/log/mlog/mgwd.log | more

Logs

Location:
/mroot/etc/log/mlog

Includes logs for:
Message, mgwd, secd, vifmgr, vldb, notifyd

::> event log show -severity emergency
::> event log show -severity alert
::> event log show -severity critical
::> event log show -severity error
::> event log show -severity warning
::> event log show -time "01/21/2014 09:00:00".."01/22/2014 09:00:00" -severity !informational,!notice,!debug

::*> debug log files show
::*> debug log show ?

Autosupport

Invoke autosupport:
::> system autosupport invoke -node * -type all

Autosupport trigger:
::*> system node autosupport trigger show -node NODENAME
::*> system node autosupport trigger modify -node NODENAME -?

Troubleshooting asup with debug smdb:
::*> debug smdb table nd_asup_lock show

Unmounting mroot then remounting

% cd /etc
% sudo ./netapp_mroot_unmount
% sudo mgwd

Mounting /mroot for an HA Partner in Takeover (for core/logs/… collection)

% sudo mount
% sudo mount_partner
% sudo umount

Unlock diag user with mgwd not functioning

Option 1) Reboot to Ctrl-C for Boot Menu and option (3) Change password.
Option 2) If option (3) doesn’t work from the boot menu do:
Selection (1-8)? systemshell
# /usr/bin/passwd diag
# exit

Note: The same method can be used to reset admin, but you must update the password quickly after logging into the clustershell following the change, otherwise the new password is overwritten by the original password from the RDB:
::> security login password -username admin

Panic Testing and System Coredumps

::*> system node run -node NODENAME -command panic

An in-state core from RLM/SP:
> system core

Out-state core from Clustershell:
::> reboot -node NODENAME -dump true

Out-state core from Nodeshell:
> halt -d

Out-state core from systemshell:
% sysctl debug.debugger_on_panic=0
% sysctl debug.panic=1

Controlling automatic coring, and core type (sparse contains no user data):
::> storage failover modify -onpanic true -node NODENAME
::> system coredump config modify -sparsecore-enabled true -node NODENAME

Reviewing and uploading:
::> coredump show
::> coredump upload
% scp /mroot/etc/crash/COREFILE REMOTEHOST:/path

User-Space Process and Cores

% ps -aux
% ps -aux | grep PROCESSNAME
% pgrep PROCESSNAME
% sudo kill -TRAP PID
Note: Processes monitored by spmd restart automatically

Root Volume and RDB (Node/Cluster) Backup and Recovery

::*> system configuration backup ?
::*> system configuration recovery ?

Items in a node configuration backup include:
- Persistent bootarg variables in cfcard/env
- varfs.tgz and oldvarfs.tgz in /cfcard/x86_64/freebsd/
- Any configuration file under /mroot/etc

In a cluster configuration backup:
- Replicated records of all the RDB rings - mgwd, VLDB, vifmgr, BCOM

Backup files @ /mroot/etc/backups
Can be redirected to a URL
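To redirect them, something like the below should do it (option names from memory - check the built-in help first):
::*> system configuration backup settings show
::*> system configuration backup settings modify -destination http://X.X.X.X/PATH -username USERNAME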

Simple Management Framework (SMF)

To see all the simple management database (SMDB) iterator objects in the SMF:
::*> debug smdb table {TAB}

Examples:
::*> debug smdb table bladeTable show
::*> debug smdb table cluster show

Another way to get Volume Information

Note: Not to be used on production systems unless instructed by NGS!

::*> vol show -vserver VS1 -volume VOLNAME -fields UUID
::*> net int show -role cluster
% zsmcli -H CLUSTERICIP d-volume-list-info id=UUID desired-attrs=name
% zsmcli -H CLUSTERICIP d-volume-list-info id=UUID

User-Space Processes

RDB Applications (MGWD, VifMgr, VLDB, BCOM):
% ps aux | grep mgwd
etcetera…

Non-RDB Applications (secd, NDMP, spmd, mlogd, notifyd, schmd, sktlogd, httpd):
% ps aux | grep /sbin

List managed processes:
% spmctl -l

Stop monitoring a process:
% spmctl -dh PROCESSNAME

SPMCTL help:
% spmctl --help

List of well-known handles:
vldb, vifmgr, mgwd, secd, named, notifyd, time_state, ucoreman, env_mgr, spd, mhostexecd, bcomd, cmd, ndmpd, schmd, nchmd, shmd, nphmd, cphmd, httpd, mdnsd, sktlogd, kmip_client, raid_lm, #upgrademgr, mntsvc, coresegd, hashd, servprocd, cshmd, fpolicy, ntpd, memevt

Set MGWD to start without job manager running:
% spmctl -s -h mgwd
% sudo /sbin/mgwd --jm-nostart

To stop spmd from monitoring a process:
% spmctl -s -h vifmgr
To restart spmd monitoring:
% spmctl -e -c /sbin/vifmgr -h vifmgr

Notable processes:
secd = security daemon
notifyd = notify daemon (required for autosupport to run)
httpmgr = manager daemon for Apache httpd daemon
schmd = monitors SAS connectivity across a HA pair
nchmd = monitors SAS connectivity per node
ndmpd = used for NDMP backups

Debugging M-Host Applications:
::*> debug mtrace show

To restart vifmgr (and flush the cache):
% spmctl -h vifmgr -s
% spmctl -h vifmgr -e

Viewing the Configuration Database Info

Essentially: the CDB is local, the RDB is clusterwide.
% cdb_viewer /var/CDB.mgwd
::*> net int cdb show
::*> net port cdb show
::*> net routing-groups cdb show
::*> net routing-groups route cdb show
etcetera…
::*> debug smdb table {AS ABOVE}

For RDB status information

% rdb_dump --help
% rdb_dump

MSIDs

MSIDs appear as FSID in a packet trace and are also located in the VLDB. Look for errors in event log show referencing the MSID.

::*> vol show -vserver SVM -fields msid -volume VOLNAME

Identify which clients have NFS connectivity to a node and unmount clients if mounted to a volume that no longer exists:

::*> network connections active show -node NODENAME -service nfs*
::*> network connections active show -service ?

Using vReport to scan for VLDB and D-blade discrepancies

::*> debug vreport show
::*> debug vreport fix ?

DNS-Based Load Balancing

::> network interface show-zones
::> network interface modify -vserver SVM -lif LIFNAME -dns-zone CMODE.DOMAIN.COM

C:\> nslookup CMODE
C:\> nslookup CMODE.DOMAIN.COM

Note: Regular round-robin DNS can overload LIFs.

Automatic LIF Rebalancing

::*> net int show -fields allow-lb-migrate,lb-weight -vserver SVM

Note: You should create separate LIFs for CIFS and NFSv4 traffic. NFSv3 can auto-rebalance (not for use with VMware). Stateful protocols like CIFS and NFSv4 cannot auto-rebalance.
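
To change these per LIF, I believe the corresponding modify (at advanced privilege) looks like the below - an assumption to verify against your version:
::*> net int modify -vserver SVM -lif LIFNAME -allow-lb-migrate true -lb-weight load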

Firewall

::*> debug smdb table firewall_policy_table show
::*> firewall policy service create -service SERVICENAME -protocol tcp/udp -port PORT
::*> firewall policy service show
::*> firewall policy create -policy POLICYNAME -service SERVICENAME -action deny -ip-list X.X.X.X/X
::*> firewall policy show

Cluster Session Manager

::*> debug csm ?
::*> debug csm session show -node NODENAME
::*> event log show -messagename csm*
::*> cluster ping-cluster -node NODENAME
% sysctl sysvar.csm
% sysctl sysvar.nblade

Execution Thresholds

% sysctl -a sysvar.nblade.debug.core | grep exec

Local Fastpath (Cannot see any reason to change this but the option’s there!)

% sudo sysctl kern.bootargs=bootarg.nblade.localbypass_in=0
% sudo sysctl kern.bootargs=bootarg.nblade.localbypass_out=0

Nitrocfg Configures and Queries Basic N-Blade Components

% nitrocfg

The RDB Site List

% cat /var/rdb/_sitelist

LIF Connection Errors

::> event log show -messagename Nblade.*
::> event log show -instance -seqnum SEQUENCE_ID
% sysctl -a | grep sysvar.csm.rc
% sysctl sysvar.csm.session

Examine the:
/mroot/etc/log/mlog/vifmgr.log
/mroot/etc/log/mlog/mgwd.log

LIF Failure to Migrate

::> net port show -link !up
::> net int show -status-oper !up
::> cluster show
% rdb_dump
::> event log show -node NODENAME -message VifMgr*

Unbalanced LIFs

::> network connections active show

Duplicate LIF IDs

::*> net int show -fields numeric-id
::*> net int cdb show
::*> net int ids show
% cdb_viewer /var/CDB.mgwd
::*> net int ids delete -owner SVM -lif LIFNAME
::*> debug smdb table {vifmgr? or virtual_interface?} show

Nodeshell Network-Related Commands

> ifconfig -a
> netstat
> route -gsn
> ping -i e0d X.X.X.X
> traceroute

> pktt start e0c
> pktt dump e0c
> pktt stop e0c

Systemshell Tools

% ifconfig
% nitrocfg
% netstat -rn
% route
% vifmgrclient
% pcpconfig
% ipfstat -ioh
% rdb_dump
% tcpdump -r /mroot/e0d_DATE_TIME.trc

vifmgrclient for debugging vifmgr

% vifmgrclient --verbosity-on
% vifmgrclient -set-trace-level 0cfff
% vifmgrclient -debug
::*> debug smdb table rg_routes_view show

Cluster Peer Ping (Intercluster)

::*> cluster peer ping

Miscellaneous - Using Command History

::> history
::> !LINE