Saturday, 31 August 2013

Clustered ONTAP 8.2 SIM: Maximizing Available Usable Space

The NetApp Clustered ONTAP (CDOT) 8.2 Simulator - as first downloaded - has two virtual shelves of 14 x 1GB disks. With 3 disks taken for the dedicated Clustered ONTAP root aggregate, that leaves 25 disks for a data aggregate, two of which will be parity disks, giving a maximum usable space of around 23GB. If you want to see how it is possible to get close to 400GB usable space out of your CDOT 8.2 Simulator, read on!
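For reference, the stock arithmetic works out like this (assuming roughly 1GB usable per simulated disk - right-sizing means slightly less than the raw size):

2 shelves x 14 disks = 28 disks
28 - 3 (dedicated root aggregate) = 25 disks
25 - 2 (RAID-DP parity) = 23 data disks
23 x ~1GB = ~23GB usable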

Walkthrough - how to get the most out of your CDOT 8.2 SIM, starting from the very start!

Part 1: Obtaining the SIM

1: Download the CDOT 8.2 SIM from http://support.netapp.com > Downloads > Product Evaluations > Data ONTAP Simulator > Simulator 8.x > 8.2 Clustered-ONTAP for VMware Workstation
2: Unpack the downloaded vsim_netapp-cm.tgz
3: Open the DataONTAP.vmx file with VMware Workstation
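If you are unpacking on a Linux or Mac host (on Windows, 7-Zip handles .tgz files), step 2 is a one-liner:

tar -xzf vsim_netapp-cm.tgz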

Part 2: First Boot

1: Boot the simulator
2: Press Ctrl-C for Boot Menu when prompted
3: Enter selection 4 ‘Clean configuration and initialize all disks’ and answer ‘y’ to the two prompts

Part 3: Replacing the 2x14x1 GB Virtual Disks with 4x14x9 GB!

1: After the wipe requested in Part 2, the SIM will load to ‘Do you want to create a new cluster ...’ - press Ctrl-C to escape the setup script
2: The login is ‘admin’ with no password
3: Run through the following commands/prompts>

security login unlock -username diag
security login password -username diag
Please enter a new password: XXXXXXXX
Please enter it again: XXXXXXXX
set -privilege advanced
Do you want to continue? y
systemshell local
login: diag
Password: XXXXXXXX
setenv PATH "${PATH}:/usr/sbin"
echo $PATH
cd /sim/dev/,disks
ls
sudo rm v0*
sudo rm v1*
sudo rm ,reservations
cd /sim/dev
vsim_makedisks -h
sudo vsim_makedisks -n 14 -t 36 -a 0
sudo vsim_makedisks -n 14 -t 36 -a 1
sudo vsim_makedisks -n 14 -t 36 -a 2
sudo vsim_makedisks -n 14 -t 36 -a 3
ls ,disks/
exit
system node halt local
Warning: Are you sure you want to halt the node? y

Note: The SIM comes with 2 virtual shelves of 14 x Type 23 (1GB) disks. Here we’re simply trashing those 2 shelves and adding 4 shelves (which is as many as you can add) of 14 x Type 36 (~9GB) disks.
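In vsim_makedisks terms, -n is the number of disks to create, -t the disk type, and -a the adapter (0 to 3) they hang off. As a sanity check before halting - assuming the new disk files are named with a v0/v1/v2/v3 prefix per adapter - you can count them from the systemshell:

ls /sim/dev/,disks | grep -c '^v3'

Each of the v0, v1, v2 and v3 prefixes should count 14.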

Part 4: Second Wipe

1: Power off the SIM
2: Edit the properties of the Virtual Machine to make Hard Disk 4, 550 GB in size (the simulated disks live on this VMDK, and 56 x ~9 GB of them needs roughly 504 GB plus overhead)

Image: Expanding VMware Workstation VMDK
3: Power on the SIM
4: Press Ctrl-C for Boot Menu when prompted
5: Enter selection 5 ‘Maintenance mode boot’
6: Assign 3 disks for the Clustered ONTAP dedicated root aggregate, and halt

disk assign v4.16 v4.17 v4.18
disk show
halt

7: Power-cycle the SIM
8: Press Ctrl-C for Boot Menu when prompted
9: Enter selection 4 ‘Clean configuration and initialize all disks’ and answer ‘y’ to the two prompts

Part 5: Cluster Setup and Aggregate Setup

1: After the wipe requested in Part 4, the SIM will load to ‘Do you want to create a new cluster ...’ - follow the prompts to create your cluster

Note: SIM Base License Key is SMKQROWJNQYQSDAAAAAAAAAAAAAA (that’s 14 A’s!)

2: Login to the cluster
3: Run through the following commands/prompts to create the largest CDOT 8.2 SIM aggregate possible!

storage disk assign -all true -node lab-01
system node run -node lab-01 options disk.maint_center.spares_check off
storage aggregate create -aggregate a_lab01_01 -diskcount 53 -nodes lab-01 -maxraidsize 28
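The rough maths behind those numbers - assuming a right-sized usable capacity of ~7.9GB per Type 36 disk - is:

56 disks - 3 (root aggregate) = 53 disks for the data aggregate
RAID group 1: 28 disks - 2 parity = 26 data disks
RAID group 2: 25 disks - 2 parity = 23 data disks
49 data disks x ~7.9GB = ~388GB usable

Turning disk.maint_center.spares_check off simply stops the system complaining that the aggregate consumes every last disk with no spares - fine for a lab SIM, not a habit for production.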

And the output showing the nearly 400GB lab simulator aggregate!

lab::> storage aggregate show
  (storage aggregate show)
Aggregate     Size Available Used% State
--------- -------- --------- ----- ------
a_lab01_01 387.6GB   387.6GB    0% online
aggr0       7.91GB   379.0MB   95% online

Sunday, 11 August 2013

Clustered ONTAP with Microsoft and VMware Solutions - Reading List

The following post contains my set of links to Clustered ONTAP resources (updated on 11 August 2013). The focus of these links is: configuration and protocols, Microsoft solutions integration, and VMware solutions integration.


NetApp.com - Library
A selection of documents with type “Technical Report” and keyword search “Clustered ONTAP”, going back over the last six months. Note that a lot of these are not Clustered ONTAP specific.

(July) NFSv4 Enhancements and Best Practices Guide: Data ONTAP Implementation

(June) Microsoft SQL Server and NetApp SnapManager for SQL Server on NetApp Storage Best Practices Guide

(June) Best Practices for Clustered Data ONTAP Networking Configurations

(June) SnapManager 7.1 for SharePoint with Clustered Data ONTAP: Best Practices Guide

(June) Clustered Data ONTAP CIFS Auditing Quick Start Guide

(June) VMware vCloud Director on NetApp Clustered Data ONTAP

(June) Windows Multipathing Options with Data ONTAP: Fibre Channel and iSCSI

(June) FlexPod Express with Microsoft Windows Server 2012 Hyper-V Implementation Guide

(June) FlexPod Express with VMware vSphere Implementation Guide

(June) SnapVault Best Practices Guide Clustered Data ONTAP

(June) Clustered Data ONTAP NFS Implementation Guide

(June) Best Practices Guide for Clustered Data ONTAP 8.2 Windows File Services

(June) VMware View 5 Solutions Guide

(June) NetApp Clustered Data ONTAP 8.2 An Introduction

(May) Microsoft Hyper-V over SMB 3.0 with Clustered Data ONTAP: Best Practices

(May) Microsoft Windows Server 2012 Hyper-V Storage Performance: Measuring SMB 3.0, iSCSI and FC Protocols

(May) OnCommand Plug-In 3.2 for Microsoft Best Practices Guide

(May) VMware vSphere 5.1 on FlexPod Validated with Clustered Data ONTAP and Data ONTAP Operating in 7-Mode

(May) FlexPod Solutions Guide

(April) SnapMirror Configuration and Best Practices Guide for Clustered Data ONTAP 8.2

(April) Clustered Data ONTAP Security Guidance

(April) NetApp Data Compression and Deduplication Deployment and Implementation Guide Clustered Data ONTAP

(April) Introduction to NetApp Infinite Volume

(March) Microsoft Exchange Server 2010 and SnapManager for Exchange Best Practices Guide

Clustered Data ONTAP 8.2 Documentation
An enormous wealth of product information!

Release Notes

Cluster and Vserver Peering Express Guide

Command Map for 7-Mode Administrators

Commands: Manual Page Reference

Data Protection Guide

Data Protection Tape Backup and Recovery Guide

Documentation Roadmap

File Access and Protocols Management Guide

High-Availability Configuration Guide

Logical Storage Management Guide

Network Management Guide

Physical Storage Management Guide

Remote Support Agent Configuration Guide

SAN Administration Guide

SAN Configuration Guide

SAN Express Setup Guide for 32xx Systems

SAN Express Setup Guide for 62xx Systems

SnapVault Express Guide

Software Setup Guide

System Administration Guide for Cluster Administrators

System Administration Guide for Vserver Administrators

Upgrade and Revert/Downgrade Guide

V-Series Systems Implementation Guide for Third-Party Storage

V-Series Systems Installation Requirements and Reference Guide

Clustered Data ONTAP 8.1.3 Documentation

Release Notes

Command Map

Commands: Manual Page Reference, Volume 1

Commands: Manual Page Reference, Volume 2

Commands: Manual Page Reference, Volume 3

Data Protection Guide

Data Protection Tape Backup and Recovery Guide

Documentation Map

Express Setup Guide for Cluster-Mode SAN on 32xx Systems

Express Setup Guide for Cluster-Mode SAN on 62xx Systems

File Access and Protocols Management Guide

High-Availability Configuration Guide

Logical Storage Management Guide

Network Management Guide

Physical Storage Management Guide

Remote Support Agent Configuration Guide

SAN Administration Guide

SAN Configuration Guide

Software Setup Guide

System Administration Guide

Upgrade and Revert/Downgrade Guide

V-Series Systems Implementation Guide for Third-Party Storage

V-Series Systems Installation Requirements and Reference Guide

Vserver Administrator Capabilities Overview Guide

Cluster Management and Interconnect Switches

Installing NX-OS software and RCF files on Cisco cluster switches

Cisco Documentation

Cluster-Mode Switch Setup Guide for Cisco Switches

CN1610 Documentation

CN1601 and CN1610 Switch Setup and Configuration Guide

CN1610 Network Switch CLI Command Reference

CN1610 Switch Administrator's Guide

NetApp 10G Cluster-Mode Switch Installation Guide

CN1601 Documentation

CN1601 and CN1610 Switch Setup and Configuration Guide

CN1601 Network Switch CLI Command Reference

CN1601 Switch Administrator's Guide

NetApp 1G Cluster-Mode Switch Installation Guide

Miscellaneous

What LIF roles does clustered Data ONTAP use to establish connections to remote servers?

A useful-to-know extract from the above:

Clustered Data ONTAP 8.1.x/8.2:
- Sessions established to SNMP and NTP servers use the node-mgmt LIF per node.
- AutoSupport is delivered from the node-mgmt LIF per node.
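To see these LIFs on a cluster, a standard clustershell query like the following works:

network interface show -role node-mgmt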

Sunday, 4 August 2013

ACME Guide to 4a-ing a Factory Fresh NetApp System

There are circumstances when you might want to 4a (clean configuration and initialize all disks) a brand new out-of-the-factory NetApp FAS Clustered Data ONTAP/7-Mode system. This post is not going to go into the circumstances, just the how-to.
                          
Note 1: Sample output is contained in Appendix B below
Note 2: ‘To 4a a system’ comes from pre-Data ONTAP 8 days when there used to be option (4) ‘Clean configuration’ and option (4a) ‘Clean configuration and initialize all disks’ in the boot menu.

Walkthrough

1. Boot both controllers
2. Press Ctrl-C for Boot Menu when prompted
3. Select option 5 ‘Maintenance mode boot’
4. Answer ‘y’ to ‘Continue with boot?’
5. Run the following command to get the system ID:
disk show -n
6. Run the following command to find the existing root aggregate disks:
aggr status -r
7. Offline and destroy aggregates as required (except the partner's root aggregate):
aggr offline aggrname
aggr destroy aggrname
8. Unassign the node's disks with:
disk remove_ownership -s 1234567890
9. On the partner node - repeat steps 2 to 8
10. Run the following command:
disk show -a
11. Remove ownership of any disks assigned to foreign controllers with:
disk remove_ownership -s 2345678901
12. Reassign the original 3 root aggregate disks (so these disks will be zeroed in the wipe filer procedure)
disk assign 0a.00.1
disk assign 0a.00.2
disk assign 0a.00.3
13. Halt the controller with:
halt
14. From the loader prompt run:
boot_ontap
15. Press Ctrl-C for Boot Menu when prompted
16. Select option 4 ‘Clean configuration and initialize all disks’
17. Answer ‘y’ to ‘Zero disks, reset config and install a new file system?’
18. Answer ‘y’ to ‘This will erase all the data on the disks, are you sure?’
19. On the partner node - Repeat steps 12 to 18
20. The End!
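As a quick post-wipe check on a 7-Mode system, sysconfig -r should now show just the new 3-disk root aggregate, with every other disk as a spare:

FAS> sysconfig -r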

Appendix A: Disk Zeroing Times

Image: NetApp Disk Zeroing Times for SSD/FC/SAS/SATA Disks

Appendix B: Example output from one head

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 5

You have selected the maintenance boot option:
The system has booted in maintenance mode allowing the following operations to be performed:

?                       disk
key_manager             fcadmin
fcstat                  sasadmin
sasstat                 acpadmin
halt                    help
ifconfig                raid_config
storage                 sesdiag
sysconfig               vmservices
version                 vol
aggr                    sldiag
dumpblock               environment
systemshell             vol_db
led_on                  led_off
sata                    acorn
stsb                    scsi
nv8                     disk_list
ha-config               fctest
disktest                diskcopy
vsa                     xortest
disk_mung

Type "help <command>" for more details.

In a High Availability configuration, you MUST ensure that the partner node is (and remains) down, or that takeover is manually disabled on the partner node, because High Availability software is not started or fully enabled in Maintenance mode.
FAILURE TO DO SO CAN RESULT IN YOUR FILESYSTEMS BEING DESTROYED
NOTE: It is okay to use 'show/status' sub-commands such as 'disk show' or 'aggr status' in Maintenance mode while the partner is up

Continue with boot? y

*> disk show -n
Local System ID: 1234567890

*> aggr status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   5b.18.0         5b    18  0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688
      parity    5b.19.0         5b    19  0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688
      data      5d.04.0         5d    4   0   SA:B   -   SAS 15000 560000/1146880000 560208/1147307688

*> disk remove_ownership -s 1234567890

All disks owned by the system with ID 1234567890 will have their ownership information removed. The system with ID 1234567890 must not be running !!!

Do you want to continue? y

Volumes must be taken offline. Are all impacted volumes offline? y

*> disk show -a
*> disk assign 5b.18.0
*> disk assign 5b.19.0
*> disk assign 5d.04.0
*> disk show
*> halt

LOADER-A> boot_ontap

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 4

Zero disks, reset config and install a new file system? y
This will erase all the data on the disks, are you sure? y

Rebooting to finish wipeconfig request.
System rebooting...

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Wipe filer procedure requested.

Saturday, 3 August 2013

NetApp Data ONTAP Logins for SMVI and SRM

The NetApp Virtual Storage Console (which includes SMVI - SnapManager for Virtual Infrastructure) makes an excellent combination with VMware Site Recovery Manager and the NetApp SRA. The SMVI backup jobs are used to trigger the SnapMirror updates, and SRM manages the DR.

The following post contains some notes concerning login accounts for SMVI and SRM, and how to create them. This is written specifically for Data ONTAP operating in 7-Mode.

The SMVI login - smvi_user - is used when controllers are added in the 'Backup and Recovery > Setup' section of the VSC. The SRM login - srm_user - is used when Array Based Replication is configured in SRM.

Options:

1. Use the root account:

Note: Changing the root account password will also require updating SMVI and SRM configuration.

2. Use newly created accounts in the Administrators group:

useradmin user add smvi_user -g Administrators
useradmin user add srm_user -g Administrators

- Or a domain account (if controller is domain joined) -

useradmin domainuser add DOMAIN\smvi_user -g Administrators
useradmin domainuser add DOMAIN\srm_user -g Administrators

3. Use newly created accounts with specific access rights:

3.1 The SMVI User:

useradmin role add api-access -a api-*,login-http-admin,cli-ifconfig
useradmin group add api-group -r api-access

useradmin user add smvi_user -g api-group

- Or a domain account -

useradmin domainuser add DOMAIN\smvi_user -g api-group
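Either way, the standard useradmin list commands confirm what was created:

FAS> useradmin role list api-access
FAS> useradmin group list api-group
FAS> useradmin user list smvi_user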

3.2 The SRM User:

Part 1 - Create a role with sufficient rights

NAS RBAC rights - NAS-only SRM 5 environment with SRA 2.0:

FAS> useradmin role add srm_role -a login-http-admin,api-system-get-info,api-system-get-version,api-system-cli,cli-ifconfig,api-ems-autosupport-log,api-net-resolve,api-qtree-list,api-snapshot-list-info,api-volume-clone-create,api-volume-online,api-volume-list-info,api-volume-size,api-volume-offline,api-volume-destroy,api-snapmirror-get-status,api-snapmirror-abort,api-snapmirror-quiesce,api-snapmirror-break,api-snapmirror-list-connections,api-snapmirror-set-connection,api-snapmirror-set-sync-schedule,api-snapmirror-set-schedule,api-snapmirror-list-schedule,api-snapmirror-list-sync-schedule,api-snapmirror-update,api-snapmirror-resync,api-vfiler-list-info,api-nfs-exportfs-list-rules,api-nfs-exportfs-list-rules-2,api-fcp-node-get-name,api-fcp-adapter-list-info,api-iscsi-node-get-name,api-igroup-list-info,api-lun-list-info,api-lun-map-list-info,api-lun-get-serial-number,api-igroup-add,api-igroup-create,api-igroup-destroy,api-nfs-exportfs-modify-rule,api-nfs-exportfs-delete-rules,api-nfs-exportfs-append-rules
SAN RBAC rights - SAN-only (FC or iSCSI) SRM 5 environment with SRA 2.0:

FAS> useradmin role add srm_role -a login-http-admin,api-system-get-info,api-system-get-version,api-system-cli,cli-ifconfig,api-ems-autosupport-log,api-net-resolve,api-qtree-list,api-snapshot-list-info,api-volume-clone-create,api-volume-online,api-volume-list-info,api-volume-size,api-volume-offline,api-volume-destroy,api-snapmirror-get-status,api-snapmirror-abort,api-snapmirror-quiesce,api-snapmirror-break,api-snapmirror-list-connections,api-snapmirror-set-connection,api-snapmirror-set-sync-schedule,api-snapmirror-set-schedule,api-snapmirror-list-schedule,api-snapmirror-list-sync-schedule,api-snapmirror-update,api-snapmirror-resync,api-vfiler-list-info,api-nfs-exportfs-list-rules,api-nfs-exportfs-list-rules-2,api-fcp-node-get-name,api-fcp-adapter-list-info,api-iscsi-node-get-name,api-igroup-list-info,api-lun-list-info,api-lun-map-list-info,api-lun-get-serial-number,api-igroup-add,api-igroup-create,api-igroup-destroy,api-lun-online,api-lun-set-space-reservation-info,api-lun-map,api-lun-unmap

NAS and SAN RBAC rights:

FAS> useradmin role add srm_role -a login-http-admin,api-system-get-info,api-system-get-version,api-system-cli,cli-ifconfig,api-ems-autosupport-log,api-net-resolve,api-qtree-list,api-snapshot-list-info,api-volume-clone-create,api-volume-online,api-volume-list-info,api-volume-size,api-volume-offline,api-volume-destroy,api-snapmirror-get-status,api-snapmirror-abort,api-snapmirror-quiesce,api-snapmirror-break,api-snapmirror-list-connections,api-snapmirror-set-connection,api-snapmirror-set-sync-schedule,api-snapmirror-set-schedule,api-snapmirror-list-schedule,api-snapmirror-list-sync-schedule,api-snapmirror-update,api-snapmirror-resync,api-vfiler-list-info,api-nfs-exportfs-list-rules,api-nfs-exportfs-list-rules-2,api-fcp-node-get-name,api-fcp-adapter-list-info,api-iscsi-node-get-name,api-igroup-list-info,api-lun-list-info,api-lun-map-list-info,api-lun-get-serial-number,api-igroup-add,api-igroup-create,api-igroup-destroy,api-lun-online,api-lun-set-space-reservation-info,api-lun-map,api-lun-unmap,api-nfs-exportfs-modify-rule,api-nfs-exportfs-delete-rules,api-nfs-exportfs-append-rules

Part 2 - Verify rights

FAS> useradmin role list srm_role

Part 3 - Create a group with the role

FAS> useradmin group add srm_group -r srm_role

Part 4 - Create a user in the group

FAS> useradmin user add srm_user -g srm_group

- Or a domain account -

FAS> useradmin domainuser add DOMAIN_NAME\srm_user -g srm_group

Additional Notes

Setting/re-setting passwords is done when logged in as root, using the passwd command:

useradmin user list
passwd

Example output:

FAS> passwd
Login: srm_user
New password:
Retype new password:
FAS>

Appendix: 7-Mode Password Options

FAS> options security
security.passwd.firstlogin.enable off
security.passwd.lockout.numtries 4294967295
security.passwd.rootaccess.enable on
security.passwd.rules.enable on
security.passwd.rules.everyone on
security.passwd.rules.history 0
security.passwd.rules.maximum 256
security.passwd.rules.minimum 8
security.passwd.rules.minimum.alphabetic 2
security.passwd.rules.minimum.digit 1
security.passwd.rules.minimum.lowercase 0
security.passwd.rules.minimum.symbol 0
security.passwd.rules.minimum.uppercase 0