Sunday, 24 February 2013

NetApp FAS2240-4 with DS4243 Installation Notes: Part 2/2 - Updating Software and Firmware

Part 1/2 - Basic Setup

4. How to Update Data ONTAP to 8.1.2

Here we’re assuming the new filer is already on 8.X!
IMPORTANT: Please read the official NetApp documentation before proceeding with updating your filers!
Note: This cannot be done via the SP IP; it must be done via e0M or e0a, e0b, etc. (a straight Ethernet cable is fine)

i. To check the version:

CTR-1> version

ii. Download the upgrade image - 812_q_image.tgz - from support.netapp.com

iii. If you’ve not already got it, get an HTTP server like the excellent lightweight HFS.EXE from http://www.rejetto.com/hfs/ (another option is the more powerful Serva)

iv. Add the update image file to your HTTP file server:

Image: HTTP File Server and 812_q_image.tgz

v. From the CLI, run the following commands to download the software to the /etc/software directory, and then perform the upgrade:

CTR-1> software get http://10.0.0.1/812_q_image.tgz
Note: If there’s an image with the same name in there, use the option -f 812_q_image.tgz to force an overwrite
CTR-1> software update 812_q_image.tgz

And repeat for the other controller!
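
For example, on the partner (a sketch only; "CTR-2" is just a hypothetical name for the second controller, substitute your own):

CTR-2> software get http://10.0.0.1/812_q_image.tgz
CTR-2> software update 812_q_image.tgz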

Note: You could perform the update directly from the HTTP server with the following:
CTR-1> software update http://10.0.0.1/812_q_image.tgz

vi. Reboot and then check the version after the reboot:

CTR-1> reboot
CTR-1> version

5. How to update the Service Processor (SP) firmware to 2.1.1

i. Login to the service processor
ii. To check the status of the service processor:

SP CTR-1> sp status

iii. Download the firmware 308-02263_A0_2.1.1_SP_FW.tar.gz from support.netapp.com and rename it to SP_FW.tar.gz

iv. To update the SP firmware, reboot the SP, and check the status:

SP CTR-1> sp update http://10.0.0.1/SP_FW.tar.gz
SP CTR-1> sp reboot
SP CTR-1> sp status

Tip: The SP updates quickest via the SP/e0M port and SP IP!

6. How to update the Disk Shelf Firmware

*Run sysconfig -v to check firmware

In part 1, it was mentioned we wanted IOM6E firmware 0121 and IOM3 firmware 0152. At this stage we’ve already upgraded to DOT 8.1.2, which already includes IOM6E.0121, but only IOM3.0151. To see this, run:

CTR-1> priv set advanced
CTR-1*> ls /etc/shelf_fw

The contents of shelf_fw in 8.1.2:
AT-FCX.3800.SFW
AT-FCX.3800.SFW.FVF
ESH4.1400.SFW
ESH4.1400.SFW.FVF
IOM3.0151.SFW
IOM3.0151.SFW.FVF
IOM6.0141.SFW
IOM6.0141.SFW.FVF
IOM6E.0121.SFW
IOM6E.0121.SFW.FVF
SAS.0500.SFW
SAS.0500.SFW.FVF

A method to get the firmware into the /etc/shelf_fw directory is:
i. Download IOM3.0152.SFW.tar or IOM3.0152.SFW.zip from support.netapp.com (it doesn’t really matter which)
ii. Unpack the contents to reveal an IOM3.0152.SFW and an IOM3.0152.SFW.FVF file
iii. Add the two files above to our HTTP File Server
iv. From the DOT CLI, run:

CTR-1> software get http://10.0.0.1/IOM3.0152.SFW
CTR-1> software get http://10.0.0.1/IOM3.0152.SFW.FVF
CTR-1> priv set advanced
CTR-1*> mv /etc/software/IOM3.0152.SFW /etc/shelf_fw/IOM3.0152.SFW
CTR-1*> mv /etc/software/IOM3.0152.SFW.FVF /etc/shelf_fw/IOM3.0152.SFW.FVF
CTR-1*> ls /etc/shelf_fw

And that’s it; in DOT 7.3.2 and later the firmware is updated automatically!
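
If you don’t want to wait for the automatic background update, my understanding is it can also be triggered manually (treat this as an assumption and check the shelf firmware documentation for your DOT version first):

CTR-1> storage download shelf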

v. Run sysconfig -v to check firmware

Note 1: One other way to do this, if you have an NFS license, is to mount the /etc/shelf_fw directory on a Windows 7 machine, after first installing Services for NFS as in the image below.

Image: Windows 7 - Services for NFS

Then the command to create the NFS export is:
CTR-1> exportfs -io rw /etc/shelf_fw

Then the command to mount it as the N: drive (for example) from the Windows command prompt is:
C:\> mount \\10.0.0.71\etc\shelf_fw N:

The only problem was that I didn’t have permission to write - I’m sure that’s me doing something dumb!
Similarly, it should be possible with a CIFS share, but the method above is nice and simple and doesn’t require either a CIFS or NFS license!
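
For reference, a CIFS equivalent might look something like the below (an untested sketch, assuming a CIFS license and that CIFS is already set up; the share name shelf_fw is just an example):

CTR-1> cifs shares -add shelf_fw /etc/shelf_fw

Then browse to \\10.0.0.71\shelf_fw from the Windows machine and copy the two files in.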

7. (Optional) How to Update the BIOS

For completeness, we’ll cover how to update the BIOS.

i. To list the contents of the boot device, and determine the BIOS version of your storage controller:

CTR-1> version -b
CTR-1> sysconfig -a

ii. Obtain the BIOS 8.1.0 image - 30802134.zip - from support.netapp.com
iii. Add the BIOS image to the HTTP server, and run:

CTR-1> software install http://10.0.0.1/30802134.zip
CTR-1> priv set advanced
CTR-1*> download -d

iv. To verify:

CTR-1> version -b
CTR-1> sysconfig -a

v. If the storage controller was on a release earlier than 8.1.0, a reboot is required; otherwise we’re done!

NetApp FAS2240-4 with DS4243 Installation Notes: Part 1/2 - Basic Setup

Part 1/2 - Basic Setup

The following post contains some notes created for installing a FAS2240-4 (HA Pair) with DS4243 shelves. After powering up the shelves and controllers, the notes mainly cover the initial setup and the software and firmware updates: upgrading Data ONTAP 7-Mode software to version 8.1.2, upgrading Service Processor firmware to 2.1.1, and upgrading disk shelf firmware on the FAS2240-4 IOM6E’s to 0121 and the DS4243 IOM3’s to 0152.

0. Preliminary Steps

i. Rack and stack the FAS2240-4 and DS4243’s as per provided cabinet diagram - check the serial numbers on the shelves for identifying stacks and order of installation
ii. Cable the FAS2240-4 and DS4243’s as per provided cabling diagrams (front-end connectivity, back-end SAS cabling and ACP cabling)
iii. Power on the shelves and set the shelf ID’s as instructed
iv. Connect to the serial port of both controllers, and power on the controller shelf checking the shelf ID is 00

As an example, we can consider a layout as below:

Shelf ID 00: FAS2240-4 (4U) - topmost unit
Shelf ID 01: DS4243 (4U) - middle unit
Shelf ID 02: DS4243 (4U) - bottom unit

Note: NetApp recommends avoiding numbering external shelves from 00 to 09, but if you’re requested to use 01, 02, 03, ..., that’s okay, and we can make something of an exception with the FAS2240 since it is a shelf in itself, unlike the grander FAS 3XXX and 6XXX's.

1. FAS2240-4 Initial Setup

Controller A or 1 (Top)

The FAS2240-4 controller should autoboot into the setup script, if it sits on the LOADER-A prompt then:

LOADER-A> setenv AUTOBOOT true
LOADER-A> boot_ontap

On first boot, the NetApp filer will automatically boot into the setup script. What we do in the initial setup doesn’t really matter; the setup can be re-run at any time by simply typing setup from the ONTAP CLI.

Information for Setup

Most essential is the Service Processor, as long as this is configured, pretty much everything else can be done remotely at a later stage:

Service Processor IP, Subnet Mask, Default Gateway

Note: You will need at least one IP address on e0a, e0b, e0c, e0d, or e0M – otherwise the setup script will loop back to asking for an IP for e0a… A tip – if you’re not configuring any of these for the customer, and are just doing the SP – is to enter an IP for e0a, and then do an ifconfig e0a 0.0.0.0 later (see the sketch below).
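
To illustrate that tip, once setup has completed the placeholder address can be cleared and checked like this (a minimal sketch; remember to also tidy up the corresponding ifconfig line in /etc/rc so the address doesn’t return on reboot):

CTR-1> ifconfig e0a 0.0.0.0
CTR-1> ifconfig -a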

Example FAS2240 setup prompts are below:

Please enter the new hostname: CTR-1
Do you want to enable IPv6: n
Do you want to configure interface groups: n
Please enter the IP address for Network Interface e0a: {RETURN}
Please enter the IP address for Network Interface e0b: {RETURN}
Please enter the IP address for Network Interface e0c: {RETURN}
Please enter the IP address for Network Interface e0d: {RETURN}
Please enter the IP address for Network Interface e0M: {RETURN}
Please enter the netmask for Network Interface e0M: X.X.X.X
Would you like to continue setup through the web interface: n
Please enter the name or IP address of the IPv4 default gateway: X.X.X.X
Please enter the name or IP address of the administration host: {RETURN}
Please enter timezone: {RETURN}
Where is the filer located: Site Address
What language will be used for multi-protocol files: {RETURN}
Enter the root directory for HTTP files: {RETURN}
Do you want to run DNS resolver: n
Do you want to run NIS client: n
Would you like to configure the SP LAN interface: y
Would you like to enable DHCP on the SP LAN interface: n
Please enter the IP address for the SP: X.X.X.X
Please enter the netmask for the SP: X.X.X.X
Please enter the IP address for the SP gateway: X.X.X.X

When you are satisfied that all the entries are correct, type reboot to apply the changes

Note 1: The SP and e0M exist on the same interface
Note 2: e0M needs to be on a different subnet to the data network interfaces
Note 3: “Always configure SP. Don’t configure e0M unless SP is connected to a dedicated management network.”

Image: The Remote Management RJ45 port hosts both e0M and the SP
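
If the SP is skipped during setup or needs changing later, it can be (re)configured interactively from the ONTAP CLI; a minimal sketch, assuming the standard 7-Mode commands:

CTR-1> sp setup
CTR-1> sp status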

Controller B or 2 (Bottom)

Repeat the above.

2. (Optional) Verifying the System

Using SLDIAG (System Level Diags)

To run diagnostics before handing the system over to a customer, use SLDIAG (if AUTOBOOT is set, you will need to interrupt the boot to get to the LOADER> prompt). At the LOADER> prompt, type boot_diags, and from the *> prompt we can execute the sldiag commands.

*> sldiag
*> sldiag version show
*> sldiag device types
*> sldiag device run -dev mem
*> sldiag device status
*> sldiag device status -long
*> halt
LOADER> boot

Using Config Advisor (Wire Gauge)

Download and install the Config Advisor (Wire Gauge) GUI tool, to run diagnostics and verify cabling.

Using System Manager

Download and install the System Manager GUI tool. System Manager 2.1 includes the Network Configuration Checker too.

Using the CLI

Verify and install licenses (e.g. a cf license is required before a cf enable):
CTR-1> license
CTR-1> license add XXXXXXX

Verify failover status…:
CTR-1> cf status
CTR-1> cf enable
CTR-1> cf takeover
CTR-1> cf giveback
CTR-1> cf disable

Verify disk paths and aggregate:
CTR-1> storage show disk -p

Verify aggregate:
CTR-1> aggr status -f

Verify Service Processor (login username=naroot and same root password):
CTR-1> Ctrl+g
SP CTR-1> sp status
SP CTR-1> system sensors
SP CTR-1> system acp show
SP CTR-1> system acp sensors show
SP CTR-1> exit
CTR-1>

Verify Maintenance Mode:
Ctrl+C to interrupt the boot sequence and option 5 to boot into Maintenance mode:
*> sasadmin shelf
*> aggr status
*> disk show -a
*> halt
LOADER-A> boot_ontap
CTR-1> environment status
CTR-1> disk show -n
CTR-1> version -b
CTR-1> version

3. Assigning Disks

CTR-1> options disk
CTR-1> options disk.auto_assign off
CTR-1> options disk
CTR-1> disk show -n
CTR-1> disk assign {disk_name}
CTR-1> disk assign all
Note: In 7-mode, the disk is assigned to whichever controller the command is run from!

If a spare disk needs to be unassigned you can use:

CTR-1> priv set advanced
CTR-1*> disk show
CTR-1*> disk remove_ownership {disk_name}
CTR-1*> disk show -n
CTR-1*> priv set
CTR-1>

Note: Alternatively, you can use disk reassign -s old_sysid -d new_sysid

Tip: If there is a requirement to have all odd disks assigned to the top controller and all even disks assigned to the bottom controller, and the unit powers up only to find the internal shelf disks are assigned the wrong way around, you can just halt the controllers and swap the heads around. The LOADER-A prompt stays with the topmost controller in a FAS2240 (you can do a halt after swapping the heads to confirm this).

Saturday, 23 February 2013

Lab Series 02: Part 2 – (Clustered ONTAP) 2 Node Cluster Basic Elements

We’ve run through the setup and have our 2 Node Cluster (CLUSA) up and running; what elements make up this 2 node cluster?

The Elements

2 Cluster Nodes:
CLUSA-01, CLUSA-02

3 Vservers:
CLUSA, CLUSA-01, CLUSA-02

7 LIFs (Logical Interfaces):
- cluster_mgmt on CLUSA (e0c on either node CLUSA-01 or CLUSA-02)
- clus1, clus2, mgmt1 on CLUSA-01 (ports e0a, e0b, e0f)
- clus1, clus2, mgmt1 on CLUSA-02 (ports e0a, e0b, e0f)

1 Failover Group:
clusterwide (containing 6 ports - e0c, e0d, e0e on CLUSA-01 & e0c, e0d, e0e on CLUSA-02)

2 Aggregates:
aggr0 on node CLUSA-01 (900MB, 96% used, raid_dp)
aggr0_CLUSA_02_0 on node CLUSA-02 (900MB, 96% used, raid_dp)

112 disks (after adding the optional SIM shelves):
- 4 shelves of 14 disks (56) assigned to each node
- 3 disks per node are contained in an aggr0 which holds the root volume
- 53 spare disks per node, of which there are
- 25 spare 1GB disks per node, and
- 28 spare 4GB disks per node

2 Volumes:
vol0 belonging to Vserver CLUSA-01 (851.5MB, 25% used)
vol0 belonging to Vserver CLUSA-02 (851.5MB, 25% used)

0 LUNs
0 igroups

A good grasp of the elements that make up a cluster helps make the whole thing relatively easy to understand.

How to Find the Information?

i. Use OnCommand System Manager and click on Advanced which brings up the Data ONTAP Element Manager, or simply point your web browser at
http://MANAGEMENT_IP_ADDRESS_OF_CLUSTER or
http://MANAGEMENT_IP_ADDRESS_OF_A_CLUSTER_NODE

Image: Cluster Element Manager

ii. CLI

cluster show
vserver show
network interface show
network interface failover-groups show
aggr show
storage disk show
storage disk show -aggr aggr*
storage disk show -container-type spare
storage disk show -physical-size 3.93GB
volume show
df -h
lun show
lun igroup show

Three Re-Configurations

i. Removing e0d and e0e from the clusterwide failover group (used by the cluster management IP):

network interface failover-groups delete -failover-group clusterwide -node CLUSA-01 -port e0d,e0e
network interface failover-groups delete -failover-group clusterwide -node CLUSA-02 -port e0d,e0e
network interface migrate -vserver CLUSA -lif cluster_mgmt -dest-node CLUSA-02 -dest-port e0c

The first two lines remove ports e0d and e0e from the clusterwide failover group, so that the cluster management IP can exist only on e0c of CLUSA-01 or CLUSA-02; the third line migrates the cluster_mgmt LIF to CLUSA-02’s port e0c.

Note: When the node homing the cluster_mgmt LIF is rebooted, the LIF fails over to a port on the other node.
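
To send the LIF back to its home port after the reboot, something along these lines should do it (a sketch, assuming the standard revert command):

network interface revert -vserver CLUSA -lif cluster_mgmt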

ii. Renaming the aggregates containing the root volumes to have a matching naming convention:

storage aggregate rename aggr0 aggr_CLUSA1_00
storage aggregate rename aggr0_CLUSA_02_0 aggr_CLUSA2_00

iii. Modifying disk options to put autoassign off:

storage disk option show
storage disk option modify -node CLUSA* -autoassign off

Note: By default on the 8.1.2 C-Mode SIM, autoassign is turned on. In this lab all disks are assigned anyway; it’s useful practice, though, to check this is off.

Lab Series 02: Part 1 – 2 x 2 Lab for Clustered ONTAP Simulator

To learn more about Clustered ONTAP, the lab design from Series 01 has been much simplified using some of the lessons learnt in the process of setting up that lab.

Here, we have 2 x 2 node clusters – one 2 node cluster in each site – which should be sufficient to get a basic understanding/working knowledge of the administration/concepts of Clustered ONTAP, including Peers.

Image: Lab Series 02 Design

The IP Addressing, Networks, and Naming Conventions have been simplified too. All the details required for the basic setup are contained in the tables below; following the build recipe here gives enough information to get the lab up and running.

Table: Site A – Networks and IP Addressing

Table: Site B – Networks and IP Addressing

Note: Having the domain controllers is optional at this stage; also, there’s no reason why – for a lab – we could not have everything in the same subnet.

The basic setup (skipping adding the optional additional shelves from the build recipe) is relatively simple and can be done in less than an hour – that’s for two up-and-running clusters; even adding the SIM shelves (8 shelves, 2 shelves for each of the 4 nodes) shouldn’t take much longer.

Now to do something interesting with it!

Saturday, 16 February 2013

Lab Series 01: Part 6 - Deploying the Data ONTAP Edge 8.1.1

*for NAEDGE101 and NAEDGE201

The final element of the lab is deploying the Data ONTAP Edge appliance. Since we have the 7-Mode and Clustered ONTAP SIMs, the Edge appliance (which is DOT 8.1.1 7-Mode) is not essential for this learning lab, still, it’s nice to get it out of the box and have a look.

Data ONTAP Edge requirements are:
- Two dedicated physical CPU cores
- 4GB dedicated memory
- Minimum 57.5GB disk space for the Data ONTAP Edge system

1. NAEDGE101

Here we start with a VMware ESXi 5.1 host (ESX111 in our lab) with 8GB RAM and a 70GB local datastore. The ESXi host belongs to a VMware vCenter.
Via a vSphere Client connection to the vCenter > File > Deploy OVF Template > Browse to the 811_v_eval.ova file and click Open > Next >

Note: A vSphere Client connection directly to the host will error on deploy “Unsupported element ‘Property’”

Image: OVF Template Details

Next > Accept > Next > Name: “NAEDGE101” > Next > Select the local datastore & Thick Provision Lazy Zeroed > Next >

A. Data ONTAP OS Properties
Host Name = NAEDGE101
IP Address = 10.1.2.61
Netmask = 255.255.255.0
Gateway = 10.1.2.5
Administrative Password (root) = XXXXXXXX

Note: Don’t use an FQDN for the hostname, you’ll get “Invalid hostname format. Data ONTAP automatic setup failed, shutting down.”

B. Data ONTAP Managed Storage
Disk Size = 50

Next > Tick ‘Power on after deployment’ > Finish

And that’s it, now for the second ONTAP Edge!

2. NAEDGE201

Repeat on the second host ESX211 with these details:

Name: NAEDGE201

A. Data ONTAP OS Properties
Host Name = NAEDGE201
IP Address = 10.2.2.61
Netmask = 255.255.255.0
Gateway = 10.2.2.5
Administrative Password (root) = XXXXXXXX

B. Data ONTAP Managed Storage
Disk Size = 50

The Completed Simulator Lab

The image below shows the complete lab in VMware Workstation – two 2-node Clusters, two 8.1.2 7-Mode SIMs, and two 8.1.1 7-Mode Edge appliances as seen in OnCommand System Manager 2.1.

Image: The Completed Lab

This completes the setup of all the NetApp lab elements as outlined in part 1; from here on we’ll mostly just be using the SIMs and leave the Edge appliances powered down. The Edge appliance – especially since here it’s running on a nested (virtual) VMware ESXi host, something it’s not designed for – consumes a lot of CPU resources, whereas the SIMs tick over nicely.

Image: High CPU Usage on Quad-Core Hyper-Threaded Workstation (3600 MHz clock speed.)

Lab Series 01: Part 5 – NetApp Data ONTAP 8.1.2 Clustered ONTAP Simulator Build Recipe 02

*for NACSIM111 & NACSIM112 – the “local” cluster

Following on from Part 4, we now set up our second Clustered ONTAP Simulator lab. The build recipe here is pretty much identical to http://cosonok.blogspot.com/2013/02/lab-series-01-part-4-netapp-data-ontap.html; the only amendments are the following variables:

Table: DOT 8.1.2-C-SIM Local Cluster Configuration Variables
Note: I’m using the same cluster network as before!

Renaming VM Serial Port Named Pipes:
Rename the Serial Port for NACSIM111 to nacsim111-cons
Rename the Serial Port 2 for NACSIM111 to nacsim111-rgb
Rename the Serial Port for NACSIM112 to nacsim112-cons
Rename the Serial Port 2 for NACSIM112 to nacsim112-rgb

Suggested serial numbers:
NACSIM111: 1234563111
NACSIM112: 1234563112

Substitute the variables above where required when following through http://cosonok.blogspot.com/2013/02/lab-series-01-part-4-netapp-data-ontap.html to set up the lab.

In real-world scenarios, having local clusters and then SnapMirroring to remote clusters is more likely. Stretched networks are notoriously difficult to set up efficiently, depending on the distance involved, with latency, broadcast traffic, limited bandwidth, jitter, network hops, and network resilience all coming into play.

Also, there is the option to create a Peer Relationship from one Cluster to another, something I’ll be looking at in later labs.
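
As a placeholder for those later labs, peering is driven by commands along these lines (a rough sketch and an assumption on my part; intercluster LIFs need to exist first, and the exact options vary by version):

cluster peer create -peer-addrs <remote_intercluster_IPs>
cluster peer show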

Image: Cluster > Configuration > Peers > Create

Lab Series 01: Part 4 – NetApp Data ONTAP 8.1.2 Clustered ONTAP Simulator Build Recipe 01

*for NACSIM101 & NACSIM201 – the “stretched” cluster

Lab Redesign

The initial lab design in Part 1 needs to be amended for this part, and that’s what labs are all about: learning and then making changes based on what we’ve learnt.

To set up the “stretched” 2-node SIM cluster (note that we’re not using HA pairs in the SIM), we need the following networks:

i. Cluster Network (stretched across Site 1 and Site 2)
ii. Cluster Management Network (using the existing local management network in site 1)
iii. Node Management Network (using the existing local management networks in Site 1 and Site 2)

Note i: The cluster network does not have a default gateway; it is a dedicated, non-routable network for intra-cluster communication
Note ii: The Cluster Management IP Address cannot be in the same subnet as the Cluster Network. The Cluster Administration IP address belongs to a Vserver that initially exists on the first node.

Table: DOT 8.1.2-C-SIM Stretched Cluster Configuration Variables

Continuing with the Build Recipe…

The following how-to guide is a concise recipe to set up the Data ONTAP 8.1.2 Clustered ONTAP Simulators, which we will call NACSIM101 & NACSIM201 (a multi-site cluster).

Key:
RED = site and/or filer specific variable
HIGHLIGHT = user input

0: Pre-boot VM Settings

Rename the Serial Port for NACSIM101 to nacsim101-cons
Rename the Serial Port 2 for NACSIM101 to nacsim101-rgb
Rename the Serial Port for NACSIM201 to nacsim201-cons
Rename the Serial Port 2 for NACSIM201 to nacsim201-rgb
(Optional) Add two additional Network Adapters (check this first!)
(Optional) Set all Network Adapters connection to NAT

Image: Example Virtual Machine Settings for NACSIM101

Note: Notice that the DOT 8.1.2 C-Mode SIM has a thin-provisioned 250GB Hard Disk 4.

1: NACSIM101

The below recipe is for NACSIM101.

Setting the Serial Numbers*
*so you can manage more than one SIM in OnCommand System Manager

Suggested serial numbers:
NACSIM101: 1234563101
NACSIM201: 1234563201

Boot the simulator – NACSIM101.
When you see Hit [Enter] to boot immediately, or any other key for command prompt hit Ctrl-C.
Enter the following commands to set your unique 10 digit serial number.

set bootarg.nvram.sysid=1234563101
set SYS_SERIAL_NUM=1234563101
boot

When you see Press Ctrl-C for Boot Menu hit Ctrl-C

Selection? 4 {for Clean configuration and initialize all disks}
Zero disks, reset config, and install a new file system: y
This will erase all the data on the disks, are you sure: y

{APPLIANCE REBOOTS}

Initial Setup

Do you want to create a new cluster or join an existing cluster? create
System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 1500.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? n
List the private cluster network ports? e0e,e0f
Enter the cluster ports’ MTU size? 1500
Enter the cluster network netmask? 255.255.252.0
Enter the cluster interface IP address for port e0e? 10.30.1.101
Enter the cluster interface IP address for port e0f? 10.30.2.101
Enter the cluster name: CSIM00
Enter the cluster base license key: JWFJEXMWZWYQSD
Enter an additional license key: {RETURN}
Note: The following Cluster-mode Simulator license codes are for:
CIFS, SnapRestore, NFS, SnapMirror_DP, FlexClone, FlexVol_HPO, iSCSI, FCP
- add these license keys after setup using:
system license add FAXQGXMWZWYQSD
system license add RUUFHXMWZWYQSD
system license add PJQJIXMWZWYQSD
system license add BEOYIXMWZWYQSD
system license add DPSUHXMWZWYQSD
system license add TFZBGXMWZWYQSD
system license add NYLNJXMWZWYQSD
system license add LNHRKXMWZWYQSD
Enter the cluster administrator’s (username “admin”) password: XXXXXXXX
Retype the password: XXXXXXXX
Enter the cluster management interface port: e0d
Enter the cluster management interface IP address: 10.1.2.30
Note: The cluster management IP cannot be on the cluster subnet!
Enter the cluster management interface netmask: 255.255.255.0
Enter the cluster management interface default gateway: 10.1.2.5
Enter the DNS domain names: lab.priv
Enter the name server IP addresses: 10.1.1.11,10.2.1.11
Where is the controller located: LAB01
Enter the node management interface port: e0a
Enter the node management interface IP address: 10.1.2.81
Enter the node management interface netmask: 255.255.255.0
Enter the node management interface default gateway: 10.1.2.5
{RETURN}

Use SSH to login to the cluster management IP address as admin
{or preferably for the next section} login to the node management IP address as admin

(Optional) Adding Disks to Cluster-Mode Simulator

Note: Unlike in the previous recipe for 7-Mode, here we’ll just add two additional shelves of 14 x 4 GB disks (-t 36 would give 9 GB disks) for additional SIM capacity (we may revisit moving the root volume at a later date, time permitting).

security login unlock -username diag
security login password -username diag
Please enter a new password: XXXXXXXX
Please enter it again: XXXXXXXX
set -privilege advanced
Do you want to continue? y
systemshell local
login: diag
Password: XXXXXXXX
setenv PATH "${PATH}:/usr/sbin"
echo $PATH
cd /sim/dev
ls ,disks/
vsim_makedisks -h
sudo vsim_makedisks -n 14 -t 31 -a 2
sudo vsim_makedisks -n 14 -t 31 -a 3
ls ,disks/
exit
security login lock -username diag
system node show local
system node reboot local
Warning: Are you sure you want to reboot the node? y

{APPLIANCE REBOOTS}

storage disk show
storage disk modify -disk CSIM00-01:v4.* -owner CSIM00-01
storage disk modify -disk CSIM00-01:v6.* -owner CSIM00-01
storage disk modify -disk CSIM00-01:v7.* -owner CSIM00-01
storage disk show

Note: v5.* is already owned by CSIM00-01 by default

2: NACSIM201

Repeat 1 with the following differences:

set bootarg.nvram.sysid=1234563201
set SYS_SERIAL_NUM=1234563201

Do you want to create a new cluster or join an existing cluster? join
Do you want to use these defaults? n
List the private cluster network ports? e0e,e0f
Enter the cluster ports’ MTU size? 1500
Enter the cluster network netmask? 255.255.252.0
Enter the cluster interface IP address for port e0e? 10.30.1.201
Enter the cluster interface IP address for port e0f? 10.30.2.201
Enter the name of the cluster you would like to join: CSIM00
This node has joined the cluster CSIM00!
Enter the node management interface port: e0a
Enter the node management interface IP address: 10.2.2.81
Enter the node management interface netmask: 255.255.255.0
Enter the node management interface default gateway: 10.2.2.5

And for the “(Optional) Adding Disks to Cluster-Mode Simulator”

Skip the bit before set -privilege advanced (except the security login unlock -username diag), as the diag password is already set in the cluster!

storage disk show
storage disk modify -disk CSIM00-02:v4.* -owner CSIM00-02
storage disk modify -disk CSIM00-02:v6.* -owner CSIM00-02
storage disk modify -disk CSIM00-02:v7.* -owner CSIM00-02
storage disk show

3: Halt and Take a Snapshot

Finally, run system node halt local on the simulator filers and take a VMware Workstation snapshot before we get to play and break things!

Friday, 15 February 2013

Lab Series 01: Part 3 – NetApp Data ONTAP 8.1.2 7-Mode Simulator Build Recipe

*for NA7SIM101 & NA7SIM201

The following how-to guide is a concise recipe to set up the Data ONTAP 8.1.2 7-Mode Simulators, which we will call NA7SIM101 and NA7SIM201.

Key:
RED = site and/or filer specific variable
HIGHLIGHT = user input

0: Pre-boot VM Settings

Rename the Serial Port for NA7SIM101 to na7sim101-cons
Rename the Serial Port 2 for NA7SIM101 to na7sim101-rgb
Rename the Serial Port for NA7SIM201 to na7sim201-cons
Rename the Serial Port 2 for NA7SIM201 to na7sim201-rgb
(Optional) Add two additional Network Adapters (check this first!) 
(Optional) Set all Network Adapters connection to NAT

Image: Example Virtual Machine Settings for NA7SIM101

Note: Notice that the DOT 8.1.2 7-Mode SIM has a thin-provisioned 250GB Hard Disk 4.

 1: NA7SIM101

The below recipe is for NA7SIM101; repeat for NA7SIM201 replacing the variables (in red) as required.

Setting the Serial Numbers*
*so you can manage more than one SIM in OnCommand System Manager

Suggested serial numbers:
NA7SIM101: 1234567101
NA7SIM201: 1234567201

Boot the simulator – NA7SIM101.
When you see Hit [Enter] to boot immediately, or any other key for command prompt hit Ctrl-C.
Enter the following commands to set your unique 10 digit serial number.

set bootarg.nvram.sysid=1234567101
set SYS_SERIAL_NUM=1234567101
boot

When you see Press Ctrl-C for Boot Menu hit Ctrl-C

Selection? 4 {for Clean configuration and initialize all disks}
Zero disks, reset config, and install a new file system: y
This will erase all the data on the disks, are you sure: y

{APPLIANCE REBOOTS}

Initial Setup

Please enter the new hostname: NA7SIM101
Do you want to enable IPv6: n
Do you want to configure interface groups: n
Please enter the IP address for Network Interface e0a: 10.1.2.71
Please enter the netmask for Network Interface e0a: 255.255.255.0
Please enter the media type for e0a: auto
Please enter the flow control for e0a: full
Do you want e0a to support jumbo frames: n
Please enter the IP address for Network Interface e0b: {RETURN}
Please enter the IP address for Network Interface e0c: {RETURN}
Please enter the IP address for Network Interface e0d: {RETURN}
Please enter the IP address for Network Interface e0e: {RETURN}
Please enter the IP address for Network Interface e0f: {RETURN}
Please enter the name or IP address of the IPv4 default gateway: 10.1.2.5
Please enter the name or IP address of the administration host: {RETURN}
Please enter timezone: GMT
Where is the filer located: Site 1
Enter the root directory for HTTP files: /home/http
Do you want to run DNS resolver: y
Please enter DNS domain name: lab.priv
Please enter the IP address for first nameserver: 10.1.1.11
Do you want another nameserver: y
Please enter the IP address for alternate nameserver: 10.2.1.11
Do you want another nameserver: n
Do you want to run NIS client: n
{RETURN}
Do you want to configure the Shelf Alternate Control Path Management interface for SAS shelves: n
New password: XXXXXXXX
Retype new password: XXXXXXXX
System initialization has completed successfully!

Adding 8.1.2-7 SIM Licenses

Connect to NA7SIM101 on 10.1.2.71 using SSH (PuTTY or similar) and the root login. Paste in the following:
Note 1: If you’re having problems with SSH disconnecting on attempting the paste below, retry after doing the ssl.enable off/on below…
Note 2: You’ll find quite a few of the licenses are pre-installed.

license add MTVVGAF #a_sis
license add DZDACHD #cifs
license add CEVIVFK #compression
license add PZKEAZL #disk_sanitization
license add NAZOMKC #http
license add BKHEXNB #fcp
license add ADIPPVM #flex_cache_nfs
license add ANLEAZL #flex_clone
license add BSLRLTG #iscsi
license add ELNRLTG #nearstore_option
license add BQOEAZL #nfs
license add CYLGWWF #operations_manager
license add CGUKRDE #protection_manager
license add UYNXFJJ #provisioning_manager
license add RKBAFSN #smdomino
license add HNGEAZL #smsql
license add ZOJPPVM #snaplock
license add PTZZESN #snaplock_enterprise
license add BCJEAZL #snapmanagerexchange
license add COIRLTG #snapmanager_hyperv
license add QZJTKCL #snapmanager_oracle
license add WICPMKC #snapmanager_sap
license add UPDCBQH #snapmanager_sharepoint
license add DFVXFJJ #snapmirror
license add XJQIVFK #snapmirror_sync
license add DNDCBQH #snaprestore
license add JQAACHD #snapvalidator
license add ZYICXLC #sv_linux_pri
license add PVOIVFK #sv_ontap_pri
license add PDXMQMI #sv_ontap_sec
license add RQAYBFE #sv_unix_pri
license add ZOPRKAM #sv_windows_pri
license add RIQTKCL #syncmirror_local
license add NQBYFJJ #vfiler
license add JGFRLTG #vld

reboot

{APPLIANCE REBOOTS}
Note: The reboot here is to enable local synchronous mirror

(Optional) Adding 2 More Shelves, Moving the Root Volume, Replacing the Original 2 Shelves with Larger Disks, and More!

By default the 8.1.2-7 SIM has 2 shelves of 14 x 1GB disks, for a maximum usable raw capacity of 28GB. 3 disks make up aggr0, there are a further 11 spare disks, and 14 disks are unowned (all the disks from the second shelf). Use the following commands to verify this, or alternatively use OnCommand System Manager:

sysconfig -r
disk show
disk show -n

The 8.1.2 SIM comes with a larger 250 GB VMDK (thin-provisioned) disk than the 48GB disk on the 8.1 SIM, but to take advantage of it we still need to add additional shelves. So, we add two more shelves of 14 x 4 GB disks each, move the root volume (nice to know how to do), and then re-populate the original two shelves with 14 x 4 GB disks each.

priv set advanced
useradmin diaguser unlock
useradmin diaguser password
Please enter a new password: XXXXXXXX
Please enter it again: XXXXXXXX
systemshell
login: diag
Password: XXXXXXXX
setenv PATH "${PATH}:/usr/sbin"
echo $PATH
cd /sim/dev
ls ,disks/
vsim_makedisks -h
sudo vsim_makedisks -n 14 -t 31 -a 2
sudo vsim_makedisks -n 14 -t 31 -a 3
ls ,disks/
exit
useradmin diaguser lock
priv set admin
reboot

{APPLIANCE REBOOTS}

disk assign all
options disk.maint_center.spares_check off
aggr create aggr1 -r 28 28@4G
aggr status –r
df -h
vol create vol1 aggr1 850m
ndmpd on
ndmpcopy /vol/vol0 /vol/vol1
vol options vol1 root
reboot

{APPLIANCE REBOOTS}

options ssl.enable off
secureadmin setup ssl
SSL Setup has already been done before. Do you want to proceed? y
Press {RETURN} to the prompts…
options ssl.enable on

vol offline vol0
vol destroy vol0
Are you sure you want to destroy volume ‘vol0’? y
aggr offline aggr0
aggr destroy aggr0
Are you sure you want to destroy this aggregate? y
options disk.auto_assign off
priv set advanced
disk show
disk remove_ownership v4.*
Volumes must be taken offline. Are all impacted volumes offline? y
disk remove_ownership v5.*
Volumes must be taken offline. Are all impacted volumes offline? y

vol rename vol1 vol0
aggr rename aggr1 aggr0

useradmin diaguser unlock
systemshell
login: diag
Password: XXXXXXXX
setenv PATH "${PATH}:/usr/sbin"
cd /sim/dev/,disks
ls
sudo rm v0*
sudo rm v1*
sudo rm ,reservations
cd /sim/dev
sudo vsim_makedisks -n 14 -t 31 -a 0
sudo vsim_makedisks -n 14 -t 31 -a 1
exit
useradmin diaguser lock
priv set admin
reboot

{APPLIANCE REBOOTS}

options autosupport.support.enable off
options autosupport.enable off
options disk.auto_assign on
disk assign all
aggr create aggr2 -r 28 28@4G
aggr status -r

This gives us two aggregates with roughly 92GB of usable space each!
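
For a quick sanity check of the aggregate sizes, df can also report at the aggregate level (hedged: the -A flag is how I recall 7-Mode exposing this):

df -A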

2: NA7SIM201

Repeat 1 with Serial Number = 1234567201
Hostname = NA7SIM201
IP Address for e0a = 10.2.2.71
IPv4 default gateway = 10.2.2.5
Filer location = Site 2
First nameserver = 10.2.1.11
Alternate nameserver = 10.1.1.11

3: Halt and Take a Snapshot

Finally, halt the simulator filers and take a VMware workstation snapshot before we get to play and break things!