Friday, 17 February 2012

NetApp NS0-502 Study Notes Part 4/4: SAN Implementation Testing

4. SAN IMPLEMENTATION TESTING
Performance Measurement Tools:
nSANity (NetApp Data Center Collector): (run from Windows CLI or Linux CLI) collects information about the following SAN components - Storage systems, Switches, Hosts.
Brocade SAN Health: captures diagnostic and performance data from SAN switches.
Perfstat Converged: (run from Windows CLI or Linux CLI) captures information from node(s), local and remote host(s), and network switch(es), and returns the data as a zip or tar file.
Perfmon: (available on Windows Hosts) Can identify performance issues at the system and application level.
Load-generating tools: Simulated I/O (SIO and SIO_ntap), Microsoft Exchange Server Load Simulator (LoadSim) and Jetstress Tool, SQLIO for SQL Server, Iometer for SQL and Windows-based file-system I/O tests, Oracle Workload Generator.
*SQL Server performance depends on: logical design of databases, indexes, queries, and applications, with memory, cache buffers, and hardware also factoring in.
*Oracle includes five types of transactions that affect performance: random queries, random updates, sequential queries, sequential updates, and parallel queries.

4.1 Be able to create an acceptance test plan
Acceptance Test Plan: defines what the minimum acceptable performance should be. Work with the customer to understand their performance expectations and determine the minimum performance they require for the application(s) this system will support.

4.2 Test host to storage connectivity (native Vol. Mgr., file systems)
iscsi interface show : DOT CLI command to check that network interfaces are enabled for iSCSI traffic.
fcp stats -i 5 : DOT CLI command that displays the average service time per FC target port every 5 seconds.
lun stats -i 5 : DOT CLI command that displays the average latency per LUN every 5 seconds.
sanlun lun show -p : UNIX CLI NetApp Host Utilities command to determine the number of paths per LUN visible to the host.
sanlun fcp show adapter : UNIX CLI NetApp Host Utilities command to verify WWNNs and WWPNs.
esxcfg-info : VMware ESX CLI command to verify WWNNs and WWPNs.
/var/adm/messages : check this log for SAN connectivity errors.
To verify FC connectivity between a host and an FC switch, check the FC switch for the WWPN of the host HBA.
Steps prior to replacing a failed Brocade FC Switch: 1) Change the domain ID of the replacement switch to the domain ID of the failed switch. 2) Clear all zoning configuration on the replacement switch. 3) Change the core PID format of the replacement switch to the core PID format of the failed switch.
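A hedged sketch of those steps on the replacement Brocade switch (FOS CLI; the configure menu labels vary by FOS version, and the domain ID used is whatever the failed switch had):
switchdisable : disable the switch before changing fabric parameters
configure : interactive menu; set Domain to the failed switch's domain ID and set the Core Switch PID format to match the failed switch
cfgdisable : disable any active zoning configuration (if one is enabled)
cfgclear : clear all zoning configuration on the replacement switch
cfgsave : save the now-empty zoning configuration
switchenable : re-enable the switch so it can join the fabric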
When hard zoning (domain ID plus port) is in use and a host HBA is replaced, no changes need to be made provided the new HBA is connected to the same port.

4.3 Test LUN availability during failover scenarios (multipathing)
4.4 Test controller failover scenarios (multipath HA)
To test NetApp storage controller failover, execute the cf takeover command on each controller in turn (followed by cf giveback), so that failover is verified in both directions.
Troubleshoot/verify cluster configuration with the cluster configuration checker script.
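A minimal takeover/giveback test from the Data ONTAP 7-Mode CLI, run on one controller and then repeated from its partner:
cf status : confirm controller failover is enabled and the partner is up
cf takeover : take over the partner; verify hosts keep access to their LUNs via the surviving paths
cf giveback : return the partner's resources once the test of this direction is complete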

Further reading:
http://www.netapp.com/us/support/university/certifications/nso-502.html
The NetApp University training courses available from the above link are highly recommended:
Web Based Training (WBT) courses (free for partners and with basic online labs):
1) SAN Fundamentals on Data ONTAP
2) SAN Design
3) SAN Implementation - Switch Configuration
4) NetApp Unified Connect Technical Overview and Implementation
Instructor Led Training (ILT) courses
1) SAN Implementation Workshop
2) SAN Scaling and Architecting

Good for more on Data ONTAP CLI:
http://www.datadisk.co.uk/html_docs/netapp/netapp_cs.htm
http://mbrookman.wordpress.com/tag/netapp-cheat-sheet/


NetApp NS0-502 Study Notes Part 3/4: SAN Implementation

3. SAN IMPLEMENTATION
Implement a Storage Area Network (a worked 7-Mode example follows these steps):
1. Discover the Target Portal
2. Create a session
3. Create an igroup
4. Create a Logical Unit
5. Map a LUN
6. Find the disk
7. Prepare the disk
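A minimal Data ONTAP 7-Mode iSCSI example of these steps from the storage side (the volume, LUN, and igroup names and the initiator IQN are hypothetical placeholders):
iscsi start : ensure the iSCSI service is running on the controller
igroup create -i -t windows ig_host1 iqn.1991-05.com.microsoft:host1 : create an iSCSI igroup for a Windows host
lun create -s 100g -t windows /vol/vol1/lun0 : create a 100 GB LUN with the windows LUN type
lun map /vol/vol1/lun0 ig_host1 0 : map the LUN to the igroup as LUN ID 0
lun show -m : verify the LUN-to-igroup mapping
The host then discovers the target portal, logs in to create a session, and finds and prepares (partitions/formats) the disk.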

3.1 Prepare site for installation.

3.1.1 Be able to review implementation flowchart with customer and assign task areas.
3.1.2 Make contact with datacenter/site personnel.
3.1.3 Validate equipment move path to installation location.
3.1.4 Verify site infrastructure including: dual power (and proximity of power drops to the install location), floor space, floor loading plan, HVAC (Heating, ventilation, and air conditioning.)
3.1.5 Validate logistics plan for staging and installation of equipment.
3.1.6 Verify Ethernet cabling plan and availability of cable supports.
3.1.7 Verify fiber cabling plan and availability of cable supports.

3.2 Following the rack diagram, install systems and FC switches.

3.3 Perform basic power on tests for all equipment.
Power on order: 1) Network Switches. 2) Disk Shelves. 3) Any Tape Backup Devices. 4) NetApp Controller Heads.
If a NetApp storage system fails on its first boot; check for a description of the problem on the LCD and console, and follow the instructions.

3.4 Configure NetApp storage systems (stage 1).
Run all diagnostics after the initial installation of a NetApp FAS3000 storage system, to put the entire system through a comprehensive set of diagnostic tests.

3.4.1 Update firmware and software to the latest/required versions.
3.4.2 Configure controller name.
3.4.3 Configure controller failover.
3.4.4 Configure multipath HA and verify cabling.
*The FAS6020 NVRAM uses InfiniBand for the HA interconnect.

3.4.5 Perform ALUA controller configuration.
ALUA (Asymmetric Logical Unit Access) is a set of SCSI commands for discovering and managing multiple paths to LUNs over Fibre Channel and iSCSI SANs. It allows the initiator to query the target about path attributes, such as which paths are primary and which are secondary. Because ALUA is part of the SCSI standard, proprietary SCSI commands are no longer required.
Disable ALUA on igroups used by hosts connecting with the NetApp (Data ONTAP) DSM.

3.4.6 Configure FC interfaces utilizing fcadmin.
3.4.7 Configure Ethernet interfaces with IP addresses defined in plan.
3.4.8 Configure interfaces for iSCSI.
3.4.9 Configure CHAP.

3.5 Configure FC switches.
*Brocade FC switches always enforce zoning in hardware. Cisco FC switches fall back to software (soft) zoning enforcement when hard and soft zoning are mixed on the same switch.
Each FC switch must have a unique domain ID (recommend start numbering from 10 to avoid reserved IDs).

Brocade Switches
Web Tools : Tool to manage Brocade Switches.
cfgshow : CLI command to display all defined zone information.
configshow : CLI command used to verify the switch configuration.
fabricshow : CLI command to display which Brocade switches are connected into a fabric.
supportshow : CLI command to collect detailed diagnostic information from a Brocade FC switch.
switchshow : CLI command to view the nodes currently connected to the switch.

Cisco Switches
Cisco Fabric Manager : Native GUI switch tool for managing Cisco MDS-Series switches and directors.
show zoneset : CLI command to view all defined zone configuration information.
show zoneset active : CLI command to display currently active zoneset.
show version : CLI command to collect information about the firmware version.

3.5.1 Configure basic switch settings (IP address, switch name).
3.5.2 Configure zoning as defined by implementation plan.

3.6 Configure Ethernet switches.
Default Ethernet packet size = 1500 bytes of payload (the MTU, or Maximum Transmission Unit).
Jumbo frames de facto standard packet size = 9000 bytes of payload (MTU).
When setting up the Ethernet switches for jumbo frames, the following components need to be configured: 1) Ethernet port on host system. 2) Ethernet port on storage device. 3) Ethernet switch ports being used.
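A hedged sketch of an end-to-end MTU 9000 configuration (interface names and the IP address are placeholders; Ethernet switch syntax varies by vendor and model, so only the storage and Linux host sides are shown):
ifconfig e0a mtusize 9000 : Data ONTAP 7-Mode, enable jumbo frames on storage interface e0a (add to /etc/rc to persist across reboots)
ip link set dev eth0 mtu 9000 : Linux host interface
ping -M do -s 8972 192.168.10.10 : Linux host, confirm a 9000-byte frame reaches the storage IP without fragmentation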

3.6.1 Configure basic switch settings (IP addresses, switch name).
3.6.2 Configure and validate VLANs.

3.7 Configure NetApp storage systems (stage 2).

3.7.1 Connect NetApp systems to switches (FC and Ethernet).
3.7.2 Configure and validate aggregates (including RAID groups), volumes, LUNs, and qtrees.
FlexVol Space Management Policies (example commands follow this list):
1) Guarantee =
none (thin provisioning)
file option (space is allocated from the aggregate when certain "space-reserved" files, such as a space-reserved LUN, are created)
volume option (thick provisioning)
2) LUN Reservation =
on (thick provisioned)
off (thin provisioned)
*Set a LUN's reservation to on to guarantee that space consumed by other LUNs or files in the volume cannot cause writes to this LUN to fail.
3) Fractional_reserve =
? % (percentage of the volume's space-reserved LUN/file capacity set aside, on top, for overwrites once Snapshot copies exist)
4) Snap_reserve =
? % (Fraction of volume's size reserved inside for snapshots)
*Set Snapshot Reserve to 0% for a volume holding LUNs.
5) Auto_delete =
volume / on / off (allows a flexible volume to automatically delete snapshots in the volume)
6) Auto_grow =
on / off (allows a flexible volume to automatically grow in size within an aggregate)
7) Try_first =
snap_delete / volume_grow (Controls the order in which the two reclaim policies (snap autodelete and vol autosize) are used).
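For example, a hedged 7-Mode sketch applying these policies to a thin-provisioned volume holding LUNs (volume and LUN names are placeholders; choose values to match the Snapshot plan):
vol options vol_lun guarantee none : thin provision the volume
vol options vol_lun fractional_reserve 0 : no additional overwrite reserve
snap reserve vol_lun 0 : 0% snap reserve on a volume holding LUNs
vol autosize vol_lun on : allow the volume to grow within the aggregate
snap autodelete vol_lun on : allow automatic Snapshot deletion
vol options vol_lun try_first volume_grow : grow the volume before deleting Snapshot copies
lun set reservation /vol/vol_lun/lun0 disable : thin provision the LUN (use enable for thick)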

NetApp deduplication (ASIS) is enabled at the volume level.
The following commands enable NetApp deduplication and verify space savings:
sis on
sis start
df -s
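For example, on a hypothetical volume /vol/vol1:
sis on /vol/vol1 : enable deduplication on the volume
sis start -s /vol/vol1 : deduplicate existing data (omit -s to process only new writes)
sis status /vol/vol1 : check progress
df -s /vol/vol1 : report the space saved by deduplication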

3.7.3 Configure portsets for later attachment to igroups according to plan.
A portset is a collection of target ports that can be bound to an igroup, limiting the ports through which that igroup's initiators can access LUNs.
The portset family of commands manages portsets.
portset create { -f | -i } portset_name [ port ... ]
Creates a new portset.
If the -f option is given, an FCP portset is created.
If the -i option is given, an iSCSI portset is created.
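A hedged usage sketch of the syntax above (the portset, igroup, and controller names are placeholders; in 7-Mode, ports are typically given as filername:port):
portset create -f ps_fab_a controller1:0c controller2:0c : create an FCP portset with one target port per controller
igroup bind ig_host1 ps_fab_a : bind the igroup so its initiators see LUNs only through those ports
portset show ps_fab_a : verify portset membership
igroup unbind ig_host1 ps_fab_a : remove the binding if required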

3.8 Configure hosts (stage 1).

3.8.1 Validate host hardware configuration.
3.8.2 Ensure that the correct PCI cards are installed in the correct location on the host.
3.8.3 Install host utility kits on all hosts.
NetApp Host Utilities Kits perform the following functions: 1) They provide properly set disk and HBA timeout values. 2) They identify and set path priorities for NetApp LUNs.

3.8.4 Configure the host Ethernet interfaces for iSCSI.
3.8.5 Configure the Internet storage name service (iSNS).
3.8.6 Configure CHAP on hosts.
3.8.7 Configure host FC interfaces.
Performance tuning parameters for Fibre Channel HBAs on a host: LUN queue depth and Fibre Channel speed.

3.8.8 Configure hosts to Ethernet and FC switches.
3.8.9 Install SnapDrive. Ensure the SnapDrive for Windows (SDW) service account is a member of BUILTIN\Administrators; if the system is part of a domain, the service account must be a domain account. Also ensure the service account is part of the local Administrators group on the host. For SnapDrive for UNIX (SDU), install and administer using the root account.
A host-based volume manager can be used to create a striped volume across LUNs from multiple NetApp controllers.
NetApp ASL = NetApp Array Support Library
Veritas DMP = Veritas Volume Manager with Dynamic Multi-Pathing (DMP)

3.9 Configure NetApp storage systems (stage 3).

3.9.1 Create igroups and perform LUN management for hosts without SnapDrive.
3.9.2 Attach portsets to igroups.
Use igroup throttles on a NetApp storage solution to: 1) Assign a specific percentage of the queue resources on each physical port to the igroup. 2) Reserve a minimum percentage of queue resources for a specific igroup. 3) Limit the number of concurrent I/O requests an initiator can send to the storage system. 4) Restrict an igroup to a maximum percentage of use.
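A hedged 7-Mode sketch (the igroup name and percentage are placeholders, and the throttle option names should be verified against the Block Access Management Guide for your release):
igroup set ig_host1 throttle_reserve 20 : reserve 20% of each target port's queue resources for this igroup
igroup set ig_host1 throttle_borrow yes : allow the igroup to borrow unused queue resources beyond its reservation
igroup show ig_host1 : verify the igroup settings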

3.9.3 Set alias for the WWPN and controllers.

3.10 Configure hosts (stage 2).

3.10.1 Configure host multipathing both FC and iSCSI.
The Data ONTAP DSM for Windows can handle FCP and iSCSI paths to the same LUN.

3.10.2 Perform ALUA configuration tasks.
A LUN can be mapped to an ALUA-enabled igroup and a non-ALUA-enabled igroup.

3.10.3 Check for LUN misalignment; check that the LUN and host parameters are properly matched.
The starting offset of the first file system block on the host can affect LUN I/O alignment.
Perfstat counters to check for unaligned I/O that does not fall on a WAFL boundary: 1) wp.partial_write. 2) read/write_align_histo.XX. 3) read/write_partial_blocks.XX.
It is important to select the correct LUN type (e.g. VMware, Solaris, ...). The LUN type determines the LUN's offset and the size of its prefix and suffix, which differ between operating systems.
To prevent LUN I/O alignment issues with Raw Device Mappings (RDMs) in an ESX environment, use the LUN type matching the guest operating system.
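For example, for an RDM LUN intended for a Windows Server 2008 guest (the path is a placeholder; lun show -v reports the type as the Multiprotocol Type):
lun create -s 40g -t windows_2008 /vol/vol_rdm/rdm_lun1 : LUN type matches the guest OS, not the ESX host
lun show -v /vol/vol_rdm/rdm_lun1 : verify the LUN's ostype and space reservation settings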

3.10.4 Create snapshot schedule for each host according to implementation plan.

3.11 Perform SAN implementation tasks within virtualized environments utilizing SAN best practices.
On ESX 4.0 with guest operating systems in a Microsoft cluster configuration, the Path Selection Policy (PSP) for an MSCS LUN should be set to: MRU (Most recently used).
On ESX 4.0 for access to a VMFS datastore created on a NetApp LUN under an ALUA configuration, the Path Selection Policy (PSP) should be set to: RR (Round Robin).
For a FCP VMFS datastore created on a NetApp LUN: in ESX 4.0 use VMW_SATP_DEFAULT_AA type for the SATP (Storage Array Type Plugin).
NetApp supports the following Storage Array Type Plug-ins (SATP): VMW_SATP_ALUA and VMW_SATP_DEFAULT_AA.
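A hedged sketch from the ESX 4.x CLI; the esxcli nmp option names changed slightly between ESX 4.0 and 4.1, so treat the exact flags as assumptions and verify them against the vSphere CLI reference:
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR : make Round Robin the default PSP for ALUA-claimed LUNs
esxcli nmp device list : confirm which SATP and PSP are claiming each device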

3.11.1 Identify VM best practices with regard to data deduplication.
3.11.2 Identify VM best practices with regard to thin provisioning.
3.11.3 Identify VM best practices with regard to alignment issues.
VMware ESX: virtual machine disk alignment issues can occur on any protocol including FCP, iSCSI, FCoE, and NFS.

3.11.4 Identify VM best practices with regard to backup and recovery.
3.11.5 Determine the type of switch firmware required to support NPIV.
To make a LUN visible to an NPIV-supported VM (N_Port ID Virtualization = a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port): 1) Create an igroup on the NetApp controller containing the NPIV WWPNs. 2) Map a LUN to the igroup containing the NPIV WWPNs. 3) Create a zone containing NetApp target ports and NPIV WWPNs.
Only RDM type datastores can be used with NPIV.
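A hedged sketch of those three steps (the NPIV WWPN, names, and LUN path are placeholders; the target WWPN reuses the NetApp example format from these notes, and the zoning line uses Brocade syntax):
igroup create -f -t vmware ig_npiv_vm1 21:00:00:1b:32:aa:bb:cc : FCP igroup containing the VM's NPIV WWPN(s)
lun map /vol/vol_rdm/rdm_lun1 ig_npiv_vm1 : map the RDM LUN to that igroup
zonecreate "z_vm1_npiv", "21:00:00:1b:32:aa:bb:cc; 50:0a:09:81:83:e1:52:d9" : zone the NPIV WWPN with the NetApp target port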

3.12 FCoE and Unified Connect Enabling Technologies.
*FCoE requires Jumbo Frames because the FC payload is 2112 bytes and cannot be broken up.
Benefits of FCoE: Compatibility with existing FC deployments and management frameworks, 100% Application Transparency, and High Performance.

3.12.1 Identify Ethernet segments using 802.1Q (VLANs).
3.12.2 Describe bandwidth priority classes (QoS).
Bandwidth Management Based on Class of Service (IEEE 802.1Qaz): consistent management of Quality of Service at the network level through consistent scheduling (by default the NetApp UTA uses Ethernet priority 3 for FCoE).
*default is 50% for FCoE traffic and 50% for other traffic (recommend 80% for FCoE traffic.)

3.12.3 Define Data Center Bridging (DCB).
Data Center Bridging Exchange (DCBX): Management protocol for Enhanced Ethernet capabilities.

3.12.4 Define Lossless Ethernet (PAUSE frame).
Lossless Ethernet (IEEE 802.1Qbb): Ability to have multiple traffic types share a common Ethernet link without interfering with each other (uses PAUSE frame command to control flow).
*Lossless Ethernet is a requirement for FCoE.

3.12.5 VN_ports, VF_ports and VE_ports.
What is a VE_port / VF_port / VN_port?
Called logical (virtual) ports because many logical (virtual) ports can share one physical port.
VE_Port = logical (virtual) E_Port
E_Port = The "Expansion" port within a Fibre Channel switch that connects to another Fibre Channel switch or bridge device via an inter-switch link.
VF_Port = logical (virtual) F_Port
F_Port = The "Fabric" port within a Fibre Channel fabric switch that provides a point-to-point link attachment to a single N_Port.
VN_Port = logical (virtual) N_Port
N_Port = A "Node" port that connects via a point-to-point link to either a single N_Port or a single F_Port.

3.13 FCoE and Unified Connect Hardware.
*FCoE Topologies: Fabric or network topology is the only supported attachment configuration.
*FCoE connects over the interconnect.
Data ONTAP 8.0.1 was the first release to allow both FCoE and traditional Ethernet protocols over the same UTA.

3.13.1 Identify supported Converged Network Adapters (CNA).
3.13.2 Identify supported Unified Target Adapters (UTA).
3.13.3 Identify supported switches.
3.13.4 Jumbo frame configuration.
3.13.5 Switch configuration including ports, VLAN, VSAN (Cisco) and QoS.
3.13.6 Data ONTAP configuration including fcp topology, fcp zone show, cna show.
3.13.7 Initiator configuration.
3.13.8 FC to FCoE.
3.13.9 NAS protocols over CNA and UTA adapters.

NetApp NS0-502 Study Notes Part 2/4: SAN Implementation Plan Creation

2. SAN IMPLEMENTATION PLAN CREATION
*Implementation Project Plan: ensure that each task is assigned duration times, dependencies, and resources. Identify the critical path (which depends on: project activities, time for each activity, activity dependencies).

2.1 Verify and plan for dual power feeds for all components.

2.1.1 Ensure all components outlined in plan have power feeds from separate power sources.

Provision of dual power for all systems: 1) Dual power supplies should be placed in all equipment. 2) Dual power feeds should be run to all equipment. 3) Power feeds should be connected to separate outlets connected to two separate PDUs.

2.2 Be able to create cabinet diagrams or be able to read and interpret a cabinet diagram. Diagrams should include the cabinet's storage systems and switches with all connections shown.

Cabinet diagram information: 1) Physical location of the rack in the datacenter. 2) Rack identifying information. 3) Location in the rack of NetApp storage controllers and storage shelves. 4) Switch placement. 5) Storage shelf connectivity.

2.3 Create a connectivity diagram. Be able to read and interpret a connectivity diagram.

Important for connectivity diagram for FCP: 1) Port numbers and initiator/target configuration for the NetApp storage system. 2) FC switch port connection details with host and storage WWNs. 3) Host WWNs and port connection details.
Important for connectivity diagram for iSCSI: 1) Ethernet switch port connection details and IP addresses. 2) Host and storage device IQNs. 3) NetApp storage system port configuration (initiator/target).
NetApp best practice: FC Target ports should be either ALL on expansion cards or ALL on the onboard ports but never mixed.
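To gather the storage-side details for these diagrams, the following Data ONTAP 7-Mode commands are useful:
fcp show adapter : list FC target adapters with their WWNNs and WWPNs
fcp nodename : display the storage system's FC WWNN
fcp config : show the state and speed of each FC target port
iscsi nodename : display the storage system's IQN
igroup show : list igroups with their initiator WWPNs/IQNs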

2.3.1 Identify port details and connections for NetApp storage device(s).
2.3.2 Identify port details and connections for Hosts.
2.3.3 Identify port details and connections for FC switches.

FC SAN (switch) topologies: Cascade, Core-Edge (provides best performance and scalability), Full Mesh, Partial Mesh, Switched Fabric.
*Security best practice: disable unused switch ports

*Ensure the correct small form-factor pluggables (SFPs) or enhanced small form-factor pluggables (SFP+) are used: short wave or long wave, matched to single-mode or multimode cable.


2.3.4 Identify port details and connections for Ethernet switches.

2.4 Plan storage controller configuration.

2.4.1 Plan for single/dual controller configuration.

cfmode = controller failover mode.
Only single_image and standby cfmodes are supported with the 4-Gb FC HBAs on 30xx and 60xx storage systems.
Single_image is the only supported cfmode for new installations starting with Data ONTAP 7.3 (on legacy systems, can continue to use other cfmodes supported by the system).
In single_image cfmode, both nodes in the active-active configuration function as a single Fibre Channel node, and the LUN maps are shared between partners.
In single_image cfmode, when a path fails because of a cable connectivity issue, I/O fails over to any port on either controller that is part of the associated portset; the choice of path is controlled by the host's MPIO weighting table.
In an HP-UX environment, set the single_image cfmode to ensure proper failover.
The effects of change to single_image cfmode, include: 1) There are more paths to LUNs. 2) Target ports WWPNs change on at least one controller.
cfmode is not applicable to O/S co-existence.

2.4.2 Plan for and create diagram for a multipath HA configuration.

A prerequisite for NetApp storage controller multipath HA between the storage controllers and disk shelves is that software disk ownership is supported and configured.
Linux supports the dm-multipath (DM-MP) multipathing type.

2.4.3 Create capacity plan to include aggregates (RAID groups), volumes, LUNs. Consider snapshot requirements and plan for space reserve strategy.

Best practices:
Use RAID-DP technology.
Separate data and log files by LUN, volume, and aggregate.
Reserve space on the root volumes for log files, document installation, and images of the storage system's memory (for diagnostic purposes).
Do not put LUNs or user data in the root volume.

2.5 Plan host configuration.

Features of SnapDrive: 1) Expand LUNs on the fly. 2) Perform SnapVault updates of qtrees to a SnapVault destination. 3) Perform iSCSI session management.
NetApp recommends installing SnapDrive software on hosts to ensure consistent SnapShot copies.
SnapMirror software is integrated with SnapManager software for application consistent snapshot copies.

2.5.1 Plan/verify host hardware configuration including HBAs, PCI slots that will be used along with firmware and drivers.

HBAnywhere = Utility to collect firmware/driver version for Emulex HBAs (software available for Windows 2003/2008 and Solaris 10).
SANsurfer = Utility to collect firmware/driver version for QLogic HBAs.

2.5.2 Plan/verify installation of supporting software such as 3rd party volume managers or applications.
Windows 2008 host: consider installing 'Microsoft Multipath I/O Role', and 'DOT DSM for Windows MPIO'.

2.5.3 Validate the entire solution and ensure it is supported using the IMT (Interoperability Matrix Tool). Determine if PVRs (Product Variance Requests) need to be filed.

When designing a NetApp storage solution for a customer, check the row in the IMT for: Host OS & patches, HBA driver, Volume Manager, File System, and Clustering.

2.5.4 Plan creation of igroups for all hosts that will not have SnapDrive installed.

2.6 Create a Snapshot plan.

2.6.1 Create a Snapshot plan for each host. Consider customer RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements, as well as which space reserve strategy is most appropriate to use.
2.6.2 Create SnapDrive installation plan.

FCP or iSCSI license is required for SnapDrive to be used with a NetApp appliance.
Best practice before installing SnapDrive is to establish a SnapDrive service account.
SnapDrive for Windows can communicate with NetApp Storage Controllers using the following protocols: HTTP, HTTPS, RPC.

2.7 Plan Ethernet switch configuration.

2.7.1 Plan VLAN configuration.

Beneficial uses of VLANs: 1) To isolate iSCSI traffic from LAN/WAN traffic. 2) To isolate management traffic from other IP traffic.
Ports to include in a VLAN for administrative security purposes: 1) Storage Controllers Management Ethernet port. 2) FC Switches Management port. 3) Ethernet Switches Management port. 4) Hosts Management Ethernet port.
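A generic Cisco IOS-style sketch of an iSCSI VLAN, plus the matching 7-Mode VLAN interface (the VLAN ID, switch port, and interface names are placeholders; exact switch syntax varies by platform):
conf t
vlan 120
 name iSCSI_SAN
interface GigabitEthernet0/10
 switchport mode access
 switchport access vlan 120
end
show vlan brief : verify the VLAN and its member ports
vlan create e0a 120 : Data ONTAP 7-Mode, create VLAN interface e0a-120 on the storage controller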

2.7.2 Plan IPSEC configuration.
Two IPSec modes: transport mode (performed by host processor) and tunneling mode (offloaded to an IPSec gateway)

2.8 Plan zoning configuration.

2.8.1 Be able to plan the alias list based on the type of zoning that was decided.
2.8.2 Provide a name for the alias that describes the port/WWPN (targets and initiators).
2.8.3 Plan the zones, including the number of zones, members of each zone and the name of each zone. Be able to plan for single initiator zoning.

NetApp recommended best practice for zone configuration: All zones should contain a single initiator and all the targets that initiator accesses.
NetApp recommends zoning by World Wide Port Name.
For a host that is connected to a NetApp storage system through an FC switch and has boot volumes on the NetApp storage system, persistent binding is a mandatory configuration.
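A hedged single-initiator zoning sketch in Brocade FOS syntax (the aliases and host WWPN are placeholders; the target WWPN reuses the NetApp example format from these notes):
alicreate "host1_hba0", "10:00:00:00:c9:12:34:56" : alias for the host initiator WWPN
alicreate "fas1_0c", "50:0a:09:81:83:e1:52:d9" : alias for the NetApp target port WWPN
zonecreate "z_host1_hba0", "host1_hba0; fas1_0c" : one initiator plus the target(s) it accesses
cfgcreate "cfg_fabric_a", "z_host1_hba0" : add the zone to a zoning configuration
cfgsave : save the zoning database
cfgenable "cfg_fabric_a" : activate the configuration on the fabric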

2.9 Plan iSCSI configuration.

A direct connect topology allows for guaranteed maximum network performance for iSCSI.
iSCSI access lists: control which network interfaces on a storage system an initiator can access, and limit the number of network interfaces advertised to a host by the storage system.
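A hedged 7-Mode sketch (the initiator IQN and interface name are placeholders; confirm the accesslist sub-commands exist in your Data ONTAP release):
iscsi interface accesslist add iqn.1991-05.com.microsoft:host1 e0a : restrict that initiator to interface e0a
iscsi interface accesslist show : display the current access lists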

2.9.1 Be able to plan for the creation of discovery domains and discovery domain sets in iSNS.

NetApp NS0-502 Study Notes Part 1/4: SAN Solution Assessment

The following four-part set of study notes for NetApp's NCIE-SAN NS0-502 exam is based on NetApp's own NS0-502 Study Guide (which expands on the NS0-502 SAN Implementation Exam Objectives); the Study Guide provides most of the headings and subsection headings (in green). The methodology behind the SAN Solution Assessment, Implementation Plan Creation, Implementation Tasks, and Testing could be applied to other SAN providers too. The exam is 90 minutes long, there are 70 questions, the passing score is 70%, and there is no requirement to attend an Instructor Led Training course.

1. SAN SOLUTION ASSESSMENT
When documenting the existing configuration, factors that should be considered include: disk ownership settings, igroups being used, protocols in use, Snapshot configuration.
Use System Performance Modeler (SPM) for sizing, trending, enhanced performance analysis ( https://sizers.netapp.com ).
Use NetApp Synergy: for design.
*The latest version (as of March 2012) of NetApp Synergy - v3.1 - is available at http://synergy.netapp.com/disclaimer.htm . This link also includes the latest NetApp Data Collector - v3.1 (requires .NET 4).

1.1 Ensure that all prerequisites for the installation of NetApp system and switches (if needed) are met, and that the required information to configure NetApp systems is collected.

1.1.1 Collect NetApp storage system configuration information.
1.1.2 Collect Switch configuration information.
1.1.3 Gather power information – such as circuit availability, wiring in place, etc...
1.1.4 Collect Host configuration information.
Information gathering for host systems to be attached to NetApp storage systems using either FC or iSCSI: OS Version, Patch Level, Open Bus Slots, Cards in Bus Slots, Bus Type (PCIe and/or PCI-X), and Ethernet Ports (both used and free).

1.1.5 Collect application configuration and requirements.
1.1.6 Collect potential DEDUPE information.
1.1.7 Collect backup and retention information.

1.2 List a detailed inventory of SAN components including:

1.2.1 NetApp storage system configuration details.
WWPN of NetApp FC Target Ports begin with 5 (target HBAs generally begin with 1 for Emulex, 2 for QLogic, and 5 for NetApp - e.g. 50:0a:09:81:83:e1:52:d9 ).

1.2.2 Host details.
NetApp FC solutions supported operating systems include: IBM AIX, Solaris 10, VMware ESX(i), Windows 2008 Server.

1.2.3 FC switch details.
For a fault-tolerant FC solution, utilize multiple FC switches, multiple NetApp storage controllers with multiple target ports each, dual-port host FC HBAs with supported driver and firmware, and multipathing software on the host.
FC fabric topologies that NetApp supports: 1) A single FC switch. 2) Dual FC switches with no ISLs (Inter-Switch Links). 3) Four FC switches with multiple ISLs between each pair of switches. 4) Four FC switches with multiple ISLs between ALL switches.

1.2.4 Ethernet switch details.
1.2.5 Current zoning configuration.
1.2.6 Current iSCSI implementation details.
NetApp supports the following for iSCSI with Microsoft Windows solutions: VLANs, Jumbo Frames, Microsoft MCS (there is no support for NIC teaming or NIC trunking).
iscsi session show -v : DOT CLI command to see if iSCSI digests are enabled.
Supported iSCSI configurations: Direct-attached, Network-attached (Single-network, Multi-network, VLANs)

1.2.7 CHAP settings.
iscsi security show : DOT CLI command to display current CHAP settings.
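For reference, a hedged 7-Mode sketch of configuring inbound (one-way) CHAP (the IQN, user name, and password are placeholders):
iscsi security add -i iqn.1991-05.com.microsoft:host1 -s CHAP -n chapuser -p chappassword : per-initiator CHAP entry
iscsi security default -s CHAP -n chapuser -p chappassword : default CHAP policy for initiators with no specific entry
iscsi security show : verify the settings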

1.2.8 IPSEC configuration details.
1.2.9 Snapshot configuration details.
snap delta : DOT CLI command to see the rate of change between two successive Snapshot copies in a flexible volume.

1.2.10 Current data layout (aggregates, raid groups, volumes).
1.2.11 Consider listing: system names, IP addresses, current zoning configuration, OS versions, OS patch levels, driver versions and firmware versions.
When planning the addition of a NetApp FC SAN where the company has an existing FC SAN, consider: existing hosts with OS level and patches, FC HBAs with firmware and driver, FC switches with firmware version and LUN layout.

1.3 Ensure that the solution design and the hardware provisioned do not fall short of the customer's requirements and expectations.
Solution Verification checklist from the SAN Design and Implementation Service Guide ( from the TechNet for Partners site at https://tech.netapp.com/external/index.html ).
Gap analysis worksheet.
Finalize any configuration details in the SAN design.
Work out any deficiencies prior to requesting approval on the design.

1.3.1 Validate requirements with the customer. Consider the following:

1.3.1.1 Sizing needs.
1.3.1.2 Connectivity needs.
In an FC environment utilizing 62.5 micron cable between patch panels, the supported NetApp storage controller configuration is to use short-wave SFPs with 62.5 micron FC cable.
50/125 OM2 multi-mode fiber cable supports up to 300 meters at 2 Gbps.
50/125 OM3 multi-mode fiber cable supports up to 500 meters at 2 Gbps.
50/125 OM3 multi-mode fiber cable supports up to 380 meters at 4 Gbps.

1.3.1.3 Zoning types.
Two benefits of soft zoning (device WWPN zoning) over hard zoning (domain ID plus port) for Cisco and Brocade FC switches: 1) A device can be connected to any port in the fabric without changing zoning. 2) It is fully interoperable between switch vendors.

1.3.1.4 Expected level of functionality.
Synchronous SnapMirror is not supported in DOT 8.1 Cluster-Mode.
SnapVault is not supported with DOT 8.1 Cluster-Mode.
SnapProtect is an end-to-end backup and recovery solution which also manages traditional tape backup and disk-to-disk-to-tape deployments. SnapProtect manages NetApp Snapshot, SnapVault, and SnapMirror technology, and tape from a single console.
NetApp solutions for disaster recovery of entire sites are: MetroCluster and SnapMirror software.

Stretch MetroCluster supports up to 500 meters @ 2 Gbps between two controllers.
Fabric MetroCluster supports up to 100 kilometers @ 2 Gbps between two nodes in a cluster.
*Single-mode fibre is only supported for the inter-switch links
*MetroCluster is not supported in Data ONTAP 8.1 Cluster-Mode

1.3.1.5 Performance requirements.
1.3.1.6 Solution requirements being provided by a third party.
With a limited budget and resources, a suitable solution for a new disaster recovery site = SnapMirror and iSCSI at the disaster recovery site for all hosts.
NetApp best practice for primary block data (FC and iSCSI): 1) Dual controller and single shelf. 2) Dual controller and multiple shelves.
NetApp recommended Ethernet topology for iSCSI: LAN with VLANs implemented.
Data ONTAP 8.1 Cluster-Mode supports: EMC Symmetrix DMX4, EMC CLARiiON CX4, and HP StorageWorks EVA (it is recommended that at least one NetApp storage shelf be included with each V-Series installation).


Tuesday, 14 February 2012

Installing HP ESXi Offline Bundle for VMware ESXi 5.0

The steps to install the offline bundle for VMware ESXi 5.0 have changed from VMware ESXi 4.X. The vihostupdate command used in ESXi 4.X does not work against ESXi 5.0 hosts.

1) Download the bundle.zip.

Either Google “HP ESXi Offline Bundle for VMware ESXi 5.0” or:

Go to www.hp.com → Support & Drivers → Drivers & Software → Search for the server model (e.g. ProLiant DL380 G5) → Select the server series → Select operating system = VMware ESXi 5.0 → Find “* RECOMMENDED* HP ESXi Offline Bundle for VMware ESXi 5.0” → Download

Note: If you have problems downloading the Offline Bundle (such as being prompted for an FTP username and password), a direct link taken from the page's HTML source code worked at the time of writing.
2) Use WinSCP or similar to copy the name_of_bundle.zip to, say, the /tmp directory on the ESXi 5.0 host server.

For WinSCP to work, check that SSH is running via the vSphere Client → Select Host → Configuration Tab → Security Profile → Services Properties → Start SSH

3) Put the host server into maintenance mode.

4) Use PuTTY or similar to establish an SSH connection to the host server, and run the command:

esxcli software vib install -d /tmp/name_of_bundle.zip
5) Reboot the host server.

The End!

Note: If you compare the 'Hardware Status' tab on the host server before and after installation of the VIBs, you will notice the expanded and enhanced list of sensors, including additional sensor categories for 'Battery' and 'Storage.'
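To double-check from the CLI that the HP VIBs are present, something like the following works (the grep pattern is just an illustration):
esxcli software vib list | grep -i hp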

Wednesday, 8 February 2012

Notes on Migrating from Compaq MSA1000 to VMware vSphere 5

The Compaq MSA1000 has been off VMware's Hardware Compatibility list since vSphere 4, with the last supported release being ESX 3.5 U5 (source - http://www.vmware.com/resources/compatibility .)

Fig. 1 – MSA1000 Supported Releases
Now, just because the MSA 1000 is not supported with vSphere 5 does not mean that it will not work with vSphere 5; indeed, the MSA 1000 does work with vSphere 5. The image below is taken from an ESXi 5.0 host; the datastores with the Device identified as "COMPAQ Fibre Channel Disk" are from the MSA 1000.

Fig. 2 - Storage > Configuration > Datastores View
One thing you need to take into account when migrating from VMware Virtual Infrastructure 3 (VI 3) to vSphere 5 is that datastores formatted with file system version VMFS 3.21 cannot be seen by ESXi 5.0 hosts (ESX 3.0.0 shipped with the initial VMFS 3 release, VMFS 3.21; ESX 3.5.0 with VMFS 3.31; ESX 4.0 with VMFS 3.33; ESX 4.1 with VMFS 3.46; ESXi 5.0 with VMFS 5.54; see VMware KB 1005325). This means that any datastores created by ESX 3.0 hosts will not be visible when presented to an ESXi 5.0 host, whereas datastores created by an ESX 3.5 host will be visible (I have not seen this documented by VMware, only evidenced in a real-world situation, so do not totally take my word on this!)

Fig. 3 – Datastore properties showing a datastore formatted with VMFS 3.21

The particular migration behind this post involved a migration from a Compaq MSA 1000 to HP P2000 G3 MSA, and upgrade/rebuild of VMware VI 3 as VMware vSphere 5. The Implementation Plan included the following steps:

1) Evacuate all VM guests from the ESX 3.5 U5 host to be rebuilt as ESXi 5.0, shut down the host, and remove it from the VirtualCenter 2.5 U6 server.
2) Install new 8 Gb FC HBAs alongside the existing 4 Gb FC HBAs in the HP ProLiant DL380 G5 host server.
3) Check BIOS settings: Static High Performance setting enabled, No-Execute Memory Protection enabled, and Hardware Assisted Virtualization enabled.
4) Upgrade host server firmware.
5) Install ESXi 5.0 and configure host.
6) Install vSphere 5 vCenter Server with Update Manager on a new Windows 2008 R2 server.
7) Use vCenter Update Manager to apply critical patches.
8) Use vCenter Update Manager to apply the "HP P2000 Software Plug-in for VMware VAAI vSphere 5.0". *Google the terms between the quotation marks to find the download page with installation instructions; the latest version at the time of writing was 2.00 (28 Oct 2011), file name hp_vaaip_p2000_p210.zip
9) Install the "HP ESXi Offline Bundle for VMware ESXi 5.0". *Google the terms between the quotation marks to find the download page with installation instructions; the latest version at the time of writing was 1.1 (16 Dec 2011), file name hp-esxi5.0uX-bundle-1.1-37.zip
10) Configure FC SAN environment such that ESXi 5.0 host can see old MSA 1000 LUNs and new HP P2000 G3 LUNs.
11) Unregister guests from ESX 3.5 U5 host, register with ESXi 5.0 host, and perform storage vMotion from MSA 1000 FC SAN to HP P2000 G3 FC SAN.
12) Repeat appropriate steps for subsequent host servers to be upgraded.

Note 1: For guests on VMFS 3.21 LUNs, these were first migrated to a VMFS 3.31 LUN across an ESX 3.5 U5 host.
Note 2: Live Storage vMotion from VMFS 3 LUNs on an MSA 1000, to VMFS 5 LUNs on a HP P2000 G3, is perfectly possible – they worked fine!
Note 3: VMware tools can be upgraded directly from VI 3 to vSphere 5.

Final Word

I would not recommend running live production VMware vSphere 5 guest machines on top of a Compaq MSA1000 SAN. It will work, but if there is any problem, VMware support are well within their rights to turn around and say "this is unsupported in vSphere."

Saturday, 4 February 2012

Windows User Profile Design: Architects Quick Reference Notes

Three User Profile Types:
1. Mandatory Profiles
       Use for kiosk systems
2. Local Profiles
       Use for users who do not switch computers often
       Use for computers without permanent network connectivity (e.g. laptops)
3. Roaming Profiles / Terminal Services Profiles
       Use for setups where local profiles are not suitable

Note: If Terminal Services is installed on a server, the TS profile path is determined first; if there is no TS profile path, it falls back to the roaming profile, and if there is no roaming profile path, it falls back to using a local profile.

Incompatible Versions of User Profiles
V1 profiles on all versions of NT up to XP and Server 2003
V2 profiles on Vista and newer versions of Windows

A major reason why Microsoft introduced V2 profiles
V2 profiles offer many more options for folder redirection than V1 profiles

List of the four V1 profile folder redirection options (see User Configuration > Windows Settings > Folder Redirection):
1. Application Data
2. Desktop
3. My Documents
4. Start Menu
Fig 1. V1 Folder Redirection options (image taken from a Windows Server 2003 DC)
List of the thirteen V2 profile folder redirection options (see User Configuration > Policies > Windows Settings > Folder Redirection):
1. AppData(Roaming)
2. Desktop
3. Start Menu
4. Documents
5. Pictures
6. Music
7. Videos
8. Favourites
9. Contacts
10. Downloads
11. Links
12. Searches
13. Saved Games
Fig 2. V2 Folder Redirection options (image taken from a Windows Server 2008R2 DC)
Advantages of Folder Redirection
1. In environments where roaming profiles are not cached locally (i.e. most terminal server farms), logon times can be greatly reduced by redirecting folders containing large files or large numbers of small files.
2. In environments where multiple profiles exist per user, folders are typically redirected to a single location per user.

Disadvantages of Folder Redirection
1. Network utilization is much higher (because profile files in redirected folders are no longer locally cached).
2. Increased load on file servers containing the redirected profile folders
3. Increased file I/O latency with redirected profile folder files

The “last writer wins problem”
Roaming profiles and terminal services profiles can suffer from what is known as the "last writer wins" problem: if a user has several parallel sessions, only the registry of the last session to close will persist, since all local copies of NTUSER.DAT are stored in only one place on the central file server. *Third-party products like Citrix User Profile Management can overcome this.

User Profiles Rules of Thumb
1. Use as few profiles per user as possible, but as many as necessary
2. Use one profile per platform
3. Use different profiles for 32-bit and 64-bit versions of Windows
4. Do not use the same profile on workstations and terminal servers
5. V1 and V2 profiles are not interchangeable
6. To overcome the “last writer wins” problem, use one profile per silo in terminal server farms

Assigning User Profiles
1. Using group policies (recommended)

Windows Server 2003 Group Policy Object Editor Path:
Computer Configuration > Administrative Templates > Windows Components > Terminal Services : Set path for TS Roaming Profiles

Windows Server 2008 R2 Group Policy Management Editor Path:
Computer Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Profiles : Set path for Remote Desktop Services Roaming User Profile

2. In the attributes of the Active Directory user objects

Advanced Profile Management
A list of some products that can enhance the capability, efficiency and manageability of Windows User Profiles:



Credits and Further Reading:
http://blogs.sepago.de/helge/2009/01/14/user-profile-design-a-primer/ (an absolutely outstanding post from Helge Klein upon which this post is unashamedly based)