NetApp NS0-502 Study Notes Part 3/4: SAN Implementation

3. SAN IMPLEMENTATION
Implement a Storage Area Network:
1. Discover the Target Portal
2. Create a session
3. Create an igroup
4. Create a Logical Unit
5. Map a LUN
6. Find the disk
7. Prepare the disk
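
A minimal walk-through of these steps for an iSCSI LUN presented to a Windows host, assuming Data ONTAP 7-Mode syntax; the addresses, IQNs, names, and sizes below are placeholders:
iscsicli QAddTargetPortal 192.168.1.50 : on the host, discover the target portal (step 1)
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678 : log in to the target to create a session (step 2)
igroup create -i -t windows ig_host1 iqn.1991-05.com.microsoft:host1 : on the controller, create an iSCSI igroup (step 3)
lun create -s 100g -t windows /vol/vol1/lun0 : create a logical unit (step 4)
lun map /vol/vol1/lun0 ig_host1 0 : map the LUN to the igroup as LUN ID 0 (step 5)
Steps 6 and 7 are performed on the host: rescan in Disk Management to find the new disk, then initialize, partition, and format it.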

3.1 Prepare site for installation.

3.1.1 Be able to review implementation flowchart with customer and assign task areas.
3.1.2 Make contact with datacenter/site personnel.
3.1.3 Validate equipment move path to installation location.
3.1.4 Verify site infrastructure including: dual power (and proximity of power drops to the install location), floor space, floor loading plan, and HVAC (heating, ventilation, and air conditioning).
3.1.5 Validate logistics plan for staging and installation of equipment.
3.1.6 Verify Ethernet cabling plan and availability of cable supports.
3.1.7 Verify fiber cabling plan and availability of cable supports.

3.2 Following the rack diagram, install systems and FC switches.

3.3 Perform basic power on tests for all equipment.
Power on order: 1) Network Switches. 2) Disk Shelves. 3) Any Tape Backup Devices. 4) NetApp Controller Heads.
If a NetApp storage system fails on its first boot, check the LCD and console for a description of the problem, and follow the instructions.

3.4 Configure NetApp storage systems (stage 1).
After the initial installation of a NetApp FAS3000 storage system, run all diagnostics to exercise a comprehensive set of tests on the entire system.

3.4.1 Update firmwares and software versions to the latest/required versions.
3.4.2 Configure controller name.
3.4.3 Configure controller failover.
3.4.4 Configure multipath HA and verify cabling.
*The FAS6020 NVRAM uses InfiniBand for the HA interconnect.

3.4.5 Perform ALUA controller configuration.
ALUA (Asymmetric Logical Unit Access) is a set of standard SCSI commands for discovering and managing multiple paths to LUNs over Fibre Channel and iSCSI SANs. It allows the initiator to query the target about path attributes, such as which path is primary and which is secondary, and it removes the need for proprietary SCSI commands.
Disable ALUA on igroups for hosts that connect using the NetApp Data ONTAP DSM.
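
A minimal sketch of checking and setting ALUA on an igroup in Data ONTAP 7-Mode (the igroup name is a placeholder):
igroup show -v ig_host1 : display the igroup, including its current ALUA setting
igroup set ig_host1 alua yes : enable ALUA on the igroup
igroup set ig_host1 alua no : disable ALUA (for example, when the host uses the NetApp DSM)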

3.4.6 Configure FC interfaces utilizing fcadmin.
3.4.7 Configure Ethernet interfaces with IP addresses defined in plan.
3.4.8 Configure interfaces for iSCSI.
3.4.9 Configure CHAP.
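
A sketch of steps 3.4.6 through 3.4.9 on a 7-Mode controller; port names, addresses, and CHAP credentials are placeholders:
fcadmin config -t target 0a : set onboard FC port 0a to target mode (takes effect after a reboot)
fcadmin config : verify the mode and state of the onboard FC ports
ifconfig e0a 192.168.1.50 netmask 255.255.255.0 : assign the planned IP address to an Ethernet interface
iscsi interface enable e0a : allow iSCSI traffic on the interface
iscsi security add -i iqn.1991-05.com.microsoft:host1 -s CHAP -p secretpass1 -n host1 : define inbound CHAP credentials for an initiator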

3.5 Configure FC switches.
*Brocade FC switches always use hardware-enforced zoning. Cisco FC switches fall back to software-enforced zoning when hard and soft zoning are mixed on the switch.
Each FC switch in a fabric must have a unique domain ID (starting the numbering at 10 is recommended, to avoid reserved IDs).

Brocade Switches
Web Tools : GUI tool to manage Brocade switches.
cfgshow : CLI command to display all defined zone information.
configshow : CLI command to verify the switch configuration.
fabricshow : CLI command to display which Brocade switches are connected in a fabric.
supportshow : CLI command to collect detailed diagnostic information from a Brocade FC switch.
switchshow : CLI command to view the nodes currently connected to the switch.

Cisco Switches
Cisco Fabric Manager : Native GUI switch tool for managing Cisco MDS-Series switches and directors.
show zoneset : CLI command to view all defined zone configuration information.
show zoneset active : CLI command to display currently active zoneset.
show version : CLI command to collect information about the firmware version.

3.5.1 Configure basic switch settings (IP address, switch name).
3.5.2 Configure zoning as defined by implementation plan.
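
As an illustration, a single-initiator zone on a Brocade switch could be defined as follows (aliases and WWPNs are placeholders):
alicreate "host1_hba0", "10:00:00:00:c9:6b:84:02" : alias for the host HBA port
alicreate "ntap1_0a", "50:0a:09:81:86:f7:c9:00" : alias for the NetApp target port
zonecreate "z_host1_ntap1", "host1_hba0; ntap1_0a" : single-initiator zone containing both aliases
cfgcreate "cfg_fabric_a", "z_host1_ntap1" : add the zone to a zone configuration
cfgenable "cfg_fabric_a" : activate the configuration on the fabric
cfgsave : save the defined configuration to flash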

3.6 Configure Ethernet switches.
Default Ethernet packet size = 1500 bytes of payload (the MTU, or Maximum Transmission Unit).
Jumbo frame industry-standard packet size = 9000 bytes of payload (MTU).
When setting up the Ethernet switches for jumbo frames, the following components need to be configured: 1) Ethernet port on host system. 2) Ethernet port on storage device. 3) Ethernet switch ports being used.
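
A sketch of the storage-side setting plus an end-to-end check (interface name and addresses are placeholders); the host NIC and the switch ports must be set to a matching MTU:
ifconfig e0a mtusize 9000 : set jumbo MTU on the controller interface (add mtusize 9000 to the interface line in /etc/rc to persist across reboots)
ping -f -l 8972 192.168.1.50 : from a Windows host, send a do-not-fragment ping with an 8972-byte payload (9000 minus 20 bytes IP header and 8 bytes ICMP header) to verify jumbo frames pass end to end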

3.6.1 Configure basic switch settings (IP addresses, switch name).
3.6.2 Configure and validate VLANs.
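
A minimal 7-Mode VLAN sketch (VLAN ID and addresses are placeholders):
vlan create e0a 192 : create the tagged VLAN interface e0a-192
ifconfig e0a-192 192.168.192.50 netmask 255.255.255.0 : assign an IP address to the VLAN interface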

3.7 Configure NetApp storage systems (stage 2).

3.7.1 Connect NetApp systems to switches (FC and Ethernet).
3.7.2 Configure and validate aggregates (including RAID groups), volumes, LUNs, and qtrees.
Flexvol Space Management Policies
1) Guarantee =
none (thin provisioning)
file option (space is allocated from the aggregate when space-reserved files, such as a space-reserved LUN, are created)
volume option (thick provisioning)
2) LUN Reservation =
on (thick provisioned)
off (thin provisioned)
*Set a LUN's reservation to on to guarantee it cannot be affected by other LUNs in the volume.
3) Fractional_reserve =
? % (fraction of the volume's size reserved on top of the volume for overwrites of space-reserved files, used once volume Snapshot copies exist)
4) Snap_reserve =
? % (Fraction of volume's size reserved inside for snapshots)
*Set Snapshot Reserve to 0% for a volume holding LUNs.
5) Auto_delete =
volume / on / off (allows a flexible volume to automatically delete snapshots in the volume)
6) Auto_grow =
on / off (allows a flexible volume to automatically grow in size within an aggregate)
7) Try_first =
snap_delete / volume_grow (Controls the order in which the two reclaim policies (snap autodelete and vol autosize) are used).
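
As an illustration, a thin-provisioned LUN volume configured according to these policies might look like this in 7-Mode (names and sizes are placeholders):
vol create vol_lun aggr1 500g : create a flexible volume
vol options vol_lun guarantee none : thin provision the volume
snap reserve vol_lun 0 : set snapshot reserve to 0% per LUN best practice
vol autosize vol_lun -m 800g on : allow the volume to grow automatically up to 800 GB
snap autodelete vol_lun on : allow automatic snapshot deletion
vol options vol_lun try_first volume_grow : grow the volume before deleting snapshots
lun create -s 400g -t windows -o noreserve /vol/vol_lun/lun0 : create a thin (non-space-reserved) LUN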

NetApp deduplication (ASIS) is enabled at the volume level.
The following commands enable NetApp deduplication, deduplicate existing data, and verify the space savings (the volume path is an example):
sis on /vol/vol1
sis start -s /vol/vol1
df -s /vol/vol1

3.7.3 Configure portsets for later attachment to igroups according to plan.
A portset is a collection of target ports that can be bound to an igroup; it limits the target ports on which the igroup's LUNs are visible.
The portset family of commands manages portsets.
portset create { -f | -i } portset_name [ port ... ]
Creates a new portset.
If the -f option is given, an FCP portset is created.
If the -i option is given, an iSCSI portset is created.
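
For example (controller and port names are placeholders):
portset create -f ps_fab_a ctrl1:0a ctrl2:0a : create an FCP portset with one target port from each controller
portset show : verify the portset and its member ports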

3.8 Configure hosts (stage 1).

3.8.1 Validate host hardware configuration.
3.8.2 Ensure that the correct PCI cards are installed in the correct location on the host.
3.8.3 Install host utility kits on all hosts.
NetApp Host Utilities Kits perform the following functions: 1) They provide properly set disk and HBA timeout values. 2) They identify and set path priorities for NetApp LUNs.

3.8.4 Configure the host Ethernet interfaces for iSCSI.
3.8.5 Configure the Internet Storage Name Service (iSNS).
3.8.6 Configure CHAP on hosts.
3.8.7 Configure host FC interfaces.
Performance tuning parameters for Fibre Channel HBAs on a host: LUN queue depth and Fibre Channel speed.

3.8.8 Configure hosts to Ethernet and FC switches.
3.8.9 Install SnapDrive. For SnapDrive for Windows (SDW), ensure the service account is a member of the built-in Administrators group; if the system is part of a domain, the service account must be a domain account that is also a member of the local Administrators group on the host. Install and administer SnapDrive for UNIX (SDU) using the root account.
A host-based volume manager can be used to create a striped LUN across multiple NetApp controllers.
NetApp ASL = NetApp Array Support Library
Veritas DMP = Veritas Volume Manager with Dynamic Multi-Pathing (DMP)

3.9 Configure NetApp storage systems (stage 3).

3.9.1 Create igroups and perform LUN management for hosts without SnapDrive.
3.9.2 Attach portsets to igroups.
Use igroup throttle on a NetApp storage solution to: 1) Assign a specific percentage of queue resources on each physical port to the igroup. 2) Reserve a minimum percentage of queue resources for a specific igroup. 3) Limit number of concurrent I/O requests an initiator can send to the storage system. 4) Restrict an igroup to a maximum percentage of use.

3.9.3 Set alias for the WWPN and controllers.
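
A sketch of 3.9.2 and 3.9.3 in 7-Mode (names and the WWPN are placeholders):
igroup bind ig_host1 ps_fab_a : bind the portset to the igroup, so its LUNs are visible only on the portset's target ports
fcp wwpn-alias set host1_hba0 10:00:00:00:c9:6b:84:02 : assign a friendly alias to an initiator WWPN
fcp wwpn-alias show : list the defined aliases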

3.10 Configure hosts (stage 2).

3.10.1 Configure host multipathing for both FC and iSCSI.
The Data ONTAP DSM for Windows can handle FCP and iSCSI paths to the same LUN.

3.10.2 Perform ALUA configuration tasks.
A LUN can be mapped to an ALUA-enabled igroup and a non-ALUA-enabled igroup.

3.10.3 Check for LUN misalignment; check that the LUN and host parameters are properly matched.
The starting offset of the first file system block on the host can affect LUN I/O alignment.
Perfstat counters that expose unaligned I/O (I/O that does not fall on a WAFL block boundary): 1) wp.partial_write. 2) read/write_align_histo.XX. 3) read/write_partial_blocks.XX.
It is important to select the correct LUN type (e.g., VMware, Solaris, ...). The LUN type determines the LUN's starting offset and the size of the prefix and suffix; these differ between operating systems.
To prevent LUN I/O alignment issues with Raw Device Mappings (RDMs) in an ESX environment, use the LUN type matching the guest operating system.

3.10.4 Create snapshot schedule for each host according to implementation plan.
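
A common baseline is to disable the controller's default snapshot schedule on LUN volumes, since controller-scheduled snapshots of a live LUN are only crash-consistent; host-consistent snapshots are then scheduled through SnapDrive (the volume name is a placeholder):
snap sched vol_lun 0 0 0 : turn off the default weekly/daily/hourly snapshot schedule
snap sched vol_lun : verify the schedule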

3.11 Perform SAN implementation tasks within virtualized environments utilizing SAN best practices.
On ESX 4.0 with guest operating systems in a Microsoft cluster configuration, the Path Selection Policy (PSP) for an MSCS LUN should be set to: MRU (Most recently used).
On ESX 4.0 for access to a VMFS datastore created on a NetApp LUN under an ALUA configuration, the Path Selection Policy (PSP) should be set to: RR (Round Robin).
For an FCP VMFS datastore created on a NetApp LUN without ALUA: in ESX 4.0 use the VMW_SATP_DEFAULT_AA SATP (Storage Array Type Plug-in).
NetApp supports the following Storage Array Type Plug-ins (SATP): VMW_SATP_ALUA and VMW_SATP_DEFAULT_AA.
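
A sketch of the corresponding ESX 4.x commands (the device identifier is a placeholder):
esxcli nmp device list : list devices with their current SATP and PSP
esxcli nmp device setpolicy --device naa.60a98000486e2f34 --psp VMW_PSP_RR : set Round Robin on one NetApp LUN
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR : make Round Robin the default PSP for LUNs claimed by the ALUA SATP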

3.11.1 Identify VM best practices with regard to data deduplication.
3.11.2 Identify VM best practices with regard to thin provisioning.
3.11.3 Identify VM best practices with regard to alignment issues.
VMware ESX: virtual machine disk alignment issues can occur on any protocol including FCP, iSCSI, FCoE, and NFS.

3.11.4 Identify VM best practices with regard to backup and recovery.
3.11.5 Determine the type of switch firmware required to support NPIV.
To make a LUN visible to an NPIV-supported VM (N_Port ID Virtualization = a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port): 1) Create an igroup on the NetApp controller containing the NPIV WWPNs. 2) Map a LUN to the igroup containing the NPIV WWPNs. 3) Create a zone containing NetApp target ports and NPIV WWPNs.
Only RDM (Raw Device Mapping) disks can be used with NPIV.

3.12 FCoE and Unified Connect Enabling Technologies.
*FCoE requires Jumbo Frames because the FC payload is 2112 bytes and cannot be broken up.
Benefits of FCoE: Compatibility with existing FC deployments and management frameworks, 100% Application Transparency, and High Performance.

3.12.1 Identify Ethernet segments using 802.1Q (VLANs).
3.12.2 Describe bandwidth priority classes (QoS).
Bandwidth Management Based on Class of Service (IEEE 802.1Qaz): consistent management of Quality of Service at the network level through consistent scheduling (by default the NetApp UTA uses Ethernet priority 3 for FCoE).
*The default split is 50% for FCoE traffic and 50% for other traffic (80% for FCoE traffic is recommended).

3.12.3 Define Data Center Bridging (DCB).
Data Center Bridging Exchange (DCBX): Management protocol for Enhanced Ethernet capabilities.

3.12.4 Define what is Lossless Ethernet (PAUSE Frame).
Lossless Ethernet (IEEE 802.1Qbb): Ability to have multiple traffic types share a common Ethernet link without interfering with each other (uses PAUSE frame command to control flow).
*Lossless Ethernet is a requirement for FCoE.

3.12.5 VN_ports, VF_ports and VE_ports.
What is a VE_port / VF_port / VN_port?
Called logical (virtual) ports because many logical (virtual) ports can share one physical port.
VE_Port = logical (virtual) E_Port
E_Port = The "Expansion" port within a Fibre Channel switch that connects to another Fibre Channel switch or bridge device via an inter-switch link.
VF_Port = logical (virtual) F_Port
F_Port = The "Fabric" port within a Fibre Channel fabric switch that provides a point-to-point link attachment to a single N_Port.
VN_Port = logical (virtual) N_Port
N_Port = A "Node" port that connects via a point-to-point link to either a single N_Port or a single F_Port.

3.13 FCoE and Unified Connect Hardware.
*FCoE Topologies: Fabric or network topology is the only supported attachment configuration.
*FCoE connects over the interconnect.
Data ONTAP 8.0.1 was the first release to allow both FCoE and traditional Ethernet protocols over the same UTA.

3.13.1 Identify supported Converged Network Adapters (CNA).
3.13.2 Identify supported Unified Target Adapters (UTA).
3.13.3 Identify supported switches.
3.13.4 Jumbo frame configuration.
3.13.5 Switch configuration including ports, VLAN, VSAN (Cisco) and QoS.
3.13.6 Data ONTAP configuration including fcp topology, fcp zone show, cna show.
3.13.7 Initiator configuration.
3.13.8 FC to FCoE.
3.13.9 NAS protocols over CNA and UTA adapters.
