Thursday, 27 June 2013

How to configure NetApp vFilers in a DMZ Context for VMware SRM 5

In the following post, we walk through the steps to set up DMZ vFilers and SnapMirror replication to work with VMware Site Recovery Manager 5. We will set up the vfilers vfiler_lon_dmz at the production site and vfiler_frk_dmz at the DR site. Remember, vFiler DR is not supported as an SRM array pairing (both arrays need to be online!)

The diagram below gives an idea of the vfiler_lon_dmz IP configuration at Site A. Since this was run in a lab environment, the option to use VLANs was not available - which would not be the case in real life - so dedicated interfaces are used instead.
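
For reference, a hedged sketch of the VLAN-based alternative (the ifgrp name and VLAN ID below are hypothetical) would be to tag a replication VLAN onto an interface group and assign the resulting VLAN interface to the ipspace:

vlan create ifgrp0 101
ipspace assign ipspace_dmz_vfiler ifgrp0-101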

Image: vfiler_lon_dmz IP Addressing

The DMZ network is not routable, but it is presented to the ESX hosts.
Two volumes - v_lon_dmz_vol1 and v_lon_dmz_vol2 - will be replicated by SnapMirror for SRM.

A listing of the systems that will make up this lab:

Site A (London)
LONDMC01 - Domain Controller
LONNTP01 - NetApp Sim 8.1.2
+ with vfiler_lon_dmz
LONVCS01 - vCenter Server & SRM & VSC
LONESX01 - ESXi Host

Site B (Frankfurt)
FRKDMC01 - Domain Controller
FRKNTP01 - NetApp Sim 8.1.2
+ with vfiler_frk_dmz
FRKVCS01 - vCenter Server & SRM & VSC
FRKESX01 - ESXi Host

A listing of the IP addresses used on the storage:

Site A (London)
10.0.1.25 Mgmt (e0a)
10.0.1.31 Vfiler Mgmt (e0b)
192.168.101.31 Vfiler DMZ (e0c)
10.1.0.31 Vfiler Replication (e0d)

Site B (Frankfurt)
10.0.2.25 Mgmt (e0a)
10.0.2.31 Vfiler Mgmt (e0b)
192.168.102.31 Vfiler DMZ (e0c)
10.2.0.31 Vfiler Replication (e0d)

Note: The choice of London and Frankfurt here is completely arbitrary and has no relation to any real-world production environment!

PART 1: Configuring dmz vfiler on LONNTP01

## Licensing multistore and enabling
license add MULTISTORE_CODE
options licensed_feature.multistore.enable on

## Downing interfaces for the dmz_vfiler after removing any assigned IPs
ifconfig e0b 0.0.0.0
ifconfig e0b down
ifconfig e0c 0.0.0.0
ifconfig e0c down
ifconfig e0d 0.0.0.0
ifconfig e0d down

## Creating the ipspace
ipspace create ipspace_dmz_vfiler
ipspace assign ipspace_dmz_vfiler e0b
ipspace assign ipspace_dmz_vfiler e0c
ipspace assign ipspace_dmz_vfiler e0d

## Creating the vfiler
vol create v_lon_dmz_root -s none aggr2 1g
vfiler create vfiler_lon_dmz -s ipspace_dmz_vfiler -i 10.0.1.31 /vol/v_lon_dmz_root

Running Through the Create Script on LONNTP01’s dmz vfiler

Configure vfiler IP address 10.0.1.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0b
Netmask to use: [255.255.255.0]:
Please enter the name or IP address of the administration host:
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]:
New password:
Retype new password:
Do you want to setup CIFS? [y]: n

Creating/Adding Additional Volumes and IP Addresses

vol create v_lon_dmz_local -s none aggr2 10g
vol create v_lon_dmz_vol1 -s none aggr2 10g
vol create v_lon_dmz_vol2 -s none aggr2 10g
vfiler add vfiler_lon_dmz /vol/v_lon_dmz_local
vfiler add vfiler_lon_dmz /vol/v_lon_dmz_vol1
vfiler add vfiler_lon_dmz /vol/v_lon_dmz_vol2
vfiler add vfiler_lon_dmz -i 192.168.101.31
vfiler add vfiler_lon_dmz -i 10.1.0.31
vfiler run vfiler_lon_dmz setup

Running Through the Setup Script on LONNTP01’s dmz vfiler

===== vfiler_lon_dmz
The setup command will rewrite the /etc/exports, /etc/hosts, /etc/hosts.equiv, /etc/nsswitch.conf, and /etc/resolv.conf files …
Are you sure you want to continue? [yes]
Change binding for vfiler IP address 10.0.1.31? [n]:
Configure vfiler IP address 192.168.101.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0c
Netmask to use: [255.255.255.0]:
Configure vfiler IP address 10.1.0.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0d
Netmask to use: [255.255.255.0]:
Please enter the name or IP address of the administration host:
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]:

Note: It is very important to remember that re-running vfiler setup rewrites the /etc/exports, /etc/hosts, /etc/hosts.equiv, /etc/nsswitch.conf, and /etc/resolv.conf files - if you already had any of these set up, the contents must be restored from the .bak files!
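
If you do need to recover a previous exports file, a hedged sketch (using this lab's vfiler root volume path; the re-appended export line is purely an example) would be to review the backup and re-append whatever is still required:

rdfile /vol/v_lon_dmz_root/etc/exports.bak
wrfile -a /vol/v_lon_dmz_root/etc/exports /vol/v_lon_dmz_vol1 -sec=sys,rw=192.168.101.0/24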

PART 2: Configuring dmz vfiler on FRKNTP01

## Licensing multistore and enabling
license add MULTISTORE_CODE
options licensed_feature.multistore.enable on

## Downing interfaces for the dmz_vfiler after removing any assigned IPs
ifconfig e0b 0.0.0.0
ifconfig e0b down
ifconfig e0c 0.0.0.0
ifconfig e0c down
ifconfig e0d 0.0.0.0
ifconfig e0d down

## Creating the ipspace
ipspace create ipspace_dmz_vfiler
ipspace assign ipspace_dmz_vfiler e0b
ipspace assign ipspace_dmz_vfiler e0c
ipspace assign ipspace_dmz_vfiler e0d

## Creating the vfiler
vol create v_frk_dmz_root -s none aggr2 1g
vfiler create vfiler_frk_dmz -s ipspace_dmz_vfiler -i 10.0.2.31 /vol/v_frk_dmz_root

Running Through the Create Script on FRKNTP01’s dmz vfiler

Configure vfiler IP address 10.0.2.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0b
Netmask to use: [255.255.255.0]:
Please enter the name or IP address of the administration host:
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]:
New password:
Retype new password:
Do you want to setup CIFS? [y]: n

Creating/Adding Additional Volumes and IP Addresses

vol create v_frk_dmz_local -s none aggr2 10g
vol create v_lon_dmz_vol1 -s none aggr2 10g
vol create v_lon_dmz_vol2 -s none aggr2 10g
vfiler add vfiler_frk_dmz /vol/v_frk_dmz_local
vfiler add vfiler_frk_dmz /vol/v_lon_dmz_vol1
vfiler add vfiler_frk_dmz /vol/v_lon_dmz_vol2
vfiler add vfiler_frk_dmz -i 192.168.102.31
vfiler add vfiler_frk_dmz -i 10.2.0.31
vfiler run vfiler_frk_dmz setup

Running Through the Setup Script on FRKNTP01’s dmz vfiler

===== vfiler_frk_dmz
The setup command will rewrite …
Are you sure you want to continue? [yes]
Change binding for vfiler IP address 10.0.2.31? [n]:
Configure vfiler IP address 192.168.102.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0c
Netmask to use: [255.255.255.0]:
Configure vfiler IP address 10.2.0.31? [y]:
Interface to assign this address to {e0b, e0c, e0d}: e0d
Netmask to use: [255.255.255.0]:
Please enter the name or IP address of the administration host:
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]:

PART 3: Further Configuration of dmz vfiler on LONNTP01

# Check IP addresses are assigned correctly
vfiler status -r

# Change context to the dmz vfiler
vfiler context vfiler_lon_dmz

# Create a route for replication traffic
route add host 10.2.0.31 10.1.0.1 1

PART 4: Further Configuration of dmz vfiler on FRKNTP01

# Check IP addresses are assigned correctly
vfiler status -r

# Change context to the dmz vfiler
vfiler context vfiler_frk_dmz

# Create a route for replication traffic
route add host 10.1.0.31 10.2.0.1 1

PART 5: Test connectivity

# From vfiler_lon_dmz@LONNTP01
ping 10.2.0.31

# From vfiler_frk_dmz@FRKNTP01
ping 10.1.0.31

PART 6: Update the configuration files

## Update rc file to make routes persistent across reboots ##

# From LONNTP01>
rdfile /etc/rc
wrfile -a /etc/rc route add host 10.2.0.31 10.1.0.1 1

# From FRKNTP01>
rdfile /etc/rc
wrfile -a /etc/rc route add host 10.1.0.31 10.2.0.1 1

## Update hosts file for replication network host name resolution ##

# From LONNTP01>
rdfile /vol/v_lon_dmz_root/etc/hosts
wrfile -a /vol/v_lon_dmz_root/etc/hosts 10.1.0.31 vfiler_lon_dmz
wrfile -a /vol/v_lon_dmz_root/etc/hosts 10.2.0.31 vfiler_frk_dmz

# From FRKNTP01>
rdfile /vol/v_frk_dmz_root/etc/hosts
wrfile -a /vol/v_frk_dmz_root/etc/hosts 10.1.0.31 vfiler_lon_dmz
wrfile -a /vol/v_frk_dmz_root/etc/hosts 10.2.0.31 vfiler_frk_dmz

PART 7: Configure SnapMirror

# From LONNTP01>
vfiler context vfiler_lon_dmz
options snapmirror.access host=10.2.0.31
snapmirror on

# From FRKNTP01>
vfiler context vfiler_frk_dmz
options snapmirror.access host=10.1.0.31
snapmirror on
vol restrict v_lon_dmz_vol1
vol restrict v_lon_dmz_vol2
vfiler context vfiler0
wrfile -a /vol/v_frk_dmz_root/etc/snapmirror.conf vfiler_lon_dmz:v_lon_dmz_vol1 vfiler_frk_dmz:v_lon_dmz_vol1 - - - - -
wrfile -a /vol/v_frk_dmz_root/etc/snapmirror.conf vfiler_lon_dmz:v_lon_dmz_vol2 vfiler_frk_dmz:v_lon_dmz_vol2 - - - - -
vfiler context vfiler_frk_dmz
snapmirror initialize -S vfiler_lon_dmz:v_lon_dmz_vol1 vfiler_frk_dmz:v_lon_dmz_vol1
snapmirror initialize -S vfiler_lon_dmz:v_lon_dmz_vol2 vfiler_frk_dmz:v_lon_dmz_vol2
snapmirror status
snapmirror status -l

Note 1: The SnapMirror schedule is set to - - - - - here (we will let the VSC handle triggering of SnapMirror updates).
Note 2: After this stage, SnapMirror run from inside the DMZ vfilers' context should be working A-OK!
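
If you preferred Data ONTAP itself to drive the updates rather than the VSC, the last four fields of a snapmirror.conf entry take a cron-style schedule (minute hour day-of-month day-of-week); a hedged example of an hourly update for the first volume would be:

vfiler_lon_dmz:v_lon_dmz_vol1 vfiler_frk_dmz:v_lon_dmz_vol1 - 0 * * *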

PART 8: Final NetApp vFiler Configurations for SRM (if not done already)

# From vfiler_lon_dmz@LONNTP01>
options httpd.admin.enable on
options httpd.enable on

# From vfiler_frk_dmz@FRKNTP01>
options httpd.admin.enable on
options httpd.enable on

PART 9: Configuring SRM

Configuring SRM is beyond the scope of this post. All being well, you should be able to add both dmz vfilers as arrays, see the SnapMirror volume relationships, place some VMs on the storage and test!

Note 1: If you are using NetApp SRA 2.0.1.0 and receive the error “Element 'SourceDevices' is not valid for content model: (SourceDevice)”, updating the SRA to 2.0.1P2 as per http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=642115 fixes the issue.

Note 2: If this is for NFS datastores and you’re running into problems, double-check that your exports file is configured correctly - for instance, if you are mounting a qtree to VMware, be sure the qtree is referenced in the exports file. An example is below:

# Corrected LONNTP01 exports file with qtrees
wrfile /vol/v_lon_dmz_root/etc/exports
# Press ctrl-c to exit
/vol/v_lon_dmz_root -sec=sys,rw,anon=0
/vol/v_lon_dmz_vol1/q_lon_dmz_vol1 -sec=sys,rw=192.168.101.0/24
/vol/v_lon_dmz_vol2/q_lon_dmz_vol2 -sec=sys,rw=192.168.101.0/24

# Corrected FRKNTP01 exports file with qtrees
wrfile /vol/v_frk_dmz_root/etc/exports
# Press ctrl-c to exit
/vol/v_frk_dmz_root -sec=sys,rw,anon=0
/vol/v_lon_dmz_vol1/q_lon_dmz_vol1 -sec=sys,rw=192.168.102.0/24
/vol/v_lon_dmz_vol2/q_lon_dmz_vol2 -sec=sys,rw=192.168.102.0/24
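
Note also (a hedged reminder rather than part of the original walkthrough): after rewriting an exports file, re-export it from the owning vfiler context so the change takes effect, for example:

# From LONNTP01>
vfiler run vfiler_lon_dmz exportfs -a

# From FRKNTP01>
vfiler run vfiler_frk_dmz exportfs -a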

Sunday, 23 June 2013

NetApp Controller Scripts Overview for a DTA2800 FC SAN Migration

Introduction

This post covers setting up the aggregates, volumes, qtrees, LUNs, igroups, post-migration snapshots - all the NetApp FAS controller steps involved when migrating an FC SAN to NetApp. The script outlined here is written with a specific scenario in mind: we are starting from a newly installed NetApp FAS2240-4 HA pair after the initial setup script has been run. This is Data ONTAP 7-Mode.

Image: DTA2800 Data Migration Topologies - SAN-Attached Data Migration

Beginnings

Firstly: if you’ve run the NetApp DataCollector from http://synergy.netapp.com/disclaimer.htm, imported the information into Synergy, and completed the design, much of this script can be obtained as an output from Synergy.

Secondly: copying and pasting commands into PuTTY doesn’t always go smoothly; a much better method is to download/copy the script to your controller, and with CIFS this is very easy:

FAS1&2> cifs shares -add tmp /etc/tmp
FAS1&2> cifs access tmp root "Full Control"
FAS1&2> wrfile -a /etc/usermap.cfg DOMAIN\USER1 => root

Then simply map a drive to tmp, copy the script over and run with:

FAS1&2> source /etc/tmp/the_script

Note: From this point, all the commands below will need to be run on both heads (with modification for controller2).
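
To illustrate the 'modification for controller2' (the controller2 name here is just an assumed mirror of controller1's naming), the SAS aggregate creation below would become something like:

aggr create aggr_cont2_450sas -B 64 -t raid_dp -T SAS -r 11 22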

###########################
## SETTING UP AGGREGATES ##
###########################

# Here we have SAS and Flashpool (SATA + SSD) aggregates
# Creating SAS aggregate
aggr create aggr_cont1_450sas -B 64 -t raid_dp -T SAS -r 11 22

# Moving the root vol off SATA (root vol cannot be in a Flashpool!)
vol create newrootvol -s volume aggr_cont1_450sas 162g
ndmpd on
ndmpcopy /vol/vol0 /vol/newrootvol
vol options newrootvol root
reboot

## REBOOTS!!! ##

# Destroy old root vol and rename new
vol offline vol0
vol destroy vol0
vol rename newrootvol vol0

# Configure Flashpool
aggr rename aggr0 aggr_cont1_flashpool
aggr add aggr_cont1_flashpool 5@847
aggr options aggr_cont1_flashpool hybrid_enabled on
aggr add aggr_cont1_flashpool -T SSD 3

##########################
## LICENSING, FCP, TIME ##
##########################

# Repeat to add all licenses
license add LICENSECODE

# Configure FCP ports as target (FC SAN)
fcadmin config -d 1a
fcadmin config -t target 1a
fcadmin config -d 1b
fcadmin config -t target 1b
fcp start

# Set time, date, and NTP
date 1200
options timed.servers 10.10.10.11,10.10.10.12
options timed.enable on

####################################
## CREATING VOLUMES, QTREES, LUNS ##
####################################

# REPEAT for as many LUNs as required
# minra on for databases only
vol create          v_sql_data -s none aggr_cont1_450sas 420g
snap reserve        v_sql_data 0
vol options         v_sql_data fractional_reserve 0
snap sched          v_sql_data 0 0 0
vol options         v_sql_data convert_ucode on
vol options         v_sql_data create_ucode on
vol options         v_sql_data minra on
vol lang            v_sql_data en_US.UTF-8
qtree security /vol/v_sql_data ntfs
vol options         v_sql_data  try_first volume_grow
vol autosize        v_sql_data -m 500g -i 25g on
sis on         /vol/v_sql_data
qtree create   /vol/v_sql_data/q_sql_data
lun create -t windows_2008 -s 279g /vol/v_sql_data/q_sql_data/sql_data.lun

#######################################
## CREATING IGROUPS AND LUN MAPPINGS ##
#######################################

# DTA igroup
igroup create -t linux -f i_temp_dta 21:00:00:c0:dd:XX:XX:02 21:00:00:c0:dd:XX:XX:04
# Note: Similar required on the legacy storage to see the DTA

# LUN mappings (LUN, igroup, LUN ID)
lun map /vol/v_sql_data/q_sql_data/sql_data.lun i_temp_dta 11
# …
# REPEAT for as many LUN mappings as required

# Host Server igroups
igroup create -t windows -f sql 50:01:43:80:90:ab:cd:01 50:01:43:80:90:ab:cd:02
igroup set sql alua yes
# …
# REPEAT for as many host igroups as required
# Note: Hosts are mapped after the migration is complete

####################
## POST MIGRATION ##
####################

# Post Migration Snapshots
snap create -V v_sql_data postMigration
# …
# REPEAT for as many volumes as required

# Unmapping the DTA
lun unmap /vol/v_sql_data/q_sql_data/sql_data.lun i_temp_dta
# …
# REPEAT for as many LUN unmappings as required

# LUN Host mappings (LUN, igroup, LUN ID)
lun map /vol/v_sql_data/q_sql_data/sql_data.lun sql 11
# …
# REPEAT for as many LUN mappings as required

# Delete Post Migration Snapshots
snap delete -V v_sql_data postMigration
# …
# REPEAT for as many volumes as required

# Enabling autosupport (SMTP)
options autosupport.mailhost IP
options autosupport.from cont1@domain.com
options autosupport.to email1@domain.com,email2@domain.com
options autosupport.support.transport smtp
options autosupport.doit "Post DTA Migration"

# http.enable
options httpd.enable on
options httpd.admin.enable on

SQL Server Setup Considerations

The notes below were collated from a couple of SQL Server 2008R2 deployments on Windows 2008R2 (one was on Dell EqualLogic storage, the other on NetApp.)

Storage Considerations

Storage Allocation (Suggested):
LUN 1: OS
LUN 2: SQL Server Data (User Databases)
LUN 3: SQL Server Logs (Transaction Logs)
LUN 4: SQL Server TempDB (DB & Logs)

Optional Drives:
LUN 5: SQL Server Backups
LUN 6: SnapInfo (if using NetApp SnapManager for SQL - SMSQL)

Storage Configuration (Suggested Drive Letters):
C:\ = Operating System / Page File
D:\ = Default Instance Data disk
L:\ = Default Instance Log file disk
T:\ = Default Instance TempDB disk

Optional Drives:
Z:\ = Default Instance Backup disk
S:\ = SnapInfo (for NetApp SMSQL)

Note 1: The 4 system databases are master, model, msdb & tempdb (in the above design we would have the master, model and msdb system databases and their transaction logs on the OS drive.)
Note 2: Placing databases on one LUN and transaction logs on another gives improved SQL performance by separating the random I/O pattern of user databases from the sequential I/O pattern of the logs.
Note 3: The SnapInfo directory contains all backup set metadata. By default, the directory name is SMSQL_SnapInfo. When NetApp SMSQL performs a backup, a new backup set subdirectory is created under the SnapInfo directory.
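
As a rough illustration of how the suggested layout might land on NetApp 7-Mode storage, here is a hedged sketch only - the volume names, sizes, and aggregate below are hypothetical, following the same pattern as the migration script above:

# Hypothetical volumes for the data, log, TempDB and SnapInfo drives
vol create v_sql_data     -s none aggr1 420g
vol create v_sql_logs     -s none aggr1 150g
vol create v_sql_tempdb   -s none aggr1 100g
vol create v_sql_snapinfo -s none aggr1 100g
# Hypothetical LUNs presented to the SQL host as D:\, L:\, T:\ and S:\
lun create -t windows_2008 -s 300g /vol/v_sql_data/sql_data.lun
lun create -t windows_2008 -s 100g /vol/v_sql_logs/sql_logs.lun
lun create -t windows_2008 -s 75g /vol/v_sql_tempdb/sql_tempdb.lun
lun create -t windows_2008 -s 75g /vol/v_sql_snapinfo/snapinfo.lun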

Other Suggestions

General:
The Model database can be configured with settings which will be used for all new databases.
TempDB best practices are to have 1 TempDB file per Processor Core.
Maintenance Plans - Check integrity of all databases and indexes @ Sunday 00:00

Database Default Settings:
Page_Verify = Checksum
Auto_Shrink = OFF
Owner = SA

Policy based management - enabled policies evaluated daily @ 00:00:
Data and Log File Location = Checks to ensure that Data and Log Files are not in the same location
Database Auto Close = Checks to see that Auto_Close is not enabled on databases
Database Auto Shrink = Checks to see that Auto_Shrink is not enabled on databases
Database Page Verification = Checks that Page_Verify is set to Checksum on all databases
Surface Area Configuration for Database Engine 2008 Features = Checks that Database mail is enabled

Further Reading (aimed at NetApp SMSQL deployments)

 ‘Microsoft SQL Server and NetApp SnapManager for SQL Server on NetApp Storage Best Practices Guide’ - February 2013 | TR-4003

SnapDrive for Windows documentation:

SnapManager for Microsoft SQL Server documentation:

Data ONTAP 8 documentation:

Technet - Microsoft SQL Server:

‘NetApp Disaster Recovery Solution for Microsoft SQL Server’ - January 2010 | TR-3604

‘Microsoft SQL Server Relational Engine: Storage Fundamentals for NetApp Storage’ - January 2011 | TR-3693

Image: Design Example 3 (of 5) from p28 - Separate System DBs, TempDB, User DBs, User Logs, SnapInfo

Saturday, 22 June 2013

Configuring DMZ Vfilers for VMware Site Recovery Manager

Scenario

We have two DMZ vFilers - one in Site A and one in Site B. These DMZ vFilers already belong to an ipspace and have two IP addresses each - one for management and one for the DMZ. Now, VMware Site Recovery Manager cannot use vFiler DR, so we need to configure SnapMirror in the vFiler context. Slight problem: the DMZ network is not routable between sites, and we are not permitted to use the management network. What we need is a special VLAN created for our site-to-site SnapMirror replication within the DMZ vFilers.

fas1 = Primary site NetApp FAS series controller.
dr01 = DR site NetApp FAS series controller.

Note: This 4-year-old post http://virtualstorageguy.com/2009/04/17/site-recovery-manager-its-not-just-for-vms/ shows SRM using the script vFiler_SRM.pl. But ... NetApp TR-4064 (updated June 2012) says "The source and destination vFiler units must both be online. This means that SRM cannot be used to manage failover where the MultiStore vFiler DR capability is configured. When a vFiler unit is configured for vFiler DR capability, the destination vFiler unit is in an offline state; this is not supported for an SRM array."

Walkthrough

Creating the VLAN:
fas1&dr01> vlan create bond VLANID
Note: Here, ‘bond’ is an interface group (IFGRP) made up of physical NICs. VLANID is a number from 1 to 4095.

Add the VLAN to the ipspace:
fas1&dr01> ipspace assign dmz_ipspace bond-VLANID
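
For example, taking the VLAN ID 1025 that appears in the vfiler setup output further below as a hypothetical concrete value:

fas1&dr01> vlan create bond 1025
fas1&dr01> ipspace assign dmz_ipspace bond-1025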

Add an IP address to the DMZ Vfilers:
fas1> vfiler add fas1_dmzvfiler -i REP_IP_ADDR
dr01> vfiler add dr01_dmzvfiler -i REP_IP_ADDR
Note: REP_IP_ADDR is an IP address on the replication network.

Run vfiler setup, and bind the IP address to the new VLAN:
fas1> vfiler run fas1_dmzvfiler setup
dr01> vfiler run dr01_dmzvfiler setup

Example of running vfiler setup:
===== fas1_dmzvfiler
The setup command will rewrite the /etc/exports, /etc/hosts, /etc/hosts.equiv, /etc/nsswitch.conf, and /etc/resolv.conf files …
Are you sure you want to continue? [yes]
Change binding for vfiler IP address 10.10.10.51? [n]: n
Change binding for vfiler IP address 10.10.99.51? [n]: n
Configure vfiler IP address 10.10.25.51? [y]: y
Interface to assign this address to {bond-1010, bond-1099, bond-1025}: bond-1025
Netmask to use: [255.255.255.0]:

Enter the dmzvfiler context:
fas1> vfiler context fas1_dmzvfiler
dr01> vfiler context dr01_dmzvfiler
Note: use 'vfiler context vfiler0' to return to the default vfiler


Create routes to the remote vfiler using the local replication network's default gateway:
fas1_dmzvfiler@fas1> route add host REP_IP_of_DR01_DMZVFILER LOCAL_REPL_NETWORK_DG 1
dr01_dmzvfiler@dr01> route add host REP_IP_of_FAS1_DMZVFILER LOCAL_REPL_NETWORK_DG 1

Ping to test connectivity:
fas1_dmzvfiler@fas1> ping REP_IP_of_DR01_DMZVFILER
dr01_dmzvfiler@dr01> ping REP_IP_of_FAS1_DMZVFILER

Write the routes to the RC file (so they won’t be lost on reboot):
fas1> wrfile -a /etc/rc route add host REP_IP_of_DR01_DMZVFILER LOCAL_REPL_NETWORK_DG 1
dr01> wrfile -a /etc/rc route add host REP_IP_of_FAS1_DMZVFILER LOCAL_REPL_NETWORK_DG 1

At this stage our DMZ Vfilers should be able to ping each other across the replication network!
The next step is to set up SnapMirror in the dmzvfiler context!
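
As a preview of that step, a hedged sketch following the same pattern as Part 7 of the walkthrough above (with this post's placeholder names):

fas1_dmzvfiler@fas1> snapmirror on
fas1_dmzvfiler@fas1> options snapmirror.access host=REP_IP_of_DR01_DMZVFILER
dr01_dmzvfiler@dr01> snapmirror on
dr01_dmzvfiler@dr01> options snapmirror.access host=REP_IP_of_FAS1_DMZVFILER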

To be continued …

Researches on NetApp OnCommand Balance Part 2 - Some Use Cases

Carrying on from the post of two months ago, here we expand the 'Use Cases' section.


Some Use Cases of OnCommand Balance:

Use Case 1: Find All Servers That Share Groups or Aggregates

Balance > Storage Arrays tab > Select the Storage Array
Performance Summary tab > Select the Disk Group or Aggregate
At the bottom of the disk group summary page, notice the breakdown of all the workloads that share the disk group or aggregate.

Data Topology tab >
For a more visual understanding of the workloads that share the disk group.

Use Case 2: Identify Application Workloads That Are Limited by CPU, Memory, or I/O Availability

Balance > Applications tab >
This page lists all the workloads and status indicators for their CPU, Memory, and Storage (I/O) resources. View the resource status icons.

Balance > Applications tab > Select the Application
To get a more detailed look at a specific application. The infrastructure response time graph shows the end-to-end response time for a workload, including its CPU and storage.

Data Topology tab >
This provides a visualization of the application workload's path to storage. If the workload is a database application (Microsoft SQL or Oracle), click the schema check box to view the full data path for each database element.

Balance > Reports > Scorecard Reports > Application Scorecard
The Application Scorecard provides response time, CPU, memory, and storage utilization information for application workloads. Generally, overall response times should not exceed 100ms!

Use Case 3: Identify Bully and Victim Servers

Open and read the Balance predictor email.
For further analysis, click the View full analysis link and view the workload breakdowns.

Use Case 4: Reclaim Storage and Identify Storage That Is Nearing Capacity

Balance > Reports > Standard Reports > Server Volume Capacity Forecast
Balance > Reports > Scorecard Reports
Balance > Reports > Reports Scheduling

To see volumetric capacity at the array level, use the Array Utilization Report.
To see capacity at the disk group or aggregate level, use the Storage Scorecard.
To see current capacity from the perspective of each server volume, use the Server Storage Utilization Report.
To see capacity from the perspective of application workloads, use the Application Storage Trend Report.
The Server Volume Capacity Utilization Forecast Report provides the calculated number of weeks before server volumes hit 80%, 90%, and 100% capacity.

Capacity information is available throughout the UI as you view an array with its disk groups, drill down for data on a specific disk group, or analyze a server volume for detailed capacity information. Balance array performance reports show volumetric capacity information in addition to high-level array information, and detailed statistics and analysis of the array's disk groups, ports, and controllers.

Use Case 5: Identify Servers That Are Causing Resource Contention

At the host level:
Balance > Servers > Virtual Hosts > select a virtual host
The Summary page provides quick access to contention information for the shared CPU and memory.
Tick the CPU ‘Show VM breakout’ box - to access the VM breakout for CPU usage. The display includes hyperlinks to access the summary page for each VM.
Tick the Memory ‘Show VM breakout’ box - to access the VM breakout for Memory usage. The display includes hyperlinks to access the summary page for each VM.

At the disk group (aggregate) level:
Balance > Storage > select an array
Disk Groups tab > select a disk group
The workload breakdown at the bottom of the Summary page displays the performance characteristics for each contending workload (sort by: Most IO, Worst RT, Highest disk utilization).
See the Contention tab - this display includes the disk utilization percentage, response time and throughput.
See the IO by Workload tab.
Also, see the Scorecards and Reports - for example, the Virtual Machine Scorecard report and the Application Scorecard report.

Note: Keep in mind that reports can be scheduled!

Use Case 6: Identify Storage Hotspots

Balance > Dashboard
The Balance Dashboard proactively identifies storage hotspots. Click the link for Arrays and check the Status column for anything that is in red. The dashboard also lists the most recent abnormal storage events.

Balance > Storage > select an array
Data Topology tab >
Red symbols indicate hotspots. For a specific element you can right-click and choose ‘Re-orient Topology’. Or, right-click and choose ‘Open Summary Page’.

Balance > Reports > Scorecard Reports > Storage Scorecard
Use this to identify disk groups with excessive disk utilization, response times, and IOPS, in addition to capacity information.

Use Case 7: Identify Servers That Have Misaligned LUN Partitions

Balance > Servers
The top of the page indicates the total number of servers that have misaligned partitions. Click ‘View the report’ and use the ‘Servers With Misaligned Partitions’ report to focus your efforts on those servers that are misaligned and driving the most IOPS.

Balance > Reports > Standard Reports > Server Reports > Servers With Misaligned Partitions

Use Case 8: Use the Performance Index to Identify Virtual Hosts That Need Optimization

Balance > Servers > Virtual Hosts
View the entire list and look at the Performance Index (PI) values. Hosts with a PI > 125 are currently experiencing degraded performance - they are either over-utilized with too many VMs, or under-provisioned for their current workload. Hosts with a PI well below 75 are not being sufficiently utilized and are wasting datacenter resources. As a general rule, strive to have all virtual hosts operating with PI values between 75 and 125; this provides the best possible balance between resource utilization and performance.

Balance > Servers > Virtual Hosts > select a host > PI tab
Balance > Reports > Standard Reports > Server Reports > Virtual Host Server Performance Index
Balance > Reports > Reports Scheduling > (to schedule the report on a regular basis)

Use Case 9: Display a Baseline View of Your Storage Environment

Balance > Reports > Scorecard Reports > Storage Scorecard
The Storage Scorecard report provides a baseline view of your storage environment with notable columns:
Percent Capacity - watch for anything > 75% used.
Disk Utilization - watch for anything averaging > 60%, or maximum > 75%
Also, watch for abnormally high response times, IOPS, or read and write throughput values (additional workloads could degrade the performance of all servers that share the storage!)

Notable Reports

Among all the excellent reports available in Balance, the ones below are highly recommended, and all of them (except Servers With Misaligned Partitions, which you are probably not going to look at that often) are worth putting on a schedule too:
- Application Storage Trends
- Array Utilization
- Server Volume Capacity Forecast
- Servers with Misaligned Partitions
- Storage Scorecard
- Virtual Host Server Performance Index
- Virtual Machine Scorecard