Sunday, 16 May 2010

Creating Application-Aware Snapshots in SAN/iQ 8.5

Prior to SAN/iQ 8.5, making snapshots application-aware meant installing the required components and having a scheduled task on the guest run a script. Now there is a tick box in the Centralized Management Console that enables this.


The lab setup: a Windows Server 2003 machine with SQL Server 2005 and the iSCSI initiator installed, which has 3 volumes:

i: O/S volume, which is a VMDK
ii: iSCSI-attached DATABASE volume
iii: iSCSI-attached LOGS volume

The iSCSI volumes are presented on a LeftHand SAN/iQ 8.5 SAN

How do you create application-aware snapshots?

<< Note: I will go one step further and configure application-aware snapshots with remote snapshots >>

1: Configuring the Windows Server components

i: On the Windows Server 2003 machine, do a typical install of the HP LeftHand Networks Centralized Management Console for SAN/iQ 8.5

ii: On the Windows Server 2003 machine, install the HP LeftHand VSS Provider from the HP LeftHand P4000 Windows Solution Pack CD
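As a quick sanity check after installing the VSS Provider, you can confirm it registered correctly from a command prompt on the server using the standard Windows VSS admin tool (the exact provider name string may differ from release to release):

```
vssadmin list providers
vssadmin list writers
```

You should see the HP LeftHand provider listed in the first command's output, and the second should show SqlServerWriter in a 'Stable' state, which is what the application-managed snapshot relies on to quiesce SQL Server.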

2: Configuring the Authentication Console

On the Windows Server 2003 machine, go to Start > Programs > HP LeftHand Networks > Authentication Console

Enter the credentials for the Management Group to which your iSCSI volumes belong, click Next > , then Finish, then close the Authentication Console

3: Configuring the Snapshot with Remote Snapshot

In the HP LeftHand Networks Centralized Management Console, right-click the volume which holds your SQL data, and choose 'New Schedule to Remote Snapshot a Volume'

Configure the schedule to your specifications << the important things here are to tick the 'Application-Managed Snapshot' box, and to create a new remote volume at the remote site for the database volume >>

When you click OK, the CMC will retrieve information from the application server and discover the associated logs volume

Click Continue and create a new remote volume in the remote site for the logs volume

Click 'Create Schedule'

And that's it!

Note: SAN/iQ 9 will integrate with VMware vCenter Server so that you can create application-aware snapshots of iSCSI VMFS volumes containing VMDKs

Author V. Cosonok

Demonstrating the HP LeftHand / StorageWorks P4000 VSA running within ESX4i

Today I will demonstrate setting up the HP LeftHand / StorageWorks P4000 VSA running within ESX4i.

For this demonstration, we will pretend that we have a physical server capable of running ESX4i with one local logical drive, plus attached storage presenting two logical drives (e.g. a fibre-channel-attached 14-disk shelf split into two 6-disk RAID 5 sets, each with a hot spare.) We'll also give it 4 NICs (you don't need to do this; 1 will be fine for the demo, but in practice you'll want redundancy for both your service console network and your storage network.) I say 'pretend' because this entire demonstration will be running inside VMware Workstation 7.
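As a back-of-the-envelope check on the example shelf above, here is a rough usable-capacity calculation (a sketch only; the 300GB disk size is an assumed figure, and it ignores formatting and metadata overhead):

```python
DISK_GB = 300          # assumed per-disk size; substitute your own
DISKS_PER_RAID5 = 6    # each RAID 5 set in the example 14-disk shelf
RAID5_SETS = 2         # 14 disks total: 2 x (6 disks + 1 hot spare)

def raid5_usable_gb(disks: int, disk_gb: int) -> int:
    """RAID 5 loses one disk's worth of capacity to parity."""
    return (disks - 1) * disk_gb

total_usable = RAID5_SETS * raid5_usable_gb(DISKS_PER_RAID5, DISK_GB)
print(total_usable)  # 2 sets x 5 data disks x 300GB = 3000
```

So the two logical drives presented to ESX4i would each be roughly 1.5TB, comfortably under the vSphere datastore size limit noted at the end of this article.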

Here is my virtual ESX4i server configuration:

1: Install ESX4i

Installing ESX4i is a simple endeavour: boot from the CD (here I am using the Vmware-VMvisor-Installer-4.0.0.Update01-208167.x86_64.iso) and follow the few prompts. There really is no configuration to be done at this stage other than selecting the local disk on which to install ESXi. Upon first boot after the install, we have:

2: Configuring ESX4i

I'll skip talking in-depth about configuring ESX4i as I want to demonstrate the StorageWorks P4000 VSA here and there are loads of articles out there for configuring ESX4i. The few bits of configuration I did on my ESX4i server were:

Via the console:

i: Set a password
ii: Configure a static IP

Via the vSphere Client connected to the ESXi host:

i: Give the ESXi host a name
ii: Add vmnic 1 to the default vSwitch0 (default VM Network / Management Network)
iii: Create a Virtual Machine network for the storage network on a new vSwitch1, and add vmnics 2 and 3
iv: Check that the install of ESXi has picked up the direct-attached storage and that datastores are created on the logical volumes (rename datastores as required)
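If you prefer the command line, the networking steps above can also be done from the ESXi host's Tech Support Mode shell with esxcfg-vswitch (a sketch; 'Storage Network' is just the port group name used in this demo, substitute your own):

```
esxcfg-vswitch -L vmnic1 vSwitch0                # add vmnic1 as a second uplink to vSwitch0
esxcfg-vswitch -a vSwitch1                       # create the storage vSwitch
esxcfg-vswitch -A "Storage Network" vSwitch1     # add the VM port group
esxcfg-vswitch -L vmnic2 vSwitch1                # uplinks for the storage network
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -l                                # list the vSwitch config to verify
```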

3: Importing the P4000 VSA

You can download SAN/iQ 8.5 software from:

And you want to download and extract the VSA software bundle: in here you will find the Centralized Management Console that you will need to connect to the P4000 VSA when it is up and running, and also a folder called OVF

From the vSphere Client connected to your ESXi host, click on the File menu and choose 'Deploy OVF Template...'

Choose 'Deploy from file' and browse to the VSA.ovf, then click Next >, Next >, Accept, Next >, give the VSA a name, Next >, choose the local datastore for the VSA, Next >, accept the default network mapping for now, Next >, Finish

<< Note: The O/S disks for the VSA are only 2GB in size, and these are fine to put on your ESXi host's local disks >>

4: Configuring the P4000 VSA in ESXi

Change Network adapter 1 so it is on your Storage Network
Add 2 disks consuming most of the space on your direct-attached storage LUNs
<< Note: Additional disks must begin at SCSI (1:0) to be picked up by the VSA >>

Since I'm running the VSA inside ESXi inside VMware Workstation 7, there are a few other things to do: remove any reservations set for CPU and memory, and lower the memory to 384MB (no lower, though.)
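If you'd rather make these changes by editing the VSA's .vmx file directly instead of through the vSphere Client, the relevant settings look like this (a sketch; edit only while the VM is powered off, and keep a backup of the file):

```
memsize = "384"        # lower the VSA's memory for the nested demo
sched.mem.min = "0"    # clear the memory reservation (MB)
sched.cpu.min = "0"    # clear the CPU reservation (MHz)
```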

Now we're ready to power on the VSA (it is normal for it to reboot once on first boot), and when it is up we have:

5: Configuring the P4000 VSA via the console

Once the VSA has booted (as above), at the login prompt type start and press enter, then press enter again to log in to the Configuration Interface. Use the arrow keys to navigate, select 'Network TCP/IP Settings' and press enter. Choose eth0 and press enter. Configure the hostname, IP address, mask, and gateway.

OK, OK, OK, back, logout

6: Configuring the P4000 VSA via the Centralized Management Console

If you've not already got it, install the Centralized Management Console (the current version is CMC_8.5.00.0313_Installer.exe) on your workstation or a server.

Double-click on the shortcut for HP LeftHand Centralized Management Console.

On the Find Nodes Wizard, click Next, and search for your node via subnet or IP as desired, enter the IP addressing details, click Finish, click Close.

Now that we have our VSA in the CMC, we need to configure the storage. Click on the + by Available Nodes, then the + by the VSA. Go to the Disk Setup tab, check that the first disk is active, and select the second disk; then, from Disk Setup Tasks, choose 'Add Disk to RAID'

7: Final Notes

With the VSA up and running, the rest of the configuration is the same as having a physical appliance; this includes adding to a management group, adding to a cluster, configuring volumes, and configuring server access to these volumes.


i: Remember vSphere has a 2TB maximum datastore size limit
ii: For best performance the RAID controllers should support acceleration and have battery-backed write cache
iii: If you use VMware NIC teaming, make sure 'Network Failover Detection' is not set to 'Beacon Probing', otherwise performance will suck; use 'Link Status only'
iv: For small-block, random IOPS workloads like databases, expect performance to be only about 90% as good as a physical platform built from similar hardware. For sequential workloads (file copies, cloning, backups, streaming, etcetera) expect performance to be only 60% to 70% as good as the physical platform.
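To put those percentages into concrete numbers, here is a quick back-of-the-envelope calculation (a sketch; the physical baseline figures are assumed for illustration, not measured):

```python
# Assumed physical baselines for similar hardware (illustrative figures only)
physical_random_iops = 2000   # small-block random IOPS, e.g. databases
physical_seq_mbps = 400       # sequential throughput, MB/s

# Apply the expectations above: ~90% for random I/O, 60-70% for sequential
vsa_random_iops = physical_random_iops * 90 // 100
vsa_seq_mbps_low = physical_seq_mbps * 60 // 100
vsa_seq_mbps_high = physical_seq_mbps * 70 // 100

print(vsa_random_iops)                      # 1800
print(vsa_seq_mbps_low, vsa_seq_mbps_high)  # 240 280
```

In other words, the VSA overhead is modest for database-style I/O but noticeably larger for streaming workloads, so plan backup and cloning windows accordingly.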