Tuesday, 28 June 2011

Custom VMware Tools Install for Windows 7 Citrix XenDesktop VDI

Note: This post was updated on 19 August 2011 - see http://cosonok.blogspot.com/2011/08/update-custom-vmware-tools-install-for.html


An end user is reporting poor performance from their Windows 7 VDI. Analysis with Task Manager and then Process Explorer reveals that vmtoolsd.exe is consuming nearly 100% of 1 CPU for a few seconds every 10 seconds or so. Investigation reveals that a typical VMware Tools installation was applied to the Windows 7 VDI.


Reinstall VMware Tools, applying a custom setup as below:

YES – Toolbox

VMware Device Drivers
NO – Memory Control Driver
NO – Thin Print
NO – Paravirtual SCSI
NO – Mouse Driver
NO – Shared Folders
YES – SCSI Driver
NO – SVGA Driver
NO – Audio Driver
YES – VMXNet3 NIC Driver
NO – VMCI Driver
NO – Volume Shadow Copy Service
NO – Wyse Multimedia Support

Guest SDK
NO – WMI Performance Logging

This is a personal best-practice VMware Tools install for a Windows 7 Citrix XenDesktop (the virtual desktop I am writing this document on has its VMware Tools installed this way), and it reduces the number of components installed from 13 in the typical install (which excludes only Wyse Multimedia Support) down to 3. Performance is now excellent and vmtoolsd.exe is behaving!

To explain the above selections: remember that the Citrix Virtual Desktop Agent applies its own drivers, so no functionality is lost by not installing most of the VMware device drivers. Most of the drivers in a VMware Tools install only really apply when you are accessing the guest machine using the vSphere client console or similar. The VMXNet3 NIC driver is necessary if using the VMXNET 3 NIC – which is recommended.
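The selections above can also be applied unattended when rebuilding a master image, by passing an ADDLOCAL feature list to the VMware Tools installer. A sketch only: the feature names here are assumptions (they vary between Tools versions), so dump the real names from your Tools MSI (with Orca or similar) before using this, and add the correct SCSI driver feature name for your build.

```shell
# Build a silent-install line for a trimmed VMware Tools component set.
# "Toolbox" and "VMXNet3" are assumed feature names -- verify against the
# MSI feature table for your Tools build, and add the SCSI driver feature.
FEATURES="Toolbox,VMXNet3"
INSTALL_CMD="setup.exe /S /v\"/qn ADDLOCAL=${FEATURES}\""
echo "$INSTALL_CMD"
```

Run the echoed line from the mounted VMware Tools ISO on the master image.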

Rounding off this post below are a few compiled notes/tips and an appendix – VMware Tools Component Feature Descriptions.


Note 1: This was specifically done with Windows 7, Citrix XenDesktop 5, VMware ESXi 4.1 – applicable to other flavours too.

Note 2: This came from investigating the problem of vmtoolsd running with high CPU utilization. Some suggested fixes, including manually rebuilding the performance counter library values, did not help. A Process Explorer error reporting that the .NET performance counters are corrupt suggested running EXCTRLST from the Microsoft Windows Resource Kit to repair them (alas, could not find a Windows 7 version of this – only the Windows XP edition from the Windows XP Service Pack 2 Support Tools, which was not compatible.)

Note 3: Technically, if you use the E1000 network adapter with your Windows 7 VDI, it will run fine without any VMware Tools installed at all (transparent page sharing and memory compression continue to operate.) The ability to do a controlled shutdown/restart via VMware Tools is lost. Worth trying if you want to rule out performance issues caused by the tools.

Note 4: Tip – remember to remove any mounted ISO from the VDI as this can cause occasional CPU spikes (specific occurrence – have noticed vmware-remotemks.exe spike occasionally when a vSphere client console connection is open.)

Note 5: Tip – just a hunch that it might be beneficial to set the WorkstationAgent.exe process priority to high (to stop XenDesktop becoming temporarily unavailable.) The below script will do this:

' Title: Start a Process with a Base Priority

' References: technet.microsoft.com
' Instructions:
'   Change Const strProcessName to the process to launch
'   Change Const Priority to the required priority class
' Note: this launches a new process with the given priority - it does not
' change the priority of an already-running process.

Const strProcessName = "WorkstationAgent.exe"

' Win32_ProcessStartup priority class values
Const Normal = 32
Const Low = 64
Const High = 128
Const BelowNormal = 16384
Const AboveNormal = 32768
Const Priority = High

strComputer = "."

' Connect to WMI on the local machine
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objStartup = objWMIService.Get("Win32_ProcessStartup")
Set objProcess = GetObject("winmgmts:root\cimv2:Win32_Process")

' Build a startup configuration carrying the priority class
Set objConfig = objStartup.SpawnInstance_
objConfig.PriorityClass = Priority

' Launch the process with that configuration
errReturn = objProcess.Create(strProcessName, Null, objConfig, intProcessID)

Save as something like priorityScript.vbs

Appendix: VMware Tools Component Feature Descriptions

Toolbox
Utilities to improve the functionality of this virtual machine

VMware Device Drivers
Drivers used to enhance the performance of your virtual machines
Memory control driver
Driver to provide enhanced memory management of this virtual machine
Thin print
Enables automatic printing to the host computer's printers
Paravirtual SCSI
Driver to enhance the performance of your virtual SCSI devices
Mouse driver
Driver to enhance the performance of your virtual mouse
Shared folders
Allows files to be shared between this virtual machine and your host computer
SCSI driver
Driver to enhance the performance of your virtual SCSI devices
SVGA driver
Driver to enhance the performance of your virtual video card
Audio driver
Driver to provide audio for virtual sound card
VMXNet3 NIC Driver
Driver to enhance the performance of your virtual network card
VMCI Driver
Driver to allow virtual machines to communicate with applications on host and with other virtual machines using datagrams and shared memory
Volume Shadow Copy Service
VSS Support for Windows guest OS
Wyse Multimedia Support
Driver to enhance your remote desktop multimedia experience

Guest SDK
Allows applications in this guest to access information about virtual machine state and performance
WMI Performance Logging
Enables performance monitoring between the guest SDK API and the VMI environment

Friday, 24 June 2011

P4000 SAN Networking Best Practices

Posting here for ease of reference:

Source → Jitun replying to post @ forums.itrc.hp.com

Gigabit Ethernet support
Each storage node comes equipped with at least two copper Gigabit Ethernet ports (802.3ab). To take advantage of full-duplex 1GbE capabilities, the cabling infrastructure must be Cat5e or Cat6 to the storage nodes. Server connections and switch interconnects can also be done via fiber cabling instead of Cat5e or Cat6 cabling. For 10 Gigabit implementations, Cat6a or Cat7 cabling is usually required for distances over 55 meters.

Fully subscribed non-blocking backplanes
In order to achieve maximum performance on the HP P4000 SAN, it is important to select a switch that has a fully subscribed backplane, which means that the backplane must be capable of supporting all ports in full-duplex mode. For instance, if the switch has 24 1 Gb ports, it will require a 48 Gb backplane to support full-duplex Gigabit communications.
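The sizing rule above is simple arithmetic – every port must be able to transmit and receive at line rate simultaneously:

```shell
# Backplane needed for N full-duplex ports: N x port speed x 2 directions.
PORTS=24
PORT_SPEED_GB=1
BACKPLANE_GB=$(( PORTS * PORT_SPEED_GB * 2 ))
echo "${PORTS} x ${PORT_SPEED_GB} Gb full duplex -> ${BACKPLANE_GB} Gb backplane"
```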

Adequate per-port buffer cache
For optimal switch performance, HP recommends that the switch have at least 512 KB of buffer cache per port. For example, if the switch has 48 1 Gb ports, the recommendation is to have at least 24 MB of buffer cache dedicated to those ports. If the switch aggregates cache among a group of ports (for example, 1 MB of cache per 8 ports), space your storage modules and servers appropriately to avoid cache oversubscription.
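The buffer cache recommendation works out as:

```shell
# 512 KB of buffer cache per port, aggregated across the whole switch.
PORTS=48
KB_PER_PORT=512
CACHE_MB=$(( PORTS * KB_PER_PORT / 1024 ))
echo "${PORTS} ports x ${KB_PER_PORT} KB = ${CACHE_MB} MB buffer cache"
```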

Enable Flow Control on network switches and adapters
Flow Control lets a receiver make the sender pace its transmission speed, and is important in avoiding data loss. IP storage networks are unique in the amount of sustained bandwidth that is required to maintain adequate performance levels under heavy workloads. Gigabit Ethernet flow control (802.3x) should be enabled on the switch to eliminate receive and/or transmit buffer cache pressure. The storage nodes should also have flow control enabled. Note: some switch manufacturers do not recommend configuring flow control when using jumbo frames, or vice versa. For optimal performance, HP recommends implementing flow control over jumbo frames. Flow control is required when using DSM/MPIO.

Ensure spanning tree algorithm for detecting loops is turned off
Loop detection introduces a delay in making a port become usable for data transfer and may lead to application timeouts. If supported by the switch infrastructure, HP recommends implementing Rapid Spanning Tree for faster Spanning Tree convergence. If configurable on the switch, consider disabling spanning tree on the storage node and server switch ports so that they do not participate in the Spanning Tree convergence protocol timing. Enable Rapid Spanning Tree protocol convergence or PortFast (Cisco) on the storage node and server switch ports.

VLAN support
HP recommends implementing a separate subnet or VLAN for the IP storage network. If you are implementing VLAN technology within the switch infrastructure, you will typically need to enable VLAN tagging (802.1Q) and/or VLAN trunking (802.1Q or Cisco Inter-Switch Link (ISL)).

Basic IP routing
The storage nodes can access external services such as DNS, SMTP, SNMP, Syslog, and so on. In order to allow this traffic outside the IP storage network, an IP route is required from the IP storage network to the LAN environment. Also, if the storage nodes are to be managed from a remote network, an IP route must exist to the storage nodes from the management station. Finally, if remote copy functionality is going to be used, the remote copy traffic must be routable between the primary/remote sites.

Disable unicast storm control on iSCSI ports
Most switches have unicast storm control disabled by default. If your switch has this enabled, you should disable this on the ports connected to iSCSI hosts and targets to avoid packet loss.

Segregate SAN and LAN traffic
iSCSI SAN interfaces should be separated from other corporate network traffic (LAN). Servers should use dedicated NICs for SAN traffic. Deploying iSCSI disks on a separate network helps to minimize network congestion and latency. Additionally, iSCSI volumes are more secure when SAN and LAN traffic can be separated using port based VLANs or physically separate networks (different physical switches.)

Configure additional paths for high availability
Use either Microsoft MPIO or MCS (multiple connections per session) with additional NICs in the server to create additional connections to the iSCSI storage array through redundant Ethernet switch fabrics.

Unbind File and Print Sharing from the iSCSI NIC
On the NICs which connect only to the iSCSI SAN, unbind File and Print Sharing.

Use Gigabit Ethernet connections for high speed access to storage
Congested or lower speed networks can cause latency issues that disrupt access to iSCSI storage and applications running on iSCSI devices. In many cases, a properly designed IP-SAN can deliver better performance than internal disk drives. iSCSI is also suitable for WAN and lower speed implementations, including replication, where latency and bandwidth are not a concern.

Use Server class NICs
It is recommended to use NICs which are designed for enterprise networking and storage applications.

Use Jumbo Frames if supported in your network infrastructure
Jumbo Frames can be used to allow more data to be transferred with each Ethernet transaction and reduce the number of frames. This larger frame size reduces the overhead on both your servers and iSCSI targets. For end to end support, each device in the network needs to support Jumbo frames including the NIC and Ethernet switches.
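End-to-end jumbo support can be checked from a Linux host with a don't-fragment ping sized to the jumbo MTU; the payload calculation below allows for the 20-byte IP and 8-byte ICMP headers (the target address is a placeholder).

```shell
# Payload for a don't-fragment ping that exactly fills a 9000-byte MTU.
MTU=9000
ICMP_PAYLOAD=$(( MTU - 20 - 8 ))   # subtract IP (20) and ICMP (8) headers
# Linux ping syntax shown; on Windows the equivalent is: ping -f -l 8972 <target>
echo "ping -M do -s ${ICMP_PAYLOAD} -c 3 <iSCSI-target-IP>"
```

If the reply is "message too long" at any hop, a device in the path is not passing jumbo frames.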

High network latency can be the primary cause of slow I/O performance, or worse, iSCSI drive disconnects
It is important to keep network latency on your HP P4500 Multi-Site SAN subnet below two milliseconds. Many factors can contribute to increasing network latency, but two are most common:
i: Distance between storage cluster modules
ii: Router hops between storage cluster modules
Configuring the HP P4500 Multi-Site SAN on a single IP subnet with Layer 2 switching will help to lower the network latency between storage cluster modules.

Wednesday, 22 June 2011

Example of Using CLIQ to Change the Coordinating Manager within a SAN/iQ 9.0 (HP LeftHand / Storageworks P4000) Management Group

1: Discover the current coordinating manager node

Using the HP StorageWorks P4000 Centralized Management Console

Click on the Management Group icon → Details Tab → Status

2: Change the coordinating manager

i: Use PuTTY or similar to establish an SSH connection to the coordinating NSM on port 16022, and login using Management Group administrator credentials

login as: admin
Using keyboard-interactive authentication.
SAN/iQ Command Line Interface, v9.0.00.3561 (type exit to quit)
(C) Copyright 2007-2010 Hewlett-Packard Development Company, L.P.


ii: From the prompt run the command stopManager


iii: After a short wait run the command startManager to restart the manager on that node


The coordinating manager will automatically have moved to another NSM.

Access to live volumes is uninterrupted by the change of coordinating manager.

Output below:


SAN/iQ Command Line Interface, v9.0.00.3561
(C) Copyright 2007-2010 Hewlett-Packard Development Company, L.P.

result 0
processingTime 14293
name CliqSuccess
description Operation succeeded.


SAN/iQ Command Line Interface, v9.0.00.3561
(C) Copyright 2007-2010 Hewlett-Packard Development Company, L.P.

result 0
processingTime 15803
name CliqSuccess
description Operation succeeded.
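For scripted use, the same stop/start sequence could in principle be driven from the remote CLIQ client rather than an interactive SSH session. A sketch only – the node address and credentials are placeholders, and it assumes the remote client exposes the same stopManager/startManager verbs as the on-node CLI, which should be verified against the client's help output first.

```shell
# Placeholder coordinating-node IP and Management Group credentials.
NODE=10.0.0.1
if command -v cliq >/dev/null 2>&1; then
  cliq stopManager  login="$NODE" userName=admin passWord=secret
  sleep 30   # give the manager a short wait before restarting it
  cliq startManager login="$NODE" userName=admin passWord=secret
else
  echo "cliq remote client not installed"
fi
```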

Appendix: SAN/iQ 9.0 CLIQ Command Reference and Error Codes from Help Output


CLIQ is the command-line interface (CLI) for the HP LeftHand Networks SAN. The CLI specifies parameters in the form parameter=value (keyword notation), rather than dictating a particular (positional) order.

Ordering of parameters is not specified. Any order will do. For example:
cliq deleteVolume volumeName=theVolume userName=user passWord=secret login=
is equivalent to
cliq deleteVolume login= passWord=secret userName=user volumeName=theVolume
The method parameter may be optionally specified as "method=":
cliq userName=user passWord=secret login= volumeName=theVolume method=deleteVolume

All commands and parameter names are case-insensitive. "createVolume" is the same as "CreateVolume" is the same as "CREATEVOLUME". In some cases parameter values, while not case-sensitive, are case-significant, as the system will preserve the case specified. For example, the description parameter value in the createVolume command preserves the case specified by the caller and imposes it on the newly created volume.

Any parameter that indicates true/false may be specified as "1|0" or "true|false".

There is no command or parameter abbreviation in the CLI when scripted. All commands and parameter names must be fully specified. This is to prevent ambiguity in legacy scripts if new commands or parameters are added.

The CLI will map error codes to reasonable OS status codes (status in Linux, ERRORLEVEL in DOS). Since these are limited to 0..255, some of the OS errors may have less granularity than the API error codes.
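That mapping makes the status codes usable for branching in scripts. A sketch with a stub function standing in for a real cliq call (so the example is self-contained); the 139 value is CliqVolumeNotFound from the return-code list in this help output.

```shell
# Stub standing in for a real cliq invocation; returns 139 to simulate
# CliqVolumeNotFound. Replace fake_cliq with the real cliq client.
fake_cliq() { return 139; }

rc=0
fake_cliq deleteVolume volumeName=theVolume || rc=$?
case $rc in
  0)   msg="CliqSuccess" ;;
  139) msg="CliqVolumeNotFound" ;;
  *)   msg="unexpected status $rc" ;;
esac
echo "$msg"
```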

Some commands take multiple elements for a parameter value. In this case, the parameter is interpreted as a delimiter-separated ordered list. If a parameter contains fewer elements in the list than needed for the composite command, the last one in the list is repeated. There must be at least one element in the list if it is required.

When volume sizes or thresholds are specified, the format is

The integrated shell supports rich command line editing features specific to the HP LeftHand API. The following editing keys are supported:

Key Meaning
LEFT Moves the cursor one space to the left.
RIGHT Moves the cursor one space to the right.
BACKSPACE Deletes the character to the left of the cursor and moves the cursor to the left.
DELETE Deletes the character under the cursor.
UP Recalls the previous command entered.
DOWN Recalls the next command entered.
HOME Moves the cursor to the beginning of the line.
END Moves the cursor to the end of the line.
ESCAPE Clears the current command line.
INSERT Toggles between insert mode (the default) and overwrite mode.
TAB Completes the command.


Some commands may take a long time to complete. The default is to wait until the command completes or fails. The timeout parameter allows you to specify a maximum wait time for completion; if this time is exceeded, the CLI returns CliqOperationTimedOut.

Some potentially destructive commands prompt before proceeding. This default behavior can be turned off by specifying prompt=false.

In the default case, the CLI returns information to standard output, formatted in a way that is easy to read rather than easy to parse. The XML setting returns all output information as an XML document, allowing easier parsing of the result. There is no guarantee that newer versions of the API will preserve the same formatting in the default case, so using the default form of the CLI programmatically is strongly discouraged. If the output needs to be parsed, the XML variant is preferred.

Some CLI parameters comprise parameters for multiple operations. For example, the snapshotVolumes command allows the caller to specify simultaneous snapshotting of multiple volumes. In this scenario, some parameters specify an ordered list that applies to each snapshot in succession. For example:
description="This applies to snapshot1;This applies to snapshot2"
The default separator character is a semicolon (';'). This can be overridden with the separator parameter in the event that the default separator appears in the body of a parameter value.

This takes all command input from a file containing XML input.

The following section lists the commands supported


Return Codes

0 CliqSuccess - Everything succeeded normally.
1 CliqNothingDone - Operation has succeeded, but nothing was done (the system was already in the requested state).
2 CliqOperationPending - Operation has not failed, but is not yet complete. The "handle" parameter contains a value that can be used to query and cancel the operation.
3 CliqOperationAbandoned - Operation was intentionally cancelled or abandoned.
4 CliqNothingFound - Nothing was found.
5 CliqSnapshotSet - This snapshot was a part of the snapshot set.
6 CliqVssSnapshotWarning - Warning: The writer operation failed.
128 CliqUnexpected - An unexpected error has occurred.
129 CliqXmlError - The XML given is not well-formed.
130 CliqParameterFormat - The parameter is not specified correctly.
131 CliqParameterRepeat - A parameter is repeated.
132 CliqMissingMethod - The command method is missing.
133 CliqMissingParameter - One or more expected parameters are missing.
134 CliqUnrecognizedCommand - This command is unrecognized.
135 CliqUnrecognizedParameter - This parameter is unrecognized.
136 CliqIncompatibleParameters - Two or more parameters supplied are incompatible with each other.
137 CliqNotYetImplemented - This is a legal command - we just haven't done it yet.
138 CliqNoMemory - Out of memory.
139 CliqVolumeNotFound - Could not find the requested volume.
140 CliqVolumeInUse - The requested volume is in use.
141 CliqVolumeInitFailure - Volume initialization failed.
142 CliqUnrecognizedVolume - The volume is an unrecognized type.
143 CliqOperationFailed - General SAN/iQ error - the operation failed.
144 CliqCredentialsFailed - The supplied credentials are incorrect.
145 CliqInvalidParameter - Invalid parameter.
146 CliqObjectNotFound - Object not found.
147 CliqConnectionFailure - Failed to connect to the API server.
148 CliqNotEnoughSpace - Not enough space to complete the command.
149 CliqNoManager - Could not find a manager.
150 CliqSocketError - Network socket error.
151 CliqOperationTimedOut - Operation exceeded the specified timeout.
152 CliqNoPlatformSupport - This operating system type does not support the operation.
153 CliqIncorrectOsVersion - This operating system version does not support the operation.
154 CliqUtilityNotFound - The utility command requested was not found.
155 CliqUtilityNotAllowed - The utility command requested is not in the allowed list.
156 CliqUtilityIllegalParameter - The utility command contains unsupported parameters or redirection.
157 CliqUtilityFailed - The utility command executed, but returned a non-zero status code.
158 CliqNodeNotFound - The specified storage node can't be found.
159 CliqIllegalUsername - The username must be 3..40 characters, starting with a letter.
160 CliqIllegalPassword - The password must be 5..40 characters, not / or :.
161 CliqFileError - General file error.
162 CliqMissingInitiator - No iSCSI initiator found.
163 CliqInitiatorStopped - The iSCSI initiator is not running.
164 CliqSanIqTooOld - The version of SAN/iQ software must be upgraded.
165 CliqDefaultAdmin - You cannot delete, modify permissions, or remove the last user from the default administration group.
166 CliqVssProviderNotInstalled - The HP LeftHand Networks VSS Provider is not installed.
167 CliqVssProviderNotRunning - The HP LeftHand Networks VSS Provider is not running.
168 CliqVolumeNoSessions - Cannot create an application-managed snapshot because there are no iSCSI connections associated with this volume. To create application-managed snapshots, there must be at least one application server associated with the volume via an iSCSI connection. The volume must be connected to a VSS-enabled server.
169 CliqVolumeMultipleSessions - Cannot create an application-managed snapshot because there is more than one IQN (iSCSI Qualified Name) associated with this volume. To create application-managed snapshots, there must be only one application server associated with the volume or the servers must be in a server cluster. (Note: ensure all servers have VSS installed and running.)
170 CliqNoVssCapabilities - Cannot create an application-managed snapshot because the server does not support this capability.
171 CliqServerUnresponsive - Cannot create an application-managed snapshot because the system could not communicate to the necessary software component on the application server.
172 CliqVssSnapshotFailed - The system could not quiesce the application associated with this volume. Point in time snapshot is created.
173 CliqVssLunInfoFailed - Cannot create an application-managed snapshot because the system failed to get LUN data.
174 CliqVssWriterUnavailable - One or more VSS writers or their components are unavailable.
175 CliqVssSnapshotInProgress - The creation of a shadow copy is in progress, and only one shadow copy creation operation can be in progress at one time.
176 CliqWindowsServerIsBusy - The application server is busy.
177 CliqUpdateVssProvider - This version of VSS provider must be upgraded.
178 CliqVssOperationTimedOut - VSS operation timed out.