Sunday, 26 December 2010

Our ESXi hosts are running out of memory - oh no they're not!


ESXi 4.1 host with 60GB memory, 17 x Windows 7 guests with 2GB memory, 3 x Windows 2008 guests with 4GB memory, and 1 x Windows 2008 guest with 8GB memory; and the host reports a “Host memory usage” warning – how come VMware's superb memory management systems have not kicked in?


In the scenario above, adding up the memory given to the guests comes to 54GB, yet the host memory usage was recording around 56GB (57344MB).

Any VMware veteran would look at this situation and think “How odd, why has TPS (Transparent Page Sharing) not kicked in?”

The image below shows host memory claimed by the guests and configured memory size.

The quickest way to see that TPS is not working is to click on a guest in the vSphere client and go to the Resource Allocation tab. In the image below, notice there is no shared memory (this was for one of the Windows 7 guests).

To see shared memory in action, one would expect to see something more like the example in the image below, where there is 1.64GB shared, or greater than 50% (this was for a Windows 7 guest configured with 3GB memory).

What is going on?

The answer is found in Matt Liebowitz's excellent article - VMware KB Clarifies Page Sharing on Nehalem Processors:

And specifically the paragraph:

“VMware has published a KB article that gives more information on TPS with Nehalem processors and why it appears TPS isn’t working (this affects modern AMD processors also). The short version is that TPS uses small pages (4K), and Nehalem processors utilize large pages (2MB). The ESX/ESXi host keeps track of what pages could be shared, and once memory is over-committed it breaks the large pages into small pages and begins sharing memory.”

And yes, the host in question had a Nehalem processor (easy to check on Wikipedia).

There is a solution if this behaviour is proving unsettling (not a fix, as technically everything is fine).

You can force the use of small pages on all guests all the time by changing the value of the advanced option Mem.AllocGuestLargePage to 0 on your hosts, and then vMotion the VMs off and back onto the host, or cold-boot them.

In the scenario above, with TPS in effect, very roughly around 50% of the memory consumed by guests would be reclaimed, and the 60GB host would show only around 30GB of memory usage.
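The arithmetic behind these figures can be sketched quickly (the ~50% sharing ratio is borrowed from the Windows 7 example above and is an assumption, not a guarantee; real sharing varies per workload):

```python
# Configured guest memory in the scenario:
# 17 x 2GB Windows 7, 3 x 4GB Windows 2008, 1 x 8GB Windows 2008
configured_gb = 17 * 2 + 3 * 4 + 1 * 8
print(configured_gb)  # 54

# With TPS active and roughly 50% of guest pages shared
# (the ratio seen on the 3GB Windows 7 guest above)
shared_ratio = 0.5  # assumption; depends on how similar the guests are
expected_usage_gb = configured_gb * (1 - shared_ratio)
print(expected_usage_gb)  # 27.0 (i.e. around 30GB once overhead is added)
```
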


A bit of further reading semi-related to the above from Matt Liebowitz's blog - Does ASLR really hurt memory sharing in VMware vSphere?

And the reg key to disable ASLR (Address Space Layout Randomization):

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]

Tuesday, 21 December 2010

Manually Provisioning 50 Windows 7 Virtual Desktops


A company has purchased Citrix XenDesktop 4 Standard Edition, and someone is given the task of manually provisioning 50 Windows 7 virtual desktops on a vSphere infrastructure (their license did not include the excellent Citrix Provisioning Services).

Of course this was not an optimal solution; it would have been very easy to provision 50 desktops in no time at all with Citrix Provisioning Services, alas the $125+ per-user saving was desired....


If anyone is considering/tasked with doing this, the main thing you will be interested in knowing is how long it will take. The answer is that you could get it down to just over

2 minutes per desktop

(basically as long as it takes to type out a few characters, do a few clicks of the mouse, login, activate and reboot)

The procedure is very simple:

Part 1:

Create your gold image and turn it into a template

Part 2:

Create a customization specification for your Windows 7 machine

Part 3:

Repeat the procedure in Stage 1 below for each desktop you want to create; then, when they have finished going through the sysprep process (sysprep is built into the Windows 7 O/S and it is this that the vCenter customization uses) and are powered up and at the login prompt, complete stages 2 and 3.

Stage 1:

Right-click Gold-Image template and choose “Deploy Virtual Machine from this Template”
Give it a “Name:”, and choose an “Inventory Location”
Next >
Choose on which host or cluster you want to place the virtual machine
Next >
Choose which resource pool you want to run the virtual machine within
Next >
Choose a datastore in which to store the virtual machine files
Next >
Choose disk format
Next >
Choose “Customize using an existing customization specification”
Next >
Tick “Power on this virtual machine after creation”

Note: Each virtual desktop actually takes around 30 minutes to complete stage 1; the machine boots and runs through sysprep using the vCenter customization wizard. It will reboot a few times and automatically be renamed and domain-joined (if the customization specification is correctly configured).
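A quick sanity check on the quoted timings: the 2 minutes per desktop is hands-on time only, since the ~30 minute sysprep runs need no input and overlap with deploying the next desktops.

```python
desktops = 50
hands_on_min = 2      # typing, clicks, login, activate, reboot (per desktop)
unattended_min = 30   # sysprep/customization runs with no input needed

total_hands_on = desktops * hands_on_min
print(total_hands_on)  # 100 minutes of actual work for all 50 desktops

# The unattended 30-minute runs pipeline: while earlier desktops churn
# through customization on their own, you start deploying the next ones.
```
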

Stage 2:

Move the machine into the correct OU for XenDesktops
Login to the desktop
Activate Windows

Stage 3:

Add machines to the Citrix Desktop Delivery Controller and assign users

Friday, 10 December 2010

Nexenta Community 3.0.3 / 3.0.4 Web UI Stops Working!


We are running five Nexenta (Community versions 3.0.3 and 3.0.4) storage boxes (installed on reclaimed HP DL380 G4s), and on all five the Web UI has stopped working.

Below follows a fix to get the Web UI working again, some additional commands, and two examples.


Part 1/2

Use PuTTY or similar to SSH to the Nexenta box and log in as root

From the UNIX shell (#) run these commands:

1 root@nexentabox:/volumes# svcadm enable -rs apache2
2 root@nexentabox:/volumes# svcadm restart nmv
3 root@nexentabox:/volumes# svcadm restart nms
4 root@nexentabox:/volumes#

Note 1: The command on line 1 only needs to be run once
Note 2: These commands are perfectly safe to run during the working day

If in the NMC shell ($), to get to the UNIX shell (#) run these commands:

nmc@nexentabox:/$ option expert_mode=1 -s
nmc@nexentabox:/$ !bash
You are about to enter the Unix ("raw") shell and execute low-level Unix command(s). Warning: using low-level Unix commands is not recommended! Execute? Yes

Part 2/2

A promising fix (courtesy of my colleague Alfredo) to stop the Web UI failing in the future (or at least reduce the rate of it happening) is to change the 'Seconds between Retrieves' time on the Status → General → General Status and Details pane to 100 or more (default is 5)

Note 3: The HP DL380 G4s used are not on the HCL for OpenSolaris; apart from problems with the Web UI, Nexenta runs better than the old Openfiler installs it has replaced, with a much greater feature set

Some other commands used in Nexenta

UNIX shell (#)

Reboot # shutdown -y -i6 -g0
Reboot (older command that still works)  # sync; sync; init 6
Shutdown # shutdown -y -i5 -g0
Shutdown (older command that still works) # sync; sync; init 5

NMC shell ($)

$ setup appliance upgrade nms (to upgrade Web UI)
$ setup appliance upgrade (to upgrade base OS software in Nexenta Community Edition)
$ setup appliance init (re-run through the network setup)

Example 1 where SSH enters into the NMC shell

login as: root
Using keyboard-interactive authentication.
Last login: Thu Dec 9 07:00:13 2010 from
* Management Console *
* Version 3.0.3-4 *
* *
* press TAB-TAB to list and complete available options *
* *
* type help for help *
* exit to exit local NMC, remote NMC, or group mode *
* q[uit] or Ctrl-C exit NMC dialogs *
* q[uit] or Ctrl-C exit NMC text viewer *
* *
* option -h help on NMC options *
* -h help on any command *
* ? brief summary *
* help keyword [-q] locate NMC commands *
* help -k [-q] same as above *
* setup usage combined 'setup' man pages *
* show usage combined 'show' man pages *
* *
* type help and press TAB-TAB *
* *
* Management GUI: *
* *
nmc@flake:/$ option expert_mode=1 -s

nmc@flake:/$ !bash
You are about to enter the Unix ("raw") shell and execute low-level Unix command(s). Warning: using low-level Unix commands is not recommended! Execute? Yes

root@flake:/volumes# svcadm enable -rs apache2
root@flake:/volumes# svcadm restart nmv
root@flake:/volumes# svcadm restart nms

Example 2: where the SSH connection appears to be unresponsive after logging in - wait a few minutes and the SYSTEM NOTICE “Failed to initialize NMC” pops up, and the prompt enters the UNIX shell

login as: root
Using keyboard-interactive authentication.
Last login: Fri Dec 10 08:30:42 2010 from

* * *

Failed to initialize NMC:
no introspection data available for method 'get_props' in object '/Root/Appliance', and object is not cast to any interface

Suggested possible recovery actions:
- Reboot into a known working system checkpoint
- Run 'svcadm clear nms'; then try to re-login
Suggested troubleshooting actions:
- Run 'svcs -vx' and collect output for further analysis
- Run 'dmesg' and look for error messages
- View "/var/log/nms.log" for error messages
- View "/var/svc/log/application-nms:default.log" for error messages

Entering UNIX shell. Type 'exit' to go back to NMC login...
root@ripple:~# svcadm enable -rs apache2
root@ripple:~# svcadm restart nmv
root@ripple:~# svcadm restart nms

Thursday, 9 December 2010

Replacing failed disk on Compaq MA8000

A bit of a blast from the past this!

The Compaq MA8000 went end-of-life around 2004; there are still some out there though, and a few CLI commands must be run before pulling a failed disk.


Part 1: Establish CLI access

Method A

Use the serial cable provided with the MA8000 (Serial Cable Part # 17-04074-01) to connect from the serial port of a system running the HyperTerminal application, to the HSG80 controller port. HyperTerminal settings:

Baud Rate = 9600
Data Bits = 8
Parity = NONE
Stop Bits = 1
Flow Control = Hardware

Method B

Use the StorageWorks Command Console installed on an NT system and open the CLI window (you will need to know an authorization password)

Part 2: Disk removal

From the CLI:

Check that the failed disk is part of the failedset. The failedset contains disk drives that were removed from service either by the controller or by the user.


Enter the DELETE FAILEDSET and DELETE DISKnnnnn commands before physically removing failed members from the storage shelf for testing, repair, or replacement.


Then replace the failed disk.

Part 3: Add replacement disk

From the CLI:


Add the new disk drive to the spareset.
The spareset is a pool of drives available to the controller to replace failing members of storagesets.


If the raidset that the failed disk was part of is running in a reduced state, then the controller automatically removes the new disk from the spareset and adds it to the raidset.
If the controller had a spareset when the disk failed, then the controller will already have added a disk to the raidset and the state will be normal.

Note i: HSG80 is the Array Controller
Note ii: In DISKnnnnn, the nnnnn relates to the SCSI channel and target ID of the disk (example: DISK50400 is on Channel 5 with Target ID 4)
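The naming can be decoded mechanically; the sketch below assumes the first digit is the SCSI channel (port) and the next two digits are the target ID, with the trailing digits presumed to be the LUN. This layout is inferred from the DISK50400 example in Note ii, not taken from HP documentation:

```python
def decode_disk_name(name):
    """Decode an HSG80 DISKnnnnn unit name into (channel, target).

    Assumed digit layout: 1-digit channel/port, 2-digit target ID,
    2-digit LUN (inferred from the DISK50400 example above).
    """
    digits = name[4:]          # strip the 'DISK' prefix
    channel = int(digits[0])
    target = int(digits[1:3])
    return channel, target

print(decode_disk_name("DISK50400"))  # (5, 4): Channel 5, Target ID 4
```
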


Monday, 6 December 2010

Setting up Syslogging with VMware vCenter, Free Kiwi Syslog Server, and ESXi

Part 1: Download, install Kiwi Syslog Server on the Virtual Center server

i: On the Virtual Center server, download Kiwi Syslog Server, which is currently freely available from:
ii: Extract files from the zip and then run the setup.exe

iii: Agree to the End User License Agreement
iv: Choose to 'Install Kiwi Syslog Server as a Service' and click Next

v: Accept the default -> Install the Service using: The LocalSystem Account, and click Next
vi: Untick 'Install Kiwi Syslog Web Access' (feature not available in the free version), and click Next
vii: Choose Components - can leave on type = Normal, and click Next
viii: Choose Install Location and click Install
ix: Run Kiwi Syslog Server when the install completes, and click Finish

Part 2: Configure Kiwi Syslog Server

It will work fine with default settings; one thing we might want to do:

From File -> Setup -> Rules -> Default -> Actions -> Log to file

Change the default log file path and file name
It would also be nice to 'Enable Log File Rotation', alas this feature requires the licensed version

Note i: default location is C:\Program Files (x86)\Syslogd\Logs\SyslogCatchAll.txt
Note ii: default UDP listen port of 514 is used
Note iii: The paid-for version of Kiwi Syslog Server costs £215 and would be worth buying for the extra features
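Since the free edition lacks log rotation, a scheduled task could rotate SyslogCatchAll.txt instead. A minimal sketch (the path is the default from Note i; in practice the Kiwi service may hold the file open, so stopping the service around the rotation might be required):

```python
import os
import shutil
from datetime import date

# Default catch-all log location noted above
LOG = r"C:\Program Files (x86)\Syslogd\Logs\SyslogCatchAll.txt"

def rotate(log_path):
    """Move the current log aside with a date suffix, freeing the name."""
    if not os.path.exists(log_path):
        return None
    rotated = "%s.%s" % (log_path, date.today().isoformat())
    shutil.move(log_path, rotated)   # the syslog daemon recreates the file
    return rotated
```
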

Part 3: Enable syslog on the ESXi hosts

i: Via the vSphere client - click on an ESXi host and select Configuration tab -> Advanced Settings (under Software)
ii: From Advanced Settings window - in Syslog -> Syslog.Remote.Hostname, enter the DNS name of your Virtual Center Server and click OK

iii: Verify messages are being received, and if all is okay, enable for all your ESXi hosts
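To double-check the Kiwi listener itself is receiving on UDP 514, a quick test message can be fired at it from any machine; a sketch (the hostname and message text are illustrative):

```python
import socket

def send_syslog(host, message, port=514):
    """Send a minimal RFC 3164-style syslog datagram over UDP."""
    # <14> = facility 1 (user-level), severity 6 (informational)
    payload = ("<14>" + message).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()

# send_syslog("vcenter01", "test message for Kiwi syslog check")
```
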

Wednesday, 10 November 2010

Installing HP ESXi Offline Bundle for VMware ESXi 4.1

If ESXi 4.1 has been implemented on an HP ProLiant server using the general VMware build (as opposed to the HP build, which can be obtained by Googling "download vmware esxi HP"), the HP ESXi Offline Bundle - downloadable from HP's website - can be installed to get storage information (and more) under the Hardware Status tab in the vSphere client. Follow the procedure below:

1) Put the host into maintenance mode
2) Via the vSphere CLI, run this command from C:\Program Files\VMware\VMware vSphere CLI\bin --server X.X.X.X --install --bundle

<< Where X.X.X.X is the IP address of the host >>
<< The bundle name included above is correct as downloaded from HP's website in November 2010 >>

3) Reboot the host

Example below, where the address given to --server is the IP address of the ESXi host to be upgraded:

C:\Program Files\VMware\VMware vSphere CLI\bin> --server --install --bundle

Enter username: root
Enter password:

Please wait patch installation is in progress ...
The update completed successfully, but the system needs to be rebooted for the changes to be effective.

Tuesday, 26 October 2010

Calculating the RW Value in a VMDK Disk Descriptor File, and Fixing Incorrect Virtual Disk Capacity

*Update: see Formulas to Calculate Dimensionless Hard Disk Size, and a Real World Application to Extending a UFS Partition  posted 20/01/2012

Formula to find R the RW value in the disk descriptor file from G the exact GByte capacity of the disk

R = G x 2097152     {or R = G x 1024 x 1024 x 1024 / 512}

If you don't know the exact GByte capacity of the disk, you can only roughly work it out from the number of cylinders C
R = ((((C+1) x 512 x S x H) / 1024) - 8) x 2     {where S is the number of sectors, and H is the number of heads}

Scenario: A virtual guest server has a virtual disk with its capacity totally incorrect, preventing cloning, storage vMotion …

The virtual guest machine in question has a disk that is only 20GB in size (as seen from browsing the datastore, and in Windows), but the VI Client (this is ESX 3.5U4) connected to either the vCenter or the host says the disk is 10245.08GB. Investigation shows the RW value in the VMDK disk descriptor file is wrong.

[root@esx01 EXAMPLE]# cat EXAMPLE_0.vmdk

# Disk DescriptorFile

# Extent description
RW 21485479936 VMFS "EXAMPLE_0-flat.vmdk"

# The Disk Data Base

ddb.virtualHWVersion = "4"
ddb.toolsVersion = "7303"
ddb.geometry.cylinders = "2612"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"

[root@esx01 EXAMPLE]#

The RW above should be 20 x 2097152 = 41943040, not 21485479936 (which divided by 2097152 does indeed give the 10245.08 seen via the VI client).
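The figures above can be checked against the formulas from the start of this post:

```python
# One GB of capacity is this many 512-byte sectors
SECTORS_PER_GB = 1024 * 1024 * 1024 // 512   # = 2097152

# Exact RW value for the real 20GB disk
print(20 * SECTORS_PER_GB)                    # 41943040

# The bogus RW value converted back to GB matches the VI client
print(round(21485479936 / SECTORS_PER_GB, 2))  # 10245.08

# Rough estimate from the descriptor's geometry (C=2612, H=255, S=63)
C, H, S = 2612, 255, 63
rough = ((((C + 1) * 512 * S * H) / 1024) - 8) * 2
print(int(rough))   # 41977829, close to but not exactly the true 41943040
```
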

How to fix:

You might think the above formulas will help; in practice they don't.

The solution:

1: Shut down the affected machine
2: Detach the affected disk
3: Create another disk of exactly the same size
4: Edit the disk descriptor file for the new VMDK to point to the old -FLAT.VMDK
5: Re-attach the disk using the same SCSI ID as before
6: Boot the affected machine back up
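Step 4 amounts to a one-line edit of the new disk's descriptor file, repointing its extent line at the old -FLAT.VMDK. A sketch (the filenames are illustrative, not from the original scenario):

```python
def repoint_descriptor(descriptor_path, old_flat, new_flat):
    """Point a VMDK descriptor's extent line at a different -flat.vmdk.

    The extent line looks like:
        RW 41943040 VMFS "NEWDISK-flat.vmdk"
    and only the quoted filename needs to change.
    """
    with open(descriptor_path) as f:
        text = f.read()
    with open(descriptor_path, "w") as f:
        f.write(text.replace('"%s"' % new_flat, '"%s"' % old_flat))

# repoint_descriptor("NEWDISK.vmdk", "EXAMPLE_0-flat.vmdk", "NEWDISK-flat.vmdk")
```
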


Also see:

Friday, 22 October 2010

cma.log is filling up most of the space in /var/log (VMware ESX, 4.0.0, 208167)

A full /var/log can cause various problems on an affected ESX host – unable to snapshot a guest, unable to enable HA agent …

One potential culprit for filling up /var/log can be the cma.log file generated by the HP Insight Management agents.


[root@esx01 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 4.9G 1.9G 2.8G 41% /
/dev/sda2 2.0G 2.0G 0 100% /var/log
/dev/cciss/c0d0p1 198M 68M 121M 36% /boot

Investigation uncovers that the cma.log is taking up nearly all the space:

[root@esx01 /]# cd /var/log/hp-snmp-agents/
[root@esx01 hp-snmp-agents]# ls -s
total 1690984
1690984 cma.log 0 cmaX.log

- or as seen via Veeam Fast SCP (below) -


From the folder /var/log/hp-snmp-agents, use the rm command to delete cma.log

[root@esx01 hp-snmp-agents]# rm cma.log
rm: remove regular file `cma.log'? y

(or delete via Veeam Fast SCP)

Then change to the /etc/init.d directory, and run the command ‘service hp-snmp-agents restart’ to restart the HP Management Agents (the deleted cma.log is still held open by the agents, and its space in /var/log is not freed until this service is restarted). This command will not affect live guests in any way.

[root@esx01 hp-snmp-agents]# cd /etc/init.d
[root@esx01 init.d]# service hp-snmp-agents restart
Shutting down NIC Agent Daemon (cmanicd): [ OK ]
Shutting down Storage Event Logger (cmaeventd): [ OK ]
Shutting down FCA agent (cmafcad): [FAILED]
Shutting down SAS agent (cmasasd): [ OK ]
Shutting down IDA agent (cmaidad): [ OK ]
Shutting down IDE agent (cmaided): [ OK ]
Shutting down SCSI agent (cmascsid): [ OK ]
Shutting down Health agent (cmahealthd): [ OK ]
Shutting down Standard Equipment agent (cmastdeqd): [ OK ]
Shutting down Host agent (cmahostd): [ OK ]
Shutting down Threshold agent (cmathreshd): [ OK ]
Shutting down RIB agent (cmasm2d): [ OK ]
Shutting down Rack Infrastructure Info Srv (cpqriisd): [FAILED]
Shutting down Rack agent (cmarackd): [FAILED]
Shutting down Performance agent (cmaperfd): [ OK ]
Shutting down SNMP Peer (cmapeerd): [ OK ]
Starting Health agent (cmahealthd): [ OK ]
Starting Standard Equipment agent (cmastdeqd): [ OK ]
Starting Host agent (cmahostd): [ OK ]
Starting Threshold agent (cmathreshd): [ OK ]
Starting hp-ilo:
Already started hpilo module. [ OK ]
Starting RIB agent (cmasm2d): [ OK ]
Starting hp-ilo:
Already started hpilo module. [ OK ]
Starting Rack Infrastructure Info Srv (cpqriisd): [ OK ]
Starting Rack agent (cmarackd): [ OK ]
Starting Performance agent (cmaperfd): [ OK ]
Starting SNMP Peer (cmapeerd): [ OK ]
Starting Storage Event Logger (cmaeventd): [ OK ]
Starting FCA agent (cmafcad): [ OK ]
Starting SAS agent (cmasasd): [ OK ]
Starting IDA agent (cmaidad): [ OK ]
Starting IDE agent (cmaided): [ OK ]
Starting SCSI agent (cmascsid): [ OK ]
Starting NIC Agent Daemon (cmanicd): [ OK ]

Finally, re-run df -h to check that the space on /var/log has been freed up

[root@esx01 init.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 4.9G 1.9G 2.8G 41% /
/dev/sda2 2.0G 318M 1.6G 17% /var/log
/dev/cciss/c0d0p1 198M 68M 121M 36% /boot

Thursday, 21 October 2010

Migrating from 32bit Virtual Center to 64bit vSphere 4.1 Virtual Center on a New Database

With the advent of VMware vSphere 4.1, companies with 32bit Virtual Centers will need to rebuild these on 64bit systems. There are plenty of articles out there with walkthroughs on how to use the Data Migration tool to migrate from a 32bit to a 64bit Virtual Center, and we will not go into this here. Instead, we shall look at a different scenario:


A small company with 7 hosts in two clusters; one cluster of 4 hosts is EVC enabled, the other cluster of 3 hosts is not. The database on the existing vSphere 4 32-bit Virtual Center has grown to over 20GB (in vCenter, the estimated space required for 7 hosts and 140 VMs on default settings is just 1.32GB), so the decision is taken to start afresh with a new database. The decision is also taken to keep the same name and IP address.

Complicating matters – the company uses Veeam Backup and Replication (on v4.1.1), and has a small 20-seat XenDesktop 4 VDI environment, both of which talk to the Virtual Center server.

Walkthrough and issues encountered:

1) Build Windows 2008 R2 Enterprise server with same name and IP Address as original

2) Install SQL 2008 R2 Express (recommend doing this as it comes with a 10GB database limit, whereas the SQL 2005 Express bundled with the vCenter 4.1 installation media only has a 4GB limit)

3) Create a 64bit System DSN for the Virtual Center database and a 32bit System DSN for the Update Manager database

4) Install Virtual Center 4.1

5) Create the datacenter and clusters in the new Virtual Center as a copy of the original

6) Install Update Manager 4.1

Switch over day:

1) Shut down old Virtual Center and reset its account in AD

2) Join the new 64bit Virtual Center to the domain

3) On the non-EVC enabled cluster, the hosts could be added into the new VC without any downtime of guest machines

4) On the EVC enabled cluster, hosts could not be added into the new VC without having to shut down all the guests on the host prior to adding (being in maintenance mode is not a requirement)

Error: The host cannot be admitted to the cluster’s current Enhanced vMotion Compatibility mode

5) Once all the resource pools had been put in place as per the original, it was time to test the Citrix XenDesktops and Veeam Backups, and deal with any niggling issues

Citrix XenDesktop 4

These would not initially work as the Citrix Desktop Delivery Controller could not talk to the new VC. The fix was to edit the proxy.xml file on the Virtual Center server. The location of proxy.xml is:

C:\ProgramData\VMware\VMware VirtualCenter

Edit so that the section with type = vim.ProxyService.LocalServiceSpec, port 8085, and serverNamespace = /sdk has its accessMode changed from httpsWithRedirect to httpAndHttps (see images below), then restart the VMware VirtualCenter Server service to apply the change to proxy.xml.
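The edit can also be scripted; a sketch using ElementTree, assuming the proxy.xml entries carry serverNamespace and accessMode child elements as in the 4.x layout (treat the structure assumed here as illustrative and inspect your own file first, keeping a backup):

```python
import xml.etree.ElementTree as ET

def allow_http_on_sdk(proxy_xml_path):
    """Flip the /sdk endpoint's accessMode to httpAndHttps in proxy.xml
    (4.x-style layout assumed: sibling serverNamespace/accessMode elements)."""
    tree = ET.parse(proxy_xml_path)
    for entry in tree.getroot().iter():
        ns = entry.find("serverNamespace")
        mode = entry.find("accessMode")
        if ns is not None and mode is not None and ns.text == "/sdk":
            mode.text = "httpAndHttps"
    tree.write(proxy_xml_path)
```
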

All the virtual desktops had to be removed from their groups and be re-added, with users reassigned to their desktops.

Veeam Backup 4.1.1

Virtual machines had to be re-added to their jobs, and the old entries removed. Replicas needed to be kicked off from scratch.

Other Issues:

1) Unable to assign permissions under Domain accounts via the vSphere Client, a search would bring up no accounts

Error -> Call “UserDirectory.RetrieveUserGroups” for object “UserDirectory” on vCenter Server failed

The solution was to have the ‘VMware VirtualCenter Server’ service log on as a domain account (the domain account used must also have permissions to the VIM_VCDB database)

2) In Update Manager, all the settings were greyed out

Possibly a side effect of installing Update Manager on the Virtual Center whilst the machine was still in a workgroup. A few things were tried … Update Manager was uninstalled. When trying to reinstall Update Manager, this error was encountered:

Error 25085.Setup failed to register VMware vCenter Update Manager extension to VMware vCenter Server:

The fix was to use ADSI Edit to add CN=com.vmware.vcIntegrity, which was missing, and then the install ran fine.

Wednesday, 22 September 2010

Configuring SNMP on ESXi 4.1 via the vSphere CLI 4.1

1) Download the vSphere CLI 4.1 and install

2) From the VMware vSphere CLI command line, type: --server ESXIHOSTIP -c COMMUNITYNAME -p 161 -t DESTINATIONHOSTIP@161/COMMUNITYNAME

Example with output:

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin> --server -c anSNMPnm -p 161 -t

Enter username: root
Enter password:

Changing udp port to 161...
Changing community list to: anSNMPnm...
Changing notification(trap) targets list to:

3) Enable the agent from the command line, type: --server ESXIHOSTIP -E

Example with output:

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin> --server -E

Enter username: root
Enter password:

Enabling agent...

4) Verify your settings, type: --server ESXIHOSTIP -s

Example with output:

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin> --server -s

Enter username: root
Enter password:

Current SNMP agent settings:
Enabled : 1
UDP port : 161
Communities :
Notification targets :

5) Test your settings, type: --server ESXIHOSTIP -T

Example with output:

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin> --server -T

Enter username: root
Enter password:

Sending test nofication(trap) to all configured targets...
Complete. Check with each target to see if trap was received.

Note: These settings are written to /etc/vmware/snmp.xml
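To confirm the test trap actually arrives, a crude UDP listener can be run on the trap target before step 5. This just proves a datagram turns up; it does not decode the SNMP payload. Note that binding port 162 needs admin/root rights, so the port is parameterised here:

```python
import socket

def wait_for_datagram(port=162, timeout=30):
    """Listen for a single UDP datagram (e.g. an SNMP trap).

    Returns (sender_ip, raw_bytes); raises socket.timeout if nothing arrives.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("", port))
    try:
        data, addr = sock.recvfrom(65535)
        return addr[0], data
    finally:
        sock.close()

# sender, raw = wait_for_datagram()
# print("trap received from", sender, "-", len(raw), "bytes")
```
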

Update - March 2011

Related post on how to monitor ESXi host hardware with HPSIM (this does not need SNMP enabled):

Friday, 10 September 2010

Citrix VDI Article 5/5) All XenDesktops have simultaneously shut down (system initiated guest OS shutdowns)!

By default, Citrix XenDesktops will shut down 3 hours after losing contact with a Citrix Desktop Delivery Controller, if there is no resumption of communication. For this reason, here are a few recommendations:

1) Monitor the ‘Citrix Desktop Delivery Controller Service’ service

There is always a chance that this can terminate unexpectedly (as we have seen happen, be it due to antivirus, a software bug/glitch …)

2) Set the ‘Citrix Desktop Delivery Controller Service’ service to keep trying to restart

By default the service may be set to ‘Take No Action’ on failure; set it to ‘Restart the Service’ on First, Second, and Subsequent failures.

3) Have two or more Citrix Desktop Delivery Controllers (all on exactly the same versions of Citrix and at the same patch level)
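Recommendation 1 can be automated with a simple watchdog that polls `sc query` output for the service. The parser below works on the canned output shown; the service key name used in the sample is a made-up placeholder (check the real name on your DDC), and the STATE line format is the typical `sc query` layout:

```python
import re

def parse_service_state(sc_output):
    """Extract the STATE name from `sc query <service>` output.

    Expected line format (typical sc query layout):
        STATE              : 4  RUNNING
    """
    match = re.search(r"STATE\s*:\s*\d+\s+(\w+)", sc_output)
    return match.group(1) if match else None

# 'CitrixDDCService' is a placeholder name for illustration only
sample = ("SERVICE_NAME: CitrixDDCService\n"
          "        STATE              : 4  RUNNING\n")
print(parse_service_state(sample))  # RUNNING
```

A watchdog would run `sc query` via subprocess on a schedule and alert (or attempt a restart) whenever the parsed state is not RUNNING.
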

Citrix VDI Article 4/5) Getting the HDX File Access box to re-appear

Scenario: A user puts a tick in the box for ‘Do not ask me again for this virtual desktop’ and then wants to get the HDX File Access box back

Method 1/2
1) Right-click the 'Citrix Connection Center' icon in the notification area

2) Click on 'File Security'

3) And re-select as required

4) Click OK to close the ‘Citrix Connection Center’ box

Note: if there is no 'Citrix Connection Center' icon in the notification area, or it appears and disappears quickly, update the client agent.

Method 2/2

1) When connected to your VDI, open up the 'Desktop Viewer Preferences' screen (click on the cogs icon)

2) On the HDX tab, adjust the 'File Access' settings to suit your preference

Citrix VDI Article 3/5) How to populate the domain box on the XenDesktop login screen


1) Connect onto the Citrix Desktop Delivery Controller
2) Open the Citrix Access Management Console
3) Expand ‘Citrix Resources’ -> Expand ‘Configuration tools’ -> Expand ‘Web Interface’
4) Click on ‘Internal Site’ and choose ‘Configure authentication methods’

5) With Explicit ticked, click on Properties…

6) Expand Explicit -> Click on ‘Authentication Type’ -> Click on Settings

7) Change the Domain list selection to ‘Pre-populated’ and add the domain into the box (by default the domain the CDDC is installed on is pre-added to the restricted domains list)

8) Click OK three times
9) (Optional) Repeat on the Citrix Secure Gateway server

Citrix VDI Article 2/5) Policy to stop local drives being available in XenDesktop

Why: for security or because users complain they see a lot of drives when they log into their XenDesktop


1) Connect onto the Citrix Desktop Delivery Controller
2) Open the Citrix Presentation Server Console
3) Expand the XenDesktop farm
4) Right-click on Policies and select ‘Create Policy’
5) Give the policy a name and description and click OK

6) Right-click on the policy in the contents pane and select Properties
7) Expand ‘Client Devices’ -> Resources -> Drives
8) Click on Connection and select Enabled and ‘Do Not Connect Client Drives at Logon’

9) Click OK
10) Right-click on the policy and select ‘Apply this policy to…’
11) Choose what to apply the policy to (in the example below, selected ‘Client Name’ -> ‘Filter based on client name’ -> ‘Apply to all client names’, to apply to all)

12) Click OK

Citrix VDI Article 1/5) How to disable session reliability on Citrix XenDesktop Farm

Problem: Remote office using Citrix XenDesktop through a VPN tunnel, reports bandwidth usage has increased dramatically due to traffic on port 2598 used by Citrix XTE Service / Citrix Session Reliability Protocol.
Solution: Disable session reliability on Citrix Farm

Side effects: None reported


1) Connect onto the Citrix Desktop Delivery Controller
2) Open the Citrix Access Management Console
3) Expand ‘Citrix Resources’ -> Expand ‘Desktop Delivery Controller’
4) Click on the XenDesktop farm and from ‘Common Tasks’ choose ‘Modify farm properties’

5) Click on ‘Modify All Properties’
6) Expand ‘Farm-wide’ -> Select ‘Session Reliability’ -> Untick ‘Allow users to view sessions during broken connections’

7) Click ‘OK’

Thursday, 2 September 2010

Getting loads (thousands per second) of event 5145 for Detailed File Share on a Windows 2008 R2 File Server

On a recently built Windows 2008 R2 file server, it was noticed that the security log was recording over 10,000 event 5145s per second for the category 'Detailed File Share.'

Investigation pointed to an 'Advanced Audit Policy Configuration' item that is only available with Windows 2008 R2 and Windows 7: the subcategory 'Audit Detailed File Share', which interestingly was not configured.

It appears that if you have a domain or local policy that enables the normal 'Local Policies' → 'Audit Policy' setting 'Audit object access' with Success and/or Failure, it causes 'Audit Detailed File Share' to be configured too, unless you explicitly configure it with Success and/or Failure unticked.

After configuring the Local Security Policy on the file server with Success unticked (see below), the number of security audit events recorded was drastically cut down, noticeably reducing the CPU processing load.

Note: There is no granularity to this setting; it is either enabled or not across all the shares on the server.

Saturday, 21 August 2010

Guide to Using Double-Take To Accomplish a Minimal Downtime Fileserver P2V Conversion


Scenario: you have been given the task to complete a P2V conversion of a large fileserver with minimal downtime

A solution: Use Double-Take to replicate the data drives to virtual disks, and then use VMware Converter on the O/S drive

Below is a brief walk-through:

1: Obtain the software (in this walk-through I am using Double-Take / HP StorageWorks Storage Mirroring Standard Edition for Windows, which comes packaged in SWSM4500_i386.exe)

2: Create a virtual machine with the same number of drives as the original and install Windows onto the O/S drive (this server does not need to be domain-joined), and letter the drives the same as the original (the drives don't need to be as large as the physical originals, they just need enough free space to accommodate all the data)


3: Copy the setup_1629.exe obtained from unpacking SWSM4500_i386.exe to your physical file server and virtual replication target server, and run it

4: Double-Take install from the screen below is as follows:

Next >
I accept the terms in the license agreement
Next >
Client and Server Components
Next >
Next >
User Name: < up to you >
Organization: < up to you >
Activation code: EVAL
Next >
Next >
Next >

Next >

Next >
Next >

Install Double-Take on both your physical file server and virtual replication target server

Note: the eval key is used here as this will only be a temporary installation of Double-Take

5: The physical file server and virtual replication target server will both need restarting when the install completes

6: Open the HP StorageWorks Storage Mirroring Management Console (does not matter on which server you do this)

Cancel out of the Welcome to HP StorageWorks Storage Mirroring screen

If both the physical file server and virtual replication target server are on the same network, they will be auto-discovered as below, otherwise use the add server button to add to the console:

Right-click on both servers in turn and select 'Logon' (I recommend using the local administrator account here)
Right-click on the fileserver, select 'New > Replication Set', and give it a name
Tick the drive(s) on the fileserver to be replicated

Right-click on the replication set and select 'Connection Manager'
Choose the 'Target Server' and 'Mappings – One To One'

Click 'Connect'
Select 'Yes' to the 'Save changes now?' prompt
The replication will now start; just wait until the 'Mirror Status' is Idle (it will start at 0%)

7: From here, it is just a case of running VMware Converter on the O/S drive of the fileserver, attaching the data drive(s) from the replication target server to the virtual machine produced by VMware Converter, shutting down the physical fileserver, and booting up the virtual one.

Thursday, 12 August 2010

Using VMware vSphere CLI to convert Sparse Disk to Thick


Scenario: VMware virtual machines are being transferred from one hosting provider to another. You are given a desktop SATA disk with the virtual machine files on it, and you upload the files using the vSphere Client to a VMFS datastore connected to one of your ESXi servers. You right-click on the VMX file and import the machine. The disks appear as either 0 bytes in size or 'Type:' = sparse. You remove the disks and re-add them; they still come up as 'Type:' = sparse, and the virtual machine will not boot

The solution:

Use the VMware vSphere CLI to convert the Sparse Disk to Thick using the following command, then remove the sparse disks from the VM and add the converted disks (keeping SCSI IDs in order):



C:\Program Files (x86)\VMware\VMware vSphere CLI>vmkfstools.pl --server=esxi01 -i /vmfs/volumes/"test store"/DC1/scsi0-0-0-DC1.vmdk /vmfs/volumes/"test store"/DC1/DC1_0.vmdk

Enter username: root
Enter password:

UPDATE: Or you could just Storage vMotion the sparse disk, which has the same effect.
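If several disks need converting, the vmkfstools.pl invocations can be generated in a loop. Below is a minimal sketch, assuming a POSIX shell; the host name, datastore path, and disk names are placeholders taken from the example above, and the commands are only printed (not run) so they can be reviewed first:

```shell
# Sketch: build the vmkfstools.pl clone commands for a list of sparse disks.
# HOST, DS and the disk names are placeholders - substitute your own values.
HOST="esxi01"
DS='/vmfs/volumes/test store/DC1'
CMDS=""
for DISK in scsi0-0-0-DC1 scsi0-0-1-DC1; do
  # -i clones the source disk; the copy is created as a thick disk on VMFS
  CMD="vmkfstools.pl --server=$HOST -i \"$DS/$DISK.vmdk\" \"$DS/${DISK}-thick.vmdk\""
  CMDS="$CMDS$CMD
"
  echo "$CMD"
done
```

Printing the commands first also makes it easy to keep the SCSI IDs straight before removing the sparse disks and attaching the converted ones.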

Saturday, 31 July 2010

LeftHand/Storageworks SAN/iQ NSM Fails to Boot with Non-System disk error

Scenario: A brand new HP Storageworks P4500 G2 network storage module (NSM) fails to boot out of the box.

The module cycles on

Attempting Boot From CD-ROM
Attempting Boot From Hard Drive (C:)
Attempting Boot From NIC

Non-System disk or disk error
Replace and strike any key when ready

Press “F9” key for ROM-Based Setup Utility
Press “F10” key for System Maintenance Menu

System will automatically reboot in 4 seconds

So you check all the usual suspects:

No disks in drives
Boot controller order is correct
RAID controller is seeing 3 logical drives as per usual
And no disks are showing amber or red indicator lights

The only answer is to re-image the NSM, and this is very easy:

1) Dig out the SAN/iQ Quick Recovery CD which came with the NSM and place in the CD-drive

2) Obtain the Feature Key (this can be obtained via the iLO as it will be the NIC MAC address with the lowest numerical value)
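A quick way to derive the Feature Key from a list of the NSM's NIC MAC addresses is to sort them, take the lowest, and strip the colons. A minimal shell sketch (the MAC addresses below are made up for illustration):

```shell
# Sketch: the Feature Key is the numerically lowest NIC MAC address with the
# colons removed (the example MAC addresses below are made up).
MACS="1c:c1:de:02:ca:ed
1c:c1:de:02:ca:ec"
FEATUREKEY=$(printf '%s\n' "$MACS" | sort | head -n1 | tr -d ':' | tr 'a-f' 'A-F')
echo "$FEATUREKEY"
```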

3) Obtain the License Key (if the module is brand new, this is obtained by logging into the HP licensing portal and generating a license with the entitlement number from the entitlement certificate that came with the NSM, plus the Feature Key obtained above; otherwise hopefully a record of it will have been kept; if not, and there is no record of the HP Passport account the License Key was generated with, the only option is to speak to HP Licensing)

4) On a USB key, create a blank file named featurekey_ followed by the MAC address with no colons in it, and inside this file (edit with Notepad or whatever) paste the License Key


File name = featurekey_1CC1DE02CAEC
File contents = 0426-1219-FC96-8958-DF52-145C-8E7C-EF42-2B74-B2CD-AAD0-D231-5D4D-65C2-227B-F177-BB56-6161-CDCB-F17A-689F-2A19-1EC9-8EE4-3EAE-53EE-814A-FCF0-A779-F4C4-2F3E-D99D-AC
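The file above can be created from a shell as below; this is a sketch using the example values from step 4, with a temporary directory standing in for the USB key's path:

```shell
# Sketch: write the license key into a featurekey_<MAC> file. The MAC and
# license values are the examples from step 4; mktemp stands in for the USB key.
USB=$(mktemp -d)
MAC="1CC1DE02CAEC"
LICENSE="0426-1219-FC96-8958-DF52-145C-8E7C-EF42-2B74-B2CD-AAD0-D231-5D4D-65C2-227B-F177-BB56-6161-CDCB-F17A-689F-2A19-1EC9-8EE4-3EAE-53EE-814A-FCF0-A779-F4C4-2F3E-D99D-AC"
printf '%s' "$LICENSE" > "$USB/featurekey_$MAC"
ls "$USB"
```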

5) Then boot the NSM with the Recovery CD and USB key in and it will go away and re-image itself

If there is data on the module that needs to be recovered, then the above would not be an option, as it wipes the module and puts a new image on it.

Note 1: You can put the USB key in later, after booting off the CD, when prompted; also, if you do not have a USB key to hand, you can manually input the License Key.

Note 2: The above scenario is very unlikely, and has happened to me only once in many LeftHand/Storageworks SAN/iQ installs.

Note 3: This procedure was done for an NSM with SAN/iQ 8.5 Recovery CD.

Tuesday, 20 July 2010

How to Configure iSCSI Multipathing on ESX4i

Note: This is pretty much taken from an existing guide, but with a few pictures and small edits/additions.
1. Open VMware vCenter, or point the vSphere Client at the ESX host to be configured.

2. Select Host > Configuration > Networking.

3. Click Add Networking.

4. Select “Virtual Machine” to create a new vSwitch for iSCSI connectivity > Next.

5. Select “Create a virtual switch” and check the box next to the VMNICs for iSCSI connectivity > Next.

6. Type a name for the new virtual switch (e.g. iSCSI) > Next.

7. Click Finish.

(Skip steps 9 to 13 if you don't want to add a Service Console port to your iSCSI vSwitch)

8. Scroll down and click Properties on the newly created vSwitch.

9. Click the Add button to add a Service Console port.

10. Select “Service Console” > Next.

11. Type a name for the new service console port (e.g. iSCSI Service Console) > Next.

12. Select “Use the following IP settings” and type an IP address on the iSCSI network > Next.

13. Click Finish.

14. Click the Add button to add the first iSCSI VMkernel port.

15. Select “VMkernel” > Next.

16. Type a name for the first new VMkernel port (e.g. iSCSI VMkernel 1) > Next.

17. Select “Use the following IP settings” and type an IP address on the iSCSI network > Next.

18. Click Finish.

19. Repeat steps 14 to 18 to create additional VMkernel ports, one for each physical network adapter (VMNIC) to be used for iSCSI.

20. Select the first VMkernel port created and click Edit.

21. Click the NIC Teaming tab and select “Override vSwitch failover order”.

22. Designate only one active adapter and move the remaining adapters to the Unused Adapters category.

23. Click OK.

24. Repeat steps 20 to 23 to map each VMkernel port to only one active adapter. Only one active adapter can exist per VMkernel port for multipathing to function properly.

25. Make sure the iSCSI Initiator is enabled in

Host > Configuration > Storage Adapters > iSCSI Software Adapter

(otherwise you’ll get an “Invalid adapter provided” error when trying to bind the ports to the iSCSI hba)

26. Identify the port names for each VMkernel port created (e.g. vmk0, vmk1).

27. Using the vSphere CLI (NOT vSphere PowerCLI), connect each VMkernel port created to the software iSCSI initiator using the esxcli command:

esxcli --server=x.x.x.x swiscsi nic add -n <vmk port> -d <vmhba> (where x.x.x.x is the IP address of the ESXi host you are configuring)

example: esxcli --server=x.x.x.x swiscsi nic add -n vmk0 -d vmhba33

28. Repeat the esxcli command until all VMkernel ports have been connected to the software iSCSI initiator.
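Steps 27 and 28 can be scripted. Below is a sketch that prints one binding command per VMkernel port, so the output can be reviewed before running; the host IP, vmk names, and vmhba number are placeholders for your environment:

```shell
# Sketch: generate the swiscsi binding command for each VMkernel port.
# HOST, HBA and the vmk names are placeholders - substitute your own values.
HOST="x.x.x.x"
HBA="vmhba33"
BINDS=""
for VMK in vmk1 vmk2; do
  LINE="esxcli --server=$HOST swiscsi nic add -n $VMK -d $HBA"
  BINDS="$BINDS$LINE
"
  echo "$LINE"
done
```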

29. Verify the VMkernel port connections by running the esxcli command:

esxcli --server=x.x.x.x swiscsi nic list -d vmhba33

30. In Host > Configuration > iSCSI Software Adapter > Details > Devices > Manage Paths, choose Path Selection = Round Robin (VMware) and click Change.

(This needs to be done for every LUN connected, and also when adding in new LUNs.)
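Step 30 can also be done from the vSphere CLI using esxcli's nmp namespace (vSphere 4.x syntax). Below is a sketch that prints one setpolicy command per LUN; the host IP and naa device IDs are made-up placeholders (your own device IDs can be found with esxcli nmp device list):

```shell
# Sketch: print an nmp setpolicy command per LUN (vSphere 4.x esxcli syntax).
# HOST and the naa device IDs are made-up placeholders - substitute your own.
HOST="x.x.x.x"
RRCMDS=""
for NAA in naa.600c0ff000000001 naa.600c0ff000000002; do
  CMD="esxcli --server=$HOST nmp device setpolicy --device $NAA --psp VMW_PSP_RR"
  RRCMDS="$RRCMDS$CMD
"
  echo "$CMD"
done
```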

Multipathing configuration is now complete!

Note 1: vmhba33 is not always the iSCSI software initiator; the vmhba number can be found from the storage adapters tab in the vSphere client.