Sunday, 23 October 2011

Experiment: Is it Possible to Shrink Back Down a Thin-Provisioned LUN in SAN/iQ 9.0?

Note 1: See Appendix below for more on VMware's vStorage APIs
Note 2: SAN/iQ 9.5 was released 7 October 2011 (see http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03047822 )

Prologue: A problem with thin-provisioned LUNs is that, when data is removed from inside a LUN, the LUN does not shrink down correspondingly (unless using really high-end storage like 3PAR.)

Question: SAN/iQ 9.0 supports VMware's vStorage APIs for Array Integration (VAAI) with vSphere 4 and above, so a quick experiment was devised: if a thin-provisioned SAN/iQ 9.0 LUN is grown by adding data (via a Storage vMotion), the data is then deleted, and the volume is converted from thin to thick and then back from thick to thin, does it shrink back down?

Answer: No (the expected result)

Experimental Results

1: Start with unpopulated thin-provisioned volume →
TEST LUN created of size 25GB

2: Create a datastore →
Datastore details below (this is VMFS 3.46 and notice Hardware Acceleration = Supported)

3: Populate the datastore with data (here a small XP VM was storage vMotioned) →
TEST LUN has grown to 11.92GB consumed space

4: Then delete the data from the datastore →
TEST LUN consumed space remains at 11.92GB

5: Convert the LUN to full provisioned →
TEST LUN now consuming 25GB space (and reporting only 13.09GB reclaimable)

6: Convert the LUN back to thin →
TEST LUN consumed space returns to 11.92GB - no shrink back down
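
To repeat the vSphere side of steps 3 and 4 from the command line, here is a minimal PowerCLI sketch (hedged: the vCenter, VM, and datastore names are hypothetical, and the array-side consumed space still has to be read from the SAN/iQ Centralized Management Console):

Connect-VIServer vcenter01                                          # connect to vCenter (hypothetical name)
Move-VM -VM (Get-VM "XP-VM") -Datastore (Get-Datastore "TEST")      # step 3: Storage vMotion a small VM onto the test LUN
Get-Datastore "TEST" | Select-Object Name, CapacityMB, FreeSpaceMB  # the VMFS-side view of the datastore
Remove-VM -VM (Get-VM "XP-VM") -DeletePermanently -Confirm:$false   # step 4: delete the data from the datastore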

Appendix: VMware's vStorage APIs

Full Copy: This feature delivers hardware-accelerated copying of data by performing all duplication and migration operations on the array. Customers can achieve considerably faster data movement via VMware Storage vMotion, and virtual machine creation and deployment from templates and virtual machine cloning.

Block Zero: This feature delivers hardware-accelerated zero initialization, greatly reducing common input/output tasks such as creating new virtual machines. This feature is especially beneficial when creating fault-tolerant (FT)-enabled virtual machines or when performing routine application-level Block Zeroing.

Hardware-assisted locking: This feature delivers improved locking controls on the Virtual Machine File System (VMFS), subsequently allowing far more virtual machines per datastore and shortening simultaneous virtual machine boot times. This improves performance of common tasks such as virtual machine migration, powering many virtual machines on or off, and creating a virtual machine from template.

Upgrading Citrix Access Gateway VPX Walkthrough

Credit: This is an edit of my colleague Lupa Mooncak's document, created in liaison with Alfredo and published with permission. Thanks again Lupa and Alfredo!

*This is specifically for upgrading from version 5.0.2 to 5.0.3, but the steps should apply to other versions too

1: Take a VMware/Citrix snapshot of the CAG VPX virtual machine (see the PowerCLI sketch after this list)
2: Log in to the Citrix Access Gateway Management web page

*Note: the path after the IP address is 'lp' - lowercase 'ell' then 'pee'

3: Click on the Snapshots tab

i: In the 'Software Releases' section highlight your current version
ii: Click on Create to the right of the 'Snapshots' section (this will create an internal snapshot)
iii: Then export the snapshot made above using the Export button (this will act as a backup just in case)
iv: To the right of the 'Software Releases' section, click on Import to import the appliance upgrade
v: In the 'Software Releases' section click on Migrate, which will upgrade the appliance keeping existing settings
vi: Either accept the prompt asking for reboot, or reboot when convenient

4: Once rebooted, log in to the Citrix Access Gateway Management web page to check the upgrade has applied
*this would also be a good time to take another internal snapshot as a roll-back point

5: Give it 24 hours to see if any issues are reported; if there are none, commit the VMware/Citrix snapshot
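
As referenced in step 1, the VMware snapshot can also be taken and later committed from PowerCLI. A minimal sketch, assuming the virtual machine is named "CAG-VPX" (a hypothetical name):

Connect-VIServer vcenter01                                      # hypothetical vCenter name
New-Snapshot -VM (Get-VM "CAG-VPX") -Name "Pre-5.0.3-upgrade"   # step 1: the roll-back point
Get-VM "CAG-VPX" | Get-Snapshot -Name "Pre-5.0.3-upgrade" | Remove-Snapshot -Confirm:$false   # step 5: committing deletes the snapshot, consolidating it into the base disks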

Sunday, 16 October 2011

VCP510 (VCP on vSphere 5) Exam Cram Notes Part 3/3: Exam Passed!

After passing the exam at the first attempt in early October 2011, here are a few notes on the exam (without infringing the confidentiality agreement) that might help you:

i: The exam is 85 questions in 90 minutes (only slightly more time than one minute per question) so you need to know your stuff and answer the questions efficiently.
ii: The questions are all multiple choice.
iii: The passing score is 300 out of a maximum possible score of 500.

Some useful resources

i: Notes Part 1 → well worth going over and understanding, and since the notes fit nicely on a smartphone screen, it is convenient to study them whilst out and about.

ii: Notes Part 2: vSphere 5 Configuration Maximums → also worth going over, but it is unlikely you will see many questions on these in the exam – perhaps only 1 or 2 – then again, those 1 or 2 questions can be the difference between a pass and a fail! Do not spend too much time trying to remember these; the time can be better spent elsewhere.

iii: Lab work → it is highly recommended to have a vSphere 5 lab setup and, where possible, practice setting up, configuring and administering all features. Especially have a good play around with / get a good understanding of:

sDRS
Storage Profiles
vSphere Distributed Switch

*Remember that vSphere runs nicely inside VMware Workstation / Oracle VirtualBox

iv: Web resources →


v: Experience → if short on experience, consider going on a VMware vSphere 5 course (a prerequisite for taking the exam if you do not already hold a VCP4 – and even VCP4s will need to take the What's New course after 29th Feb 2012), more lab work, and a well-written book.

vi: Time → always the hardest resource to get enough of!

vii: A bit of luck!

Final Comment

Thursday, 6 October 2011

VCP510 (VCP on vSphere 5) Exam Cram Notes Part 2/3: vSphere 5 Configuration Maximums

Sub-sections:
1: vSphere 5 Compute Configuration Maximums
2: vSphere 5 Memory Configuration Maximums
3: vSphere 5 Networking Configuration Maximums
4: vSphere 5 Orchestrator Configuration Maximums
5: vSphere 5 Storage Configuration Maximums
6: vSphere 5 Update Manager Configuration Maximums
7: vSphere 5 vCenter Server, and Cluster and Resource Pool Configuration Maximums
8: vSphere 5 Virtual Machine Configuration Maximums

1: vSphere 5 Compute Configuration Maximums
1 = Maximum number of virtual CPUs per Fault Tolerance protected virtual machine
4 = Maximum Fault Tolerance protected virtual machines per ESXi host
16 = Maximum number of virtual disks per Fault Tolerance protected virtual machine
25 = Maximum virtual CPUs per core
160 = Maximum logical CPUs per host
512 = Maximum virtual machines per host
2048 = Maximum virtual CPUs per host
64GB = Maximum amount of RAM per Fault Tolerance protected virtual machine

2: vSphere 5 Memory Configuration Maximums
1 = Maximum number of swap files per virtual machine
1TB = Maximum swap file size
2TB = Maximum RAM per host

3: vSphere 5 Networking Configuration Maximums
2 = Maximum forcedeth 1Gb Ethernet ports (NVIDIA) per host
4 = Maximum concurrent vMotion operations per host (1Gb/s network)
8 = Maximum concurrent vMotion operations per host (10Gb/s network)
8 = Maximum VMDirectPath PCI/PCIe devices per host
8 = Maximum nx_nic 10Gb Ethernet ports (NetXen) per host
8 = Maximum ixgbe 10Gb Ethernet ports (Intel) per host
8 = Maximum be2net 10Gb Ethernet ports (Emulex) per host
8 = Maximum bnx2x 10Gb Ethernet ports (Broadcom) per host
16 = Maximum bnx2 1Gb Ethernet ports (Broadcom) per host
16 = Maximum igb 1Gb Ethernet ports (Intel) per host
24 = Maximum e1000e 1Gb Ethernet ports (Intel PCI-e) per host
32 = Maximum tg3 1Gb Ethernet ports (Broadcom) per host
32 = Maximum e1000 1Gb Ethernet ports (Intel PCI-x) per host
32 = Maximum distributed switches (VDS) per vCenter
256 = Maximum Port Groups per Standard Switch (VSS)
256 = Maximum ephemeral port groups per vCenter
350 = Maximum hosts per VDS
1016 = Maximum active ports per host (VSS and VDS ports)
4088 = Maximum virtual network switch creation ports per standard switch (VSS)
4096 = Maximum total virtual network switch ports per host (VSS and VDS ports)
5000 = Maximum static port groups per vCenter
30000 = Maximum distributed virtual network switch ports per vCenter
6x10Gb + 4x1Gb = Maximum combination of 10Gb and 1Gb Ethernet ports per host

4: vSphere 5 Orchestrator Configuration Maximums
10 = Maximum vCenter Server systems connected to vCenter Orchestrator
100 = Maximum hosts connected to vCenter Orchestrator
150 = Maximum concurrent running workflows
15000 = Maximum virtual machines connected to vCenter Orchestrator

5: vSphere 5 Storage Configuration Maximums
2 = Maximum concurrent Storage vMotion operations per host
4 = Maximum Qlogic 1Gb iSCSI HBA initiator ports per server
4 = Maximum Broadcom 1Gb iSCSI HBA initiator ports per server
4 = Maximum Broadcom 10Gb iSCSI HBA initiator ports per server
4 = Maximum software FCoE adapters
8 = Maximum non-vMotion provisioning operations per host
8 = Maximum concurrent Storage vMotion operations per datastore
8 = Maximum number of paths to a LUN (software iSCSI and hardware iSCSI)
8 = Maximum NICs that can be associated or port bound with the software iSCSI stack per server
8 = Maximum number of FC HBA's of any type
10 = Maximum VASA (vSphere storage APIs – Storage Awareness) storage providers
16 = Maximum FC HBA ports
32 = Maximum number of paths to a FC LUN
32 = Maximum datastores per datastore cluster
62 = Maximum Qlogic iSCSI: static targets per adapter port
64 = Maximum Qlogic iSCSI: dynamic targets per adapter port
64 = Maximum hosts per VMFS volume
64 = Maximum Broadcom 10Gb iSCSI dynamic targets per adapter port
128 = Maximum Broadcom 10Gb iSCSI static targets per adapter port
128 = Maximum concurrent vMotion operations per datastore
255 = Maximum FC LUN Ids
256 = Maximum VMFS volumes per host
256 = Maximum datastores per vCenter
256 = Maximum targets per FC HBA
256 = Maximum iSCSI LUNs per host
256 = Maximum FC LUNs per host
256 = Maximum NFS mounts per host
256 = Maximum software iSCSI targets
1024 = Maximum number of total iSCSI paths on a server
1024 = Maximum number of total FC paths on a server
2048 = Maximum Powered-On virtual machines per VMFS volume
2048 = Maximum virtual disks per host
9000 = Maximum virtual disks per datastore cluster
30'720 = Maximum files per VMFS-3 volume
130'690 = Maximum files per VMFS-5 volume
1MB = Maximum VMFS-5 block size (for a VMFS-5 volume not upgraded from VMFS-3)
8MB = Maximum VMFS-3 block size
256GB = Maximum file size (1MB VMFS-3 block size)
512GB = Maximum file size (2MB VMFS-3 block size)
1TB = Maximum file size (4MB VMFS-3 block size)
2TB – 512 bytes = Maximum file size (8MB VMFS-3 block size)
2TB – 512 bytes = Maximum VMFS-3 RDM size
2TB – 512 bytes = Maximum VMFS-5 RDM size (virtual compatibility)
64TB = Maximum VMFS-3 volume size
64TB = Maximum FC LUN size
64TB = Maximum VMFS-5 RDM size (physical compatibility)
64TB = Maximum VMFS-5 volume size

6: vSphere 5 Update Manager Configuration Maximums
1 = Maximum ESXi host upgrades per cluster
24 = Maximum VMware tools upgrades per ESXi host
24 = Maximum virtual machines hardware upgrades per host
70 = Maximum VUM Cisco VDS updates and deployments
71 = Maximum ESXi host remediations per VUM server
71 = Maximum ESXi host upgrades per VUM server
75 = Maximum virtual machines hardware scans per VUM server
75 = Maximum virtual machine hardware upgrades per VUM server
75 = Maximum VMware Tools scans per VUM server
75 = Maximum VMware Tools upgrades per VUM server
75 = Maximum ESXi host scans per VUM server
90 = Maximum VMware Tools scans per ESXi host
90 = Maximum virtual machines hardware scans per host
1000 = Maximum VUM host scans in a single vCenter server
10000 = Maximum VUM virtual machines scans in a single vCenter server

7: vSphere 5 vCenter Server, and Cluster and Resource Pool Configuration Maximums
100% = Maximum failover as percentage of cluster
8 = Maximum resource pool tree depth
32 = Maximum concurrent host HA failover
32 = Maximum hosts per cluster
512 = Maximum virtual machines per host
1024 = Maximum children per resource pool
1600 = Maximum resource pools per host
1600 = Maximum resource pools per cluster
3000 = Maximum virtual machines per cluster

8: vSphere 5 Virtual Machine Configuration Maximums
1 = Maximum IDE controllers per virtual machine
1 = Maximum USB 3.0 devices per virtual machine
1 = Maximum USB controllers per virtual machine
1 = Maximum Floppy controllers per virtual machine
2 = Maximum Floppy devices per virtual machine
3 = Maximum Parallel ports per virtual machine
4 = Maximum IDE devices per virtual machine
4 = Maximum Virtual SCSI adapters per virtual machine
4 = Maximum Serial ports per virtual machine
4 = Maximum VMDirectPath PCI/PCIe devices per virtual machine (or 6 if 2 of them are Teradici devices)
10 = Maximum Virtual NICs per virtual machine
15 = Maximum Virtual SCSI targets per virtual SCSI adapter
20 = Maximum xHCI USB controllers per virtual machine
20 = Maximum USB devices connected to a virtual machine
32 = Maximum Virtual CPUs per virtual machine (Virtual SMP)
40 = Maximum concurrent remote console connections to a virtual machine
60 = Maximum Virtual SCSI targets per virtual machine
60 = Maximum Virtual Disks per virtual machine (PVSCSI)
128MB = Maximum Video memory per virtual machine
1TB = Maximum Virtual Machine swap file size
1TB = Maximum RAM per virtual machine
2TB – 512B = Maximum virtual machine Disk Size
*Credits, sources, and useful links:
VMware's Official vSphere 5 Configuration Maximums (PDF)
TIP: Can use these notes to study for VCP510 (VCP 5) on iPhone / Android / Smartphone internet browser

VCP510 (VCP on vSphere 5) Exam Cram Notes Part 1/3

Sub-sections

1: Auto Deploy
2: HA
3a: Licensing
3b: Licensing – Entitlements per CPU license
3c: Licensing - Features
4: Memory
5: Miscellaneous
6a: Networking - General
6b: Networking - vSphere Distributed Switch
6c: Networking - Ports
7: Storage
8: Update Manager
9a: vCenter
9b: vCenter Server Sizing

1: Auto Deploy
vSphere Auto Deploy installs the ESXi image directly into Host memory.
By default, hosts deployed with VMware Auto Deploy store logs in memory.
When deploying hosts with VMware Auto Deploy, Host Profiles is the recommended method to configure ESXi once it has been installed.
Benefits of Auto Deploy = decouples the VMware ESXi host from the physical server and eliminates the boot disk, eliminates configuration drift, simplifies patching and updating.
VMware Auto Deploy Installation = the quickest possible way to deploy > 10 ESXi hosts.
Interactive Installation = recommended install method to evaluate vSphere 5 on a small ESXi host setup.
The vSphere PowerCLI Image Builder cmdlets define the image profiles used with Auto Deploy (see the sketch after this list).
3 ways that vSphere Auto Deploy can access the answer file: 1) CIFS 2) SFTP 3) HTTP
Note: 6 ways ESXi Scripted installations and upgrades can access installation or upgrade script (kickstart file): 1) CD/DVD 2) USB Flash Drive 3) NFS 4) HTTP 5) HTTPS 6) FTP
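
A minimal Image Builder sketch in PowerCLI, assuming an ESXi 5.0 offline bundle has been downloaded locally (the file paths and profile names below are hypothetical):

Add-EsxSoftwareDepot C:\depot\ESXi500-offline-bundle.zip            # load a software depot
Get-EsxImageProfile | Select-Object Name, Vendor                    # list the image profiles in the depot
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "Lab-Profile" -Vendor "lab"   # clone a profile to customize
Export-EsxImageProfile -ImageProfile "Lab-Profile" -ExportToIso -FilePath C:\depot\Lab-Profile.iso   # export as an installable ISO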

2: HA
In a HA cluster after an initial election process, host is either Master or Slave.
vCenter may communicate with the Slaves in certain situations, such as:
i) Scanning for an existing Master.
ii) If the Master states that it cannot reach a Slave, vCenter will contact the Slave to determine why.
iii) When powering on a FT Secondary VM.
iv) When host is reported isolated or partitioned.
A HA Slot = a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster.
The 4 VM Restart Priority options available on a HA cluster = Disabled, Low, Medium, High.
The three Host Isolation Response options available on a HA Cluster = Shut down, Power off, Leave powered on.
If the 'Admission Control' option 'Disable: Allow VM power on operations that violate availability constraints' is selected, then "if a cluster has insufficient failover capacity, vSphere HA can still perform failovers and it uses the VM Restart Priority setting to determine which virtual machines to power on first."

3a: Licensing
VMware vSphere can be evaluated for 60 days prior to purchase
Free vSphere Hypervisor is allowed 32GB Physical RAM per host
If more vRAM is allocated than licensed for, new VMs cannot be powered on

3b: Licensing - Entitlements per CPU license
32GB vRAM, 8-way vCPU for Essentials, Essentials Plus, Standard
64GB vRAM, 8-way vCPU for Enterprise
96GB vRAM, 32-way vCPU for Enterprise Plus

3c: Licensing - Features
Essentials Plus: High Availability, Data Recovery, vMotion
Standard: as above
Enterprise: + Virtual Serial Port Concentrator, Hot Add, vShield Zones, Fault Tolerance, Storage APIs for Array Integration, Storage vMotion, Distributed Resource Scheduler & Distributed Power Management
Enterprise Plus: + Distributed Switch, I/O Controls (Network and Storage), Host Profiles, Auto deploy, Profile-Driven Storage, Storage DRS

4: Memory
VMX Swap can be used to reduce virtual machine memory overhead.
Memory allocation minus memory limit = the amount of virtual machine memory that will always be composed of disk pages.
(v)NUMA = (virtual) Non-Uniform Memory Access (a computer memory design used in multiprocessors where the memory access time depends on the memory location relative to a processor.)
vNUMA is enabled by default when a virtual machine has more than 8 vCPUs.
Disabling transparent memory page sharing increases resource contention.
For maximum performance benefits of vNUMA, it is recommended to make sure your clusters are composed entirely of hosts with a matching NUMA architecture.
Memory reservation = the amount of physical memory that is guaranteed to the VM.
Resource Allocation tab definitions: Host memory usage = amount of physical host memory allocated to a guest (includes virtualisation overhead.)
Resource Allocation tab definitions: Guest memory usage = amount of memory actively used by a guest operating system and its applications.
3 metrics to diagnose a memory bottleneck at the ESXi host level: MEMCTL, SWAP, ZIP.
Virtual Machine Memory Overhead is determined by Configured Memory and Number of vCPUs.

5: Miscellaneous
New features made available with vSphere 5 = sDRS, VSA, vSphere Web Client, SplitRX...
ESX is no longer available with vSphere 5
Via the Direct Console, it is possible to: (F12) Shut down/Restart host, (F2) Customize System/View Logs - which includes: Configure/Restart/Test Management Network (includes configure host IP, DNS), View System Logs, Troubleshooting Mode Options > Restart Management Agents.
Via the Direct Console, it is NOT possible to: Enter host into Maintenance Mode
vMotion has been improved by allowing multiple vMotion vmknics, allowing for more and faster vMotion operations
Image Builder is used to create ESXi installation images with a customized set of updates, patches, and drivers
Packaging format used by the VMware ESXi Image Builder = VIB
By default, the Administrator role at the ESX Host Server level is assigned to root and vpxuser
Distributed Power Management (DPM) requires Wake On LAN (WOL) technology on host NIC
ESXi 5.0 requires CPUs that support the LAHF and SAHF instructions.
The three default roles provided on an ESXi host = No Access, Read Only, Administrator
ESXi Dump Collector is a new feature of vSphere 5
Automation Levels on a DRS Cluster = Manual, Partially Automated, Fully Automated
ESXi 5.0 introduces Virtual Hardware VM Version 8
To disable alarm actions for a DRS cluster while maintenance is taking place: Right-Click the DRS cluster, select Alarm → 'Disable Alarm Actions.'
3 valid objects to place in a vApp: Resource pools, vApps, Virtual Machines.
Required settings for a kickstart ESXi host upgrade script file: root password & IP address.
vMotion cannot be used unless RDM boot mapping files are placed on the same datastore & storage vMotion cannot be used with RDMs using NPIV.
Conditions that would stop virtual machines restarting in the event of a host failure in a HA cluster: 1) An anti-affinity rule configured where restarting the VMs would place them on the same host. 2) The virtual machines on the failed host are HA disabled.
The VMkernel is secured by the features – memory hardening and kernel module integrity.
VMware vCloud Director pools virtual infrastructure resources in datacenters and delivers them to users as a catalog-based service.
Two ways to enable remote tech support mode (SSH) on an ESXi 5.x host: 1) Through the Security Profile pane of the Configuration tab in the vSphere Client. 2) Through the Direct Console User Interface (DCUI.)
%RDY metric is checked to determine if CPU contention exists on an ESXi 5.x host (Note: %RDY = Percentage of time the resource pool, virtual machine, or world was ready to run, but was not provided CPU resources on which to execute.)
Quiescing virtual machine snapshot operation: 1) Requires VMware tools. 2) Ensures that the snapshot includes a power state. 3) May alter the behaviour of applications within the virtual machine. 4) Ensures all pending disk I/O operations are written to disk.
Image Profile Acceptance Levels: Community Supported < Partner Supported < VMware Accepted < VMware Certified (where VMware Certified Acceptance Level has the most stringent requirements.)
Each VMware Data Recovery appliance can have no more than two dedupe destinations, and it is recommended that each dedupe destination is no more than 1TB in size when using virtual disks, and no more than 500GB in size when using a CIFS network share.

6a: Networking - General
ESX 4.X to ESXi 5.0 upgrade removes the "Service Console" port group because ESXi 5.0 has no Service Console.
SplitRX can be used to increase network throughput to virtual machines.
The default security policy on a Port Group = Reject, Accept, Accept (Promiscuous Mode, MAC Address Changes, Forged Transmits.)
ESX 4.X to ESXi 5.0 upgrade process migrates all vswif interfaces to vmk interfaces.
SSH configuration is not migrated for ESX 4.x hosts or ESXi 4.0 hosts (SSH access is disabled during the migration or upgrade process.)
Custom ports that were opened by using the ESX/ESXi 4.1 esxcfg-firewall command do not remain open after upgrade to ESXi 5.0.
A firewall has been added to ESXi 5.0 to improve security.
vSphere Standard Switch Traffic shaping settings: Status – Disabled/Enabled, Average Bandwidth (Kbit/sec), Peak Bandwidth (Kbit/sec), Burst Size (Kbytes.)
To relieve a network bottleneck caused by a VM with occasional high outbound network activity, apply traffic shaping to the port group that contains the virtual machine.
NIC Teaming policy: Notify Switches → the physical switch is notified when the location of a virtual NIC changes.
A remote SSH connection to a newly installed ESXi 5.x host fails, possible causes: 1) The SSH service is disabled on the host by default. 2) The ESXi firewall blocks the SSH protocol by default.
Forged transmits: allows packets to be created by a virtual machine with a different source MAC address.
To verify all IP storage VMkernel interfaces are configured for jumbo frames, either: 1) esxcli network ip interface list. 2) View the VMkernel interface properties in the vSphere Client. (See also the PowerCLI sketch at the end of this sub-section.)
Map view indicates vMotion is disabled => vMotion has not been enabled on a VMkernel port group.
ESXi Host → Configuration Tab → Network Adapters : Headings = Device, Speed, Configured, Switch, MAC Address, Observed IP Ranges, Wake on LAN Supported.
If you create a portgroup and assign it to VLAN 4095, it will have access to all the VLANs that are exposed to the physical NIC (a special driver is needed within the VM that can properly tag VLANs.)
By default, new adapters are active for all policies, and new adapters carry traffic for the standard switch and its port group unless you specify otherwise.
In the IP hash load balancing policy, all physical switch ports connected to the active uplinks must be in EtherChannel mode (Note: if the physical switch ports are not in EtherChannel mode, new uplinks will be considered active/active but will not participate in the active NIC team until configured correctly on the physical switch.)
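
As referenced above, a quick PowerCLI alternative for the jumbo frames check - a hedged sketch, assuming a host named esx01 (hypothetical):

Get-VMHost esx01 | Get-VMHostNetworkAdapter -VMKernel | Select-Object Name, IP, Mtu   # an MTU of 9000 indicates jumbo frames
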
6b: Networking - vSphere Distributed Switch
The Dynamic Binding dvPort binding type has been deprecated in vSphere 5 (leaving Ephemeral Binding and Static Binding types.)
Network Load Balancing policies for vSphere Distributed Switch = Route based on originating virtual port; Route based on IP hash; Route based on source MAC hash; Route based on physical NIC load; Use explicit failover order.
Requirements for a collector virtual machine to analyze traffic from a vSphere Distributed Switch: 1) The source and target virtual machines must both be on a vNetwork Distributed Switch, but can be on any vDS in the datacenter. 2) The distributed port group must have NetFlow enabled.
Two methods to migrate a virtual machine from a vSphere Standard Switch (VSS) to a vSphere Distributed Switch (VDS): 1) Migrate the port group containing the virtual machine from a vNetwork Standard Switch using the Migrate Virtual Machine Networking option. 2) Edit the Network Adapter settings for the virtual machine and select a dvPort group from the list.
Features only available when using a vSphere Distributed Switch: 1) NetFlow monitoring. 2) Network I/O control. 3) Egress and ingress traffic shaping.
New feature: vSphere Distributed Switch - improves visibility of virtual-machine traffic through NetFlow and enhances monitoring and troubleshooting through Switched Port Analyzer (SPAN) and Link Layer Discovery Protocol (LLDP) support.

6c: Networking - Ports
22: SSH operates on port 22
80: vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443
389: This port must be open on the local and all remote instances of vCenter Server. This is the LDAP port number for the Directory Services for the vCenter Server group. The vCenter Server system needs to bind to port 389, even if you are not joining this vCenter Server instance to a Linked Mode group.
443: The default port that the vCenter Server system uses to listen for connections from the vSphere Client. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients.
636: For vCenter Server Linked Mode, this is the SSL port of the local instance.
902: The default port that the vCenter Server system uses to send data to managed hosts. Managed hosts also send a regular heartbeat over UDP port 902 to the vCenter Server system. This port must NOT be blocked by firewalls between the server and the hosts or between hosts. Also must NOT be blocked between the vSphere Client and the hosts. The vSphere Client uses this port to display virtual machine consoles.
8080: Web Services HTTP. Used for the VMware VirtualCenter Management Web Services.
8182: vSphere HA uses TCP and UDP port 8182 for agent-to-agent communication.
8443: Web Services HTTPS. Used for the VMware VirtualCenter Management Web Services.
10109: vCenter Inventory Service Service Management
10111: vCenter Inventory Service Linked Mode Communication
10443: vCenter Inventory Service HTTPS
60099: Web Service change service notification port

7: Storage
vStorage Thin Provisioning feature provides dynamic allocation of storage capacity
Upgrade from VMFS-3 to VMFS-5 requires no downtime
VMFS-5 is introduced by vSphere 5
The globally unique identifier assigned to each Fibre Channel Port = World Wide Name (WWN)
NFS protocol is used by an ESXi host to communicate with NAS devices
It is now possible in vSphere 5 to Storage vMotion virtual machines that have snapshots
Two iSCSI discovery methods supported by an ESXi host = Static Discovery, and Send Targets
VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size
Shared local storage is not a supported location for a host diagnostic partition
The VMware HCL lists the correct MPP (Multipathing Plug-in) to use with a storage array
To guarantee a certain level of capacity, performance, availability, and redundancy for a virtual machine's storage, use the Profile-Driven Storage feature of vSphere 5
Use sDRS (storage DRS) to ensure storage is utilized evenly
If an ESXi 5.x host is configured to boot from Software iSCSI adapter and the administrator disables the iSCSI Software adapter, then it will be disabled but is re-enabled the next time the host boots up.
An array that supports vStorage APIs for array integration (VAAI) can directly perform → Cloning virtual machines and templates; Migrating virtual machines using Storage vMotion.
VAAI thin provisioning dead space reclamation feature can reclaim blocks on a thin provisioned LUN array: 1) When a virtual machine is migrated to a different datastore. 2) When a virtual disk is deleted.
Manage Paths → can disable path by right-clicking and selecting disable.
A preferred path selection can only be made with Fixed 'Path Selection' types (not possible with 'Round Robin' or 'Most Recently Used' types.) 
Information about a VMFS datastore available via the Storage Views tab includes → Multipathing Status, Space Used, Snapshot Space.
Two benefits of virtual compatibility mode RDMs v physical compatibility mode RDMs: 1) Allows for cloning. 2) Allows for template creation of the related virtual machine.
To uplink a Hardware FCoE Adapter, create a vSphere Standard Switch and add the FCoE Adapter as an uplink.
Three storage I/O control conditions that might trigger the non-VI workload detected on the datastore alarm: 1) The datastore is Storage I/O Control-enabled, but it cannot be fully controlled by Storage I/O Control because of an external workload. This can occur if the Storage I/O Control-enabled datastore is connected to an ESX/ESXi host that does not support Storage I/O Control. 2) The datastore is Storage I/O Control-enabled and one or more of the hosts to which the datastore connects is not managed by vCenter Server. 3) The array is shared with non-vSphere workloads or the array is performing system tasks such as replication.
The software iSCSI Adapter and Dependent Hardware iSCSI Adapter require one or more VMkernel ports.
Unplanned Device Loss in a vSphere 5 environment = A condition where an ESXi host determines a device loss has occurred that was not planned. Performing a storage rescan removes the persistent information related to the device.
To convert a thin provisioned disk to thick, either use the Inflate option in the Datastore Browser, or use Storage vMotion and change the disk type to Thick (see the sketch below).
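
A hedged PowerCLI sketch of the Storage vMotion route, assuming a virtual machine named vm01 and a target datastore DS2 (both hypothetical names):

Get-VM vm01 | Move-VM -Datastore (Get-Datastore DS2) -DiskStorageFormat Thick   # the disks are inflated to thick during the move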

To manage storage placement by using virtual machine profiles with a storage array that supports vSphere Storage APIs (Storage Awareness):
1: Create user-defined storage capabilities.
2: Associate user-defined storage capabilities with datastores.
3: Enable virtual machine storage profiles for a host or cluster.
4: Create virtual machine storage profiles by defining the storage capabilities that an application running on a virtual machine requires.
5: Associate a virtual machine storage profile with the virtual machine files or virtual disks.
6: Verify that virtual machines and virtual disks use datastores that are compliant with their associated virtual machine storage profile.

8: Update Manager
Default host patch baselines included with vSphere Update Manager: Critical Host Patches, Non-Critical Host Patches.
Default VMs/VAs upgrade baselines included with vSphere Update Manager: VMware Tools Upgrade to Match Host, VM Hardware Upgrade to Match Host, VA Upgrade to Latest.
vSphere 5 vCenter Update Manager cannot update virtual machine hardware when running against legacy hosts.
Update Manager can update virtual appliances but cannot update the vCenter Server Appliance.

9a: vCenter
vCenter Server 5 requires a 64-bit DSN.
vCenter Heartbeat product provides high availability for vCenter server.
vCenter requires a valid (internal) domain name system (DNS) registration.
vCenter 4.1 and vCenter 5.0 cannot be joined in Linked Mode.
The VMware vSphere Storage Appliance manager (VSA manager) is installed on the vSphere 5 vCenter Server System.
Optional components that can be installed from the VMware vSphere 5.0 vCenter Installer: Product Installers) vSphere Client, VMware vSphere Web Client (Server), VMware vSphere Update Manager. Support tools) VMware ESXi Dump Collector, VMware Syslog Collector, VMware Auto Deploy, VMware vSphere Authentication Proxy. Utility) vCenter Host Agent Pre-Upgrade Checker.
Predefined vCenter Server roles are: No access, Read-only, Administrator (there are also 6 sample roles.)
To export ESXi 5.x host diagnostic information / logs from a host managed by a vCenter server instance using the vSphere Client: 1) Home → Administration → System Logs → Export System Logs → Source: select the ESXi host → Select System Logs: Select all → Select a Download Location → Finish. 2) Under 'Hosts and Clusters' view select the ESXi host → File → Export → Export System Logs → Select System Logs: Select all → Select a Download Location → Finish.

9b: vCenter Server Sizing
Medium deployment of up to 50 hosts and 500 powered-on VMs: 2 cores, 4GB RAM, 5GB disk
Large deployment of up to 300 hosts and 3000 powered-on VMs: 4 cores, 8GB RAM, 10GB disk
Extra-Large deployment of up to 1000 hosts and 10'000 powered-on VMs: 8 cores, 16GB RAM, 10GB disk
TIP: Can use these notes to study for VCP510 (VCP 5) on iPhone / Android / Smartphone internet browser

Monday, 3 October 2011

Veeam Backup and Replication v5 Install Complete Walkthrough

*UPDATE: See http://cosonok.blogspot.com/2011/12/veeam-backup-and-replication-v6-install.html for Veeam Backup and Replication v6
*These notes are specifically for an installation on Windows 2008 R2 with Veeam Backup and Replication 5.0.2 Enterprise with all components installed, using the pre-packaged SQL Server 2005 Express

Pre-installation: step 1 (Diskpart)
C:\> diskpart
DISKPART> automount disable
DISKPART> exit
*to check the current setting in future, run the automount command on its own
*automount is disabled so that Windows does not automatically assign drive letters to SAN LUNs presented to the Veeam server

Pre-installation: step 2 (Specify a local user to be granted logon as service permissions)
Administrative Tools → Local Security Policy → Local Policies → User Rights Assignments → Find 'Log on as a service' policy and double-click → Add User or Group → Add the user account to be used to install Veeam Backup

Pre-installation: step 3 (required for Veeam Backup Enterprise Manager)
Server Manager → Roles → Add Roles → Check 'Web Server (IIS)' → Accept the default selections, check IIS 6 Management Compatibility and sub components, and check Windows Authentication component → Install

Installation
Download the following ZIP files from Veeam: veeam_backup_5, veeam_backup_ad_air_5, veeam_backup_ex_air_5, veeam_backup_sql_air_5, veeam_backup_un_air_5


Veeam Backup Server Installation
Open the Zip file containing Veeam Backup → Double-click the Veeam Backup Setup executable file → Click Run → Next → Accept terms → Next → Browse for License File → Next → Accept default feature selections for 'Veeam Backup and Replication' and 'Veeam Backup Catalog' (optionally can add in 'PowerShell Snap-in') → Next →


Specify SQL Server Instance (either new SQL 2005 Express or existing) → Next → Specify service account and password → Next → Customize/accept default file locations for 'Guest file system catalog' and 'vPower NFS' → Next → Ready to Install and check 'Create shortcut on desktop' → Install → Finish

Enterprise Manager setup step 1
Open the Zip file containing Veeam Backup → Double-click the Veeam Backup Enterprise Manager executable file → Click Run → Next → Accept license agreement → Next → Browse for License File → Next → Accept default feature selections for 'Enterprise Manager Server' and 'Enterprise Manager Web Site' → Next →


Specify SQL Server Instance → Next → Specify service account and password → Next → Accept Web Site Configuration defaults → Next → Ready to Install → Install → Finish

Enterprise Manager setup step 2
Log on to the Enterprise Manager at https://servername:9443 → Configuration → Backup Servers → Add → In the Backup Server settings dialog enter the computer name or IP address of the backup server and provide user credentials → OK

Install Microsoft Search Server: step 1
Download Microsoft Search Server 2010 Express (suitable for small to medium environments) → Run the searchserverexpress executable → Install software prerequisites → Restart → Installation continues when you log in after the restart → Finish → Run the searchserverexpress executable again →


Install Search Server Express → Accept the license → Standalone → Check 'Run the SharePoint Products Configuration Wizard now' → Close → SharePoint Products Configuration Wizard → Next → Finish

Install Microsoft Search Server: step 2
SharePoint 2010 Central Administration → Application Management → Service Applications: Manage service applications → Search Service Application → Click Default content access account and change to either your Veeam backup service account, or an account with Read access rights to the catalog share and NTFS permissions to the folder backing the share (*can also try granting these permissions to the original default account - NT\Network Service)

Backup Search Installation
Open the Zip file containing Veeam Backup → Double-click the Veeam Backup Search Setup executable file → Click Run → Next → Accept the license agreement → Next → Next → Provide Service Settings → Next → Install → Finish
Open the Enterprise Manager console → Logon → Configuration → Search Servers → Add → Provide DNS name / IP address of Search server, credentials → Choose Server type → OK

U-AIR Installations
1) Open the Zip file containing Veeam Backup AD AIR Setup → Double-click the executable file → Click Run → Next → Accept the license agreement → Next → Feature install: Active Directory Restore → Next → Accept connect parameter defaults → Next → Install → Finish
2) Open the Zip file containing Veeam Backup EX AIR Setup → Double-click the executable file → Click Run → Next → Accept the license agreement → Next → Feature install: Microsoft Exchange Restore → Next → Accept connect parameter defaults → Next → Install → Finish
3) Open the Zip file containing Veeam Backup SQL AIR Setup → Double-click the executable file → Click Run → Next → Accept the license agreement → Next → Feature install: Microsoft SQL Restore → Next → Install → Finish
4) Open the Zip file containing Veeam Backup UN AIR Setup → Double-click the executable file → Click Run → Next → Accept the license agreement → Next → Feature install: Universal Restore → Next → Install → Finish

*Perhaps a bit late to publish these notes with Veeam Backup v6 imminent....

Saturday, 1 October 2011

MSExchange ADAccess Error Event 2915 BESAdmin Over Budget Policy:[Fallback]

Scenario

After a network outage, all systems and services come back online except for the BlackBerry smartphones.
Troubleshooting identifies the following event from the Application log of the Exchange 2010 server:

Source: MSExchange ADAccess
Event ID: 2915
Process Microsoft.Exchange.RpcClientAccess.Service.exe (PID=...). User '...BESAdmin~RCA~false' has gone over budget '...' times for component 'RCA' within a one minute period. Info: 'Policy:[Fallback], Parts:MaxConcurrency:...;'. Threshold value: '100'.

Resolution

1: Restart the Exchange 2010 server which holds the BESAdmin mailbox (*See Note 1 below)
2: Restart the BlackBerry Controller service on the BES Server

*Note 1: Potentially restarting the Microsoft Exchange RPC Client Access service on each CAS server may have been sufficient (please comment if you know the answer to this.) UPDATE: received a comment from Anonymous confirming that this is sufficient
*Note 2: Also works in a DAG environment; restart (assuming database copy statuses are either Mounted or Healthy) whichever Exchange server holds the mounted database copy containing the BESAdmin mailbox (the sketch below shows how to find it)
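
A hedged Exchange Management Shell sketch for locating that server (the database name passed to the second cmdlet is whatever the first one returns - "DB01" here is hypothetical):

Get-Mailbox BESAdmin | Select-Object Database                                   # which database holds the BESAdmin mailbox
Get-MailboxDatabaseCopyStatus "DB01" | Where-Object {$_.Status -eq "Mounted"}   # which DAG member has that database mounted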

Explanation


"The throttling framework is intended to protect Exchange resources, so if it is going to "fail", it needs to do so in a safe and predictable way. … When ... tries to load policy … it fails. ... it fails along a fallback path. If the non-default policy is corrupt or missing, it will first fall back to the default throttling policy for the organization in question. … then it falls back to a special policy defined in code called the "fallback policy". … this policy is embedded in the Exchange assemblies …"

In the above scenario, it appears that – due to the network outage – there was trouble contacting a domain controller, trouble reading the BESPolicy throttling policy, and trouble reading the default throttling policy, hence Exchange fell back to the fallback throttling policy which is hard coded in Exchange 2010 (even though the Get-ThrottlingPolicy cmdlet indicated the BESAdmin mailbox was using the correct policy.) Recreating the BESPolicy from scratch and re-applying it to the BESAdmin mailbox did not fix the issue, nor did a restart of the BES 5.0 server.

Further Notes

The above issue was seen with an Exchange 2010 SP1 DAG and BES 5.0 setup.
*For configuring Exchange 2010 BESPolicy throttling policy correctly, see:
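
For reference, the commonly documented Exchange 2010 SP1 cmdlets for this look roughly like the following - a hedged sketch, so check the current RIM/Microsoft guidance for the authoritative parameter set:

New-ThrottlingPolicy BESPolicy                                   # create a dedicated throttling policy for BES
Set-ThrottlingPolicy BESPolicy -RCAMaxConcurrency $null -RCAPercentTimeInAD $null -RCAPercentTimeInCAS $null -RCAPercentTimeInMailboxRPC $null   # $null = unlimited in SP1
Set-Mailbox BESAdmin -ThrottlingPolicy BESPolicy                 # apply the policy to the BESAdmin mailbox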