Sub-sections
1: Auto Deploy
2: HA
3a: Licensing
3b: Licensing – Entitlements per CPU license
3c: Licensing - Features
4: Memory
5: Miscellaneous
6a: Networking - General
6b: Networking - vSphere Distributed Switch
6c: Networking - Ports
7: Storage
8: Update Manager
9a: vCenter
9b: vCenter Server Sizing
1: Auto Deploy
vSphere Auto Deploy installs the ESXi image directly into Host memory.
By default, hosts deployed with VMware Auto Deploy store logs in memory.
When deploying hosts with VMware Auto Deploy, Host Profiles is the recommended method to configure ESXi once it has been installed.
Benefits of Auto Deploy = decouples the VMware ESXi host from the physical server and eliminates the boot disk, eliminates configuration drift, and simplifies patching and updating.
VMware Auto Deploy Installation = the quickest possible way to deploy > 10 ESXi hosts.
Interactive Installation = recommended install method to evaluate vSphere 5 on a small ESXi host setup.
The vSphere PowerCLI Image Builder cmdlets define the image profiles used with Auto Deploy.
3 ways that vSphere Auto Deploy can access the answer file: 1) CIFS 2) SFTP 3) HTTP
Note: 6 ways ESXi Scripted installations and upgrades can access installation or upgrade script (kickstart file): 1) CD/DVD 2) USB Flash Drive 3) NFS 4) HTTP 5) HTTPS 6) FTP
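The scripted-install factoids above can be made concrete with a minimal kickstart file. This is an illustrative sketch only — the password, addresses, and hostname are placeholders, not values from these notes (note the two required settings: root password and IP address).

```
# Minimal illustrative ks.cfg for a scripted ESXi 5 install.
# All values below (password, IPs, hostname) are placeholders.
accepteula
rootpw MyPassword1!
install --firstdisk --overwritevmfs
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=esxi01.lab.local --nameserver=192.168.1.2
reboot
```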
2: HA
In a HA cluster, after an initial election process, each host is either a Master or a Slave.
vCenter may communicate to the Slaves in certain situations, such as:
i) Scanning for an existing Master.
ii) If the Master states that it cannot reach a Slave, vCenter will contact the Slave to determine why.
iii) When powering on a FT Secondary VM.
iv) When host is reported isolated or partitioned.
A HA Slot = a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster.
The 4 VM Restart Priority options available on a HA cluster = Disabled, Low, Medium, High.
The three Host Isolation Response options available on a HA Cluster = Shut down, Power off, Leave powered on.
If the 'Admission Control' option 'Disable: Allow VM power on operations that violate availability constraints' is selected, then "if a cluster has insufficient failover capacity, vSphere HA can still perform failovers and it uses the VM Restart Priority setting to determine which virtual machines to power on first."
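The slot definition above drives the "Host Failures" admission control policy. A simplified sketch of the arithmetic (ignoring default slot sizes, memory overhead, and the largest-host exclusion, which the real calculation also considers):

```python
# Simplified vSphere HA slot math: slot size = the largest CPU
# reservation and the largest memory reservation among powered-on VMs;
# a host's slot count = how many such slots fit in its capacity.

def slot_size(vms):
    """vms: list of (cpu_reservation_mhz, mem_reservation_mb) tuples."""
    cpu = max(v[0] for v in vms)
    mem = max(v[1] for v in vms)
    return cpu, mem

def total_slots(hosts, slot):
    """hosts: list of (cpu_capacity_mhz, mem_capacity_mb) tuples."""
    cpu_slot, mem_slot = slot
    # Each host is limited by whichever resource runs out first.
    return sum(min(h[0] // cpu_slot, h[1] // mem_slot) for h in hosts)

vms = [(500, 1024), (1000, 2048), (250, 512)]
slot = slot_size(vms)                      # (1000, 2048)
hosts = [(8000, 16384)] * 3
print(slot, total_slots(hosts, slot))      # (1000, 2048) 24
```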
3a: Licensing
VMware vSphere can be evaluated for 60 days prior to purchase
Free vSphere Hypervisor is allowed 32GB Physical RAM per host
If more vRAM is allocated than licensed for, new VMs cannot be powered on
3b: Licensing - Entitlements per CPU license
32GB vRAM, 8-way vCPU for Essentials, Essentials Plus, Standard
64GB vRAM, 8-way vCPU for Enterprise
96GB vRAM, 32-way vCPU for Enterprise Plus
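The per-CPU entitlements above pool across all licenses of an edition, which is what the "cannot power on new VMs" rule in 3a checks against. A quick illustrative calculation (function name and values are my own, not VMware's):

```python
# Illustrative vSphere 5 vRAM pooling check: the pooled entitlement is
# (number of CPU licenses) x (per-CPU vRAM entitlement for the edition).

ENTITLEMENT_GB = {
    "Essentials": 32, "Essentials Plus": 32, "Standard": 32,
    "Enterprise": 64, "Enterprise Plus": 96,
}

def can_power_on(edition, cpu_licenses, allocated_vram_gb, new_vm_vram_gb):
    pool = ENTITLEMENT_GB[edition] * cpu_licenses
    return allocated_vram_gb + new_vm_vram_gb <= pool

# 4 Enterprise CPU licenses pool 4 * 64 = 256 GB of vRAM.
print(can_power_on("Enterprise", 4, 240, 16))  # True (exactly at the pool)
print(can_power_on("Enterprise", 4, 250, 16))  # False
```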
3c: Licensing - Features
Essentials Plus: High Availability, Data Recovery, vMotion
Standard: as above
Enterprise: + Virtual Serial Port Concentrator, Hot Add, vShield Zones, Fault Tolerance, Storage APIs for Array Integration, Storage vMotion, Distributed Resource Scheduler & Distributed Power Management
Enterprise Plus: + Distributed Switch, I/O Controls (Network and Storage), Host Profiles, Auto deploy, Profile-Driven Storage, Storage DRS
4: Memory
VMX Swap can be used to reduce virtual machine memory overhead.
Memory allocation minus memory limit = the amount of virtual machine memory that will always be composed of disk pages.
(v)NUMA = (virtual) Non-Uniform Memory Access (a computer memory design used in multiprocessors where the memory access time depends on the memory location relative to a processor.)
vNUMA is enabled by default when a virtual machine has more than 8 vCPUs.
Disabling transparent memory page sharing increases resource contention.
For the maximum performance benefit of vNUMA, make sure your clusters are composed entirely of hosts with matching NUMA architecture.
Memory reservation = the amount of physical memory that is guaranteed to the VM.
Resource Allocation tab definitions: Host memory usage = amount of physical host memory allocated to a guest (includes virtualisation overhead.)
Resource Allocation tab definitions: Guest memory usage = amount of memory actively used by a guest operating system and its applications.
3 metrics to diagnose a memory bottleneck at the ESXi host level: MEMCTL, SWAP, ZIP.
Virtual Machine Memory Overhead is determined by Configured Memory and Number of vCPUs.
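Two of the memory relationships above reduce to simple arithmetic, sketched here with illustrative values (MB throughout; "configured" is the VM's provisioned memory size):

```python
# Two memory relationships from the notes above, as simple arithmetic.

def swapfile_size(configured_mb, reservation_mb):
    # Per-VM .vswp file size: configured memory minus the reservation
    # (the reservation is guaranteed physical RAM, so never swapped).
    return configured_mb - reservation_mb

def always_on_disk(configured_mb, limit_mb):
    # Allocation minus limit: memory above the limit can never be
    # backed by physical RAM, so it is always composed of disk pages.
    return max(configured_mb - limit_mb, 0)

print(swapfile_size(4096, 1024))   # 3072
print(always_on_disk(4096, 2048))  # 2048
```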
5: Miscellaneous
New features made available with vSphere 5 = sDRS, VSA, vSphere Web Client, SplitRX...
ESX is no longer available with vSphere 5
Via the Direct Console, it is possible to: (F12) Shut down/Restart host, (F2) Customize System/View Logs - which includes: Configure/Restart/Test Management Network (includes configure host IP, DNS), View System Logs, Troubleshooting Mode Options > Restart Management Agents.
Via the Direct Console, it is NOT possible to: Enter host into Maintenance Mode
vMotion has been improved by allowing multiple vMotion vmknics, allowing for more and faster vMotion operations
Image Builder is used to create ESXi installation images with a customized set of updates, patches, and drivers
Packaging format used by the VMware ESXi Image Builder = VIB
By default, the Administrator role at the ESX Host Server level is assigned to root and vpxuser
Distributed Power Management (DPM) requires Wake On LAN (WOL) technology on host NIC
ESXi 5.0 requires CPU support for the LAHF and SAHF instructions (64-bit CPUs only)
The three default roles provided on an ESXi host = No Access, Read Only, Administrator
ESXi Dump Collector is a new feature of vSphere 5
Automation Levels on a DRS Cluster = Manual, Partially Automated, Fully Automated
ESXi 5.0 introduces Virtual Hardware VM Version 8
To disable alarm actions for a DRS cluster while maintenance is taking place: Right-Click the DRS cluster, select Alarm → 'Disable Alarm Actions.'
3 valid objects to place in a vApp → Resource pools, vApps, Virtual Machines.
Required settings for a kickstart ESXi host upgrade script file: root password & IP address.
vMotion cannot be used unless RDM boot mapping files are placed on the same datastore & storage vMotion cannot be used with RDMs using NPIV.
Conditions that would stop virtual machines restarting in the event of a host failure in a HA cluster: 1) An anti-affinity rule configured where restarting the VMs would place them on the same host. 2) The virtual machines on the failed host are HA disabled.
The VMkernel is secured by the features – memory hardening and kernel module integrity.
VMware vCloud Director pools virtual infrastructure resources in datacenters and delivers them to users as a catalog-based service.
Two ways to enable remote tech support mode (SSH) on an ESXi 5.x host: 1) Through the Security Profile pane of the Configuration tab in the vSphere Client. 2) Through the Direct Console User Interface (DCUI.)
%RDY metric is checked to determine if CPU contention exists on an ESXi 5.x host (Note: %RDY = Percentage of time the resource pool, virtual machine, or world was ready to run, but was not provided CPU resources on which to execute.)
Quiescing virtual machine snapshot operation: 1) Requires VMware tools. 2) Ensures that the snapshot includes a power state. 3) May alter the behaviour of applications within the virtual machine. 4) Ensures all pending disk I/O operations are written to disk.
Image Profile Acceptance Levels: Community Supported < Partner Supported < VMware Accepted < VMware Certified (where VMware Certified Acceptance Level has the most stringent requirements.)
Each VMware Data Recovery appliance can have no more than two dedupe destinations, and it is recommended that each dedupe destination is no more than 1TB in size when using virtual disks, and no more than 500GB in size when using a CIFS network share.
6a: Networking - General
ESX 4.X to ESXi 5.0 upgrade removes the "Service Console" port group because ESXi 5.0 has no Service Console.
SplitRX can be used to increase network throughput to virtual machines.
The default security policy on a Port Group = Reject, Accept, Accept (Promiscuous Mode, MAC Address Changes, Forged Transmits.)
ESX 4.X to ESXi 5.0 upgrade process migrates all vswif interfaces to vmk interfaces.
SSH configuration is not migrated for ESX 4.x hosts or ESXi 4.0 hosts (SSH access is disabled during the migration or upgrade process.)
Custom ports that were opened by using the ESX/ESXi 4.1 esxcfg-firewall command do not remain open after upgrade to ESXi 5.0.
A firewall has been added to ESXi 5.0 to improve security.
vSphere Standard Switch Traffic shaping settings: Status – Disabled/Enabled, Average Bandwidth (Kbit/sec), Peak Bandwidth (Kbit/sec), Burst Size (Kbytes.)
To relieve a network bottleneck caused by a VM with occasional high outbound network activity, apply traffic shaping to the port group that contains the virtual machine.
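The shaping parameters above interact in a simple way: burst size bounds how long a port can exceed average bandwidth. A hedged sketch of that relationship, using the vSphere Client's units (function name and values are illustrative):

```python
# Rough relationship between the standard-switch traffic shaping
# parameters (Kbit/s for peak bandwidth, Kbytes for burst size).

def burst_duration_s(peak_kbits, burst_kbytes):
    # Approximate time a port can transmit at peak bandwidth before
    # the accumulated burst allowance is exhausted.
    return burst_kbytes * 8 / peak_kbits

# A 100 Mbit/s peak with a 100 MB burst allows roughly 8.2 s at peak:
print(burst_duration_s(peak_kbits=100000, burst_kbytes=102400))  # 8.192
```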
NIC Teaming policy: Notify Switches → the physical switch is notified when the location of a virtual NIC changes.
A remote SSH connection to a newly installed ESXi 5.x host fails, possible causes: 1) The SSH service is disabled on the host by default. 2) The ESXi firewall blocks the SSH protocol by default.
Forged transmits: allows packets to be created by a virtual machine with different source MAC address.
To verify all IP storage VMkernel interfaces are configured for jumbo frames, either: 1) esxcli network ip interface list. 2) View the VMkernel interface properties in the vSphere Client.
Map view indicates vMotion is disabled => vMotion has not been enabled on a VMkernel port group.
ESXi Host → Configuration Tab → Network Adapters : Headings = Device, Speed, Configured, Switch, MAC Address, Observed IP Ranges, Wake on LAN Supported.
If you create a port group and assign it to VLAN 4095, it will have access to all the VLANs that are exposed to the physical NIC (a special driver is needed within the VM that can properly tag VLANs.)
By default, new adapters are active for all policies, and new adapters carry traffic for the standard switch and its port group unless you specify otherwise.
In the IP hash load balancing policy, all physical switch ports connected to the active uplinks must be in EtherChannel mode (Note: if the physical switch ports are not in EtherChannel mode, new uplinks will be considered active/active but will not participate in the active NIC team until configured correctly on the physical switch.)
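The IP hash policy is commonly described as XOR-ing the source and destination IP addresses and taking the result modulo the uplink count — which is why every member port must belong to one EtherChannel. A hedged sketch (simplified; real implementations may hash differently):

```python
# Sketch of "Route based on IP hash" uplink selection: one flow
# (src/dst IP pair) always maps to the same uplink, so the physical
# ports must be bundled as a single EtherChannel.
import ipaddress

def ip_hash_uplink(src, dst, n_uplinks):
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# The same flow hashes to the same uplink every time:
print(ip_hash_uplink("10.0.0.5", "10.0.0.200", 2))  # 1
print(ip_hash_uplink("10.0.0.5", "10.0.0.200", 2))  # 1
```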
6b: Networking - vSphere Distributed Switch
The Dynamic Binding dvPort binding type has been deprecated in vSphere 5 (leaving Ephemeral Binding and Static Binding types.)
Network Load Balancing policies for vSphere Distributed Switch = Route based on originating virtual port; Route based on IP hash; Route based on source MAC hash; Route based on physical NIC load; Use explicit failover order.
Requirements for a collector virtual machine to analyze traffic from a vSphere Distributed Switch: 1) The source and target virtual machines must both be on a vSphere Distributed Switch, but can be in any datacenter. 2) The distributed port group must have NetFlow enabled.
Two methods to migrate a virtual machine from a vSphere Standard Switch (VSS) to a vSphere Distributed Switch (VDS): 1) Migrate the port group containing the virtual machine from a vNetwork Standard Switch using the Migrate Virtual Machine Networking option. 2) Edit the Network Adapter settings for the virtual machine and select a dvPort group from the list.
3 features only available when using a vSphere Distributed Switch: 1) NetFlow monitoring. 2) Network I/O control. 3) Egress and ingress traffic shaping.
New feature: vSphere Distributed Switch - improves visibility of virtual-machine traffic through NetFlow and enhances monitoring and troubleshooting through Switched Port Analyzer (SPAN) and Link Layer Discovery Protocol (LLDP) support.
6c: Networking - Ports
22: SSH operates on port 22
80: vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443
389: This port must be open on the local and all remote instances of vCenter Server. This is the LDAP port number for the Directory Services for the vCenter Server group. The vCenter Server system needs to bind to port 389, even if you are not joining this vCenter Server instance to a Linked Mode group.
443: The default port that the vCenter Server system uses to listen for connections from the vSphere Client. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients.
636: For vCenter Server Linked Mode, this is the SSL port of the local instance.
902: The default port that the vCenter Server system uses to send data to managed hosts. Managed hosts also send a regular heartbeat over UDP port 902 to the vCenter Server system. This port must NOT be blocked by firewalls between the server and the hosts or between hosts. Also must NOT be blocked between the vSphere Client and the hosts. The vSphere Client uses this port to display virtual machine consoles.
8080: Web Services HTTP. Used for the VMware VirtualCenter Management Web Services.
8182: vSphere HA uses TCP and UDP port 8182 for agent-to-agent communication.
8443: Web Services HTTPS. Used for the VMware VirtualCenter Management Web Services.
10109: vCenter Inventory Service Service Management
10111: vCenter Inventory Service Linked Mode Communication
10443: vCenter Inventory Service HTTPS
60099: Web Service change service notification port
7: Storage
vStorage Thin Provisioning feature provides dynamic allocation of storage capacity
Upgrade from VMFS-3 to VMFS-5 requires no downtime
VMFS-5 is introduced by vSphere 5
The globally unique identifier assigned to each Fibre Channel Port = World Wide Name (WWN)
NFS protocol is used by an ESXi host to communicate with NAS devices
It is now possible in vSphere 5 to Storage vMotion virtual machines that have snapshots
Two iSCSI discovery methods supported by an ESXi host = Static Discovery, and Send Targets
VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size
Shared local storage is not a supported location for a host diagnostic partition
The VMware HCL lists the correct MPP (Multi-Pathing Protocol) to use with a storage array
To guarantee a certain level of capacity, performance, availability, and redundancy for a virtual machine's storage, use the Profile-Driven Storage feature of vSphere 5
Use sDRS (storage DRS) to ensure storage is utilized evenly
If an ESXi 5.x host is configured to boot from Software iSCSI adapter and the administrator disables the iSCSI Software adapter, then it will be disabled but is re-enabled the next time the host boots up.
An array that supports vStorage APIs for array integration (VAAI) can directly perform → Cloning virtual machines and templates; Migrating virtual machines using Storage vMotion.
VAAI thin provisioning dead space reclamation feature can reclaim blocks on a thin provisioned LUN array: 1) When a virtual machine is migrated to a different datastore. 2) When a virtual disk is deleted.
Manage Paths → can disable path by right-clicking and selecting disable.
A preferred path selection can only be made with Fixed 'Path Selection' types (not possible with 'Round Robin' or 'Most Recently Used' types.)
Information about a VMFS datastore available via the Storage Views tab includes → Multipathing Status, Space Used, Snapshot Space.
Two benefits of virtual compatibility mode RDMs vs physical compatibility mode RDMs: 1) Allows for cloning. 2) Allows for template creation of the related virtual machine.
To uplink a Hardware FCoE Adapter, create a vSphere Standard Switch and add the FCoE Adapter as an uplink.
Three storage I/O control conditions that might trigger the non-VI workload detected on the datastore alarm: 1) The datastore is Storage I/O Control-enabled, but it cannot be fully controlled by Storage I/O Control because of an external workload. This can occur if the Storage I/O Control-enabled datastore is connected to an ESX/ESXi host that does not support Storage I/O Control. 2) The datastore is Storage I/O Control-enabled and one or more of the hosts to which the datastore connects is not managed by vCenter Server. 3) The array is shared with non-vSphere workloads or the array is performing system tasks such as replication.
The software iSCSI Adapter and Dependent Hardware iSCSI Adapter require one or more VMkernel ports.
Unplanned Device Loss in a vSphere 5 environment = A condition where an ESXi host determines a device loss has occurred that was not planned. Performing a storage rescan removes the persistent information related to the device.
To convert thin provisioned disk to thick either use the inflate option in the Datastore Browser or use Storage vMotion and change the disk type to Thick.
To manage storage placement by using virtual machine profiles with a storage array that supports vSphere Storage APIs (Storage Awareness):
1 Create user-defined storage capabilities.
2 Associate user-defined storage capabilities with datastores.
3 Enable virtual machine storage profiles for a host or cluster.
4 Create virtual machine storage profiles by defining the storage capabilities that an application running on a virtual machine requires.
5 Associate a virtual machine storage profile with the virtual machine files or virtual disks.
6 Verify that virtual machines and virtual disks use datastores that are compliant with their associated virtual machine storage profile
8: Update Manager
Default Hosts upgrade baselines included with vSphere Update Manager: Critical Host Patches, Non-Critical Host Patches.
Default VMs/VAs upgrade baselines included with vSphere Update Manager: VMware Tools Upgrade to Match Host, VM Hardware Upgrade to Match Host, VA Upgrade to Latest.
vSphere 5 vCenter Update Manager cannot update virtual machine hardware when running against legacy hosts.
Update Manager can update virtual appliances but cannot update the vCenter Server Appliance.
9a: vCenter
vCenter Server 5 requires a 64 Bit DSN.
vCenter Heartbeat product provides high availability for vCenter server.
vCenter requires a valid (internal) domain name system (DNS) registration.
vCenter 4.1 and vCenter 5.0 cannot be joined with Linked-Mode.
The VMware vSphere Storage Appliance manager (VSA manager) is installed on the vSphere 5 vCenter Server System.
Optional components that can be installed from the VMware vSphere 5.0 vCenter Installer: Product Installers) vSphere Client, VMware vSphere Web Client (Server), VMware vSphere Update Manager. Support tools) VMware ESXi Dump Collector, VMware Syslog Collector, VMware Auto Deploy, VMware vSphere Authentication Proxy. Utility) vCenter Host Agent Pre-Upgrade Checker.
Predefined vCenter Server roles are: No access, Read-only, Administrator (there are also 6 sample roles.)
To export ESXi 5.x host diagnostic information / logs from a host managed by a vCenter server instance using the vSphere Client: 1) Home → Administration → System Logs → Export System Logs → Source: select the ESXi host → Select System Logs: Select all → Select a Download Location → Finish. 2) Under 'Hosts and Clusters' view select the ESXi host → File → Export → Export System Logs → Select System Logs: Select all → Select a Download Location → Finish.
9b: vCenter Server Sizing
Medium deployment of up to 50 hosts and 500 powered-on VMs: 2 cores, 4GB RAM, 5GB disk
Large deployment of up to 300 hosts and 3000 powered-on VMs: 4 cores, 8GB RAM, 10GB disk
Extra-Large deployment of up to 1000 hosts and 10,000 powered-on VMs: 8 cores, 16GB RAM, 10GB disk