Monday, 20 January 2020

NetApp HCI X.X Software Component Versions

A list of NetApp HCI versions mapped to their software component versions, starting from HCI 1.4P2 (the first release whose release notes included this information).

Information taken from the release notes.

NetApp HCI 1.7P1 includes the following software component versions:
Version : Software component
1.7P1   : NetApp Deployment Engine
11.7    : NetApp Element software
4.3     : NetApp Element Plug-in for vCenter Server
11.7    : NetApp HCI Bootstrap OS
11.7    : Management node
6.7,6.5 : VMware vSphere (installable versions during deployment)

NetApp HCI 1.7 includes the following software component versions:
Version : Software component
1.7     : NetApp Deployment Engine
11.5    : NetApp Element software
4.3     : NetApp Element Plug-in for vCenter Server
11.5    : NetApp HCI Bootstrap OS
11.5    : Management node
6.7,6.5 : VMware vSphere (installable versions during deployment)

NetApp HCI 1.6P1 includes the following software component versions:
Version : Software component
1.6     : NetApp Deployment Engine
11.3.1  : NetApp Element software
4.3     : NetApp Element Plug-in for vCenter Server
11.3    : NetApp HCI Bootstrap OS
11.3    : Management node

NetApp HCI 1.4P2 ships with the following software component versions:
Version : Software component
1.4P1   : NetApp Deployment Engine
11.1P1  : NetApp Element software
4.2.3   : NetApp Element Plug-in for vCenter Server
11.1P1  : NetApp HCI Bootstrap OS
11.1P1  : NetApp HCI management node
9.4     : ONTAP Select (optional)
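As a sketch, the component lists above can be captured in a Python dict for easy lookup in scripts (two releases shown here; the values are copied from the lists in this post, and the structure is just one way you might lay it out):

```python
# Component versions per NetApp HCI release, copied from the lists above.
# Only two releases are shown; extend the dict with the others as needed.
HCI_COMPONENT_VERSIONS = {
    "1.7P1": {
        "NetApp Deployment Engine": "1.7P1",
        "NetApp Element software": "11.7",
        "NetApp Element Plug-in for vCenter Server": "4.3",
        "NetApp HCI Bootstrap OS": "11.7",
        "Management node": "11.7",
    },
    "1.7": {
        "NetApp Deployment Engine": "1.7",
        "NetApp Element software": "11.5",
        "NetApp Element Plug-in for vCenter Server": "4.3",
        "NetApp HCI Bootstrap OS": "11.5",
        "Management node": "11.5",
    },
}

def element_version(hci_version):
    """Return the Element software version bundled with an HCI release."""
    return HCI_COMPONENT_VERSIONS[hci_version]["NetApp Element software"]

print(element_version("1.7"))  # 11.5
```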

Image: NetApp HCI

Monday, 13 January 2020

How to DHCP with Open DHCP Server - Quick Walkthrough

A quick install, setup, and run of Open DHCP Server.

1) Obtain Open DHCP Server from:

2) Double-click the OpenDHCPServerInstallerVX.XX.exe
And follow the prompts to install.

Image: Open DHCP Server Installer

Image: Open DHCP Server Installation

The install runs -
C:\OpenDHCPServer\installservice.exe

- which installs a service ‘OpenDHCPServer’ with the path to executable:
C:\OpenDHCPServer\OpenDHCPServer.exe

Image: Installing OpenDHCPServer - Installation Completed

Image: Open DHCP Server Service

Image: Contents of Install Folder C:\OpenDHCPServer

3) Edit OpenDHCPServer.ini as required
Note: Save a backup copy of the original .ini first (e.g. as OpenDHCPServer.ini.backup).

4) To run, double-click ‘RunStandAlone.bat’.

5) To stop, close the cmd.exe window.

IMPORTANT NOTE: Before running OpenDHCP, be very careful that OpenDHCP is not connected/listening on any networks that it shouldn’t be. You don’t want to accidentally give out DHCP addresses.

Note: The ‘Open DHCP Server’ service is only needed if you want to run Open DHCP as a service. The default state is ‘Startup Type’ = ‘Automatic’ and ‘Status’ = ‘Not Running’. Running ‘RunStandAlone.bat’ does not start the service.

Image: Running OpenDHCP

Configuring IPMIs using OpenDHCP (or Not Having to Plug a KVM into every Server to Configure the IPMI/iLO/DRAC/whatever)

The reason I’m writing a post about Open DHCP Server is that I sometimes have to configure a number of servers (e.g. NetApp HCI/NetApp SolidFire), and it’s a bit of a pain having to go to every server with a Keyboard, Video, Mouse (KVM) to configure the IPMI: boot the node, wait for the key press to enter setup, configure the IPMI, reboot, and so on. (If there is already DHCP on the IPMI network, that can help.)

Using OpenDHCP and a portable switch (I bought a Netgear ProSafe GS116 for about £60), I can plug into all the IPMI ports, see which node has picked up which address from my private DHCP scope, connect over IPMI, and apply whatever configuration is required, all without a KVM. (I can also RTFI nodes over the IPMI with new images if required, rather than carrying a set of USB keys.)

I set just:

[LISTEN_ON]
{IP of my laptop on the IPMI network}

[RANGE_SET]
DHCPRange={Range of IPs I want to use - i.e. not the static IPMI IPs I will configure later}
SubNetMask={as required}
Router={as required}
Image: Very simple OpenDHCPServer.ini example
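Before powering anything on, a short Python sketch (all addresses below are hypothetical lab values) can confirm that the DHCP range you chose doesn’t collide with the static IPMI IPs you plan to configure later:

```python
import ipaddress

# Hypothetical lab values: the DHCP range handed out by OpenDHCP,
# and the static IPs planned for the IPMIs afterwards.
DHCP_RANGE_START = "192.168.100.100"
DHCP_RANGE_END = "192.168.100.150"
PLANNED_STATIC_IPMI = ["192.168.100.11", "192.168.100.12",
                       "192.168.100.13", "192.168.100.14"]

def in_dhcp_range(ip):
    """True if ip falls inside the DHCP scope."""
    start = ipaddress.ip_address(DHCP_RANGE_START)
    end = ipaddress.ip_address(DHCP_RANGE_END)
    return start <= ipaddress.ip_address(ip) <= end

# Any planned static IP inside the scope is a clash waiting to happen.
overlaps = [ip for ip in PLANNED_STATIC_IPMI if in_dhcp_range(ip)]
print("Overlapping IPs:", overlaps)  # [] means the ranges are safely separate
```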

The default HTTP interface for OpenDHCP is http://127.0.0.1:6789/ and for some reason this didn’t work on my laptop; the error was “HTTP Client 127.0.0.1, Message Receive failed, WSAError 0”. I tried another laptop, didn’t get that error, but the webpage still failed to load. Not a massive issue, as you can see all the DHCPREQUESTs in the cmd.exe window.

Image: Netgear ProSafe GS116 16-port Gigabit Switch

Saturday, 11 January 2020

Studying for the VCP-CMA 2020 Certification: Update

I’ve just finished the 8 x Cloud Management Platform VMware Hands On Labs as below -

HOL-2021-01-CMP - vRealize Automation - Getting Started
HOL-2006-01-CMP - vRealize Suite - Making Private Cloud Easy
HOL-2021-91-ISM - vRealize Automation 8 - What’s New - Lightning Lab
HOL-2021-02-CMP - vRealize Automation - Advanced Topics
HOL-2021-04-CMP - vRealize Orchestrator - Getting Started
HOL-2006-02-CMP - vRealize Suite - Integrated Troubleshooting
HOL-2006-03-CMP - vRealize Suite Life Cycle Manager
HOL-2021-03-CMP - vRealize Automation - Advanced Extensibility

- and they are all well worth your time doing. The only one I might skip in hindsight is HOL-2021-91-ISM, since the VCP-CMA 2020 is focused on VMware vRealize Automation 7.6.

Image: Education Services > Certification > VCP-CMA 2020 (the hyperlink still says 2019)

What Next?

Now that the VMware website has been updated with the VCP-CMA 2020, I know my path is to take the Professional VMware vRealize Automation 7.6 exam once I’ve acquired enough knowledge and experience.

Now to “...item writers use the following references for information when writing exam questions. It is recommended that you study the reference content...” (from):

Revisiting the labs/other hands-on.

There is also a VMware Cloud Management YouTube channel with content worth watching:

No FlexPod design guides that appear to include vRealize, but worth checking out these NetApp Verified Designs that do (skip the bits you’re not interested in):

Notes from the 8 x Cloud Management Platform VMware Hands on Labs

Some notes I recorded whilst doing the labs.

VMware vRealize Automation Documentation

VMware vRealize Automation uses two distinct types of administrator accounts to divide up the administrative tasks required to manage the infrastructure endpoints, compute resource reservations, users, groups, and policies that need to be put in place. These two accounts are known as the IaaS Administrator and the Tenant Administrator.

The Tenant Administrator Portal includes two new tabs.
1. Administration - This tab contains all of the administrative functions that are available to you as the Tenant Administrator.
2. Infrastructure - Allows you to review recent events that have occurred on your tenant's infrastructure.

DEM = Distributed Execution Manager

vSphere SPBM (Storage Policy-Based Management Framework)

vRealize Automation 8.0 consists of the following components:
- Cloud Assembly
- Service Broker
- Code Stream
- Orchestrator

vRealize Automation also requires vRealize Suite Lifecycle Manager and VMware Identity Manager for installation, configuration, post-install management, and authentication.

There are two different types of Component Profiles:
- Image
- Size

Harbor in GitHub:

VMware Solutions Exchange:
Note: vRealize Orchestrator plugins are .VMOAPP files (can also be installed as .DAR files)

vRealize Orchestrator Control Center is the main interface to configure and troubleshoot vRealize Orchestrator.

Configure Component Profile Size Settings for Catalog Deployments

Configure Component Profile Image Settings for Catalog Deployments

There are some limitations to component profiles with which you should familiarize yourself:
- If you try to resize the virtual machine to a size greater than the largest setting, it will fail.
- Additionally, if you edit the component profile Size ValueSets, the changes will retroactively work on any already-deployed virtual machines.

Image: Automated Lifecycle Management and Operations (Day 0 to Day 2)

VMware vRealize Suite Lifecycle Manager comes free with VMware vRealize Suite in all three editions. vRealize Suite Lifecycle Manager automates installation, configuration, upgrade, patching, configuration management, and drift remediation, and validates the health status of services from within a single pane of glass, thereby freeing IT managers/cloud admins to focus on business-critical initiatives while improving time to value (TTV), reliability, and consistency.

Step 1) Deploy vRealize Suite Lifecycle Manager appliance and complete initial configuration.
Step 2) Deploy other VMware Products*

*Products = vRealize Network Insight, vRealize Business for Cloud, vRealize Log Insight, vRealize Operations, vRealize Automation

Image: VMware Products deployable from vRealize Suite Lifecycle Manager

vRealize Automation 7 has the following options for extending functionality beyond simple virtual machine deployment:
- Event Broker
- XaaS blueprints and actions

Image (1/2): vRealize Automation + Event Broker Service & Xaas Service Designer - pluggable framework to vRO

Image (2/2): vRealize Automation + Event Broker Service & Xaas Service Designer - pluggable framework to vRO

vRealize CloudClient is a command-line utility that provides verb-based access with a unified interface across vRealize Automation APIs:

Image: VMware vRealize CloudClient 4.7.0

Another resource for downloading Blueprints is VMware {code}. The code site allows community members to post and share vRealize Automation Blueprints, as well as workflows and other content for VMware solutions: https://code.vmware.com

The ITSM Plug-in 7.6 is the latest release for those looking to extend ServiceNow ITSM with Multi-Cloud Automation with Governance.

VMware vRealize Automation integration with ServiceNow (vRA ITSM Plugin)

Saturday, 4 January 2020

SnapMirror Protect (from ONTAP 9.3) with SVM-DR (from ONTAP 9.7)

SnapMirror Protect first featured in ONTAP 9.3.

Image: ONTAP 9.2 doesn’t have SnapMirror Protect

Image: ONTAP 9.3 introduces SnapMirror Protect

Essentially, what ‘SnapMirror Protect’ does - as from the man page output in the appendix below - is:

The snapmirror protect command establishes SnapMirror protection for the specified Vserver or a list of volumes. For each endpoint, the command creates a data protection destination endpoint in the Vserver specified by the -destination-vserver parameter, creates an extended data protection (XDP) SnapMirror relationship, and starts the initialization of the SnapMirror relationship.

This means that rather than running the traditional three commands - volume create, snapmirror create, snapmirror initialize - to protect a source volume, everything is reduced to just 1 simple command - how easy is that!
SnapMirror Protect is quite powerful because you can protect multiple volumes with one command.
And from ONTAP 9.7, you can use SnapMirror Protect for protecting an entire SVM (establishing SVM-DR relationships)!

See the appendix for a couple of examples:
EXAMPLE 1) SnapMirror Protect 2 volumes with one command
EXAMPLE 2) SnapMirror Protect an SVM (SVM-DR).
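As a sketch of how the path-list syntax composes (following the man page in the appendix; Vserver and volume names are the example values), a multi-volume one-liner can be built like this:

```python
# Sketch: compose a `snapmirror protect` command for a list of volumes,
# using the vserver:volume path-list syntax from the man page.
def build_protect_command(src_vserver, volumes, dst_vserver, policy):
    # path-list is a comma-separated list of vserver:volume paths
    path_list = ",".join(f"{src_vserver}:{vol}" for vol in volumes)
    return (f"snapmirror protect -path-list {path_list} "
            f"-destination-vserver {dst_vserver} -policy {policy}")

cmd = build_protect_command(
    "vs1.example.com", ["dept_eng1", "dept_eng2"],
    "vs2.example.com", "MirrorAllSnapshots")
print(cmd)
```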

APPENDIX: ONTAP 9.7 Manual Pages: SnapMirror Protect


cluster1::> man snapmirror protect
snapmirror protect -- Data ONTAP 9.7 -- snapmirror protect

NAME: snapmirror protect -- Start protection for Vservers and volumes
DESCRIPTION:
The snapmirror protect command establishes SnapMirror protection for the specified Vserver or a list of volumes. For each endpoint, the command creates a data protection destination endpoint in the Vserver specified by the -destination-vserver parameter, creates an extended data protection (XDP) SnapMirror relationship, and starts the initialization of the SnapMirror relationship. This command must be used from the destination Vserver or cluster. This command is not supported for FlexGroup volume constituents or non ONTAP endpoints.

PARAMETERS:

[-source-cluster {Cluster name}] - Source Cluster
This optional parameter specifies the source cluster name. This parameter is valid only when only a single Vserver is specified in the path-list parameter.

-path-list {[vserver:][volume] | [[cluster:]//vserver/]volume | hostip:/lun/name | hostip:/share/share-name },... - Path List
This parameter specifies the list of source endpoints to be protected. The list is a comma separated list of paths of the form vserver:volume or vserver:, for example:
vs1.example.com:dept_eng1,vs1.example.com:dept_eng2
vs1.example.com:
If the list contains a Vserver endpoint, then the Vserver is the only endpoint that can be specified, and the list cannot contain a mixture of volume and Vserver endpoints.

[-destination-vserver {vserver name}] - Destination Vserver
This parameter specifies the Vserver in which to create the destination volumes of the SnapMirror relationships. When protecting a single Vserver, this parameter specifies the destination Vserver endpoint for protection.

[-destination-vserver-ipspace {IPspace}] - Destination Vserver IPSpace Name
This optional parameter specifies the IPspace the Vserver will be assigned to. If left unspecified, the Vserver will be assigned to the default IPspace. This parameter is supported while protecting a Vserver.

[-schedule {text}] - SnapMirror Schedule
This optional parameter designates the name of the schedule which is used to update the SnapMirror relationships.

-policy {sm_policy} - SnapMirror Policy
This parameter designates the name of the SnapMirror policy which is associated with the SnapMirror relationships.

[-auto-initialize {true|false}] - Auto Initialize
This optional parameter specifies whether or not initialization of the SnapMirror relationships should be started after the relationships are created. The default value for this parameter is true.

[-destination-volume-prefix {text}] - Destination Volume Name Prefix
This optional parameter designates the prefix for the destination volume name. For example if the source path is of the form vserver:volume and the destination-volume-prefix specified is prefix_ and no destination-volume-suffix is specified, then the destination volume name will be prefix_volume_dst or possibly prefix_volume_1_dst if a name conflict is encountered. If both prefix and suffix are specified as prefix_ and _suffix, then the destination volume name will be prefix_volume_suffix or prefix_volume_1_suffix, if a name conflict is encountered. This parameter is not supported for Vserver endpoints.

[-destination-volume-suffix ] - Destination Volume Name Suffix
This optional parameter designates the suffix for the destination volume name. If you do not designate a suffix, a destination volume name with suffix _dst will be used. For example if the source path is of the form vserver:volume, and the suffix specified is _DP, the destination volume will be created with the name volume_DP or volume_1_DP if a name conflict is encountered. If both prefix and suffix are specified as prefix_ and _suffix, then the destination volume name will be prefix_volume_suffix or prefix_volume_1_suffix, if a name conflict is encountered. This parameter is not supported for Vserver endpoints.

[-support-tiering {true|false}] - Provision Destination Volumes on FabricPools
This optional parameter specifies whether or not FabricPools are selected when provisioning a FlexVol volume or a FlexGroup volume during protection workflows. When this parameter is set to true, only FabricPools are used; when set to false, only non-FabricPools are used. Tiering support for a FlexVol volume can be changed by moving the volume to the required aggregate. Tiering support for a FlexGroup volume can be changed by moving all of the constituents to the required aggregates. The default value is false. This parameter is supported only for FlexVol volumes and FlexGroup volumes.

[-tiering-policy {snapshot-only|auto|none|all}] - Destination Volume Tiering Policy
This optional parameter specifies the tiering policy to apply to the destination FlexVol volume or FlexGroup volume. This policy determines whether the user data blocks of a FlexVol volume or FlexGroup volume in a FabricPool will be tiered to the capacity tier when they become cold. FabricPool combines flash (performance tier) with an object store (external capacity tier) into a single aggregate. The default tiering policy is 'snapshot-only' for a FlexVol volume and 'none' for a FlexGroup volume. The temperature of a FlexVol volume or FlexGroup volume block increases if it is accessed frequently and decreases when it is not.
The available tiering policies are:
o snapshot-only - This policy allows tiering of only the FlexVol volume or FlexGroup volume Snapshot copies not associated with the active file system. The default minimum cooling period is 2 days.
o auto - This policy allows tiering of both snapshot and active file system user data to the capacity tier. The default cooling period is 31 days.
o none - FlexVol volume or FlexGroup volume blocks will not be tiered to the capacity tier.
o backup - On a DP FlexVol volume or FlexGroup volume this policy allows all transferred user data blocks to start in the capacity tier.
This parameter is supported only for FlexVol volumes and FlexGroup volumes.

EXAMPLE 1)
To establish SnapMirror protection for the source volumes:
vs1.example.com:dept_eng1 and vs1.example.com:dept_eng2
using destination-vserver vs2.example.com
and policy MirrorAllSnapshots
type the following command:

vs2.example.com::> snapmirror protect -path-list vs1.example.com:dept_eng1,vs1.example.com:dept_eng2 -destination-vserver vs2.example.com -policy MirrorAllSnapshots

EXAMPLE 2)
To establish SnapMirror protection for the source Vserver:
vs1.example.com
which is on cluster cluster1
creating a destination-vserver named vs1dp.example.com
and using policy MirrorAllSnapshots
type the following command:

cluster2::> snapmirror protect -source-cluster cluster1 -path-list vs1.example.com: -destination-vserver vs1dp.example.com -policy MirrorAllSnapshots
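The prefix/suffix naming rule described under -destination-volume-prefix and -destination-volume-suffix can be sketched in Python. This is an illustration of the examples in the man page, not ONTAP’s exact internal algorithm (in particular, the conflict handling here just mimics the “_1_” insert shown in the examples):

```python
# Sketch of the destination volume naming rule from the man page above.
def destination_volume_name(volume, prefix="", suffix="", conflict=False):
    # When no suffix is designated, ONTAP defaults to "_dst".
    if not suffix:
        suffix = "_dst"
    # The man page examples show "_1" inserted before the suffix on conflict.
    middle = f"{volume}_1" if conflict else volume
    return f"{prefix}{middle}{suffix}"

print(destination_volume_name("volume", prefix="prefix_"))             # prefix_volume_dst
print(destination_volume_name("volume", suffix="_DP", conflict=True))  # volume_1_DP
```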


Friday, 3 January 2020

Network Switch Diagrams for a Minimum NetApp HCI Deployment*

*The minimum NetApp HCI Deployment is:
- 4 x H410S Storage Node
- 2 x H410C Compute Node (using two-cable configuration)

Here I present some diagrams for a minimum NetApp HCI deployment. It’s not a cable diagram; instead, I label ports with numbered and coloured discs. Essentially, this is a blog post of 4 pictures:

i) Legend
ii) 1 GbE Management Switches
iii) 10/25GbE Network Switches
iv) NetApp HCI Chassis (with H410S + H410C)

Image: i) Legend

Image: ii) 1 GbE Management Switches

Image: iii) 10/25GbE Network Switches

Image: iv) NetApp HCI Chassis (with H410S + H410C)
  
The key to success with running the NetApp Deployment Engine (NDE) is to fully understand the:

Specifically, for this deployment:
> Network and switch requirements
- Need minimum of 3 VLANs: Management, vMotion, Storage (iSCSI).
- Additional VLANs: VM Network(s) [Initial VMs can use the Management network]
Note: Easiest setup if:
i) Storage Nodes 1GbE Management Connectivity Ports are Access Ports in the Management VLAN.
ii) All 10/25GbE ports have Management VLAN as the native VLAN (all 10/25GbE ports are trunk ports).
- All switch ports connected to NetApp HCI nodes must be configured as spanning tree edge ports.
- You must configure jumbo frames on the 10/25GbE switch ports
> Networks ports used by NetApp HCI
- If you want the Management Node (mNode) to reach external services - for instance, for ActiveIQ - review the table.
> Network cable requirements
- 14 Cat 5e/6 cables with RJ45 connectors
- 12 Twinax cables with SFP28/SFP+ connectors {OR} 6 switch side SFPs for 10/25GbE, 6 NetApp HCI side SFPs for 10/25GbE
> IP address requirements
- 2 x Compute nodes: 2 x Management IPs, 4 x Storage IPs, 2 x vMotion IPs
- 4 x Storage nodes: 4 x Management IPs, 4 x Storage IPs
- Storage Cluster: 1 x Management IP, 1 x Storage IP
- VMware vCenter: 1 x Management IP
- SolidFire mNode: 1 x Management IP, 1 x Storage IP
- Temporary IP to run NDE: 1 x Management IP (applied to first Storage Node)
- TOTAL: 10 x Management IPs, 10 x Storage IPs, 2 x vMotion IPs
> Configuring LACP for optimal storage performance
- You have configured the switch ports connected to the 10/25GbE interfaces of NetApp HCI storage nodes as LACP port channels.
- You have set the LACP timers on the switches handling storage traffic to “fast mode (1s)” for optimal failover detection time.
> DNS and timekeeping requirements
- If you are deploying NetApp HCI with a new VMware vSphere installation using a fully qualified domain name, you must create one Pointer (PTR) record and one Address (A) record for vCenter Server on any DNS servers in use before deployment.
- If you are deploying NetApp HCI with a new vSphere installation using only IP addresses, you do not need to create new DNS records for vCenter.
- NetApp HCI requires a valid NTP server for timekeeping. You can use a publicly available time server if you do not have one in your environment.
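The IP address tally above can be double-checked with a short Python sketch (the counts are copied straight from the list for this minimum deployment):

```python
# Per-component IP counts for the minimum deployment
# (2 x compute nodes, 4 x storage nodes), copied from the list above.
requirements = {
    "compute nodes":   {"mgmt": 2, "storage": 4, "vmotion": 2},
    "storage nodes":   {"mgmt": 4, "storage": 4, "vmotion": 0},
    "storage cluster": {"mgmt": 1, "storage": 1, "vmotion": 0},
    "vCenter":         {"mgmt": 1, "storage": 0, "vmotion": 0},
    "mNode":           {"mgmt": 1, "storage": 1, "vmotion": 0},
    "NDE temporary":   {"mgmt": 1, "storage": 0, "vmotion": 0},
}

totals = {net: sum(counts[net] for counts in requirements.values())
          for net in ("mgmt", "storage", "vmotion")}
print(totals)  # {'mgmt': 10, 'storage': 10, 'vmotion': 2}
```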

Note: The 2-cable method requires VMware distributed switching (VDS) and VMware Enterprise Plus licensing.

If all the requirements are met, NDE runs faultlessly and your environment is deployed in 30 minutes or so!
Environment here being: 4 x Storage Nodes. 2 x Compute Nodes (VMware ESXi). 1 x vCenter. 1 x SolidFire mNode.