Some notes from the NetApp Lab on Demand (LoD) "Getting Started with Ansible". I highly recommend doing the lab. The below are just notes for myself. The AWX / Ansible Automation Controller section of the LoD is not covered here.
1) Getting Familiar with Ansible and NetApp ONTAP Modules
1.1) Ansible Configuration and Inventory
- Installing Ansible — Ansible Community Documentation
- Ansible is an agentless automation tool that you install on a single host (referred to as the control node).
- For your control node (the machine that runs Ansible), you can use nearly any UNIX-like machine with Python installed. This includes Red Hat, Debian, Ubuntu, macOS, BSDs, and Windows under a Windows Subsystem for Linux (WSL) distribution.
- VSCodium - Open Source Binaries of VSCode
- VSCodium is a community-driven, freely-licensed binary distribution of Microsoft’s editor VS Code.
ansible@ansible:~$ ansible --version
ansible [core 2.15.5]
  config file = /home/ansible/ansible.cfg
  configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
  ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
Ansible will select its configuration file from one of the following possible locations on the control node, in order of precedence:
- ANSIBLE_CONFIG (environment variable if set)
- ansible.cfg (in the current directory)
- ~/.ansible.cfg (in the home directory)
- /etc/ansible/ansible.cfg (installed as Ansible default)
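You can confirm which configuration file won the precedence check, since `ansible --version` prints it. As a quick sketch (the `/tmp` path is just an illustration), setting ANSIBLE_CONFIG overrides everything else for that run:

```shell
# Show which config file Ansible is currently using
ansible --version | grep "config file"

# Point ANSIBLE_CONFIG at an alternate file for a single run
ANSIBLE_CONFIG=/tmp/test_ansible.cfg ansible --version | grep "config file"
```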
Of special interest in the ansible.cfg file is the path to inventory:
Ansible is designed to allow you to perform automation tasks on multiple hosts at once, and in order to do this you need to provide an inventory file which defines a list of hosts to be managed (also called managed nodes) from the Ansible control node.
[all:vars]
ansible_user=ansible
ansible_ssh_pass=********
ansible_port=22
[linux]
ubuntu ansible_host=192.168.0.61
awx ansible_host=192.168.0.241
[control]
ansible ansible_host=192.168.0.188
[ontap]
192.168.0.101 shortname=cluster1
192.168.0.102 shortname=cluster2
ansible@ansible:~$ ansible linux --list-hosts
ansible@ansible:~$ ansible ontap --list-hosts
1.2) Ansible Commands and Modules
At a basic level, to use Ansible to perform tasks you need to specify one or more hosts and modules, along with any options.
i) To use the ping module in your ad-hoc command, you must specify the module using the "-m" argument, which is the short form of "--module-name".
ansible@ansible:~$ ansible linux -m ping
ansible@ansible:~$ ansible linux -m ping -o
The "-o" argument is the short form of "--one-line".
ii) There is an Ansible module called “command” that allows you to execute arbitrary Linux commands on a managed host.
ansible@ansible:~$ ansible linux -m command -a "uptime"
iii) Use the “file” module to create a directory on the host “ubuntu”.
ansible@ansible:~$ ansible ubuntu -m file -a "path=/var/new_dir state=directory"
iv) Ansible has a feature called “become” which instructs Ansible to use sudo to run the command on the managed host.
ansible@ansible:~$ ansible ubuntu -m file -a "path=/var/new_dir state=directory" -b
Note: Ansible will connect to machines using your current user name. To override the remote user name, use the -u parameter.
v) To see the (long) list of available modules:
ansible@ansible:~$ ansible-doc -l
To see the NetApp modules installed from the netapp.ontap collection:
ansible@ansible:~$ ansible-doc -l | grep netapp.ontap
Note: You can add the NetApp ONTAP module collection to your environment by using the ansible-galaxy command like this: ansible-galaxy collection install netapp.ontap
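Collections can also be listed in a requirements file and installed in one step with ansible-galaxy. A minimal sketch (the file name is conventional; the version constraint is illustrative):

```yaml
# requirements.yml — install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: netapp.ontap
    version: ">=22.0.0"   # illustrative version pin
```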
vi) Documentation (q to quit):
ansible@ansible:~$ ansible-doc netapp.ontap.na_ontap_volume
1.3) Write an Ansible Playbook Using an ONTAP Module
Basic structure of an Ansible playbook:
---
- name: Play 1
  hosts: target_hosts
  tasks:
    - name: Task 1
      module_name:
        key: value
    - name: Task 2
      second_module_name:
        key: value

- name: Play 2
  hosts: other_target_hosts
  tasks:
    - name: Task 3
      third_module_name:
        key: value
...
Note 1: The playbook begins with --- and ends with ..., which are YAML document markers. YAML files are indented with spaces and not tabs.
Note 2: The Ansible playbook should be idempotent, meaning it produces the same results no matter how many times it is performed.
playbook.yml
---
- hosts: localhost
  gather_facts: false
  name: Connectivity Test & Display ONTAP Info
  tasks:
    - name: "Collect ONTAP info"
      netapp.ontap.na_ontap_rest_info:
        hostname: cluster1.demo.company.com
        username: admin
        password: *********
        https: true
        validate_certs: false
        gather_subset: "svm/svms"
      register: info
    - name: Print ONTAP Response
      debug:
        msg: "{{ info }}"
Note 1: The hosts keyword is set to "localhost" because the NetApp Ansible modules work by making ONTAP REST API calls; the modules run locally on the control node rather than directly on the ONTAP system.
Note 2: "register" is an Ansible directive. It captures the task's output and assigns it to a variable (info).
Note 3: Variables are inserted into a string using two pairs of curly brackets.
ansible@ansible:~$ ansible-playbook playbook.yml
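Because the registered info variable is just a dictionary, Jinja2 can also drill into individual fields rather than printing the whole response. The na_ontap_rest_info module returns its data under an ontap_info key; the exact nesting shown below is an assumption based on the gather_subset used above:

```yaml
# Sketch only — the 'svm/svms' key mirrors the gather_subset value (assumed structure)
- name: Print just the SVM records
  debug:
    msg: "{{ info.ontap_info['svm/svms'] }}"
```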
2.1) Use Playbook Variables to Simplify Tasks
- Some ways to define a variable:
- Inventory file
- Within the playbook
- Variable files
- Command line
- Replace hostname, username and password in the above playbook with variable names.
- Use vars to tell Ansible what the values are.
- Use module_defaults to define parameters once for a whole group of modules.
- We can then remove those defaulted parameters from the netapp.ontap.na_ontap_rest_info task.
- By declaring the netapp.ontap collection namespace in the playbook with the "collections:" keyword, we can drop the netapp.ontap prefix from module names.
- Remove netapp_password from vars and replace it with vars_prompt, so anyone running the playbook has to supply the password at runtime.
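Variables can also be supplied on the command line with -e / --extra-vars, which takes precedence over values defined in the playbook (the hostname value here is just an illustration):

```shell
ansible-playbook playbook.yml -e "netapp_hostname=cluster2.demo.company.com"
```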
---
- hosts: localhost
  gather_facts: false
  name: Connectivity Test & Display ONTAP Info
  collections:
    - netapp.ontap
  module_defaults:
    group/netapp.ontap.netapp_ontap:
      hostname: "{{ netapp_hostname }}"
      username: "{{ netapp_username }}"
      password: "{{ netapp_password }}"
      https: true
      validate_certs: false
      use_rest: auto
  vars_prompt:
    - name: "netapp_password"
      prompt: "Enter the NetApp Admin Password"
      private: true
      confirm: true
  vars:
    netapp_hostname: cluster1.demo.company.com
    netapp_username: admin
  tasks:
    - name: "Collect ONTAP info"
      na_ontap_rest_info:
        gather_subset: "svm/svms"
      register: info
    - name: Print ONTAP Response
      debug:
        msg: "{{ info }}"
Note 1: With module_defaults set up, the na_ontap_rest_info task is much more concise. Additional tasks from the netapp.ontap collection will not need these parameters set.
Note 2: YAML has a feature called 'Anchors and Aliases' - see here
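As a quick illustration of anchors and aliases (a generic YAML sketch, not lab content): an anchor (&) names a mapping, and an alias (*) reuses it, which avoids repeating the same settings:

```yaml
defaults: &login        # anchor the shared values
  username: admin
  https: true

cluster1:
  <<: *login            # merge the anchored mapping into this one
  hostname: cluster1.demo.company.com

cluster2:
  <<: *login
  hostname: cluster2.demo.company.com
```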
Note 3: Ansible also has a feature called 'Ansible Vault' which can store passwords in encrypted files - see here
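For example, Ansible Vault can encrypt a variables file so the admin password never sits on disk in plain text; the playbook is then run with a flag that prompts for the vault password (file names here are illustrative):

```shell
# Encrypt a variables file containing netapp_password
ansible-vault encrypt secrets.yml

# Run the playbook, prompting for the vault password to decrypt it
ansible-playbook playbook.yml --ask-vault-pass
```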
(Screenshot: Ansible prompting for the password.)
2.2) Add NFS Provisioning Tasks
---
- hosts: localhost
  gather_facts: false
  name: Provision NFS
  collections:
    - netapp.ontap
  module_defaults:
    group/netapp.ontap.netapp_ontap:
      hostname: "{{ netapp_hostname }}"
      username: "{{ netapp_username }}"
      password: "{{ netapp_password }}"
      vserver: "{{ vserver }}"
      https: true
      validate_certs: false
      use_rest: auto
  vars_prompt:
    - name: "netapp_password"
      prompt: "Enter the NetApp Admin Password"
      private: true
      confirm: true
  vars:
    netapp_hostname: cluster1.demo.company.com
    netapp_username: admin
    vserver: svm1_cluster1
    aggr_name: cluster1_01_SSD_1
    size: 10
    volname: vol1
    client_match: 192.168.0.0/24
  tasks:
    - name: Create Export Policy
      na_ontap_export_policy:
        state: present
        name: "{{ volname }}_policy"
    - name: Set up Export Policy Rules
      na_ontap_export_policy_rule:
        state: present
        policy_name: "{{ volname }}_policy"
        client_match: "{{ client_match }}"
        ro_rule: sys
        rw_rule: sys
        super_user_security: sys
    - name: Create Volume
      na_ontap_volume:
        state: present
        name: "{{ volname }}"
        aggregate_name: "{{ aggr_name }}"
        size: "{{ size }}"
        size_unit: gb
        policy: "{{ volname }}_policy"
        junction_path: "/{{ volname }}"
        space_guarantee: "none"
        volume_security_style: unix
(Screenshot: running the playbook twice to show it is idempotent.)
2.3) Use Conditionals to Incorporate CIFS Into Your Playbook
- Ansible can use conditionals to execute tasks, or plays, when certain conditions are met. The condition is expressed using one of the available operators:
- == : equals
- != : does not equal
- > : greater than
- >= : greater than or equal to
- < : less than
- <= : less than or equal to
Proposed demo* flow: Create an export policy => If using NFS, add a rule to that policy => Create a volume that applies this policy => If using CIFS, create the CIFS share
*In practice we wouldn't do anything NFS-related if it's just a CIFS share. The demo is just an illustration.
First, create a variables file - vars.yml:
netapp_hostname: cluster1.demo.company.com
netapp_username: admin
vserver: svm1_cluster1
aggr_name: cluster1_01_SSD_1
volname: vol2
protocol: cifs
size: 10
client_match: 192.168.0.0/24
And replace the vars definition with:
vars_files:
  - ./vars.yml
The playbook from above with conditionals:
---
- hosts: localhost
  gather_facts: false
  name: Provision NAS Storage
  collections:
    - netapp.ontap
  module_defaults:
    group/netapp.ontap.netapp_ontap:
      hostname: "{{ netapp_hostname }}"
      username: "{{ netapp_username }}"
      password: "{{ netapp_password }}"
      vserver: "{{ vserver }}"
      https: true
      validate_certs: false
      use_rest: auto
  vars_prompt:
    - name: "netapp_password"
      prompt: "Enter the NetApp Admin Password"
      private: true
      confirm: true
  vars_files:
    - ./vars.yml
  tasks:
    - name: Create Export Policy
      na_ontap_export_policy:
        state: present
        name: "{{ volname }}_policy"
    - name: Set up Export Policy Rules
      na_ontap_export_policy_rule:
        state: present
        policy_name: "{{ volname }}_policy"
        client_match: "{{ client_match }}"
        ro_rule: sys
        rw_rule: sys
        super_user_security: sys
      when: protocol.lower() == 'nfs'
    - name: Create Volume
      na_ontap_volume:
        state: present
        name: "{{ volname }}"
        aggregate_name: "{{ aggr_name }}"
        size: "{{ size }}"
        size_unit: gb
        policy: "{{ volname }}_policy"
        junction_path: "/{{ volname }}"
        space_guarantee: "none"
        volume_security_style: unix
    - name: Create CIFS share
      na_ontap_cifs:
        state: present
        share_name: "{{ volname }}"
        path: "/{{ volname }}"
      when: protocol.lower() == 'cifs'
And output:
Note: Two key Ansible features for developing more complex flows with a playbook: Handlers and Loops.
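As a small sketch of the loops feature (not part of the lab; the volume names are made up), a single task can iterate over a list, creating one volume per item:

```yaml
# Sketch: 'loop' runs the task once per list item, exposed as {{ item }}
- name: Create several volumes
  na_ontap_volume:
    state: present
    name: "{{ item }}"
    aggregate_name: "{{ aggr_name }}"
    size: "{{ size }}"
    size_unit: gb
  loop:
    - vol_a
    - vol_b
    - vol_c
```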
3) Make Playbooks More Useful with Ansible Roles
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible elements based on a known file structure.
To create a role you create a specific directory structure within a "roles" directory (see illustrative structure below). Each of the tasks, vars, and handlers directories expects a main.yml entry-point file; the files directory simply holds static files used by the role.
- roles/
  - role_name/
    - tasks/
      - main.yml
    - vars/
      - main.yml
    - handlers/
      - main.yml
    - files/
Our Lab roles:
- roles
  - provision_nas
    - tasks
      - main.yml
    - vars
      - main.yml
Note: Roles do not support module_defaults, which is why each task below repeats the connection parameters. Content of tasks/main.yml:
---
- name: Create Export Policy
  na_ontap_export_policy:
    state: present
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    vserver: "{{ vserver }}"
    https: true
    validate_certs: false
    use_rest: auto
    name: "{{ volname }}_policy"
- name: Set Up Export Policy Rules
  na_ontap_export_policy_rule:
    state: present
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    vserver: "{{ vserver }}"
    https: true
    validate_certs: false
    use_rest: auto
    policy_name: "{{ volname }}_policy"
    client_match: "{{ client_match }}"
    ro_rule: sys
    rw_rule: sys
    super_user_security: sys
  when: protocol.lower() == 'nfs'
- name: Create Volume
  na_ontap_volume:
    state: present
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    vserver: "{{ vserver }}"
    https: true
    validate_certs: false
    use_rest: auto
    name: "{{ volname }}"
    aggregate_name: "{{ aggr_name }}"
    size: "{{ size }}"
    size_unit: gb
    policy: "{{ volname }}_policy"
    junction_path: "/{{ volname }}"
    space_guarantee: "none"
    volume_security_style: unix
- name: Create CIFS Share
  na_ontap_cifs:
    state: present
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    vserver: "{{ vserver }}"
    https: true
    validate_certs: false
    use_rest: auto
    share_name: "{{ volname }}"
    path: "/{{ volname }}"
  when: protocol.lower() == 'cifs'
Content of vars/main.yml:
netapp_hostname: cluster1.demo.company.com
netapp_username: admin
vserver: svm1_cluster1
aggr_name: cluster1_01_SSD_1
volname: vol3
protocol: cifs
size: 10
client_match: 192.168.0.0/24
Then we have an example Ansible playbook - role_example.yml - which uses the above role.
role_example.yml:
- hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: "netapp_password"
      prompt: "Enter the NetApp Admin Password"
      private: true
      confirm: true
  roles:
    - role: provision_nas
      volname: vol3
      protocol: nfs
    - role: provision_nas
      volname: vol4
      protocol: cifs
Note: Variables in the playbook take precedence over values defined in the role.
ansible@ansible:~$ ansible-playbook role_example.yml
Note: Roles make it easy to build a library of automation scripts that you can chain together into more complex workflows.
THE END