Carrying on from Part 2, this post runs through setting up NFS on Clustered Data ONTAP, then tests it by mounting from a FreeBSD host and as an NFS datastore on a VMware ESXi 5.1 host.
Note: This could also be done using the NetApp VSC (Virtual Storage Console); I’ll test that with CDOT at a later date.
Preparations
First we need to create an aggregate (at this stage we only have the 3-disk root volume container aggregate; best practice with CDOT is to have a dedicated aggregate for the root volume). The command below creates the aggregate aggr2_A01 on node CLUSA-01 with 11 x 1 GB disks (9 data, 1 parity, 1 dparity):
storage aggregate create aggr2_A01 -nodes CLUSA-01 -diskcount 11 -disksize 1
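Before moving on, it’s worth confirming the aggregate came up as expected - a quick verification sketch (exact output columns vary by ONTAP release):

```
storage aggregate show -aggregate aggr2_A01 -fields state,size,availsize
```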
Second, I’m going to create a multimode interface group (not LACP here, because this is running in VMware Workstation) called a0a, and add ports e0d and e0e to it:
network port ifgrp create -node CLUSA-01 -ifgrp a0a -distr-func ip -mode multimode
network port ifgrp add-port -node CLUSA-01 -ifgrp a0a -port e0d
network port ifgrp add-port -node CLUSA-01 -ifgrp a0a -port e0e
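To confirm the ifgrp and its member ports took (again, just a verification sketch):

```
network port ifgrp show -node CLUSA-01 -ifgrp a0a
```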
Creating the vServer and Configuring NFS File Sharing
The lines below will:
i. Create a vServer
ii. Create an NFS configuration for the vServer
iii. Create a logical interface (lif) for the vServer
iv. Create a route to the vServer’s NFS IP
v. Create an “all open” export policy for the vServer
vi. Allow the NFS protocol for the vServer
vii. Create a volume attached to the vServer
viii. Mount the volume to a junction path
ix. Assign the export policy to the vServer root volume
vserver create -vserver vs_nfs -rootvolume vs_nfs -aggregate aggr2_A01 -rootvolume-security-style unix -ns-switch file -nm-switch file
vserver nfs create -vserver vs_nfs
network interface create -vserver vs_nfs -lif vs_nfs_data -role data -data-protocol nfs -home-node CLUSA-01 -home-port a0a -address 10.1.3.100 -netmask 255.255.255.0
network routing-groups route create -server vs_nfs -routing-group d10.1.3.0/24 -gateway 10.1.3.5
vserver export-policy rule create -vserver vs_nfs -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -anon 0 -superuser any -ruleindex 1
vserver modify -vserver vs_nfs -allowed-protocols nfs
vol create -vserver vs_nfs -volume nfsvol1 -aggregate aggr2_A01 -size 500MB -policy default -unix-permissions 777
volume mount -volume nfsvol1 -vserver vs_nfs -junction-path /nfsvol1
volume modify -vserver vs_nfs -volume vs_nfs -policy default
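Assuming everything took, the new vServer, lif, junction, and export rule can be sanity-checked with show commands along these lines (a sketch; field names can vary slightly between ONTAP releases):

```
vserver show -vserver vs_nfs
network interface show -vserver vs_nfs
volume show -vserver vs_nfs -fields junction-path
vserver export-policy rule show -vserver vs_nfs
```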
Testing on a VMware ESXi Host
With IP connectivity from the ESXi host:
Select the host > Configuration tab > Hardware panel: Storage > Add Storage…
Then follow the wizard through to add ‘Network File System’ type storage with -
Server IP = 10.1.3.100
Folder = /nfsvol1
Datastore Name = NFSVOL1
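If the ESXi Shell (or SSH) is enabled on the host, the same datastore can be added from the command line instead of the wizard - a sketch using the esxcli storage nfs namespace on ESXi 5.x:

```
esxcli storage nfs add -H 10.1.3.100 -s /nfsvol1 -v NFSVOL1
esxcli storage nfs list
```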
Image: Properties for adding the CDOT NFS volume via VMware vSphere
And - if all’s well on the networking side - that’s it, we have our NFSVOL1 mounted in VMware!
Image: Mounted CDOT NFS volume in VMware vSphere
Testing via a FreeBSD Host
Run the commands:
mkdir /mnt/nfsvol1
mount 10.1.3.100:/nfsvol1 /mnt/nfsvol1
cd /mnt/nfsvol1
ls
And you should see the .snapshot folder (there’ll be nothing else in there to see).
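To make the mount persistent across reboots on the FreeBSD host, an /etc/fstab entry along these lines will do (a sketch; tune the mount options to taste):

```
10.1.3.100:/nfsvol1  /mnt/nfsvol1  nfs  rw  0  0
```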
Roll Back
To get good at the CLI, the best way to learn is practice - create and recreate, again and again, until you can do it in your sleep. The following commands undo everything we’ve done in this post, so you can do it all over again.
volume unmount -vserver vs_nfs -volume nfsvol1
volume offline -vserver vs_nfs -volume nfsvol1
volume delete -vserver vs_nfs -volume nfsvol1
volume offline -vserver vs_nfs -volume vs_nfs
volume delete -vserver vs_nfs -volume vs_nfs
vserver delete -vserver vs_nfs
The next two commands are only necessary to delete the ifgrp and aggregate created in the preparation steps. I’ll leave those in place for future posts in this series, so these lines are included just for completeness.
network port ifgrp delete -node CLUSA-01 -ifgrp a0a
storage aggregate delete -aggregate aggr2_A01