Sunday, 28 July 2013

SnapMirror 7DOT CLI Notes and Procedures: Part 1/2


Section 1/2: Notes

SnapMirror Over Multiple Paths:

Example in /etc/snapmirror.conf:
SRC_conf = multi (SRC-e0a,DEST-e0a) (SRC-e0b,DEST-e0b)
SRC_conf:vol1 DEST:vol1 - sync

Throttle Network Usage:

CLI command (adjusts the throttle of a transfer that is already in progress):
snapmirror throttle {n} DEST:DEST_PATH
n = throttle value in kilobytes per second (1 to 125,000)

Options:
options replication.throttle.enable {on/off}
options replication.throttle.incoming.max_kbps {value}
options replication.throttle.outgoing.max_kbps {value}
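
A permanent per-relationship throttle can instead be set with the kbs argument in /etc/snapmirror.conf. A minimal sketch, assuming hypothetical hostnames and a 2,000 KB/s limit:
SRC:vol1 DEST:vol1 kbs=2000 15 * * *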

Planning for SnapMirror Updates:

Use the -
snap delta VOL_NAME
- command to determine the rate of data change between Snapshot copies on a volume.

Compare the file system size of the two volumes with:
vol status -b VOL_NAME
Note: snapmirror initialize sets the volume option fs_size_fixed to on!
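
For example, against a hypothetical volume named vol1:
snap delta vol1
vol status -b vol1
snap delta also reports the rate of change between the newest Snapshot copy and the active file system, which helps in sizing the next scheduled update.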

SnapMirror Access:

Use the -
snapmirror.access
- option, or create the file -
/etc/snapmirror.allow
- to specify which systems are allowed to copy from this source.
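
A minimal sketch, assuming a destination system named DEST. Either set the option on the source:
options snapmirror.access host=DEST
or append the destination hostname to /etc/snapmirror.allow, for example with:
wrfile -a /etc/snapmirror.allow DEST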

SnapMirror.Conf:

Syntax:
SRC:/vol/SRC_VOL/{SRC_QTREE} DEST:/vol/DEST_VOL/{DEST_QTREE} {arguments} {schedule}
Note: Schedule is - minute hour day_of_month day_of_week

Arguments:
Check the manual for - Transfer speed, Restart Mode, Checksum, Ops (Outstanding) suffix, Visibility interval, TCP window size
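
A hedged example entry with hypothetical names, a 5,000 KB/s transfer limit, and a schedule of 15 minutes past every hour on days 1 to 5 of the week:
SRC:/vol/vol1/q1 DEST:/vol/vol1/q1 kbs=5000,restart=always 15 * * 1,2,3,4,5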

snapmirror status:

-l displays the long format of the output
-q displays which volumes or qtrees are quiesced or quiescing

Listing SnapMirror Snapshot Copies:

Use:
snap list

SnapMirror Snapshot copies are distinguished by their naming convention, for example*:
DEST(0012345678)_vol01.0
*destination_system(sysid)_volume.number
Note: If SnapMirror Snapshot copies are deleted, affected relationships will have to be re-initialized!

SnapMirror Log Files:

options snapmirror.log.enable on/off
Note 1: default is on
Note 2: location /etc/log/snapmirror
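
To read the log, for example:
rdfile /etc/log/snapmirror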

Converting a Replica to a Writable File System:

This is done on the destination:
snapmirror quiesce /vol/DEST_VOL/DEST_QTREE
snapmirror break /vol/DEST_VOL/DEST_QTREE

Re-synchronizing a Broken-off Relationship:

This is done on the system with the less up-to-date file system (typically the destination; its volume is reverted to the newest common Snapshot copy before resynchronizing):
snapmirror resync DEST_HOSTNAME:DEST_VOL

Making a Break Permanent:

SRC> snapmirror destinations SRC_VOL
SRC> snapmirror release SRC_VOL DST_HOSTNAME:DST_VOL
Note: Remember to remove the entry from the /etc/snapmirror.conf file!

Snapmirror Migrate:

The snapmirror migrate command is run on the storage system which holds the source volume:
snapmirror migrate SRC_HOSTNAME:SRC_VOL DST_HOSTNAME:DST_VOL
This stops NFS and CIFS on the source, migrates NFS file handles to the destination, makes the source restricted, and makes the destination volume read-write.

Section 2/2: Procedures

Procedure 1: How to Create SnapMirror Relationships

Note 1: snapmirror and snapmirror_sync licenses have already been added to SRC and DES, and D2 has the snapmirror license.
Note 2: Source volume has been created.
Note 3: Here we have two destinations to create a cascaded relationship from SRC -> DES -> D2.

Verify volumes are created and online:
SRC> vol status
Specify hostnames of the SnapMirror destination systems:
SRC> options snapmirror.access host=DES,D2

Create volume on destination (DES):
DES> vol create vol1 aggr1 10g
Set volume options:
DES> vol options vol1 create_ucode on
DES> vol options vol1 convert_ucode on
Restrict the volume:
DES> vol restrict vol1
Verify destination volumes are restricted:
DES> vol status
Initialize the SnapMirror:
DES> snapmirror initialize -S SRC:vol1 DES:vol1
Monitor the status of the transfer:
DES> snapmirror status
Create (or edit) the snapmirror.conf file:
DES> wrfile /etc/snapmirror.conf
SRC:vol1 DES:vol1 - sync
{Ctrl C exits wrfile}
Verify the relationships are in sync:
DES> snapmirror status -l
List the snapshot copies:
DES> snap list vol1
Authorize the second destination system:
DES> options snapmirror.access host=D2

Create volume on destination (D2):
D2> vol create vol1 aggr1 10g
Set volume options:
D2> vol options vol1 create_ucode on
D2> vol options vol1 convert_ucode on
Restrict the volume:
D2> vol restrict vol1
Verify destination volumes are restricted:
D2> vol status
Initialize the SnapMirror:
D2> snapmirror initialize -S DES:vol1 D2:vol1
Monitor the status of the transfer:
D2> snapmirror status
Create (or edit) the snapmirror.conf file:
D2> wrfile /etc/snapmirror.conf
DES:vol1 D2:vol1 - 0-59/5 * * *
{Ctrl C exits wrfile}
Verify the relationships are in sync:
D2> snapmirror status -l
List the snapshot copies:
D2> snap list vol1

Back on SRC!
Verify the status of the SnapMirror relationships:
SRC> snapmirror status
List the snapshot copies:
SRC> snap list vol1
List all the snapmirror destinations:
SRC> snapmirror destinations
Read the snapmirror log:
SRC> rdfile /etc/log/snapmirror

Here we convert a snapmirror sync to semi-sync!
Verify the status of the SnapMirror relationships:
DES> snapmirror status
Display the contents of snapmirror.conf:
DES> rdfile /etc/snapmirror.conf
Modify the contents of snapmirror.conf:
DES> wrfile /etc/snapmirror.conf
SRC:vol1 DES:vol1 outstanding=10s sync
{Ctrl C exits wrfile}
Note: When the snapmirror.conf file is edited, SnapMirror drops to Async mode (replicates once each minute.)
Verify the status of the SnapMirror relationships:
DES> snapmirror status

Back on SRC!
Verify the status of the SnapMirror relationships:
SRC> snapmirror status
Read the snapmirror log:
SRC> rdfile /etc/log/snapmirror

Procedure 2: How to Configure SnapMirror to Support Multiple Paths

To display network interface configuration:
SRC> ifconfig -a
Read the hosts file:
SRC> rdfile /etc/hosts
Note: Each interface should have a hosts entry on both source and destination, for example SRC-e0a and SRC-e0b, and DES-e0a and DES-e0b.
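Hypothetical /etc/hosts entries for illustration (the IP addresses are made up):
10.1.1.10 SRC-e0a
10.1.2.10 SRC-e0b
10.1.1.20 DES-e0a
10.1.2.20 DES-e0b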
Clear the statistics of network interfaces e0a and e0b:
SRC> ifstat -z e0a
SRC> ifstat -z e0b

On the destination!
To display network interface configuration:
DES> ifconfig -a
Read the hosts file:
DES> rdfile /etc/hosts
Edit the snapmirror.conf file to add a connection name line:
DES> wrfile /etc/snapmirror.conf
SRC_conf = multi (SRC-e0a,DES-e0a) (SRC-e0b,DES-e0b)
SRC_conf:vol1 DES:vol1 - sync
{Ctrl C exits wrfile}
Verify the status of the SnapMirror relationships:
DES> snapmirror status

On the source!
Use ifstat to verify the data transfer is occurring on both network interfaces:
SRC> ifstat e0a
SRC> ifstat e0b

Continued …


SnapMirror 7DOT CLI Notes and Procedures: Part 2/2


Procedure 3: How to Set Up SnapMirror (quick re-run)

SRC> vol create vol1 aggr1 10g
SRC> vol options vol1 create_ucode on
SRC> vol options vol1 convert_ucode on
SRC> vol options vol1 nosnap on
SRC> qtree create /vol/vol1/q1
SRC> license add SNAPMIRRORCODE
SRC> options snapmirror.access host=DST

DST> vol create vol1 aggr1 10g
DST> vol options vol1 create_ucode on
DST> vol options vol1 convert_ucode on
DST> vol options vol1 nosnap on
DST> license add SNAPMIRRORCODE
DST> vol restrict vol1
DST> snapmirror initialize -S SRC:vol1 DST:vol1

Procedure 4: How to do a SnapMirror Failover and Failback

i. SnapMirror Failover

Convert the read-only volume replica to a writable file system:
DST> snapmirror break vol1

ii. SnapMirror Failback

On the destination
Enable the source for snapmirror access to the destination:
DST> options snapmirror.access host=SRC

On the source
Resync the relationship:
SRC> snapmirror resync -S DST:vol1 SRC:vol1
SRC> snapmirror status
SRC> snapmirror break vol1

On the destination
Release the reverse relationship:
DST> snapmirror destinations
DST> snapmirror release vol1 SRC:vol1
DST> snapmirror status

On the source
Identify the snapshot copy created by the resync:
SRC> snapmirror status -l
There will be a ‘Broken-off’ relationship and here we want to delete its ‘Base Snapshot’.
SRC> snap delete vol1 SRC(0012345678)_vol1.3
SRC> snapmirror status

On the destination
Reinstate the original SnapMirror relationship:
DST> snapmirror resync -S SRC:vol1 DST:vol1
Confirm y to “Are you sure you want to resync the volume?”
DST> snapmirror status

iii. Testing the relationship is working

On the source
SRC> snap create vol1 test_snap
SRC> snap list vol1

On the destination
DST> snapmirror update -S SRC:vol1 DST:vol1
DST> snap list vol1

Procedure 5: How to use SnapMirror with FlexClone for DR Testing

i. Overview

Source: Create a regular Snapshot copy of the source volume
SRC> snap create vol1 {SNAPSHOT_NAME}
Destination: Trigger a SnapMirror Update
DST> snapmirror update vol1
Destination: Clone volume using the Snapshot copy
DST> vol clone create {CLONE_NAME} -b vol1 {SNAPSHOT_NAME}
Destination: View shared Snapshot copies
DST> snap list vol1

ii. Creating the FlexClone Volume

On the destination
DST> snap list -b vol1
DST> vol clone create clone_vol1 -b vol1 test_snap
DST> vol status clone_vol1
DST> snap list vol1
DST> snapmirror update -S SRC:vol1 DST:vol1

iii. Splitting of the Clone

On the destination
DST> vol clone split estimate clone_vol1
DST> vol clone split start clone_vol1
DST> vol status -v clone_vol1

Sunday, 14 July 2013

How to Obtain and Upload Coredumps and Log Files in Clustered ONTAP 8.2

Note: Methods 1 & 2 do not need CDOT 8.2. Method 3 is written specifically with CDOT 8.2 in mind.

Method 1: To send coredumps via FTP

Note: You will need a NetApp Global Support case open first.

Run these commands:

cMode82::> system node coredump show
cMode82::> system node coredump upload -node cMode82-01 -corename core.1234567890.2013-07-13.03_30_30.nz -location ftp://ftp.netapp.com/to-ntap/ -type kernel -casenum 1234567890

login = anonymous
password = youremail@company.com

Method 2: Locating coredumps and logs via the systemshell and transferring via SCP

# Access system shell and browse /mroot/etc/log
cMode82::> security login unlock -username diag
cMode82::> security login password -username diag
Please enter a new password: XXXXXXXX
cMode82::> set -privilege advanced
cMode82::*> systemshell -node cMode82-01
login: diag
Password: XXXXXXXX

# Browse /mroot/etc/log
cMode82-01% cd /mroot/etc/log
cMode82-01% ls

# Delete any old tar files
cMode82-01% rm *.tar.gz

# ZIP up all the contents of /mroot/etc/log
cMode82-01% tar -czvf cMode82-01-log.tar.gz *.*

# scp the files to an administration host
cMode82-01% scp /mroot/etc/log/cMode82-01-log.tar.gz admin@10.11.12.13:/home/admin/cMode82-01-log.tar.gz

# Browse /mroot/etc/crash
cMode82-01% cd /mroot/etc/crash
cMode82-01% ls

# Delete any old tar files
cMode82-01% rm *.tar.gz

# scp the files to an administration host
cMode82-01% scp /mroot/etc/crash/core.1234567890.2013-07-13.03_30_30.nz admin@10.11.12.13:/home/admin/core.1234567890.2013-07-13.03_30_30.nz

# Exit, re-lock diag, and return to priv admin
cMode82-01% exit
cMode82::*> security login lock -username diag
cMode82::*> set -privilege admin

Then upload the files to https://support.netapp.com/upload

Method 3: Using the SPI (Service Processor Infrastructure) to retrieve files via HTTP(S)

All the commands below are run from the Cluster Shell (e.g. cMode82::>)

1. Confirm the IP address and firewall of the cluster management lif:
network interface show -role cluster-mgmt -fields address,firewall-policy

2. Confirm web services are enabled and online:
system services web show

3.  Confirm the firewall policy used by the cluster management LIF allows http(s):
system services firewall policy show -policy mgmt

4. Confirm spi web services are enabled for the cluster management vserver and any desired node management vservers:
vserver services web show -name spi

5. Enable web services if necessary:
vserver services web modify -vserver cMode82 -name spi -enabled true

6. Confirm the cluster user account is configured for http access method:
security login show -username admin -application http

7. Confirm (and if necessary create) admin access-control role for spi web service:
vserver services web access create -role admin -name spi -vserver cMode82
vserver services web access show -type admin -role admin -name spi

8. Access and download the files with a web browser at:
http(s)://cluster-mgmt-ip/spi/node-name/etc/log
http(s)://cluster-mgmt-ip/spi/node-name/etc/crash

Image: Clustered ONTAP /etc/crash Available over HTTP
Image: Clustered ONTAP /etc/log Available over HTTP

Friday, 12 July 2013

Notes on the 7-Mode Transition Tool (7MTT)

The tool to migrate CIFS and NFS volumes to Clustered Data ONTAP 8.2 from Data ONTAP 7.3.3+ and 8.0.3+ operating in 7-Mode!


Part 1: Introduction

Supported versions:
- Data ONTAP 7.3.3 and above
- Data ONTAP 8.0.3 and above
- Data ONTAP 8.1 and above
=> to Clustered Data ONTAP 8.2

License requirements:
- SnapMirror license on the source (7-Mode storage system)
- SnapMirror license on the cluster
- (On the CDOT system) feature licenses to perform required 7DOT system functions
- (Optional) CIFS and NFS license

Preparing the 7-Mode Storage System for Transition:
license add SnapMirror_License
options snapmirror.enable on
options snapmirror.access all
secureadmin setup ssl
options ssl.enable on
options ssl.v3.enable on
options httpd.admin.ssl.enable on
options httpd.admin.enable on
options interface.snapmirror.blocked ""
options interface.blocked.mgmt_data_traffic off
vol options v_name no_i2p off
vol options v_name read_realloc off
vol options v_name nvfail off
vol online v_name

Preparing Clustered Data ONTAP Storage Systems - part 1:
*The language setting of the Vserver must match the language setting of the 7-Mode volumes to be transitioned
SFO (Storage Fail Over) must be enabled on the aggregate:
storage aggregate modify -ha-policy sfo -aggregate a_name
*Transition from 32-bit aggregate on 7-mode to 64-bit aggregate on C-mode requires an additional 5% space in the destination aggregate
*For Vserver peer relationship - clusters should not have a Vserver with same name

Preparing Clustered Data ONTAP Storage Systems - part 2:
Verify SSH connection:
ssh username@cluster_mgmt_IP
Verify SSL enabled:
system services web show
Verify HTTPs allowed:
system services firewall policy show
Create an intercluster LIF on each node:
network interface create -vserver nodename -lif intercluster -role intercluster -home-node nodename -home-port e0f -address INTERCL_IP -netmask INTERCL_SUBNET
Create a static route for the intercluster LIF:
network routing-groups route create
Verify the use of the intercluster LIF:
network ping
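
A sketch of these last two steps with made-up values (the routing group name, subnet, and parameter names are assumptions for 8.2; verify against your release and the remote system's address):
network routing-groups route create -vserver nodename -routing-group i10.0.2.0/24 -destination 0.0.0.0/0 -gateway 10.0.2.1
network ping -lif intercluster -lif-owner nodename -destination REMOTE_IP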

Installation Requirements:
64-bit Windows 7 or 2008R2
64-bit JRE 1.6
An available Port 8443
IE 9, Chrome 20 or Firefox 14/16
Screen resolution 1280 x 1024

Download 7MTT from:

Installation:
Next > Install > Finish

GUI Web Address:
https://127.0.0.1:8443/transition

Part 2.1: Transitioning using the 7MTT Web UI

Image: 7-Mode Transition Tool 1.0

Steps:
1. Add 7-Mode controllers and Clustered ONTAP clusters
2. Use New Transition Project to create a project
3. Add controllers and clusters to the project
4. Add volumes by configuring a subproject
5. Run prechecks
6. Start the subproject
7. Complete the subproject (cutover)
Note 1: IP addresses can be migrated automatically or manually via “Map 7-Mode IP Addresses to Cluster LIFs”!
Note 2: ‘SnapMirror Schedule for Transition’ controls the transition incremental updates.

Part 2.2: Transitioning using the 7MTT CLI

Image: 7MTT CLI Main Menu
(Optional) Configure NFS and Kerberos

*From the cluster shell
Add the NFS license:
system license add NFS_LICENSE
To verify the time skew:
vserver services kerberos-realm show
To correct the time skew:
vserver services kerberos-realm modify
To transition an Active Directory (AD) Kerberos server:
vserver services dns create
To verify that Domain Name System (DNS) entry exists from the AD domain on the Vserver:
vserver services dns show
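
A hypothetical DNS example for illustration (Vserver vs1, AD domain lab.local, name server 10.0.0.10):
vserver services dns create -vserver vs1 -domains lab.local -name-servers 10.0.0.10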

(Optional) Configure CIFS

*From the 7-Mode CLI
Disconnect all CIFS access:
cifs terminate
To create the CIFS server:
cifs setup

*From the cluster shell
Add the CIFS license:
system license add CIFS_LICENSE
Verify there is a data LIF:
network interface show
To configure DNS:
vserver services dns create
To create a CIFS server on the Vserver:
vserver cifs create
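
A hypothetical example of that last step (Vserver vs1, CIFS server name VS1CIFS, AD domain LAB.LOCAL):
vserver cifs create -vserver vs1 -cifs-server VS1CIFS -domain LAB.LOCAL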

Part 3: Post-transition Tasks

Configure the features (if required):
IPv6 and FPolicy (not transitioned automatically to the Vserver)

Check Vserver Readiness:
- Verify the volumes on the Vserver are online and read/write
- Verify the transitioned IP addresses are up and reachable on the Vserver
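
A minimal sketch of these checks, assuming a hypothetical Vserver named vs1:
volume show -vserver vs1 -fields state,type
network interface show -vserver vs1 -fields address,status-oper,status-admin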

Client access:
Redirect client access to the CDOT volumes

Sunday, 7 July 2013

SnapDrive for Windows with VMware VMDKs on NFS Datastores in a vFiler Context

Following on from the previous post, here we extend this to NFS datastores presented from a vFiler. Also, here we are not using RPC, since our vFiler is not domain joined (cifs setup has not been run), but we do have DNS configured!

Part 1: Our vFiler Configuration

# Creating the vFiler
vol create v_vflonntp01_01 -s none aggr0 100g
vfiler create vflonntp01 -i 10.0.1.125 /vol/v_vflonntp01_01

# Changing to the vFiler context
vfiler context vflonntp01

## All the following bits are done in the vfiler context
## vflonntp01@LONNTP01>

# Creating a Qtree and export to our VMware Hosts
qtree create /vol/v_vflonntp01_01/q_vflonntp01_01
exportfs -p rw=10.0.0.0/16 /vol/v_vflonntp01_01/q_vflonntp01_01

# We will configure vFiler access over HTTP* in SDW
options httpd.admin.enable on

# Create a user
useradmin user add local_sdw_svc -g Administrators

# This next bit is only to enable SSH (PuTTY) access to our vFiler so we can test our credentials
options ssh2.enable off
secureadmin setup ssh
options ssh2.enable on

*vFiler access is only possible over HTTP in the vFiler context - HTTPS is not an option. Why? options httpd.admin.ssl.enable is only an option in the vfiler0 context!

Part 2: Configuration of the VSC

The vFiler has already been added to DNS. We add the vFiler as a new storage system in the ‘Virtual Storage Console - Backup and Recovery’ section using its DNS name as below:

Image: vfiler (vflonntp01) added to SMVI/VSC
Part 3: Configuration of SnapDrive for Windows

Not that it is absolutely necessary (but useful practice) - here we have created a local administrator account on our server called ‘local_sdw_svc’ with the same password as we configured when running the useradmin user add local_sdw_svc command earlier. The SnapDrive and SnapDrive Management Services are set to use the local account as below.

Image: SDW services using local admin account with same name as account created on vFiler
In the SnapDrive for Windows MMC, under Transport Protocol Settings:
Default tab - unticked ‘Enable’
Storage Systems tab - added in our vFiler using HTTP and the local account credentials

Image: SDW Transport Protocol Settings - Default tab with ‘Enable’ unchecked
Image: SDW Transport Protocol Settings - Storage Systems tab
And proof that our VMDKs on NFS datastores presented by a vFiler are showing in SnapDrive for Windows!

Image: SDW - VMDKs on NFS via a vFiler
A Question

Q: Is it possible to use RPC with a vFiler?
A: Yes. If cifs setup has been run on the vFiler, as in the preceding post here, then RPC will work with a vFiler in just the same way. With DNS resolution etcetera all configured correctly, there is no need to specify the vFiler as a Storage System under Transport Protocol Settings, and the Default ‘Use RPC’ can be enabled.

An Error

Not the most useful or insightful error! The below error was received whilst I was playing around with RPC and HTTPS to my vFiler (without having run cifs setup, so the RPC was always doomed!) The error message could just say ‘Please use HTTP!’

Image: Error “Unable to modify or add transport protocol. Failed to get Data ONTAP version …”

How to Setup NetApp SnapDrive for Windows with VMware VMDKs on NFS Datastores

Here we run through setting up SnapDrive for Windows (SDW) on a Windows 2008R2 SP1 server, to manage VMDKs that exist on NFS datastores provisioned on NetApp storage.

Note: SnapDrive for Windows cannot create the VMDKs on NFS. The VMDKs must first be created in vCenter, and then set online, initialized, and formatted in Windows Disk Management.

Versions used in this lab

- VMware vSphere 5.1
- Windows 2008R2 SP1
- NetApp Data ONTAP 8.1.2 7-Mode
- NetApp Virtual Storage Console 4.1
- NetApp SnapDrive for Windows 6.5

Note: Check out the NetApp Interoperability Matrix Tool (IMT) at http://support.netapp.com/matrix for supported configurations!

Starting Point

- The snapdrive_windows license is installed on the NetApp controller
- Newly provisioned Windows 2008R2 SP1 server with Windows Updates
- VMware Tools installed
- Domain joined
- A domain user account LAB\srv_snapdrive created
- srv_snapdrive placed in the local administrator group on the server where we are installing SDW
- IPv6 is disabled
- Windows firewall is turned off on all profiles
- UAC (User Account Control) is turned off
- .NET Framework 3.5.1 feature is installed

Image: Select .NET Framework 3.5.1 only

Note: Being a lab machine this has no installed anti-virus!

Installation Walkthrough

Part 1: Configuration on the Storage Controller

SnapDrive works most simply via RPC. Here we have CIFS already enabled and configured on the storage system (via cifs setup) and we want to add the LAB\srv_snapdrive service account to the administrators group on the controller. The command is:

useradmin domainuser add LAB\srv_snapdrive -g administrators

Part 2: Installation of SnapDrive for Windows

Note: Here we are logged on as the LAB\srv_snapdrive account!

1: Double-click on SnapDrive6.5_x64

Image: SnapDrive6.5_x64.exe
2: SnapDrive - Installation Wizard (SDIW):

2.1 Welcome to the SnapDrive Installation
Click Next >

2.2 License Agreement
Accept and click Next >

2.3 SnapDrive License
Here we select ‘Per Storage System’ and click Next >

Image: SDIW License
2.4 Customer Information
Enter User Name and Organization and click Next >

2.5 Destination Folder
Choose where to install SnapDrive (default is C:\Program Files\NetApp\SnapDrive\) and click Next >

2.6 VirtualCenter or ESX Server Web Service Credentials
DO NOT SELECT ‘Enable VirtualCenter or ESX Server Settings’ - this is only needed if you are using iSCSI or FC attached RDMs!
And click Next >

Image: SDIW VC or ESX Credentials
2.7 Virtual Storage Console Details
Select ‘Enable Virtual Storage Console Details’ and enter IP address and port of the VSC, and click Next >

Image: SDIW VSC Details
2.8 SnapDrive Service Credentials
Enter the SnapDrive service account and password, and click Next >

Image: SDIW Service Credentials
2.9 SnapDrive Web Service Configuration
Specify SnapDrive Web Service Configuration and click Next >

Image: SDIW Web Service Configuration
2.10 Transport Protocol Default Setting
Since we have already added our Windows domain account LAB\srv_snapdrive on the storage controller, tick ‘Enable Transport Protocol Settings’ and choose RPC, and click Next >

Image: SDIW Transport Protocol Setting
2.11 OnCommand Configuration
Leave unchecked ‘Enable Protection Manager Integration’ and click Next >
Note: DFM/Protection manager cannot be used to manage SnapVault-ing of VMDKs on NFS!

2.12 Ready to Install the SnapDrive Application
Click Install!

2.13 SnapDrive Installation Completed
Click Finish!

Tip: If you encounter problems with slow/hanging SnapDrive service on restart, check that the ‘WinHTTP Web Proxy Auto-Discovery Service’ is started and set to Automatic start (due to Microsoft Security update MS12-074 which added a dependency on the WinHTTP service.)
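
A sketch of the corresponding Windows commands, assuming the service short name is WinHttpAutoProxySvc:
C:\> sc config WinHttpAutoProxySvc start= auto
C:\> net start WinHttpAutoProxySvc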

Verifying Disks in the SnapDrive MMC

All being well, when you open the SnapDrive MMC and click to expand ‘Disks’, all the VMDKs on NFS attached to this Virtual Machine should be visible! Nice and simple!

Image: SnapDrive Icon
Image: SnapDrive for Windows using VMDKs on NFS
A Few Notes More

Q1: Do Storage Systems need to be individually specified under ‘Transport Protocol Settings’?
A1: No!

Image: No Storage Systems specified here. We’re using RPC for default settings.

Q2: Do you need to run the below?
A2: No!

Via the Windows DOS prompt at:
C:\Program Files\NetApp\Snapdrive> sdcli smvi_config set -host VSC_Server_IP

Q3: I’m getting “Loading SnapDriveRes.dll failed” error when I try to run sdcli, why?
A3: Check that the server has been rebooted after turning off UAC.

Q4: How to restart the SnapDrive service?
A4:
net stop SWSvc
net start SWSvc

Q5: Does it matter if the VMware Paravirtual SCSI driver is used?
A5: No!

Q6: Does Microsoft iSCSI Initiator need to be enabled?
A6: No!