Wednesday, 20 January 2016

Using Clustershell to Transition LUNs from 7 to C

Wasn’t sure we could do it with Clustershell, but we definitely can in 8.3.1!

Step 1: Setup on 8.1.4 7-Mode Controller

# FAS1>

vol create SANTEST01 -l C.UTF-8 -s none aggr0 3g
vol options SANTEST01 no_i2p off
vol options SANTEST01 nvfail off
vol options SANTEST01 read_realloc off
vol status SANTEST01

lun create -s 2g -t windows_2008 -o noreserve /vol/SANTEST01/SANLUN01.lun
lun show /vol/SANTEST01/SANLUN01.lun
igroup create -i -t windows MS02 iqn.1991-05.com.microsoft:ms02
igroup show MS02
lun map /vol/SANTEST01/SANLUN01.lun MS02 11
lun show -m /vol/SANTEST01/SANLUN01.lun
lun show -v /vol/SANTEST01/SANLUN01.lun

snapmirror on
options snapmirror.access *


Note: The vol options commands aren't strictly necessary here, since these options all default to off on a newly created volume; they're included to demonstrate that these options need to be off before transitioning from 7-Mode to cDOT.
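
If you're transitioning a volume that already exists, check where these options currently stand before kicking off the baseline. A minimal check follows; the host= list for snapmirror.access is illustrative (substitute your cluster's intercluster LIF addresses) and is only needed if you'd rather not use the * wildcard:

# FAS1>

vol options SANTEST01
options snapmirror.access host=10.10.10.91,10.10.10.92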

Step 2: Setup on 8.3.1 CDOT Cluster

# CLU1::>

volume create -vserver SVM1 -volume SANTEST01 -type DP -aggregate N1_aggr1 -size 3g
vserver peer transition create -local-vserver SVM1 -src-filer-name 10.10.10.95
snapmirror create -source-path 10.10.10.95:SANTEST01 -destination-path SVM1:SANTEST01 -type TDP
snapmirror initialize -destination-path SVM1:SANTEST01
snapmirror show -destination-path SVM1:SANTEST01 -fields state,status

igroup create -vserver SVM1 -igroup MS02 -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:ms02
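
Before cutover, it's also worth sanity-checking that the transition peer is in place and that SVM1 can actually serve the LUN over iSCSI (this assumes the iSCSI service and data LIFs were configured when SVM1 was built):

# CLU1::>

vserver peer transition show
vserver iscsi show -vserver SVM1
network interface show -vserver SVM1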


Step 3: Cutover Steps on 8.1.4 7-Mode Controller

Shut down servers and/or applications using the LUN, and then...

# FAS1>

lun unmap /vol/SANTEST01/SANLUN01.lun MS02
vol offline SANTEST01
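
Optionally, confirm the source really is out of service: the LUN should no longer appear in the mapped LUN listing, and the volume should report offline:

# FAS1>

lun show -m
vol status SANTEST01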


Step 4: Cutover Steps on 8.3.1 CDOT Cluster

# CLU1::>

snapmirror update -destination-path SVM1:SANTEST01
snapmirror break -destination-path SVM1:SANTEST01
lun show -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun
lun offline -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun
lun modify -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun -serial BpEwz$Hb4Ryw
lun online -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun
lun map -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun -igroup MS02 -lun-id 11
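
The lun modify -serial step re-stamps the transitioned LUN with the serial number it had on 7-Mode (visible in the lun show -v output from Step 1), so the host sees the same disk identity after cutover; the offline/online around it is needed because a LUN's serial number cannot be changed while the LUN is online. To double-check before reconnecting hosts:

# CLU1::>

lun show -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun -fields serial,state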


Step 5: Tidy up Steps on 8.3.1 CDOT Cluster

# CLU1::>

snapmirror delete -destination-path SVM1:SANTEST01
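
A quick check that the relationship really has gone from the cluster (this should report that there are no entries matching the query):

# CLU1::>

snapmirror show -destination-path SVM1:SANTEST01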


Step 6: Tidy up Steps on 8.1.4 7-Mode Controller

# FAS1>

snapmirror release SANTEST01 SVM1:SANTEST01
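
To confirm the release worked, list the remaining SnapMirror destinations on the 7-Mode controller; the SVM1:SANTEST01 entry should be gone:

# FAS1>

snapmirror destinations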


Step 7: Very Final Tidy Up on 8.3.1 CDOT Cluster

# CLU1::>

vserver peer transition delete -local-vserver SVM1 -src-filer-name 10.10.10.95
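
And to verify the transition peer really has gone (again, this should return no entries):

# CLU1::>

vserver peer transition show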


Proof of the pudding...

CLU1::> lun show -vserver SVM1 -path /vol/SANTEST01/SANLUN01.lun

              Vserver Name: SVM1
                  LUN Path: /vol/SANTEST01/SANLUN01.lun
               Volume Name: SANTEST01
                Qtree Name: ""
                  LUN Name: SANLUN01.lun
                  LUN Size: 2.01GB
                   OS Type: windows_2008
         Space Reservation: disabled
             Serial Number: BpEwz$Hb4Ryw
       Serial Number (Hex): 427045777a24486234527977
                   Comment:
Space Reservations Honored: false
          Space Allocation: disabled
                     State: online
                  LUN UUID: 0bd3fe2f-e296-42c5-bbce-14bf0dbf2e47
                    Mapped: mapped
                Block Size: 512
          Device Legacy ID: -
          Device Binary ID: -
            Device Text ID: -
                 Read Only: false
     Fenced Due to Restore: false
                 Used Size: 0
       Maximum Resize Size: 502.0GB
             Creation Time: 1/20/2016 14:03:38
                     Class: regular
      Node Hosting the LUN: CLU1N1
          QoS Policy Group: -
                     Clone: false
  Clone Autodelete Enabled: false
       Inconsistent import: false

Saturday, 2 January 2016

OPM 2.0 Virtual Appliance Quick Install Notes: Part 4/4 (Further Administration)

Note: This post is essentially just a screenshot fest...

4) OnCommand Performance Manager Administration

If you click Administration in the top right corner, the following screen is displayed:

Image 1: OPM 2.0 Administration

4.1) Management

4.1.1: Users - “The Manage Users page displays a list of users and groups, and provides information such as the name, type of user, email address, and role. You can also perform tasks such as adding, editing, and deleting users.”

Image 2: OPM 2.0 Add User

4.1.2: Data Sources - “The Manage Data Sources page displays information about the clusters that Performance Manager is currently monitoring. This page enables you to add additional clusters, edit cluster settings, and remove clusters.”

Image 3: OPM 2.0 Add Cluster

4.2) Setup

4.2.1: Authentication - “You can use the Authentication page to configure the OnCommand management server to communicate with your authentication server. This enables remote users to be authenticated by the authentication server to access the management server.”

Image 4: OPM 2.0 Authentication

4.2.2: AutoSupport - “You use the AutoSupport page to send an on-demand AutoSupport message to technical support. The page also displays the product System ID, which is a unique ID for your Performance Manager instance that technical support uses to find your AutoSupport messages.”

Image 5: OPM 2.0 AutoSupport

4.2.3: Email - “You can configure an SMTP server that the Performance Manager server uses to send email notifications when an event is generated. You can also specify a From address that will appear as the sender in the email.”

Image 6: OPM 2.0 Email

4.2.4: HTTPS Certificate - “You can use the HTTPS Certificate page to view the current security certificate, download a certificate signing request, generate a new HTTPS certificate, or install a new HTTPS certificate.”

Image 7: OPM 2.0 HTTPS Certificate

4.2.5: Network - “You must configure the required network settings to connect to the Performance Manager server. You use the Network page to modify the settings of your network configuration.”

Image 8: OPM 2.0 Network

4.2.6: NTP Server - “You can use the NTP Server page to specify the NTP server that you want to use with Performance Manager. The Performance Manager server synchronizes its time with the time on the NTP server.”

Image 9: OPM 2.0 NTP Server

5) Configuration

Image 10: OPM 2.0 Configuration drop down

5.1: Threshold Policies - “You set threshold policies on cluster objects (for example, on aggregates and volumes) so that an event can be sent to the storage administrator to inform the administrator that the cluster is experiencing a performance issue.”

5.2: Event Handling - “You use the Event Handling page to specify which events from Performance Manager to alert on, and the email recipients for those alerts. If Performance Manager is connected to a Unified Manager server, you can also define whether the events are reported to Unified Manager.”

OPM 2.0 Virtual Appliance Quick Install Notes: Part 3/4 (Unified Manager Connection)

Continuing from this post here ...

Note: This post is essentially just a screenshot fest...

1) Log into OPM

Image 1: Logging into OPM

2) Adding Clusters

As part of the basic setup, a cluster was added. To add further clusters:

Click Administration
Click Data Sources
Click + Add

Image 2: OPM 2.0 Add Cluster

3) Linking with OnCommand Unified Manager

Firstly, you need to have configured an Event Publisher User in OnCommand Unified Manager as below:

Image 3: OCUM 6.3 creating an Event Publisher User

Using the vSphere Client or Web Client, log in to the OnCommand Performance Manager Maintenance Console. Select option 5 from the main menu.

Image 4: OPM 2.0 Maintenance Console - Main Menu

Select option 2 to Add a new Unified Manager Server Connection.

Image 5: OPM 2.0 Maintenance Console - Unified Manager Connection Menu

Select y to continue.

Image 6: OPM 2.0 Maintenance Console - Unified Manager Connection Settings

Q: Unified Manager Server Name or IP =
Q: Unified Manager Server Port = 443

Q: Do you want to accept the certificate (if it was not trusted)? y

Q: Event Publisher User Name =
Q: Event Publisher Password =

Select y to “Are these settings correct?”

Image 7: OPM 2.0 Unified Manager connection settings

Press any key to continue.

Image 8: OPM 2.0 “Succeeded in sending registration request to the Unified Manager”

Then select x to exit from the Maintenance Console.