Thursday, 19 March 2020

Obtaining List of WFA Pack Contents using PowerShell and REST API

Sort of following on from this July 2017 post - WFA 4.1 RESTful API: A Few Notes + Some PowerShell.

I was curious to see whether it’s possible, using the WFA REST API, to remove all the contents from a community-generated WFA pack, delete the WFA pack (without losing the pack’s original contents, since they’ve been removed from the pack first), and then re-create a “cleaner” pack (cleaner because I’ve noticed some odd issues with community-generated packs). Unfortunately, having read the REST Web Services Primer, I don’t believe it is possible, but I’m happy to be proved wrong. Still, some of my notes regarding inspecting WFA packs might be useful.

For work-related reasons (compatibility with a customer’s environment) I’m using WFA 4.2.0.0.1 here (at the time of writing, WFA 5.1RC1 is the latest release).

If you’ve never used PowerShell’s Invoke-RestMethod to talk to your WFA box, and it uses self-signed certs, you’ll need to run the commands below first:


$TypeToAdd = @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(ServicePoint srvPoint, X509Certificate certificate,
                                      WebRequest request, int certificateProblem) {
        return true;
    }
}
"@

Add-Type $TypeToAdd

[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12


To store WFA credentials in a variable ($C), run:


$C = New-Object System.Management.Automation.PsCredential($(Read-Host "U"),$(Read-Host "P" -AsSecureString))


To get the packs, run:


$packs = Invoke-RestMethod -Method Get -uri https://wfa:443/rest/packs -Credential $C


To list all the packs, see the example below:


PS> $packs.collection.pack | foreach{$_.Name}
WFA pack with common entities
WFA pack for managing Clustered Data ONTAP
WFA pack for managing Data ONTAP in 7-Mode
WFA pack for managing vCenter


To list all the entities in a pack called “WFA pack with common entities”, see the example below:


PS> ($packs.collection.pack | where{$_.name -eq "WFA pack with common entities"}).entities.entity | Foreach{ ($_.Type) + ": " + ($_.Name) }
Remote System Type: Other
Function: getValueIfEnabled
Function: replaceFirst
Function: convertNullToValue
Function: listSize
Function: nextObjectNameWithSuffix
Function: getValueAt2DWithDefault
Function: returnNullOnEmpty
Function: padNumber
Function: getValueAt
Function: nextNamePaddedBy
Function: nextIPByIncrement
Command: Send email
Scheme: playground
Function: getValueAt2D
Function: toLower
Function: nextIP
Function: getColumnValues
Command: Acquire data source
Function: roundRobinNextIP
Command: Wait for data source acquisition
Function: getSize
Function: splitByDelimiter
Function: minimumSize
Function: toUpper
Function: convertNullToZero
Function: isElementPresent
Function: nextNamePadded
Function: getValueFrom2DByRowKey
Function: nextName
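
If you just want a summary of a pack’s contents rather than the full entity listing, Group-Object works well. A quick sketch reusing the $packs variable from above (the pack name is the same example pack):

```powershell
# Pick out one pack by name from the collection returned earlier.
$pack = $packs.collection.pack | Where-Object { $_.name -eq "WFA pack with common entities" }

# Count the pack's entities per type (Function, Command, Scheme, etc.).
$pack.entities.entity | Group-Object Type |
    Select-Object Count, Name |
    Sort-Object Count -Descending
```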


THE END

APPENDIX: Empty WFA Pack in JSON

I created an empty pack, then browsed to its REST API page using Google Chrome (for the example below: https://wfa/rest/packs/4409d8fe-527f-4819-b5b7-12027d9fe2e9), grabbed the XML, then used an online XML to JSON converter (https://codebeautify.org/xmltojson) to see what the “Empty Pack” looks like in JSON:

 {
 "pack": {
  "name": "Empty Pack",
  "version": {
   "major": "1",
   "minor": "0",
   "revision": "0"
  },
  "certification": "NONE",
  "description": "Empty Pack",
  "author": "Empty Pack",
  "entities": "",
  "link": [
   {
    "_rel": "self",
    "_href": "https://wfa/rest/packs/4409d8fe-527f-4819-b5b7-12027d9fe2e9",
    "__prefix": "atom"
   },
   {
    "_rel": "exportToServerFolder",
    "_href": "https://wfa/rest/packs/folder/4409d8fe-527f-4819-b5b7-12027d9fe2e9",
    "__prefix": "atom"
   },
   {
    "_rel": "list",
    "_href": "https://wfa/rest/packs",
    "__prefix": "atom"
   },
   {
    "_rel": "export",
    "_href": "https://wfa/rest/dars/4409d8fe-527f-4819-b5b7-12027d9fe2e9",
    "__prefix": "atom"
   }
  ],
  "_xmlns:atom": "http://www.w3.org/2005/Atom",
  "_uuid": "4409d8fe-527f-4819-b5b7-12027d9fe2e9"
 }
}
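
As an aside, you don’t strictly need an online converter. A rough sketch (reusing the $C credential variable from earlier; the UUID is the example pack above) that fetches the pack XML and converts it to JSON locally - note the output shape won’t exactly match the hand-converted JSON above, since ConvertTo-Json maps the XML object model rather than the raw markup:

```powershell
# Fetch the pack; Invoke-RestMethod parses the XML response for us.
$xml = Invoke-RestMethod -Method Get -Credential $C `
    -Uri "https://wfa/rest/packs/4409d8fe-527f-4819-b5b7-12027d9fe2e9"

# Serialize the pack element to JSON.
$xml.pack | ConvertTo-Json -Depth 5
```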


APPENDIX: Definitions of all the common links that might appear in responses.

The following table contains the definitions of all the common links that might appear in responses. These relations are standard across all WFA object collections.

Note: If a particular relation does not appear in a response object, it might either mean that this particular operation is not supported for that object or mean that the operation or relation specified by the link is not relevant to the current context.

Table: Definitions of all the common links that might appear in responses.

Apart from a partial or complete set of the standard actions listed above, objects in a given object collection might support several non-standard actions and relations that are specific to that object. The sections in this document that describe these object collections also contain a table explaining all the standard and non-standard links a given object supports.

Wednesday, 18 March 2020

1000 Posts! Cosonok’s Greatest Hits! Top 40!


After pretty much exactly 10 years, here’s Cosonok’s 1000th post. Top 40 charts seem to work well for music (at least in the UK), so I thought I’d find the top 40 most viewed posts (I had to use a little PowerShell to construct this post - see below). Unsurprisingly, all the most viewed posts are old posts (the newest in the list below are from 2014), which makes sense - the longer a post has lived, the more time it’s had to accumulate views. It’s a bit of a daft 1000th post, as it’s just a list of stuff that was viewed in the past and is probably too old now for anyone to care much about. Still, it seems a fitting thing to do for a 1000th post.

At number 40 with 10239 views (posted on 8/31/13):

At number 39 with 10321 views (posted on 3/5/11):

At number 38 with 10408 views (posted on 4/3/14):

At number 37 with 10443 views (posted on 1/4/12):

At number 36 with 10765 views (posted on 2/1/14):

At number 35 with 11293 views (posted on 3/5/13):

At number 34 with 11313 views (posted on 11/4/12):

At number 33 with 11481 views (posted on 1/1/12):

At number 32 with 11647 views (posted on 1/16/12):

At number 31 with 11674 views (posted on 5/16/13):

At number 30 with 11742 views (posted on 8/24/11):

At number 29 with 11750 views (posted on 1/19/14):

At number 28 with 11955 views (posted on 11/18/11):

At number 27 with 13055 views (posted on 6/5/12):

At number 26 with 13055 views (posted on 3/24/12):

At number 25 with 13179 views (posted on 7/7/13):

At number 24 with 13653 views (posted on 2/24/13):

At number 23 with 13804 views (posted on 9/11/12):

At number 22 with 13916 views (posted on 4/14/12):

At number 21 with 14463 views (posted on 1/6/13):

At number 20 with 14766 views (posted on 10/26/12):

At number 19 with 17760 views (posted on 10/6/11):

At number 18 with 18096 views (posted on 2/21/11):

At number 17 with 18346 views (posted on 2/14/12):

At number 16 with 19223 views (posted on 10/16/11):

At number 15 with 19331 views (posted on 1/9/12):

At number 14 with 20111 views (posted on 1/28/12):

At number 13 with 21517 views (posted on 1/28/12):

At number 12 with 26014 views (posted on 3/14/12):

At number 11 with 26799 views (posted on 11/24/12):

At number 10 with 26971 views (posted on 12/3/11):

At number 9 with 27508 views (posted on 2/13/11):

At number 8 with 28375 views (posted on 1/27/12):

At number 7 with 28628 views (posted on 1/1/12):

At number 6 with 28686 views (posted on 1/9/11):

At number 5 with 29101 views (posted on 3/26/10):

At number 4 with 38749 views (posted on 8/27/12):

At number 3 with 48663 views (posted on 12/24/11):

At number 2 with 49139 views (posted on 12/7/11):

At number 1 with 109469 views (posted on 10/6/11):

Breakdown of the top 40 by year:
- 18 from 2012
- 12 from 2011
- 6 from 2013
- 3 from 2014
- 1 from 2010

PowerShell Script to Obtain Blogger Top Posts

To obtain the above information relatively easily, I copied and pasted the stats from my Blogger dashboard into a text file, then ran the PowerShell script below against it.


# Each post in the pasted Blogger stats occupies 5 lines:
# line 1 = title, line 4 = view count, line 5 = post date.
[System.Object]$Views = Get-Content "cosonoks_views.txt"
[System.Array]$ViewsTable = @()
[Int]$Column = 0
$Views | Foreach{
  $Column++
  # Skip Blogger's "Edit | View | Delete" action links and blank lines.
  If(($_ -eq "Edit | View | Delete") -or ($_.Trim() -eq "")){$Column--}
  elseif($Column -eq 1){[String]$Title = $_}
  elseif($Column -eq 4){[Int]$ViewCount = $_}
  elseif($Column -eq 5){[String]$DateStr = $_}
  If($Column -eq 5){
    # Completed one post's record - store it and reset for the next.
    $Column = 0
    $ViewsTable += New-Object PSObject -Property @{
      "Title" = $Title
      "Views" = [Int]$ViewCount
      "Date"  = $DateStr
    }
    Write-Host "$Title : $ViewCount : $DateStr"
  }
}
# Export the full table (ascending by views) to CSV.
$ViewsTable | Sort-Object -Property Views | Export-CSV "cosonoks_views.csv" -NoTypeInformation
# Build the countdown text, least viewed first, counting the index down.
[System.Array]$TextOut = @()
$Index = 999
$ViewsTable | Sort-Object -Property Views | Foreach{
  $TextOut += ("At number " + [String]($Index) + " with " + [String]($_.Views) + " views (posted on " + ($_.Date) + "):")
  $TextOut += ($_.Title)
  $TextOut += ""
  $Index--
}
$TextOut | Out-File "cosonoks_table.txt"


Tuesday, 17 March 2020

NFS Terminate -V

In Data ONTAP 7-Mode we had a command “cifs terminate -v” to terminate CIFS access to a volume. There never was an “nfs terminate -v”.

Image: Thou (NFS) shall not pass!

Q: In Clustered ONTAP, if you want to terminate all NFS access to just one NFS volume, how do you do it?

Intuitively you might think running the “volume unmount” command would do the trick, but it doesn’t. This NetApp KB explains why:

Symptom
Users might notice that after removing a volume from the SVM namespace on the cluster, existing NFS sessions will continue to be able to read and write to the unmounted volume. However, other clients will be unable to start new sessions accessing the volume.

Cause
NFS clients obtain a FileHandle for each export root, directory, and file that the client is currently accessing. Due to the FH referencing the data volume's MSID (Master Data Set ID), any existing FH that is obtained from the NFS server continues to be able to read and write to the data volume after it is removed from the SVM namespace.

The answer is to add an export-policy rule with index 1 (the lowest index number, so the first rule read) for 0.0.0.0/0 that does not allow any ro/rw access, to all the export policies used by the volume. And remember to update your root Load-Sharing mirrors too if you have them.

export-policy rule create -policyname exp_vol01 -vserver svm1 -clientmatch 0.0.0.0/0 -ruleindex 1 -protocol any -rorule never -rwrule never

snapmirror update-ls-set -source-path //vs1.example.com/svm_root


Note: In the above I’ve gone for ‘-protocol any’ because I don’t think many people will be using export policies with CIFS - if you are and don’t want to terminate CIFS access too, then use ‘-protocol nfs’.

Explanation of never from the man pages

-rorule
never - For an incoming request from a client matching the clientmatch criteria, do not allow any access to the volume regardless of the security type of that incoming request.

-rwrule
never - For an incoming request from a client matching the clientmatch criteria, do not allow write access to the volume regardless of the effective security type (determined from rorule) of that incoming request.

BONUS: cifs terminate -v in Clustered ONTAP

Yes, ‘cifs terminate -v’ does not exist in ONTAP, but what you can do - if you want to terminate CIFS access to just one volume - is add an Everyone/No_access entry to the ACLs of all that volume’s shares. For example:


cluster1::> cifs share access-control create/modify -vserver svm4 -share finance -user-or-group Everyone -permission No_access

cluster1::> cifs share access-control show -vserver svm4 -share finance -user-or-group Everyone
            Share       User/Group   User/Group  Access
Vserver     Name        Name         Type        Permission
----------- ----------- ------------ ----------- -----------
svm4        finance     Everyone     windows     No_access


Note: If you already have an Everyone share ACL, then the command is ‘modify’, otherwise it is ‘create’.

Credit:

My colleague Ejos Zida (this is an anagram)

Thursday, 12 March 2020

A Colourful Way to Understand MySQL Queries

Image: 4 levels in MySQL Queries

Note: The examples here are from NetApp OnCommand Workflow Automation (WFA).

To the uninitiated, MySQL queries can look confusing. Really, they are quite simple. Essentially there are four levels of data.

Level 1 = Database
Level 2 = Table / Table from a table
Level 3 = Column Heading / Returned Column Heading
Level 4 = Cell Data

The following require specific values (i.e. the Database must exist, the Table must exist, the Column Heading must exist, the Cell Data must exist):
- Database
- Table
- Column Heading
- Cell Data

For these two you can make up names in the SQL query to suit your purpose (i.e. if you want to use Banana, that’s fine):
- Table from a table
- Returned Column Heading
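
To make that concrete, here is a minimal sketch against the same cm_storage.vserver table used in the examples below, where the returned column heading Banana is deliberately made up:

```sql
SELECT
    vserver.`name` AS Banana  -- 'Banana' is an arbitrary alias; the real column is still vserver.`name`
FROM
    cm_storage.vserver
```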

To demonstrate these, we have a couple of examples, one basic, one slightly advanced.

Basic Sample MySQL Query

In the following example:
We want to return a table of information from the vserver table with contents of the `name` column only, and we rename the name column to SVM.
We’re looking at the database cm_storage and table vserver.
We only want to return rows in the table where the vserver table `type` column has cell contents 'data'.
... and where the vserver table `admin_state` column has cell contents 'running'.

SELECT
    vserver.`name` AS SVM
FROM
    cm_storage.vserver
WHERE
    vserver.`type` = 'data'
    AND vserver.`admin_state` = 'running'

And below is the result of running this query.

Image: Basic Sample MySQL Query: Result

Advanced Sample MySQL Query

In the following example:

We want to return a table of information with...
... from the anno1 table-from-a-table, the column `value` under column heading 'Alias'
... from the vserver table, the column `name` under column heading 'SVM'
... from the anno2 table-from-a-table, the column `value` under column heading 'Type'

We’re looking at the database cm_storage and table vserver.

Since the above table does not have all the information we want, we do some joins...

JOIN 1) We create the table vs_anno1 from the table cm_storage.vserver_annotation
And match the table vserver column id with table vs_anno1 column vserver_id.

Image: To help visualize the above. For vserver.id = ‘7’ we have 4 rows of vserver.annotation where vserver_id = ‘7’.

JOIN 2) We create the table anno1 from the table cm_storage.annotation
And match the table vs_anno1 column annotation_id with table anno1 column id.

Image: To help visualize the above. We have 4 rows where vserver_annotation has annotation_id 2942, and annotation.id 2942 gives one row.

JOIN 3 & 4) Rinse and repeat of the above.

We only want to return information rows from the cm_storage.vserver table where...
... from table anno1 column `name` there is an 'Alias' (vserver has an annotation called Alias)
... from table vserver column `type` = 'data'
... from table anno2 column `name` there is 'DataProtectionType' (vserver has an annotation called DataProtectionType)
... from table anno2 column `value` (for DataProtectionType) = 'Primary'
... from table vserver column admin_state = 'running'.

Finally order by column 1.

SELECT
    anno1.`value` AS 'Alias',
    vserver.`name` AS 'SVM',
    anno2.`value` AS 'Type'
FROM
    cm_storage.vserver
JOIN
    cm_storage.vserver_annotation vs_anno1
    ON vserver.id = vs_anno1.vserver_id
JOIN
    cm_storage.annotation anno1
    ON vs_anno1.annotation_id = anno1.id
JOIN
    cm_storage.vserver_annotation vs_anno2
    ON vserver.id = vs_anno2.vserver_id
JOIN
    cm_storage.annotation anno2
    ON vs_anno2.annotation_id = anno2.id
WHERE
    anno1.`name` = 'Alias'
    AND vserver.`type` = 'data'
    AND anno2.`name` = 'DataProtectionType'
    AND anno2.`value` = 'Primary'
    AND vserver.admin_state = 'running'
ORDER BY
    1

Image: Advanced Sample MySQL Query: Result

Conclusion

Hopefully the samples above will help you understand the MySQL queries you encounter a little better. Understanding the levels, and then accessing the database tables to see the available fields, should lead to a knowledge of how a MySQL query works and what it is looking for.