Sunday, 15 May 2022

[CI][REST API][Python] Adding Annotation Values via the REST API

Work In Progress

Some more NetApp Cloud Insights REST API with Python basics here.

Again using Python like a shell, no creating and running scripts.

We want to update some annotation values via the REST API. We have already got a list of the annotation values not found in CI using the method here: Comparing OCI vs CI Annotations and Annotation Rules.

0) Setup

Notes: 1) We're just running things directly in the Python Client, no creating scripts here. 2) Update the highlighted bits as per your CI tenant.

  • import requests
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
  • apiKey = "YOUR_API_KEY"
  • yourTenant = "aaXXXX.bYY-cc-Z.cloudinsights.netapp.com"
  • baseUrl = "https://" + yourTenant + "/rest/v1/"
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=apiKey)}
1) Get Annotation ID from the Annotation Name

This is written as a Python one-liner.

a = "YOURANNOTATION";j = requests.get(baseUrl + "assets/annotations/" + a, headers=headers, verify=False).json();j['id']

Other useful stuff - 1) To display all the values:

for v in j['enumValues']: print(v['name'])

Other useful stuff - 2) The count of all the values:

len(j['enumValues'])

Other useful stuff - 3) Finding out if the value is already in enumValues:

values = []
for v in j['enumValues']: values += [v['name']]
if 'TEST_DATA' in values: print("EXISTS!")

2) Add a Value to the Annotation you've just got the Annotation ID for

2.1) Type FLEXIBLE_ENUM (List in the UI)

The REST API call to add annotation values is:

PATCH /assets/annotations/{id}

The request body needs to include all the existing values plus the new value(s). (You could also take this opportunity to remove existing values that are no longer required and not in use.)

For example, for a Country annotation where you already have Argentina and Brazil, and you want to add Chile, the request body is:

{
  "name": "Country",
  "description": "",
  "enumValues": [
    {
      "name": "ARGENTINA"
      "label": "ARGENTINA"
    },
    {
      "name": "BRAZIL"
      "label": "BRAZIL"
    },
    {
      "name": "CHILE"
      "label": "CHILE"
    }
  ]
}

In Python we can use (also see here):

requests.patch(url, data=json.dumps(payload), headers=headers, verify=False)
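
Putting it all together, here's a minimal sketch (assuming the annotation JSON j fetched in step 1; NEW_VALUE is a hypothetical value to add). Rather than data=json.dumps(payload) (which also needs import json and, typically, a Content-Type: application/json header), the sketch uses requests' json= parameter, which serializes the body and sets the Content-Type header for us:

newValue = "NEW_VALUE" # hypothetical value to add
if newValue not in [v['name'] for v in j['enumValues']]:
  j['enumValues'] += [{'name': newValue, 'label': newValue}]
payload = {'name': j['name'], 'description': j.get('description', ''), 'enumValues': j['enumValues']}
url = baseUrl + "assets/annotations/" + str(j['id'])
r = requests.patch(url, json=payload, headers=headers, verify=False)
r.status_code # expect a 2xx status on success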

To be continued.



[CI][DWH][ODATA] Limits $top $skip and $count

Carrying on from the previous post:

Something important I should mention about running queries against the NetApp Cloud Insights Data Warehouse is that there are certain limits on the amount of data that will be returned.

  • The maximum is 10'000 rows by default.
  • Without $top specified, a query returns 1'000 rows.
  • You can use $count to count the number of rows.
  • You can use $top to select the number of rows you want (remembering the 10'000 row limit)
  • You can use $skip to skip rows you don't want.

Playing around with Python and the dwh_inventory.host table (which is quite big in the environment I am running these queries on), these were my findings.

I couldn't get $count to work without returning data too, but what I could do was get the count while returning just 1 row.

Note: In the below I am using Python shell one-liners (because it is super easy to up-arrow and edit the one line of Python when you're playing around.)

Example: Getting the full count with just one row of data returned.

api = "dwh-management/odata/dwh_inventory/host?&count=true&$top=1";url = baseUrl + api;jsondata = requests.get(url, headers=headers, verify=False).json();jsondata['@odata.count']

In my case this returned 40929.
So if we want to get all the output then we need to do the following.

Example: Getting all 40929 results

hosts = []

api = "dwh-management/odata/dwh_inventory/host?&top=10000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?&top=10000&skip=10000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?&top=10000&skip=20000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?&top=10000&skip=30000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?&top=10000&skip=40000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

len(hosts)

At least that is how it should be working. I'm not sure if I was having weird issues with my Python/VDI, but what worked at one point, stopped working later on...

Note: Programmatically you would use a loop for this (see the sketch below); the repeated blocks above just demonstrate $top and $skip.
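
For completeness, a sketch of that loop (assuming the baseUrl and headers from section 0 of the previous post):

hosts = []
pageSize = 10000 # the 10'000 row maximum
skip = 0
while True:
  api = "dwh-management/odata/dwh_inventory/host?$top=" + str(pageSize) + "&$skip=" + str(skip)
  jsondata = requests.get(baseUrl + api, headers=headers, verify=False).json()
  rows = jsondata['value']
  hosts += rows
  if len(rows) < pageSize: break # last (short or empty) page reached
  skip += pageSize

len(hosts)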

Saturday, 23 April 2022

[CI][Python] Custom SQL Queries in the NetApp Cloud Insights Data Warehouse

Introduction

In the NetApp OnCommand Insight (OCI) Data Warehouse world, it was fairly standard to integrate with the MySQL OCI-DWH by running queries from an external client (like Splunk, for example). In the NetApp Cloud Insights (CI) world, over the REST API ODATA interface, there is another way of doing it: create custom SQL queries in CI, and the integration simply calls that custom query (one API call). So how do we do this?

0) Setup

Notes: 1) We're just running things directly in the Python Client, no creating scripts here. 2) Update the highlighted bits as per your CI tenant.

  • import requests
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
  • apiKey = "YOUR_API_KEY"
  • yourTenant = "aaXXXX.bYY-cc-Z"
  • baseUrl = "https://" + yourTenant + ".cloudinsights.netapp.com/rest/v1/"
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=apiKey)}

1) Seeing What Custom SQL Queries Are Already in the CI DWH (there may be none)

To see the Custom Queries that are currently in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()
  • print(jsondata)

The JSON output isn't super human-readable, but we can make it more readable with the following.

To print out the odata.context (the full API URL):

  • print(jsondata['@odata.context'])

To print out the list of queries:

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + query['queryName'])
  •   i += 1
And if you want to see the actual SQL of one of the queries, replace i below with its index from the list above (see the Appendix below if you want to print the SQL to a text file):
  • print(jsondata['value'][i]['querySql'])

2) How Do We Upload a Custom Query to CI?

The method changes to POST, and the BODY needs two fields - queryName and querySql, like so:

{
  "queryName": "test_query_1",
  "querySql": "SELECT * FROM dwh_inventory.storage"
}

To create our very simple Custom Query in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • body = {
  •   "queryName": "test_query_1",
  •   "querySql": "SELECT * FROM dwh_inventory.storage"
  • }
  • postdata = requests.post(url, headers=headers, json=body, verify = False)
Response [201] = success!

3) How Do We Run a Custom Query in CI?

To execute the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/test_query_1"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()

And similar to the above you'll find the table output in 'value':

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + str(query))
  •   i += 1

You can then parse the data as per your requirements (see the sketch below).
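
For example, a sketch that dumps the rows straight to a CSV (the column names are taken from the keys of the first row; the filename is just an example):

import csv
rows = jsondata['value']
if rows:
  with open('test_query_1.csv', mode='w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)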

4) How Do We Delete a Custom Query in CI?

To delete the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/custom_queries('test_query_1')"
  • url = baseUrl + api
  • delQuery = requests.delete(url, headers=headers, verify = False)

Response [200] = success!


PS Just a reminder that if you're using Python on your Windows Machine, in a corporate environment, initiated from a DOS prompt, then you'll probably want to set the proxy in DOS first:
set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT

APPENDIX: Random Useful Stuff

How to redirect python print output to a file, then redirect output back to the CLI:

import sys
orig_stdout = sys.stdout
sys.stdout = open('file.txt','w')
print(...) # whatever you want written to the file
sys.stdout.close()
sys.stdout = orig_stdout
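
Alternatively, print accepts a file argument, which avoids swapping sys.stdout around:

with open('file.txt', 'w') as f:
  print(..., file=f) # whatever you want written to the file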

How to turn a formatted SQL Query into a String in Python (or How to String-ize an SQL Query with Python):

One of the challenges with using Custom SQL Queries in CI is that your beautifully formatted SQL query needs to be turned into a string. And we can do this from the Python shell as per the below:

Note: Remember where you have \ it should be \\ in Python.

workingDir = "{WORKING DIRECTORY}"
file = "{FILE NAME}"

strFile = ""
with open(workingDir + "\\" + file) as f:
  lines = f.readlines()

len(lines) # Check this is what you expect!

for l in lines:
  strFile += l.strip() + " "

file2 = "{FILE NAME 2}"
with open(workingDir + "\\" + file2, 'w') as f:
  f.write(strFile)

Note 1: You would still need to convert " to ' (because above we use " to encapsulate the SQL query.)
Note 2: I did have issues with a really big query, but exactly the same string runs fine in OCI-DWH SQL - Cloud Insights problem? I didn't check if it would run in CI Cognos (this is something I need to try ... could we be missing an annotation!?)
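
For what it's worth, a more compact equivalent of the read/flatten/write above:

with open(workingDir + "\\" + file) as f:
  strFile = " ".join(l.strip() for l in f)
with open(workingDir + "\\" + file2, 'w') as f:
  f.write(strFile)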

Tuesday, 12 April 2022

Comparing OCI vs CI Annotations and Annotation Rules

I found this comparison surprisingly useful. We were missing more annotation values than I had anticipated!

Here we bring together stuff from the previous two posts. And it is fairly easy to compare OCI vs CI annotations and annotation rules and get a report of what is missing. First run this tool:

Cosonok's IT Blog: [OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

Then run this tool:

Cosonok's IT Blog: [CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

Bring both sheets into the same Excel workbook and turn the data into tables. Create a column D on both sheets to combine annotation and annotation value, as per:

=TRIM(CONCATENATE(A2,":",C2))

Then I found that INDEX and MATCH over the range worked best, something like this (replace XXXX with the final row of the OciAnnosAndValues tab, and YYYY with the final row of the CiAnnosAndValues tab.)

For the OciAnnosAndValues tab:

=IFERROR(INDEX(CiAnnosAndValues!D$2:D$YYYY,MATCH(D2,CiAnnosAndValues!D$2:D$YYYY,0)),"Not Found")

For the CiAnnosAndValues tab:

=IFERROR(INDEX(OciAnnosAndValues!D$2:D$XXXX,MATCH(D2,OciAnnosAndValues!D$2:D$XXXX,0)),"Not Found")

Add in a little bit of conditional formatting and job done!
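
And if you would rather skip Excel altogether, here's a Python sketch of the same comparison (assuming the two CSVs produced by the tools linked above, with their 'Anno Name'/'Anno Value' columns):

import csv

def loadKeys(path):
  with open(path, newline='') as f:
    return {row['Anno Name'] + ":" + row['Anno Value'] for row in csv.DictReader(f)}

ociKeys = loadKeys('OciAnnosAndValues.csv')
ciKeys = loadKeys('CiAnnosAndValues.csv')
for key in sorted(ociKeys - ciKeys): print("Not found in CI: " + key)
for key in sorted(ciKeys - ociKeys): print("Not found in OCI: " + key)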

[CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetCiAnnosAndValues.py and run from your command prompt as> python GetCiAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

Note: If you're running from a corporate machine, you may need to set your command prompt proxy as per this post.

The Script

## IMPORT THE ESSENTIALS ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api = 'assets/annotations'
api_key = 'YOURAPIKEY'
yourTenant = 'YOURTENANT' # This is the bit that comes before .cloudinsights.netapp.com

## REST API GET ##
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/' + api
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE 1 LIST AND 2 DICTS TO STORE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## STORING THE DATA ##
for a in jsondata:
 if a['isUserDefined'] == True:
  if a['name'] not in annoNames:
   annoNames += [a['name']]
   annoTypes[a['name']] = a['type']
   annoValues[a['name']] = []
   for v in a['enumValues']:
    annoValues[a['name']] += [v['name']]
  else:
   for v in a['enumValues']:
    if v['name'] not in annoValues[a['name']]:
     annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
 for v in annoValues[a]:
  row=[]
  row.append(a)
  row.append(annoTypes[a])
  row.append(v)
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

[OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetOciAnnosAndValues.py and run from your command prompt as> python GetOciAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api       = 'assets/annotations'
username  = 'YOURUSERNAME'
password  = 'YOURPASSWORD'
ocis = []
ocis.append('OCISERVER1')
ocis.append('OCISERVER2')
ocis.append('OCISERVER3')

## SETUP CREDENTIALS (assume same on all OCIs) ##
s = username + ':' + password
userAndPass = b64encode(s.encode('ascii')).decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## INITIALIZE 1 LIST AND 2 DICTS TO COMBINE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## CYCLE THROUGH THE OCIS ##
for oci in ocis:

 ## ACQUIRE DATA OVER REST API ##
 url = "https://" + oci + "/rest/v1/" + api
 jsondata = requests.get(url, headers=headers, verify = False).json()

 ## STORING THE DATA ##
 for a in jsondata:
  if a['isUserDefined'] == True:
   if a['name'] not in annoNames:
    annoNames += [a['name']]
    annoTypes[a['name']] = a['type']
    annoValues[a['name']] = []
    for v in a['enumValues']:
     annoValues[a['name']] += [v['name']]
   else:
    for v in a['enumValues']:
     if v['name'] not in annoValues[a['name']]:
      annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
 for v in annoValues[a]:
  row=[]
  row.append(a)
  row.append(annoTypes[a])
  row.append(v)
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Monday, 11 April 2022

Comparing OCI vs CI Datasources

Not sure if this information will be of any use to anyone, except if you're doing a fairly large OCI to CI migration, where it's not as straightforward as just running the migration tool to bring all your datasources across.

Here we bring together stuff from the previous posts. And it's fairly easy to compare OCI vs CI datasources and get a report. First run the OCI datasource export to CSV tool:

Cosonok's IT Blog: [OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

Then run the CI datasource export to CSV tool:

Cosonok's IT Blog: [CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV (remember you might need to set your proxy as per this post)

Finally, with a little bit of excel manipulation, you can bring both sheets into the same excel workbook, turn the data into tables, and to compare use VLOOKUPS like the below.

For the OCI table to check if they are in CI (OCI is sheet 1 / CI sheet 2):

=IFERROR(VLOOKUP(F2,Table2[foundationIp],1,FALSE),"Not found")

For the CI table to check if they are in OCI (CI is sheet 2/ OCI sheet 1):

=IFERROR(VLOOKUP(F2,Table1[foundationIp],1,FALSE),"Not found")

Note: In both cases it is still worth manually checking the output (i.e. a DNS FQDN may have been used in one but an IP in the other, and vice versa.)

And you might add in a little conditional formatting too.

THE END

Sunday, 10 April 2022

[CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, collect all the Datasources, and export as a CSV.

Save the script as say GetCiDatasources.py and run from your prompt as> python GetCiDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT NECESSARY STUFF ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## REST API GET ##
api_key = 'ENTER_YOUR_API_KEY_HERE'
yourTenant = 'YOUR_TENANT' # This is the bit that comes before .cloudinsights.netapp.com
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/collector/datasources'
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['name','status','vendor','model','foundationIp','lastSuccessfullyAcquired']
writer.writerow(['CI Tenant'] + headings)

## PARSE THE JSONDATA TO CSV ##
for dsource in jsondata:
  row=[]
  row.append(yourTenant)
  for column in headings:  
    row.append(dsource[column])
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 8 April 2022

[CI][Python] Collecting Datasources from NetApp Cloud Insights Using REST API and Python

In the previous post, we collected all the datasources from our NetApp OCI servers. Now I want to collect the datasources from NetApp Cloud Insights. The main change is that CI uses an API key, whereas OCI used username and password authentication, so I had to google the correct way to connect, and it turned out to be very simple.

Collecting NetApp Cloud Insights Datasources over REST API in Python Command Line >>>

  • import requests
  • api_key = 'ENTER_YOUR_API_KEY_HERE'
  • url = 'https://YOUR_TENANT.cloudinsights.netapp.com/rest/v1/collector/datasources'
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
  • jsondata = requests.get(url, headers=headers).json()

Nice and simple. 5 lines. Of course jsondata is just that, JSON data. In a future post we'll process it.
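
Though for a quick peek at what came back, something like the below works (name and status being two of the fields on each datasource object):

for dsource in jsondata:
  print(dsource['name'], dsource['status'])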

Now to get Python to work with a Windows machine that's behind a proxy.

Getting the Above to Work With Python on a Windows Machine That's Behind a Proxy

Not much different to the above. First you will need to find your corporate web-proxy information (you might find this down the bottom of a .pac file), then you need to run this from the DOS Command Prompt>

set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT

Then I needed to add verify = False to the requests.get which makes the final line in the above:

  • jsondata = requests.get(url, headers=headers, verify = False).json()
And if you want to get rid of the "InsecureRequestWarning" you can add:
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
Note: Just for completeness, there is also set http_proxy=http...
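
Alternatively, you can hand the proxy straight to requests via its proxies parameter, rather than using environment variables:

proxies = {'https': 'https://USERNAME:PASSWORD@proxy.server.fqdn:PORT'}
jsondata = requests.get(url, headers=headers, verify=False, proxies=proxies).json()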

[OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Datasources. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI), and you want to compare what you've got left in all your OCI servers to what you've now got in CI.

Save the script as say GetOciDatasources.py and run from your prompt as> python GetOciDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','DS Name','Status','Vendor','Model','FoundationIp','LastAcquired'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/datasources"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for dsource in json_data:
    row=[]
    row.append(OCI)
    row.append(dsource["name"])
    row.append(dsource["status"])
    row.append(dsource["vendor"])
    row.append(dsource["model"])
    row.append(dsource["foundationIp"])
    row.append(dsource["lastSuccessfullyAcquired"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 1 April 2022

[OCI][Python] Collecting All NetApp OCI Annotation Rules Across Multiple OCI Servers and Export to CSV

Carrying on from the last post.

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Annotation Rules and some useful information. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI).

Save the script as say GetAnnoRules.py and run from your prompt as python GetAnnoRules.py.

Note: The script is very similar to the last post's. The cycle-through bit is laid out a little more clearly.

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('annorules.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','Anno Rules Name','Evaluation Order','Query Name','Object Type','Anno Name','Anno Value'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/rules"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for annorule in json_data:
    row=[]
    row.append(OCI)
    row.append(annorule["name"])
    row.append(annorule["evaluationOrder"])
    row.append(annorule["query"]["name"])
    row.append(annorule["query"]["objectType"])
    row.append(annorule["annotation"]["name"])
    row.append(annorule["value"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Thursday, 31 March 2022

[OCI][Python] Collecting All NetApp OCI Queries Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Queries (the queries you see in the WebUI) and related information. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI). And in the next post, we'll collect the Annotation Rules (which will use some of these queries.)

My Python skills are pretty poor, which is a good reason for me to have a go at improving them!

Save the script as say GetQueries.py and run from your prompt as python GetQueries.py.

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('queries.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','Query Name','Object Type','Sort Order','Fields','Filters'])

## CYCLE THROUGH THE OCIS COLLECTING THE QUERIES ##
for OCI in OCIS:
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "queries"
  response = requests.request("GET",baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for query in json_data:
    writer.writerow([OCI,query["name"], query["objectType"], query["sort"], query["fields"], query["filters"]])

## CLOSE THE CSV ##
output.close()

Wednesday, 23 March 2022

[OCI-DWH] [MySQL] NetApp ONTAP Volume Encryption Status from OCI Data Warehouse

I seem to have been neglecting this blog a little this year (perhaps adding too much stuff to my other IT blog....) This might be useful know-how.

The following MySQL query, run against the NetApp OnCommand Insight Data Warehouse in your MySQL query tool of choice (like MySQL Workbench), will allow you to export a CSV of all your NetApp Data ONTAP internal volumes (including 7-Mode, even though 7-Mode of course does not support any volume-level encryption) and their volume encryption status.

SELECT
    s.name AS 'Storage Cluster'
  , e.fieldValue AS 'Is Encrypted?'
  , i.identifier AS 'Internal Volume Identifier'
  , s.microcodeVersion AS 'ONTAP Version'
FROM dwh_inventory.internal_volume AS i
LEFT JOIN
  (SELECT * FROM dwh_inventory.extended_data
    WHERE objectType = 'INTERNAL_VOLUME'
      AND fieldName = 'isEncrypted'
  ) AS e ON e.objectId = i.id
JOIN dwh_inventory.storage AS s ON s.id = i.storageId
WHERE s.manufacturer = 'NetApp'
  AND (s.microcodeVersion LIKE '%ONTAP'
    OR s.microcodeVersion LIKE '%7-Mode%');
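
And if you would rather run the query from Python than a MySQL client, here's a sketch using the mysql-connector-python package (the host, credentials, and port are placeholders; the SELECT is the same as above, minus the column aliases):

import mysql.connector

conn = mysql.connector.connect(host='OCI_DWH_FQDN', port=3306, user='USERNAME', password='PASSWORD')
cur = conn.cursor()
cur.execute("""
  SELECT s.name, e.fieldValue, i.identifier, s.microcodeVersion
  FROM dwh_inventory.internal_volume AS i
  LEFT JOIN (SELECT * FROM dwh_inventory.extended_data
             WHERE objectType = 'INTERNAL_VOLUME' AND fieldName = 'isEncrypted') AS e
    ON e.objectId = i.id
  JOIN dwh_inventory.storage AS s ON s.id = i.storageId
  WHERE s.manufacturer = 'NetApp'
    AND (s.microcodeVersion LIKE '%ONTAP' OR s.microcodeVersion LIKE '%7-Mode%')""")
for row in cur.fetchall():
  print(row)
conn.close()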


What are the distinct Object Types and Field Names in dwh_inventory.extended_data?

From the above query, you can see that we're using dwh_inventory.extended_data to get the internal volume encryption status.

Just out of curiosity, what are the distinct Object Types and Field Names in dwh_inventory.extended_data?

The below outputs are from an OCI DWH 7.3.10 instance (yes, a bit behind, as I write this OCI is up to 7.3.13.)

SELECT DISTINCT objectType FROM dwh_inventory.extended_data ORDER BY 1 ASC;

  DISK
  INTERNAL_VOLUME
  STORAGE_NODE
  STORAGE_POOL
  SWITCH
  VOLUME
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'DISK' ORDER BY 1 asc;

  isEncrypted
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'INTERNAL_VOLUME' ORDER BY 1 asc;

  adaptiveQosPolicy
  comment
  groupName
  isEncrypted
  junctionPath
  objectStoreTieringPolicy
  objectStoreUsedSpace
  qosLimitIOPS
  qosLimitRaw
  qosPolicy
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'STORAGE_NODE' ORDER BY 1 asc;

  managementIpAddresses
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'STORAGE_POOL' ORDER BY 1 asc;

  isCompactionSavingsEnabled
  isEncrypted
  objectStoreMetadataSpace
  objectStoreName
  objectStorePhysicalUsedSpace
  objectStoreProviderType
  objectStoreReferencedSpace
  objectStoreS3Name
  objectStoreServer
  objectStoreSisSavedSpace
  objectStoreTieringFullnessThreshold
  objectStoreUnreclaimedSpace
  objectStoreUsedSpace
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'SWITCH' ORDER BY 1 asc;

  lineCard01
  lineCard02
  lineCard03
  lineCard04
  lineCard05
  lineCard06
  lineCard07
  lineCard08
  lineCard09
  lineCard10
  lineCard11
  lineCard12
  lineCard13
  lineCard14
  lineCard15
  lineCard16
  lineCard17
  lineCard18
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'VOLUME' ORDER BY 1 asc;

  isEncrypted
  qosLimitIOPS
  qosLimitMBPS
  storageGroups

~~~~~

The titular query was more around looking for NetApp NVE. The following bonus items look for disk encryption status (all arrays), and storage pool encryption status (all arrays.)

BONUS 1: Query to Find Disk Encryption Status

The following query will run for all storage arrays in your OCI DWH, and return whether the array has encrypted disks or not (like NetApp NSE).

SELECT DISTINCT
    s.name AS 'Storage Cluster'
  , s.manufacturer AS 'Manufacturer'
  , s.microcodeVersion AS 'Storage O/S'
  , e.fieldValue AS 'Has encrypted disks?'
FROM dwh_inventory.extended_data AS e
JOIN dwh_inventory.disk AS d ON d.id = e.objectId
JOIN dwh_inventory.storage AS s ON s.id = d.storageId
WHERE e.objectType = 'DISK'
  AND e.fieldName = 'isEncrypted';

BONUS 2: Query to Find Storage Pool (like Aggregate) Encryption Status

The following query will run for all storage pools in your OCI DWH, and return whether the storage pool is encrypted or not (like NetApp NAE).

SELECT DISTINCT
    s.name AS 'Storage Cluster'
  , s.manufacturer AS 'Manufacturer'
  , s.microcodeVersion AS 'Storage O/S'
  , sp.identifier AS 'Storage Pool Identifier'
  , e.fieldValue AS 'Is Encrypted?'
FROM dwh_inventory.extended_data AS e
JOIN dwh_inventory.storage_pool AS sp ON sp.id = e.objectId
JOIN dwh_inventory.storage AS s ON s.id = sp.storageId
WHERE e.objectType = 'STORAGE_POOL'
  AND e.fieldName = 'isEncrypted';