Saturday, 23 April 2022

[CI][Python] Custom SQL Queries in the NetApp Cloud Insights Data Warehouse

Introduction

In the NetApp OnCommand Insight (OCI) Data Warehouse world, it was fairly standard to integrate with the MySQL OCI-DWH by running queries from an external client (like Splunk, for example). In the NetApp Cloud Insights (CI) world, the REST API's ODATA interface offers another way of doing it: create a custom SQL query in CI, and the integration simply calls that custom query (one API call). So how do we do this?

0) Setup

Notes: 1) We're just running things directly in the Python Client, no creating scripts here. 2) Update the highlighted bits as per your CI tenant.

  • import requests
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
  • apiKey = "YOUR_API_KEY"
  • yourTenant = "aaXXXX.bYY-cc-Z"
  • baseUrl = "https://" + yourTenant + ".cloudinsights.netapp.com/rest/v1/"
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=apiKey)}

1) Seeing What Custom SQL Queries Are Already in the CI DWH (there may be none)

To see the Custom Queries that are currently in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()
  • print(jsondata)

The JSON format isn't super human readable, but we can make it more readable with the following.

To print out the odata.context (the full API URL):

  • print(jsondata['@odata.context'])

To print out the list of queries:

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + query['queryName'])
  •   i += 1
And if you want to see the actual SQL of a particular query, index into 'value' using the query's number from the list above (see the Appendix below if you want to print the SQL to a text file):
  • print(jsondata['value'][0]['querySql'])

2) How Do We Upload a Custom Query to CI?

The method changes to POST, and the BODY needs two fields - queryName and querySql, like so:

{
  "queryName": "test_query_1",
  "querySql": "SELECT * FROM dwh_inventory.storage"
}

To create our very simple Custom Query in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • body = {
  •   "queryName": "test_query_1",
  •   "querySql": "SELECT * FROM dwh_inventory.storage"
  • }
  • postdata = requests.post(url, headers=headers, json=body, verify = False)
Response [201] = success!

3) How Do We Run a Custom Query in CI?

To execute the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/test_query_1"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()

And similar to the above you'll find the table output in 'value':

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + str(query))
  •   i += 1

You can then process the data as per your requirements.
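If, for example, the requirement is a CSV export, the rows in 'value' (a list of dicts) can be written out with csv.DictWriter; the sample rows below are made up purely for illustration:

```python
import csv

# Hypothetical stand-in for jsondata['value'] returned by the custom query
rows = [
    {"id": "1", "name": "storage01"},
    {"id": "2", "name": "storage02"},
]

with open("query_output.csv", mode="w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()    # column names taken from the first row
    writer.writerows(rows)  # one CSV line per result row
```

The same pattern works for the real result set, since every row comes back as a dict keyed by column name.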

4) How Do We Delete a Custom Query in CI?

To delete the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/custom_queries('test_query_1')"
  • url = baseUrl + api
  • delQuery = requests.delete(url, headers=headers, verify = False)

Response [200] = success!


PS Just a reminder that if you're using Python on your Windows Machine, in a corporate environment, initiated from a DOS prompt, then you'll probably want to set the proxy in DOS first:
set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT
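Alternatively, the proxy can be passed straight to requests on each call, which avoids the environment variable altogether; the proxy URL below is just a placeholder:

```python
import requests

# Placeholder corporate proxy details (substitute your own)
proxies = {
    "http": "http://USERNAME:PASSWORD@proxy.server.fqdn:8080",
    "https": "https://USERNAME:PASSWORD@proxy.server.fqdn:8080",
}

# Then pass proxies= on each call, e.g.:
# jsondata = requests.get(url, headers=headers, proxies=proxies, verify=False).json()
```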

APPENDIX: Random Useful Stuff

How to redirect python print output to a file, then redirect output back to the CLI:

orig_stdout = sys.stdout
sys.stdout = open('file.txt','w')
print(...)
sys.stdout.close()
sys.stdout = orig_stdout
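A slightly tidier way to do the same thing is contextlib.redirect_stdout, which restores sys.stdout automatically even if the print raises:

```python
from contextlib import redirect_stdout

# Everything printed inside the 'with' goes to file.txt
with open('file.txt', 'w') as f:
    with redirect_stdout(f):
        print('hello from the file')

print('back on the console')  # stdout is restored automatically
```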

How to turn a formatted SQL Query into a String in Python (or How to String-ize an SQL Query with Python):

One of the challenges with using Custom SQL Queries in CI is that your beautifully formatted SQL query needs to be turned into a string. And we can do this from the Python shell as per the below:

Note: Remember where you have \ it should be \\ in Python.

workingDir = "{WORKING DIRECTORY}"
file = "{FILE NAME}"

strFile = ""
with open(workingDir + "\\" + file) as f:
  lines = f.readlines()

len(lines) # Check this is what you expect!

for l in lines:
  strFile += l.strip() + " "

file2 = "{FILE NAME 2}"
with open(workingDir + "\\" + file2, 'w') as f:
  f.write(strFile)

Note 1: You would still need to convert " to ' (because above we use " to encapsulate the SQL query.)
Note 2: I did have issues with a really big query, but exactly the same string runs fine in OCI-DWH SQL; a Cloud Insights problem? I didn't check whether it would run in CI Cognos (this is something I need to try ... could we be missing an annotation!?)
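The read/strip/join steps above can also be collapsed into a small helper; the query.sql file written below is just a stand-in example:

```python
# Collapse a formatted .sql file to a single-line string (same idea as above)
def sql_file_to_string(path):
    with open(path) as f:
        return " ".join(line.strip() for line in f)

# Hypothetical example file:
with open("query.sql", "w") as f:
    f.write("SELECT *\nFROM dwh_inventory.storage\n")

print(sql_file_to_string("query.sql"))  # SELECT * FROM dwh_inventory.storage
```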

Tuesday, 12 April 2022

Comparing OCI vs CI Annotations and Annotation Rules

I found this comparison surprisingly useful. We were missing more annotation values than I had anticipated!

Here we bring together stuff from the previous two posts. It is fairly easy to compare OCI vs CI annotations and annotation rules and get a report of what is missing. First run this tool:

Cosonok's IT Blog: [OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

Then run this tool:

Cosonok's IT Blog: [CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

Bring both sheets into the same Excel workbook and turn the data into tables. Create a column D on both sheets to combine annotation and annotation value, as per:

=TRIM(CONCATENATE(A2,":",C2))

Then I found using INDEX, MATCH, and the range worked best; something like this (replace XXXX with the final row of the OciAnnosAndValues tab, and YYYY with the final row of the CiAnnosAndValues tab):

For the OciAnnosAndValues tab:

=IFERROR(INDEX(CiAnnosAndValues!D$2:D$YYYY,MATCH(D2,CiAnnosAndValues!D$2:D$YYYY,0)),"Not Found")

For the CiAnnosAndValues tab:

=IFERROR(INDEX(OciAnnosAndValues!D$2:D$XXXX,MATCH(D2,OciAnnosAndValues!D$2:D$XXXX,0)),"Not Found")

Add in a little bit of conditional formatting and job done!
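If you'd rather skip Excel, the same "Anno:Value" comparison can be sketched in Python with sets, using the two CSVs the tools above produce (the rows written below are made-up stand-ins for the real exports):

```python
import csv

def load_keys(path):
    """Build 'Anno Name:Anno Value' keys from one export CSV."""
    with open(path, newline="") as f:
        return {f"{row['Anno Name']}:{row['Anno Value']}".strip()
                for row in csv.DictReader(f)}

# Hypothetical stand-in files for the two exports
with open("OciAnnosAndValues.csv", "w", newline="") as f:
    f.write("Anno Name,Anno Type,Anno Value\nTier,FLEXIBLE,Gold\nTier,FLEXIBLE,Silver\n")
with open("CiAnnosAndValues.csv", "w", newline="") as f:
    f.write("Anno Name,Anno Type,Anno Value\nTier,FLEXIBLE,Gold\n")

oci = load_keys("OciAnnosAndValues.csv")
ci = load_keys("CiAnnosAndValues.csv")
print(sorted(oci - ci))  # annotation values present in OCI but missing from CI
```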

[CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetCiAnnosAndValues.py and run from your command prompt as> python GetCiAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

Note: If you're running from a corporate machine, you may need to set your command prompt proxy as per this post.

The Script

## IMPORT THE ESSENTIALS ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api = 'assets/annotations'
api_key = 'YOURAPIKEY'
yourTenant = 'YOURTENANT' # This is the bit that comes before .cloudinsights.netapp.com

## REST API GET ##
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/' + api
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE 1 LIST AND 2 DICTS TO STORE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## STORING THE DATA ##
for a in jsondata:
    if a['isUserDefined']:
        if a['name'] not in annoNames:
            annoNames += [a['name']]
            annoTypes[a['name']] = a['type']
            annoValues[a['name']] = []
            for v in a['enumValues']:
                annoValues[a['name']] += [v['name']]
        else:
            for v in a['enumValues']:
                if v['name'] not in annoValues[a['name']]:
                    annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
    for v in annoValues[a]:
        writer.writerow([a, annoTypes[a], v])

## CLOSE THE CSV ##
output.close()

[OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetOciAnnosAndValues.py and run from your command prompt as> python GetOciAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api       = 'assets/annotations'
username  = 'YOURUSERNAME'
password  = 'YOURPASSWORD'
ocis = []
ocis.append('OCISERVER1')
ocis.append('OCISERVER2')
ocis.append('OCISERVER3')

## SETUP CREDENTIALS (assume same on all OCIs) ##
s = username + ':' + password
userAndPass = b64encode(s.encode('ascii')).decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## INITIALIZE 1 LIST AND 2 DICTS TO COMBINE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## CYCLE THROUGH THE OCIS ##
for oci in ocis:

    ## ACQUIRE DATA OVER REST API ##
    url = "https://" + oci + "/rest/v1/" + api
    jsondata = requests.get(url, headers=headers, verify=False).json()

    ## STORING THE DATA ##
    for a in jsondata:
        if a['isUserDefined']:
            if a['name'] not in annoNames:
                annoNames += [a['name']]
                annoTypes[a['name']] = a['type']
                annoValues[a['name']] = []
                for v in a['enumValues']:
                    annoValues[a['name']] += [v['name']]
            else:
                for v in a['enumValues']:
                    if v['name'] not in annoValues[a['name']]:
                        annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
    for v in annoValues[a]:
        writer.writerow([a, annoTypes[a], v])

## CLOSE THE CSV ##
output.close()

Monday, 11 April 2022

Comparing OCI vs CI Datasources

Not sure if this information will be of any use to anyone, except if you're doing a fairly large OCI to CI migration, where it's not as straightforward as just running the migration tool to bring all your datasources across.

Here we bring together stuff from the previous posts. And it's fairly easy to compare OCI vs CI datasources and get a report. First run the OCI datasource export to CSV tool:

Cosonok's IT Blog: [OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

Then run the CI datasource export to CSV tool:

Cosonok's IT Blog: [CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV (remember you might need to set your proxy as per this post)

Finally, with a little bit of excel manipulation, you can bring both sheets into the same excel workbook, turn the data into tables, and to compare use VLOOKUPS like the below.

For the OCI table to check if they are in CI (OCI is sheet 1 / CI sheet 2):

=IFERROR(VLOOKUP(F2,Table2[foundationIp],1,FALSE),"Not found")

For the CI table to check if they are in OCI (CI is sheet 2/ OCI sheet 1):

=IFERROR(VLOOKUP(F2,Table1[foundationIp],1,FALSE),"Not found")

Note: In both cases it is still worth manually checking the output (i.e. a DNS FQDN may have been used in one but an IP in the other, and vice versa.)

And you might add in a little conditional formatting too.

THE END

Sunday, 10 April 2022

[CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, collect all the Datasources, and export as a CSV.

Save the script as say GetCiDatasources.py and run from your prompt as> python GetCiDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT NECESSARY STUFF ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## REST API GET ##
api_key = 'ENTER_YOUR_API_KEY_HERE'
yourTenant = 'YOUR_TENANT' # This is the bit that comes before .cloudinsights.netapp.com
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/collector/datasources'
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['name','status','vendor','model','foundationIp','lastSuccessfullyAcquired']
writer.writerow(['CI Tenant'] + headings)

## PARSE THE JSONDATA TO CSV ##
for dsource in jsondata:
  row=[]
  row.append(yourTenant)
  for column in headings:  
    row.append(dsource[column])
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 8 April 2022

[CI][Python] Collecting Datasources from NetApp Cloud Insights Using REST API and Python

In the previous post, we collected all the datasources from our NetApp OCI servers. Now I want to collect the datasources from NetApp Cloud Insights. The main change is that CI uses an API Key, whereas OCI used username and password authentication, so I had to google the correct way to connect, and it turned out very simple.

Collecting NetApp Cloud Insights Datasources over REST API in Python Command Line >>>

  • import requests
  • api_key = 'ENTER_YOUR_API_KEY_HERE'
  • url = 'https://YOUR_TENANT.cloudinsights.netapp.com/rest/v1/collector/datasources'
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
  • jsondata = requests.get(url, headers=headers).json()

Nice and simple. 5 lines. Of course jsondata is just that, JSON data. In a future post we'll process it.
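As a taste of that processing, the response is just a list of datasource dicts, so it can be walked like any other JSON; the sample records below are made up, with field names matching the datasources fields used later in this post:

```python
# Hypothetical stand-in for what the datasources endpoint returns
jsondata = [
    {"name": "cluster1", "status": "ACQUIRING", "foundationIp": "10.0.0.1"},
    {"name": "cluster2", "status": "ERROR", "foundationIp": "10.0.0.2"},
]

# Print a simple fixed-width summary, one datasource per line
for ds in jsondata:
    print(f"{ds['name']:12} {ds['status']:12} {ds['foundationIp']}")
```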

Now to get Python to work with a Windows machine that's behind a proxy.

Getting the Above to Work With Python on a Windows Machine That's Behind a Proxy

Not much different to the above. First you will need to find your corporate web-proxy information (you might find this down the bottom of a .pac file), then you need to run this from the DOS Command Prompt>

set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT

Then I needed to add verify = False to the requests.get which makes the final line in the above:

  • jsondata = requests.get(url, headers=headers, verify = False).json()
And if you want to get rid of the "InsecureRequestWarning" you can add:
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
Note: Just for completeness, there is also set http_proxy=http...

[OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Datasources. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI), and you want to compare what you've got left in all your OCI servers to what you've now got in CI.

Save the script as say GetOciDatasources.py and run from your prompt as> python GetOciDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','DS Name','Status','Vendor','Model','FoundationIp','LastAcquired'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/datasources"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for dsource in json_data:
    row=[]
    row.append(OCI)
    row.append(dsource["name"])
    row.append(dsource["status"])
    row.append(dsource["vendor"])
    row.append(dsource["model"])
    row.append(dsource["foundationIp"])
    row.append(dsource["lastSuccessfullyAcquired"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 1 April 2022

[OCI][Python] Collecting All NetApp OCI Annotation Rules Across Multiple OCI Servers and Export to CSV

Carrying on from the last post.

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Annotation Rules and some useful information. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI).

Save the script as say GetAnnoRules.py and run from your prompt as python GetAnnoRules.py.

Note: The script is very similar to the last post's; the cycle-through bit is laid out a bit more clearly.

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('annorules.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','Anno Rules Name','Evaluation Order','Query Name','Object Type','Anno Name','Anno Value'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/rules"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for annorule in json_data:
    row=[]
    row.append(OCI)
    row.append(annorule["name"])
    row.append(annorule["evaluationOrder"])
    row.append(annorule["query"]["name"])
    row.append(annorule["query"]["objectType"])
    row.append(annorule["annotation"]["name"])
    row.append(annorule["value"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()