Monday, 27 June 2022

CI Doesn't Like Annotations with a Dot In

I was importing some CSVs of annotations into the NetApp Cloud Insights version of the Annotation Import Utility - /assets/import - and occasionally one of the CSVs would result in:

"errorMessage": "Internal Error importing CSV file ...",
"errorCode": "INTERNAL_ERROR"

Image: CI Swagger API - INTERNAL_ERROR


The problem turned out to be the presence of annotation values with a dot in, like (made up):

SUPER-APP-v1.3-UAT

The solution was either to just ignore the lines with dots in, or to apply the application with a different name (for instance, in the above example, should the app really have a version in it!? Why not just: SUPER-APP-UAT.)
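
If there are a lot of offending rows, a quick Python sketch like the below can strip them out before import (the file names are just examples, and it assumes the annotation value is the last column):

import csv

with open('apps.csv', newline='') as src, open('apps-clean.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))  # keep the header line (e.g. ,,Application)
    for row in reader:
        if '.' not in row[-1]:  # skip values like SUPER-APP-v1.3-UAT
            writer.writerow(row)
        else:
            print('Skipping:', row)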

Sunday, 26 June 2022

How to Migrate OCI Qtree Application from OCI to CI (and BU)

Carrying on from How to Migrate OCI Qtree Annotation from OCI to CI.

  • OCI = NetApp OnCommand Insight
  • CI = NetApp Cloud Insights
Collecting the Data from the OCI Operational Server

Again, using the MySQL connection to the OCI Operational Server, we can get our CSV of data to upload into the CI Annotation Import Utility (API Swagger UI: /assets/import). Edit (in a text editor) the top line of the CSV output so it is either of:
  • ,,Application
  • ,,Application,Business_Unit
Below are some slightly different options for acquiring the data. Why you might use these is explained at the end of the article.

Option 1) With application not 'No-Service' and Business Unit IS NOT NULL

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.name AS 'Application',
  a.businessUnit AS 'Business_Unit'
FROM inventory.qtree AS q
JOIN inventory.storage AS s ON s.ID = q.storageID
JOIN inventory.internal_volume AS i ON i.ID = q.internalVolumeID
JOIN inventory.qtree_to_application AS qa ON qa.qtreeID = q.ID
JOIN inventory.application AS a ON a.id = qa.applicationId
WHERE a.name != 'No-Service'
AND a.businessUnit IS NOT NULL;

Option 1B) With application not 'No-Service', Business Unit IS NOT NULL, and Application name not the same as Business Unit

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.name AS 'Application',
  a.businessUnit AS 'Business_Unit'
FROM inventory.qtree AS q
JOIN inventory.storage AS s ON s.ID = q.storageID
JOIN inventory.internal_volume AS i ON i.ID = q.internalVolumeID
JOIN inventory.qtree_to_application AS qa ON qa.qtreeID = q.ID
JOIN inventory.application AS a ON a.id = qa.applicationId
WHERE a.name != 'No-Service'
AND a.businessUnit IS NOT NULL
AND a.name != a.businessUnit;

Option 1C) With application not 'No-Service', Business Unit IS NOT NULL, and Application name = Business Unit

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.name AS 'Application',
  a.businessUnit AS 'Business_Unit'
FROM inventory.qtree AS q
JOIN inventory.storage AS s ON s.ID = q.storageID
JOIN inventory.internal_volume AS i ON i.ID = q.internalVolumeID
JOIN inventory.qtree_to_application AS qa ON qa.qtreeID = q.ID
JOIN inventory.application AS a ON a.id = qa.applicationId
WHERE a.name != 'No-Service'
AND a.businessUnit IS NOT NULL
AND a.name = a.businessUnit;

Option 2) With application and ignoring Business Unit because it is null

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.name AS 'Application'
FROM inventory.qtree AS q
JOIN inventory.storage AS s ON s.ID = q.storageID
JOIN inventory.internal_volume AS i ON i.ID = q.internalVolumeID
JOIN inventory.qtree_to_application AS qa ON qa.qtreeID = q.ID
JOIN inventory.application AS a ON a.id = qa.applicationId
WHERE a.name != 'No-Service'
AND a.businessUnit IS NULL;

Option 3) With application and completely ignoring Business Unit

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.name AS 'Application'
FROM inventory.qtree AS q
JOIN inventory.storage AS s ON s.ID = q.storageID
JOIN inventory.internal_volume AS i ON i.ID = q.internalVolumeID
JOIN inventory.qtree_to_application AS qa ON qa.qtreeID = q.ID
JOIN inventory.application AS a ON a.id = qa.applicationId
WHERE a.name != 'No-Service';

Explained

If you notice in Option 1 that a lot of the Business Units have exactly the same name as the Application (you can get the breakdown with 1B and 1C), then it suggests the Business Units have been incorrectly input (i.e. you would expect the Application Name to differ from the Business Unit.) In which case you might skip Business Unit entirely and just go with Option 3.

If not (Business Units are mostly correct and not simply the Application input twice), then you would go with Option 1 and Option 2.

Saturday, 25 June 2022

[PowerShell] Batch My CSV File

This was a simple PowerShell script I knocked up to batch some of my very large CSVs down. Save it as say batchMyCsv.ps1 and run it like:

.\batchMyCsv.ps1 -FileName .\verybig.csv -Batch 1000

The Script

Param(
  [Parameter(Mandatory=$true)][String]$FileName,
  [Int]$Batch = 1000
)

Write-Host ("Getting file $FileName")
[System.Array]$csvContent = Get-Content $FileName
Write-Host ("Splitting $FileName into batches of $Batch")
[String]$FileWoExt = $FileName.split(".")[-2].replace("\","")
Write-Host ("Files will be named " + $FileWoExt + ".X.csv")
[String]$FirstLine = $csvContent[0]
Write-Host ("Common first line: $FirstLine")

$outFile = @($FirstLine)
$counter = 0
$fileCount = 1
# Skip the header row, then batch the data rows, re-adding the header to each file
$csvContent | Select-Object -Skip 1 | Foreach{
  $outFile += $_
  $counter++
  If($counter -eq $Batch){
    $NewFileName = $FileWoExt + "." + [String]$fileCount + ".csv"
    $outFile | Set-Content $NewFileName
    Write-Host ("Created file $NewFileName")
    $counter = 0
    $outFile = @($FirstLine)
    $fileCount++
  }
}

# Write any remaining rows (skipped if the data divided exactly into batches)
If($counter -gt 0){
  $NewFileName = $FileWoExt + "." + [String]$fileCount + ".csv"
  $outFile | Set-Content $NewFileName
  Write-Host ("Created file $NewFileName")
}


How to Migrate OCI Qtree Annotation from OCI to CI

Carrying on from the last post where we looked at storage annotation.

  • OCI = NetApp OnCommand Insight
  • CI = NetApp Cloud Insights

The Easy Way to Migrate Qtree Annotations from OCI to CI

Note: I know some people will debate this, but the setup time for this method is like zero. If you have access to the OCI operational server's MySQL (which you should have with a certain default user that runs the ETL process from the OCI DWH) and a MySQL client, you're up and running super fast.

Connect to the MySQL interface on the operational OCI Server (not the DWH) and query data there, to create an output that you can apply in CI's Annotation Import Utility.

What Object Types are in inventory.object_to_annotation?

SELECT DISTINCT objectType FROM inventory.object_to_annotation;

What Annotation Types (Annotation Names) Do We Have For Qtree?

This is going to be dependent on customer, but to get the list run:

SELECT DISTINCT annotationType FROM inventory.object_to_annotation WHERE objectType = 'Qtree';

Getting Qtree Annotation from OCI to CI

Say we have an annotation called "Cost Code" on qtrees. To get all the "Cost Code" annotations for all our qtrees, we would simply collect the CSV output of:

SELECT
  'Qtree' AS 'ObjectType',
  CONCAT(s.name,'->',i.name,'->',q.name) AS 'S>I>Q',
  a.valueIdentifier AS 'Cost Code' # <--- YOUR ANNOTATION
FROM inventory.object_to_annotation AS o
JOIN inventory.annotation_value AS a ON a.id = o.annotationValueId
JOIN inventory.qtree AS q ON q.id = o.objectId
JOIN inventory.storage AS s ON s.id = q.storageID
JOIN inventory.internal_volume AS i ON i.id = q.internalVolumeID
WHERE o.objectType = 'Qtree'
AND o.annotationType = 'cost_code'; # <--- YOUR ANNOTATION

Edit the CSV output with a text editor (Excel has a habit of messing things up), changing the top line to:

,,"Cost Code"

And then simply upload to CI via the Swagger UI and API:

/assets/import

Note: If you have a lot of qtree annotations to update, you may need to batch this up into batches of like 2000 (test to find a number that works for you.)
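
And if you are batching, you may as well upload programmatically too. Below is a minimal Python sketch, assuming batch files named as per the PowerShell script above; note the multipart field name 'customFile' is an assumption here - check the Swagger UI for the exact parameter name:

import glob
import requests

baseUrl = "https://YOURTENANT.cloudinsights.netapp.com/rest/v1/"
headers = {'X-CloudInsights-ApiKey': 'YOUR_API_KEY'}

# Upload each batch file in turn to the Annotation Import Utility API
for f in sorted(glob.glob('verybig.*.csv')):
    with open(f, 'rb') as csvFile:
        r = requests.post(baseUrl + 'assets/import', headers=headers,
                          files={'customFile': csvFile}, verify=False)
    print(f, r.status_code)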

Friday, 24 June 2022

How to Migrate OCI Storage Annotation from OCI to CI (+ Switch)

  • OCI = NetApp OnCommand Insight
  • CI = NetApp Cloud Insights

I've been looking for a relatively quick and painless way to migrate annotations and applications from OCI to CI. I had limited success with a provided migration tool. I tried the Python and REST API route, and it was too much effort (especially for what is essentially a one-off, never-to-be-repeated event ... and I really need to develop my Python skills more.) And I tried with the OCI-DWH, but the problem there is that the data is not the latest and greatest, if the ETL fails you're stuck, and it is sometimes subtly transformed. So...

The Easy Way to Migrate Storage Annotations from OCI to CI

Is to connect to the MySQL interface on the operational OCI Server (not the DWH) and query data there, to create an output that you can apply in CI's Annotation Import Utility.

What Object Types are in inventory.object_to_annotation?

SELECT DISTINCT objectType FROM inventory.object_to_annotation;

  • Host
  • HV_VirtualMachine
  • InternalVolume
  • Qtree
  • Storage
  • StorageNode
  • StoragePool
  • Switch
  • Volume

What Annotation Types (Annotation Names) Do We Have On Our Storage?

This is going to be dependent on customer, but to get the list run:

SELECT DISTINCT annotationType FROM inventory.object_to_annotation WHERE objectType = 'Storage';

Getting Storage Tier Annotation from OCI to CI

Say we want to get our Tier annotation for our storages, then we would simply collect the output of:

SELECT
  'Storage' AS 'ObjectType',
  s.name AS 'Storage',
  a.valueIdentifier AS 'Tier'
FROM inventory.object_to_annotation AS o
JOIN inventory.annotation_value AS a ON a.id = o.annotationValueId
JOIN inventory.storage AS s ON s.id = o.objectId
WHERE o.objectType = 'Storage'
AND o.annotationType = 'Tier';

Edit the top line with a text editor to:

,,Tier

And then simply upload to CI via the Swagger UI and API:

/assets/import


[BONUS] How to Migrate OCI Switch Annotation from OCI to CI

This is so similar to the above that it did not seem worthy of a post by itself.

Getting Switch Country Annotation from OCI to CI

Say we want to get our Country annotation for our switches, then we would simply collect the CSV output (using something like MySQL Workbench) of:

SELECT
  'Switch' AS 'ObjectType',
  s.name AS 'Switch',
  a.valueIdentifier AS 'Country'
FROM inventory.object_to_annotation AS o
JOIN inventory.annotation_value AS a ON a.id = o.annotationValueId
JOIN inventory.storage AS s ON s.id = o.objectId
WHERE o.objectType = 'Switch'
AND o.annotationType = 'Country';

Edit the top line with a text editor to:

,,Country

And then simply upload to CI via the Swagger UI and API:

/assets/import

And if you wanted to know what annotations are available for switch then it is simply:

SELECT DISTINCT annotationType FROM inventory.object_to_annotation WHERE objectType = 'Switch';

Sunday, 15 May 2022

[CI][REST API][Python] Adding Annotation Values via the REST API

Work In Progress

Some more NetApp Cloud Insights REST API with Python basics here.

Again using Python like a shell, no creating and running scripts.

We want to update some annotations via the REST API. We have already got a list of annotations not found in CI using the method here: Comparing OCI vs CI Annotations and Annotation Values.

0) Setup

Notes: 1) We're just running things directly in the Python Client, no creating scripts here. 2) Update the highlighted bits as per your CI tenant.

  • import requests
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
  • apiKey = "YOUR_API_KEY"
  • yourTenant = "aaXXXX.bYY-cc-Z.cloudinsights.netapp.com"
  • baseUrl = "https://" + yourTenant + "/rest/v1/"
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=apiKey)}
1) Get Annotation ID from the Annotation Name

This is written as a Python one liner.

a = "YOURANNOTATION";j = requests.get(baseUrl + "assets/annotations/" + a, headers=headers, verify=False).json();j['id']

Other useful stuff - 1) To display all the values:

for v in j['enumValues']: print(v['name'])

Other useful stuff - 2) The count of all the values:

len(j['enumValues'])

Other useful stuff - 3) Finding out if the value is already in enumValues:

values = []
for v in j['enumValues']: values += [v['name']]
if 'TEST_DATA' in values: print("EXISTS!")

2) Add a Value to the Annotation you've just got the Annotation ID for

2.1) Type FLEXIBLE_ENUM (List in the UI)

The REST API to add the annotation is:

PATCH /assets/annotations/{id}

The request body needs to include all the existing values (you could remove existing values that aren't in use and no longer required) and the new value(s).

For example, for a Country annotation where you already have Argentina and Brazil, and you want to add Chile, the request body is:

{
  "name": "Country",
  "description": "",
  "enumValues": [
    {
      "name": "ARGENTINA",
      "label": "ARGENTINA"
    },
    {
      "name": "BRAZIL",
      "label": "BRAZIL"
    },
    {
      "name": "CHILE",
      "label": "CHILE"
    }
  ]
}

In Python we can use (also see here):

requests.patch(url, data=json.dumps(payload), headers=headers, verify=False)
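
Putting that together, below is a minimal sketch of the full PATCH, reusing j, baseUrl and headers from the setup and step 1, with a hypothetical new value 'CHILE':

newValue = 'CHILE'  # hypothetical new value
# The body must carry the existing values too, or they would be dropped
enumValues = [{'name': v['name'], 'label': v.get('label', v['name'])} for v in j['enumValues']]
enumValues.append({'name': newValue, 'label': newValue})
payload = {'name': j['name'], 'description': j.get('description', ''), 'enumValues': enumValues}
url = baseUrl + "assets/annotations/" + str(j['id'])
r = requests.patch(url, json=payload, headers=headers, verify=False)  # json= sets the Content-Type header for us
print(r.status_code)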

To be continued.



[CI][DWH][ODATA] Limits $top $skip and $count

Carrying on from the previous post:

Something important I should mention about running queries against the NetApp Cloud Insights Data Warehouse is that there are certain limits on the amount of data that will be returned.

  • Without specifying a limit, a query will return 1'000 rows by default.
  • The maximum limit is 10'000 rows.
  • You can use $count to count the number of rows.
  • You can use $top to select the number of rows you want (remembering the 10'000 row limit.)
  • You can use $skip to skip rows you don't want.

Playing around with Python and the dwh_inventory.host table (which is quite big in the environment I am running these queries on), these were my findings.

I couldn't get $count to work without returning data too, but what I could do is get the count with just one result returned.

Note: In the below I am using Python shell one liners (because it is super easy to up arrow and edit the one line of Python when you're playing around.)

Example: Getting the full count with just one row of data returned.

api = "dwh-management/odata/dwh_inventory/host?$count=true&$top=1";url = baseUrl + api;jsondata = requests.get(url, headers=headers, verify=False).json();jsondata['@odata.count']

In my case this returned 40929.
So if we want to get all the output then we need to do the following.

Example: Getting all 40929 results

hosts = []

api = "dwh-management/odata/dwh_inventory/host?$top=10000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?$top=10000&$skip=10000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?$top=10000&$skip=20000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?$top=10000&$skip=30000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

api = "dwh-management/odata/dwh_inventory/host?$top=10000&$skip=40000"
url = baseUrl + api
jsondata = requests.get(url, headers=headers, verify=False).json()
hosts += jsondata['value']

len(hosts)

At least, that is how it should work. I'm not sure if I was having weird issues with my Python/VDI, but what worked at one point stopped working later on...

Note: Programmatically you would use a for loop for this (see the sketch below); the repetition above is just demonstrating $top and $skip.
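
A minimal sketch of that for loop, reusing baseUrl and headers from the setup section and the count technique from above:

# First get the total row count with just one row of data returned
api = "dwh-management/odata/dwh_inventory/host?$count=true&$top=1"
total = requests.get(baseUrl + api, headers=headers, verify=False).json()['@odata.count']

# Then page through the table in 10'000 row chunks
pageSize = 10000
hosts = []
for offset in range(0, total, pageSize):
    api = "dwh-management/odata/dwh_inventory/host?$top=" + str(pageSize) + "&$skip=" + str(offset)
    hosts += requests.get(baseUrl + api, headers=headers, verify=False).json()['value']

len(hosts)  # should equal total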

Saturday, 23 April 2022

[CI][Python] Custom SQL Queries in the NetApp Cloud Insights Data Warehouse

Introduction

In the NetApp OnCommand Insight (OCI) Data Warehouse world, it was fairly standard to integrate with the MySQL OCI-DWH by running queries from an external client (like Splunk for example.) In the NetApp Cloud Insights (CI) world, over the REST API ODATA interface, there is another way of doing it; create custom SQL queries in CI, and the integration simply calls that custom query (one API call). So how do we do this?

0) Setup

Notes: 1) We're just running things directly in the Python Client, no creating scripts here. 2) Update the highlighted bits as per your CI tenant.

  • import requests
  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
  • apiKey = "YOUR_API_KEY"
  • yourTenant = "aaXXXX.bYY-cc-Z"
  • baseUrl = "https://" + yourTenant + ".cloudinsights.netapp.com/rest/v1/"
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=apiKey)}

1) Seeing What Custom SQL Queries Are Already in the CI DWH (there may be none)

To see the Custom Queries that are currently in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()
  • print(jsondata)

The json format isn't super human readable, but we can make it more readable with the following.

To print out the odata.context (the full API URL):

  • print(jsondata['@odata.context'])

To print out the list of queries:

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + query['queryName'])
  •   i += 1

And if you want to see the actual SQL query for an index i from the list above (see the Appendix below if you want to print the SQL to a text file):

  • print(jsondata['value'][i]['querySql'])

2) How Do We Upload a Custom Query to CI?

The method changes to POST, and the BODY needs two fields - queryName and querySql, like so:

{
  "queryName": "test_query_1",
  "querySql": "SELECT * FROM dwh_inventory.storage"
}

To create our very simple Custom Query in the CI DWH, run:

  • api = "dwh-management/odata/dwh_custom/custom_queries"
  • url = baseUrl + api
  • body = {
  •   "queryName": "test_query_1",
  •   "querySql": "SELECT * FROM dwh_inventory.storage"
  • }
  • postdata = requests.post(url, headers=headers, json=body, verify = False)
Response [201] = success!

3) How Do We Run a Custom Query in CI?

To execute the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/test_query_1"
  • url = baseUrl + api
  • jsondata = requests.get(url, headers=headers, verify = False).json()

And similar to the above you'll find the table output in 'value':

  • i = 0
  • for query in jsondata['value']:
  •   print(str(i) + ":" + str(query))
  •   i += 1

You will then need to parse the data as per your requirements.
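
For example, a minimal sketch of dumping the rows to CSV (assuming every dict in jsondata['value'] has the same keys):

import csv

rows = jsondata['value']
with open('test_query_1.csv', mode='w', newline='') as output:
    writer = csv.DictWriter(output, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)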

4) How Do We Delete a Custom Query in CI?

To delete the custom query, it is simply (replace test_query_1 with the name of the query):

  • api = "dwh-management/odata/dwh_custom/custom_queries('test_query_1')"
  • url = baseUrl + api
  • delQuery = requests.delete(url, headers=headers, verify = False)

Response [200] = success!


PS Just a reminder that if you're using Python on your Windows Machine, in a corporate environment, initiated from a DOS prompt, then you'll probably want to set the proxy in DOS first:
set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT

APPENDIX: Random Useful Stuff

How to redirect python print output to a file, then redirect output back to the CLI:

import sys

orig_stdout = sys.stdout
sys.stdout = open('file.txt','w')
print(...)
sys.stdout.close()
sys.stdout = orig_stdout
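
An alternative sketch using contextlib, which restores stdout automatically when the with block ends:

import contextlib

with open('file.txt', 'w') as f, contextlib.redirect_stdout(f):
    print(...)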

How to turn a formatted SQL Query into a String in Python (or How to String-ize an SQL Query with Python):

One of the challenges with using Custom SQL Queries in CI is that your beautifully formatted SQL query needs to be turned into a string. And we can do this from the Python shell as per the below:

Note: Remember where you have \ it should be \\ in Python.

workingDir = "{WORKING DIRECTORY}"
file = "{FILE NAME}"

strFile = ""
with open(workingDir + "\\" + file) as f:
  lines = f.readlines()

len(lines) # Check this is what you expect!

for l in lines:
  strFile += l.strip() + " "

file2 = "{FILE NAME 2}"
with open(workingDir + "\\" + file2, 'w') as f:
  f.write(strFile)

Note 1: You would still need to convert " to ' (because above we use " to encapsulate the SQL query.)
Note 2: I did have issues with a really big query, but exactly the same string runs fine in OCI-DWH SQL - Cloud Insights problem? I didn't check if it would run in CI Cognos (this is something I need to try ... could we be missing an annotation!?)
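
Note 3: The read-strip-join above can also be collapsed into a couple of lines, if you prefer:

# Flatten the SQL file to one space-separated line
with open(workingDir + "\\" + file) as f:
    strFile = " ".join(l.strip() for l in f)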

Tuesday, 12 April 2022

Comparing OCI vs CI Annotations and Annotation Values

I found this comparison surprisingly useful. We were missing more annotation values than I had anticipated!

Here we bring together stuff from the previous two posts. And it is fairly easy to compare OCI vs CI annotations and annotation values and get a report of what is missing. First run this tool:

Cosonok's IT Blog: [OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

Then run this tool:

Cosonok's IT Blog: [CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

Bring both sheets into the same Excel workbook and turn the data into tables. Create a column D on both sheets to combine annotation and annotation value as per:

=TRIM(CONCATENATE(A2,":",C2))

Then I found using INDEX and MATCH over the range worked best, something like this (change XXXX to the final row of the OciAnnosAndValues tab, and YYYY to the final row of the CiAnnosAndValues tab.)

For the OciAnnosAndValues tab:

=IFERROR(INDEX(CiAnnosAndValues!D$2:D$YYYY,MATCH(D2,CiAnnosAndValues!D$2:D$YYYY,0)),"Not Found")

For the CiAnnosAndValues tab:

=IFERROR(INDEX(OciAnnosAndValues!D$2:D$XXXX,MATCH(D2,OciAnnosAndValues!D$2:D$XXXX,0)),"Not Found")

Add in a little bit of conditional formatting and job done!
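
And if you would rather skip the Excel bit, below is a minimal Python sketch of the same comparison, assuming the two CSVs produced by the scripts below:

import csv

def annoSet(fileName):
    with open(fileName, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the headings row
        return {row[0] + ":" + row[2] for row in reader}  # 'Anno Name:Anno Value'

oci = annoSet('OciAnnosAndValues.csv')
ci = annoSet('CiAnnosAndValues.csv')
print("In OCI but not CI:", sorted(oci - ci))
print("In CI but not OCI:", sorted(ci - oci))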

[CI][Python] Collecting NetApp Cloud Insights Annotations and Annotation Values and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetCiAnnosAndValues.py and run from your command prompt as> python GetCiAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

Note: If you're running from a corporate machine, you may need to set your command prompt proxy as per this post.

The Script

## IMPORT THE ESSENTIALS ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api = 'assets/annotations'
api_key = 'YOURAPIKEY'
yourTenant = 'YOURTENANT' # This is the bit that comes before .cloudinsights.netapp.com

## REST API GET ##
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/' + api
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE 1 LIST AND 2 DICTS TO STORE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## STORING THE DATA ##
for a in jsondata:
 if a['isUserDefined'] == True:
  if a['name'] not in annoNames:
   annoNames += [a['name']]
   annoTypes[a['name']] = a['type']
   annoValues[a['name']] = []
   for v in a['enumValues']:
    annoValues[a['name']] += [v['name']]
  else:
   for v in a['enumValues']:
    if v['name'] not in annoValues[a['name']]:
     annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
 for v in annoValues[a]:
  row=[]
  row.append(a)
  row.append(annoTypes[a])
  row.append(v)
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

[OCI][Python] Collecting NetApp OCI Annotations and Annotation Values Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect the annotations and annotation values, and then export to CSV.

Save the script as say GetOciAnnosAndValues.py and run from your command prompt as> python GetOciAnnosAndValues.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## USER/ACQUISITION SPECIFIC VARIABLES ##
api       = 'assets/annotations'
username  = 'YOURUSERNAME'
password  = 'YOURPASSWORD'
ocis = []
ocis.append('OCISERVER1')
ocis.append('OCISERVER2')
ocis.append('OCISERVER3')

## SETUP CREDENTIALS (assume same on all OCIs) ##
s = username + ':' + password
userAndPass = b64encode(s.encode('ascii')).decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## INITIALIZE 1 LIST AND 2 DICTS TO COMBINE THE DATA ##
annoNames = []
annoTypes = {}
annoValues = {}

## CYCLE THROUGH THE OCIS ##
for oci in ocis:

 ## ACQUIRE DATA OVER REST API ##
 url = "https://" + oci + "/rest/v1/" + api
 jsondata = requests.get(url, headers=headers, verify = False).json()

 ## STORING THE DATA ##
 for a in jsondata:
  if a['isUserDefined'] == True:
   if a['name'] not in annoNames:
    annoNames += [a['name']]
    annoTypes[a['name']] = a['type']
    annoValues[a['name']] = []
    for v in a['enumValues']:
     annoValues[a['name']] += [v['name']]
   else:
    for v in a['enumValues']:
     if v['name'] not in annoValues[a['name']]:
      annoValues[a['name']] += [v['name']]

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciAnnosAndValues.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['Anno Name','Anno Type','Anno Value']
writer.writerow(headings)

## WRITE THE DATA TO CSV ##
for a in annoNames:
 for v in annoValues[a]:
  row=[]
  row.append(a)
  row.append(annoTypes[a])
  row.append(v)
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Monday, 11 April 2022

Comparing OCI vs CI Datasources

Not sure if this information will be of any use to anyone, except if you're doing a fairly large OCI to CI migration, where it's not so straightforward as just running the migration tool to bring all your datasources across.

Here we bring together stuff from the previous posts. And it's fairly easy to compare OCI vs CI datasources and get a report. First run the OCI datasource export to CSV tool:

Cosonok's IT Blog: [OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

Then run the CI datasource export to CSV tool:

Cosonok's IT Blog: [CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV (remember you might need to set your proxy as per this post)

Finally, with a little bit of excel manipulation, you can bring both sheets into the same excel workbook, turn the data into tables, and to compare use VLOOKUPS like the below.

For the OCI table to check if they are in CI (OCI is sheet 1 / CI sheet 2):

=IFERROR(VLOOKUP(F2,Table2[foundationIp],1,FALSE),"Not found")

For the CI table to check if they are in OCI (CI is sheet 2/ OCI sheet 1):

=IFERROR(VLOOKUP(F2,Table1[foundationIp],1,FALSE),"Not found")

Note: In both cases it is still worth manually checking the output (i.e. a DNS FQDN may have been used in one but an IP in the other, and vice versa.)

And you might add in a little conditional formatting too.

THE END

Sunday, 10 April 2022

[CI][Python] Collecting All NetApp Cloud Insights Data Sources (Data Collectors) and Export to CSV

A Python script to connect to your NetApp Cloud Insights tenant, collect all the Datasources, and export as a CSV.

Save the script as say GetCiDatasources.py and run from your prompt as> python GetCiDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT NECESSARY STUFF ##
import csv
import json
import urllib3
import requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## REST API GET ##
api_key = 'ENTER_YOUR_API_KEY_HERE'
yourTenant = 'YOUR_TENANT' # This is the bit that comes before .cloudinsights.netapp.com
url = 'https://' + yourTenant + '.cloudinsights.netapp.com/rest/v1/collector/datasources'
headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
jsondata = requests.get(url, headers=headers, verify = False).json()

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('CiDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
headings = ['name','status','vendor','model','foundationIp','lastSuccessfullyAcquired']
writer.writerow(['CI Tenant'] + headings)

## PARSE THE JSONDATA TO CSV ##
for dsource in jsondata:
  row=[]
  row.append(yourTenant)
  for column in headings:  
    row.append(dsource[column])
  writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 8 April 2022

[CI][Python] Collecting Datasources from NetApp Cloud Insights Using REST API and Python

In the previous post, we collected all the datasources from our NetApp OCI servers. Now I want to collect the datasources from NetApp Cloud Insights. The main change is that CI uses an API key, whereas OCI used username and password authentication, so I had to google the correct way to connect, and it turned out to be very simple.

Collecting NetApp Cloud Insights Datasources over REST API in Python Command Line >>>

  • import requests
  • api_key = 'ENTER_YOUR_API_KEY_HERE'
  • url = 'https://YOUR_TENANT.cloudinsights.netapp.com/rest/v1/collector/datasources'
  • headers = {'X-CloudInsights-ApiKey': '{key}'.format(key=api_key)}
  • jsondata = requests.get(url, headers=headers).json()

Nice and simple. 5 lines. Of course jsondata is just that, JSON data. In a future post we'll process it.

Now to get Python to work with a Windows machine that's behind a proxy.

Getting the Above to Work With Python on a Windows Machine That's Behind a Proxy

Not much different to the above. First you will need to find your corporate web-proxy information (you might find this down the bottom of a .pac file), then you need to run this from the DOS Command Prompt>

set https_proxy=https://USERNAME:PASSWORD@proxy.server.fqdn:PORT

Then I needed to add verify = False to the requests.get, which makes the final line in the above:

  • jsondata = requests.get(url, headers=headers, verify = False).json()

And if you want to get rid of the "InsecureRequestWarning" you can add:

  • import urllib3
  • urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

Note: Just for completeness, there is also set http_proxy=http...
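
Alternatively (a sketch, swapping in your own proxy details), requests can take the proxy directly, without setting the environment variable:

  • proxies = {'https': 'https://USERNAME:PASSWORD@proxy.server.fqdn:PORT'}
  • jsondata = requests.get(url, headers=headers, proxies=proxies, verify=False).json()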

[OCI][Python] Collecting All NetApp OCI Datasources Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Datasources. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI), and you want to compare what you've got left in all your OCI servers to what you've now got in CI.

Save the script as say GetOciDatasources.py and run from your prompt as> python GetOciDatasources.py (remembering to set the highlighted bits as per your environment.)

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('OciDatasources.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','DS Name','Status','Vendor','Model','FoundationIp','LastAcquired'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/datasources"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for dsource in json_data:
    row=[]
    row.append(OCI)
    row.append(dsource["name"])
    row.append(dsource["status"])
    row.append(dsource["vendor"])
    row.append(dsource["model"])
    row.append(dsource["foundationIp"])
    row.append(dsource["lastSuccessfullyAcquired"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Friday, 1 April 2022

[OCI][Python] Collecting All NetApp OCI Annotation Rules Across Multiple OCI Servers and Export to CSV

Carrying on from the last post.

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Annotation Rules and some useful information. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI).

Save the script as say GetAnnoRules.py and run from your prompt as python GetAnnoRules.py.

Note: The script is very similar to the last post's; the cycle-through bit is laid out a little more clearly.

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('annorules.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','Anno Rules Name','Evaluation Order','Query Name','Object Type','Anno Name','Anno Value'])

## CYCLE THROUGH THE OCIS COLLECTING THE ANNOTATION RULES ##
for OCI in OCIS:
  method = "GET"
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "admin/rules"
  response = requests.request(method,baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for annorule in json_data:
    row=[]
    row.append(OCI)
    row.append(annorule["name"])
    row.append(annorule["evaluationOrder"])
    row.append(annorule["query"]["name"])
    row.append(annorule["query"]["objectType"])
    row.append(annorule["annotation"]["name"])
    row.append(annorule["value"])
    writer.writerow(row)

## CLOSE THE CSV ##
output.close()

Thursday, 31 March 2022

[OCI][Python] Collecting All NetApp OCI Queries Across Multiple OCI Servers and Export to CSV

A Python script to connect to multiple NetApp OnCommand Insight Servers in turn, and collect all the Queries (the queries you see in the WebUI) and related information. Perhaps useful if you're doing a consolidation exercise (like many OCI to one CI). And in the next post, we'll collect the Annotation Rules (which use some of these queries.)

My Python skills are pretty poor, which is a good reason for me to have a go at improving them!

Save the script as say GetQueries.py and run from your prompt as python GetQueries.py.

The Script

## IMPORT THE ESSENTIALS ##
import requests
from base64 import b64encode
import csv
import json
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

## SETUP CREDENTIALS FOR THE OCI SERVERS (ASSUMING ALL THE SAME) ##
userAndPass = b64encode(b"USERNAME:PASSWORD").decode("ascii")
headers = {'Authorization':'Basic %s' % userAndPass}

## CREATE A LIST OF OCI SERVERS ##
OCIS=[]
OCIS.append("OCI_SERVER1_FQDN")
OCIS.append("OCI_SERVER2_FQDN")
OCIS.append("OCI_SERVER3_FQDN")

## INITIALIZE A FILE FOR THE CSV OUTPUT ##
output = open('queries.csv',mode='w',newline='')
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['OCI Server','Query Name','Object Type','Sort Order','Fields','filters'])

## CYCLE THROUGH THE OCIS COLLECTING THE QUERIES ##
for OCI in OCIS:
  baseurl = "https://" + OCI + "/rest/v1/"
  api = "queries"
  response = requests.request("GET",baseurl + api, headers = headers, verify = False)
  json_data = json.loads(response.text)

  for query in json_data:
    writer.writerow([OCI,query["name"], query["objectType"], query["sort"], query["fields"], query["filters"]])

## CLOSE THE CSV ##
output.close()

Wednesday, 23 March 2022

[OCI-DWH] [MySQL] NetApp ONTAP Volume Encryption Status from OCI Data Warehouse

I seem to have been neglecting this blog a little this year (perhaps adding too much stuff to my other IT blog....) This might be useful know-how.

The following MySQL query, run against the NetApp OnCommand Insight Data Warehouse in your MySQL query tool of choice (like MySQL Workbench), will allow you to export a CSV of all your NetApp Data ONTAP internal volumes (includes 7-Mode, even though these of course do not support any volume-level encryption) and the volume encryption status.

SELECT
    s.name AS 'Storage Cluster'
  , e.fieldValue AS 'Is Encrypted?'
  , i.identifier AS 'Internal Volume Identifier'
  , s.microcodeVersion AS 'ONTAP Version'
FROM dwh_inventory.internal_volume AS i
LEFT JOIN
  (SELECT * FROM dwh_inventory.extended_data
    WHERE objectType = 'INTERNAL_VOLUME'
      AND fieldName = 'isEncrypted'
  ) AS e ON e.objectId = i.id
JOIN dwh_inventory.storage AS s ON s.id = i.storageId
WHERE s.manufacturer = 'NetApp'
  AND (s.microcodeVersion LIKE '%ONTAP'
    OR s.microcodeVersion LIKE '%7-Mode%');


What are the distinct Object Types and Field Names in dwh_inventory.extended_data?

From the above query, you can see that we're using dwh_inventory.extended_data to get the internal volume encryption status.

Just out of curiosity, what are the distinct Object Types and Field Names in dwh_inventory.extended_data?

The below outputs are from an OCI DWH 7.3.10 instance (yes, a bit behind, as I write this OCI is up to 7.3.13.)

SELECT DISTINCT objectType FROM dwh_inventory.extended_data ORDER BY 1 ASC;

  DISK
  INTERNAL_VOLUME
  STORAGE_NODE
  STORAGE_POOL
  SWITCH
  VOLUME
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'DISK' ORDER BY 1 asc;

  isEncrypted
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'INTERNAL_VOLUME' ORDER BY 1 asc;

  adaptiveQosPolicy
  comment
  groupName
  isEncrypted
  junctionPath
  objectStoreTieringPolicy
  objectStoreUsedSpace
  qosLimitIOPS
  qosLimitRaw
  qosPolicy
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'STORAGE_NODE' ORDER BY 1 asc;

  managementIpAddresses
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'STORAGE_POOL' ORDER BY 1 asc;

  isCompactionSavingsEnabled
  isEncrypted
  objectStoreMetadataSpace
  objectStoreName
  objectStorePhysicalUsedSpace
  objectStoreProviderType
  objectStoreReferencedSpace
  objectStoreS3Name
  objectStoreServer
  objectStoreSisSavedSpace
  objectStoreTieringFullnessThreshold
  objectStoreUnreclaimedSpace
  objectStoreUsedSpace
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'SWITCH' ORDER BY 1 asc;

  lineCard01
  lineCard02
  lineCard03
  lineCard04
  lineCard05
  lineCard06
  lineCard07
  lineCard08
  lineCard09
  lineCard10
  lineCard11
  lineCard12
  lineCard13
  lineCard14
  lineCard15
  lineCard16
  lineCard17
  lineCard18
 
SELECT DISTINCT fieldName FROM dwh_inventory.extended_data WHERE objectType = 'VOLUME' ORDER BY 1 asc;

  isEncrypted
  qosLimitIOPS
  qosLimitMBPS
  storageGroups

~~~~~

The titular query was more around looking for NetApp NVE. The following bonus items look for disk encryption status (all arrays), and storage pool encryption status (all arrays.)

BONUS 1: Query to Find Disk Encryption Status

The following query will run for all storage arrays in your OCI DWH, and return whether the array has encrypted disks or not (like NetApp NSE).

SELECT DISTINCT
    s.name AS 'Storage Cluster'
  , s.manufacturer AS 'Manufacturer'
  , s.microcodeVersion AS 'Storage O/S'
  , e.fieldValue AS 'Has encrypted disks?'
FROM dwh_inventory.extended_data AS e
JOIN dwh_inventory.disk AS d ON d.id = e.objectId
JOIN dwh_inventory.storage As s ON s.id = d.storageId
WHERE e.objectType = 'DISK'
  AND e.fieldName = 'isEncrypted';

BONUS 2: Query to Find Storage Pool (like Aggregate) Encryption Status

The following query will run for all storage pools in your OCI DWH, and return whether the storage pool is encrypted or not (like NetApp NAE).

SELECT DISTINCT
    s.name AS 'Storage Cluster'
  , s.manufacturer AS 'Manufacturer'
  , s.microcodeVersion AS 'Storage O/S'
  , sp.identifier AS 'Storage Pool Identifier'
  , e.fieldValue AS 'Is Encrypted?'
FROM dwh_inventory.extended_data AS e
JOIN dwh_inventory.storage_pool AS sp ON sp.id = e.objectId
JOIN dwh_inventory.storage AS s ON s.id = sp.storageId
WHERE e.objectType = 'STORAGE_POOL'
  AND e.fieldName = 'isEncrypted';
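
And since we have been doing so much in Python lately, below is a minimal sketch of running the Bonus 2 query from Python and exporting to CSV (assuming the mysql-connector-python package, and swapping in your own OCI DWH connection details):

import csv
import mysql.connector  # assumes the mysql-connector-python package is installed

conn = mysql.connector.connect(host='OCI_DWH_FQDN', port=3306, user='USERNAME', password='PASSWORD')
cursor = conn.cursor()
cursor.execute("""
  SELECT DISTINCT s.name, s.manufacturer, s.microcodeVersion, sp.identifier, e.fieldValue
  FROM dwh_inventory.extended_data AS e
  JOIN dwh_inventory.storage_pool AS sp ON sp.id = e.objectId
  JOIN dwh_inventory.storage AS s ON s.id = sp.storageId
  WHERE e.objectType = 'STORAGE_POOL' AND e.fieldName = 'isEncrypted'""")
with open('StoragePoolEncryption.csv', mode='w', newline='') as output:
    writer = csv.writer(output)
    writer.writerow(['Storage Cluster','Manufacturer','Storage O/S','Storage Pool Identifier','Is Encrypted?'])
    writer.writerows(cursor.fetchall())
conn.close()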

Saturday, 8 January 2022

Uploading a CSV to Cloud Insights for Purposes of DWH_Custom Enrichment

If you want to create a NetApp Cloud Insights dwh_custom table, insert data into this table, etcetera ... the method to do this is via the API:

POST /dwh-management/upload/csvs

After uploading the CSV to Cloud Insights, you will need a Post ETL SQL script to process this CSV. Currently, to get the Post ETL SQL script to Cloud Insights, you need to reach out to the NetApp Cloud Insights SRE team.

In this post I'll show two examples of uploading the CSV to NetApp Cloud Insights:

  1. Via the swagger-ui
  2. Using a Linux Server

1) Via the Swagger UI

With the correct permissions, to get to the NetApp Cloud Insights Swagger UI:

Login to either:

  • https://cloud.netapp.com > Select Fabric View > Click on Cloud Insights
  • Or go in via your tenant link: https://{tenant}.cloudinsights.netapp.com/

Admin > API Access > API Documentation > Data Warehouse

We're interested in the Data Warehouse API Documentation. And the API we're interested in is:

POST /dwh-management/upload/csvs

The next couple of screenshots show uploading a CSV with an HTTP 200 (Successful Operation) result:




2) Using a Linux Server

This is probably a more useful example, as you'll likely be programmatically uploading CSVs to Cloud Insights. Because it is a little interesting, I've included an error here and how to troubleshoot it. My example is from a Linux VM inside an Enterprise Organization (any identifiers redacted/removed/changed), and to get out to Cloud Insights, the necessary proxy access had to be configured in the first place. Also, it is assumed you have already obtained your API key.

curl: (6) Could not resolve host: YOURTENANT.c01-eu-1.cloudinsights.netapp.com; Name or service not known
[user@aserver ~]$
[user@aserver ~]$ grep hosts /etc/nsswitch.conf
#hosts:     db files nisplus nis dns
hosts:      files dns
[user@aserver ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
128.129.130.131 aserver aserver.localdomain
[user@aserver ~]$
[user@aserver ~]$ sudo vi /etc/hosts
[sudo] password for aserver:
[user@aserver ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
128.129.130.131 aserver aserver.localdomain
3.126.150.64 YOURTENANT YOURTENANT.c01-eu-1.cloudinsights.netapp.com
[user@aserver ~]$
[user@aserver ~]$ touch AWSPriceList-Windows.csv
[user@aserver ~]$ vi AWSPriceList-Windows.csv
[user@aserver ~]$ cat AWSPriceList-Windows.csv
ConfigType,vCPU,Memory,HourlyCost,OS,Region
t3.nano,2,0.5,0.0098,Windows,US East (N. Virginia)
t3.micro,2,1,0.0196,Windows,US East (N. Virginia)
t3.small,2,2,0.0392,Windows,US East (N. Virginia)
t3.medium,2,4,0.06,Windows,US East (N. Virginia)
t3.large,2,8,0.1108,Windows,US East (N. Virginia)
t3.xlarge,4,16,0.24,Windows,US East (N. Virginia)
t3.2xlarge,8,32,0.48,Windows,US East (N. Virginia)
t3a.nano,2,0.5,0.0093,Windows,US East (N. Virginia)
t3a.micro,2,1,0.0186,Windows,US East (N. Virginia)
t3a.small,2,2,0.0372,Windows,US East (N. Virginia)
t3a.medium,2,4,0.056,Windows,US East (N. Virginia)
t3a.large,2,8,0.1028,Windows,US East (N. Virginia)
t3a.xlarge,4,16,0.224,Windows,US East (N. Virginia)
t3a.2xlarge,8,32,0.448,Windows,US East (N. Virginia)
t2.nano,1,0.5,0.0081,Windows,US East (N. Virginia)
t2.micro,1,1,0.0162,Windows,US East (N. Virginia)
t2.small,1,2,0.032,Windows,US East (N. Virginia)
t2.medium,2,4,0.0644,Windows,US East (N. Virginia)
t2.large,2,8,0.1208,Windows,US East (N. Virginia)
t2.xlarge,4,16,0.2266,Windows,US East (N. Virginia)
t2.2xlarge,8,32,0.4332,Windows,US East (N. Virginia)
[user@aserver ~]$
[user@aserver ~]$ pwd
/home/user
[user@aserver ~]$
[user@aserver ~]$ export https_proxy='aproxy.company.com:80' proxy_auth_scheme=''
[user@aserver ~]$
[user@aserver ~]$ curl -i -X POST -H "X-CloudInsights-ApiKey:eyJra...very.long.string...DV2GQ " -H "Content-Type:multipart/form-data" -F "customFile=@/home/user/AWSPriceList-Windows.csv" "https://YOURTENANT.c01-eu-1.cloudinsights.netapp.com/rest/v1/dwh-management/upload/csvs"
HTTP/1.1 200 Connection established

HTTP/1.1 100 Continue

HTTP/1.1 200
Date: Tue, 04 Jan 2022 20:03:56 GMT
Content-Length: 0
Connection: keep-alive
Server: nginx
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY

To explain the above:
  • We get a could not resolve host error.
  • To troubleshoot, first check hosts is using files.
  • Then add the Cloud Insights IP to /etc/hosts
  • Then we create a sample CSV to upload.
  • Then we export the proxy configuration.
  • Then we use CURL to POST our CSV to Cloud Insights.
  • And we get HTTP 200 - successful!

APPENDIX: POST ETL SQL File

This might have MySQL commands like:

DROP TABLE ...
CREATE TABLE ...

LOAD DATA LOCAL INFILE 'AWSPriceList-Linux.csv' INTO
TABLE dwh_custom.AWSPriceLinux FIELDS TERMINATED by ',' ENCLOSED BY '"' LINES terminated by '\n'
IGNORE 1 LINES;

UPDATE: You can do REST ODATA API inserts to dwh_custom!

A colleague informed me in the week (say 13th January 2022) that testing of POST-ing data to dwh_custom in CI, over the REST ODATA API, does work, which is very good news. An example:

curl -X POST -k https://YOURTENANT.cloudinsights.netapp.com/rest/v1/dwh-management/odata/dwh_custom/aCustomTable -H "Content-Type:application/json" -H "X-CloudInsights-ApiKey:${key}" -d '{"someTk":23,"clientname":"aserver.acme.com","versionLabel":"7.6.1.2","osDescription":"Linux 3.10.0-1160.11.1.el7.x86_64 "}'

Note: ${key} comes from a .sh file.
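
And the same insert as a minimal Python sketch (same hypothetical table and fields as the curl example, swapping in your own tenant and API key):

import requests

url = 'https://YOURTENANT.cloudinsights.netapp.com/rest/v1/dwh-management/odata/dwh_custom/aCustomTable'
headers = {'X-CloudInsights-ApiKey': 'YOUR_API_KEY'}
body = {'someTk': 23, 'clientname': 'aserver.acme.com', 'versionLabel': '7.6.1.2', 'osDescription': 'Linux 3.10.0-1160.11.1.el7.x86_64'}
r = requests.post(url, headers=headers, json=body, verify=False)  # json= sets the Content-Type header for us
print(r.status_code)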