I had an email conversation with a colleague regarding something I had written. Specifically - http://www.cosonok.com/2015/07/search-ncfilesystemps1-cdot.html - which morphed a year-and-a-half later into the better - http://www.cosonok.com/2017/02/treesize-for-clustered-ontap-get.html.
My colleague commented “you use the read-directory, but that doesn’t return all records (only first 2000 or so)”, so I had to prove this was not the case.
In the following post - all using PowerShell and the Data ONTAP PowerShell Toolkit - I set up a test SVM and test volume, create 100’000 files in the test volume, and then run Read-NcDirectory against that volume to see how many results come back.
PS> Import-Module DataONTAP
PS> Connect-NcController 10.9.3.10
PS> Get-NcVserver
PS> Get-NcAggr
PS> New-NcVserver -Name SVM_TEST -RootVolume ROOTVOL -RootVolumeAggregate data1 -RootVolumeSecurityStyle UNIX
PS> New-NcVol TEST_VOL data1 10g -JunctionPath /TEST_VOL -VserverContext SVM_TEST
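Before generating any files, an optional quick sanity check that the new volume is online - Get-NcVol is part of the same toolkit, and the Select-ed property names below are from memory of its default output, so verify them on your toolkit version:
PS> Get-NcVol TEST_VOL -VserverContext SVM_TEST | Select Name,State,FilesUsed,FilesTotal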
The next bit will create the 100’000 files in the volume.
PS> For($i=1;$i -le 100000;$i++){Write-Host "$i " -NoNewLine; Write-NcFile "/vol/TEST_VOL/file$i" -Data "$i" -VserverContext SVM_TEST}
It will take some time to create 100’000 files (a couple of hours on my VSIM). You can check that the file count is increasing by running df -i in the clustershell CLI.
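Or, staying in PowerShell, the same inode numbers should be visible by re-running the Get-NcVol check from above and watching FilesUsed climb (as before, the property names are worth verifying on your toolkit version):
PS> Get-NcVol TEST_VOL -VserverContext SVM_TEST | Select FilesUsed,FilesTotal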
And when that’s finished, we see if Read-NcDirectory can return all 100’000+ records (Read-NcDirectory will take a while to process).
PS> $R = Read-NcDirectory -Path /vol/TEST_VOL -VserverContext SVM_TEST
PS> $R.Count
100003
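The count of 100003 makes sense as the 100’000 files plus three extra directory entries - presumably “.”, “..” and the “.snapshot” directory, though I’ve not checked exactly which records make up the difference. If you want to eyeball a few of the returned records (Name is the property I’d expect on the entries Read-NcDirectory returns - again, verify on your version):
PS> $R | Select -First 5 Name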
And visual proof!
Image: Read-NcDirectory returning > 100’000 records - Picture 1
Image: Read-NcDirectory returning > 100’000 records - Picture 2: $R Output
I could keep re-running my test with ever-larger file counts to see if there is a limit, but for now we’ll leave it at “Read-NcDirectory definitely returns over 100’000 records, and I don’t know if there is a limit.”