This question was raised by a client as a potential avenue for analysis, whilst I was investigating the underlying VMware and SAN infrastructure of a couple of systems where users were reporting poor application performance.
The application in question reads from a SQL Server database, then writes the data it has read to local disk as a temporary file. When Process Monitor (part of the Sysinternals Suite, freely downloadable from Microsoft) is run against the file system to watch the transaction taking place, a ReadFile length of 4096 and a WriteFile length of 1024 are recorded.
Fig. 1 Process Monitor ReadFile Length 4096
Fig. 2 Process Monitor WriteFile Length 1024
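To make the pattern concrete, here is a minimal sketch of the behaviour described above, assuming a pyodbc connection. The connection string, table, and column names are hypothetical placeholders, not taken from the actual application.

```python
# Minimal sketch (hypothetical names) of the read-then-write pattern:
# fetch a payload from SQL Server, then write it to a local temp file.
import tempfile

import pyodbc  # assumed driver; any ODBC-capable client behaves similarly

# Placeholder connection details, not from the original post.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Hypothetical table/column; the post does not name the schema.
cursor.execute("SELECT FileData FROM dbo.Documents WHERE DocumentId = ?", 42)
payload = cursor.fetchone()[0]  # arrives over TDS in network-packet-sized chunks

# Writing the payload back out is what generates the WriteFile events
# that Process Monitor records against the temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)

conn.close()
```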
Looking into this, I could find little information on the net to explain the behaviour, so the following is a hypothesis.
Assuming that the ReadFile length is 4096 bytes and the WriteFile length is 1024 bytes, these numbers could come from the following (sketches for verifying each value follow the list):
Default Network Packet Size on SQL Server = 4096 bytes
Fig. 3 Network Packet Size setting taken from SQL Server Management Studio: Server Properties > Advanced
NTFS Default Bytes Per FileRecord Segment = 1024 bytes (at least where the Bytes Per Cluster is 4096)
Fig. 4 Bytes Per FileRecord Segment from Windows Command Prompt: fsutil fsinfo ntfsinfo C:
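If the first assumption holds, the 4096-byte reads should match the packet size actually negotiated for the connection. One way to check this, as a sketch again assuming pyodbc, is to query sys.dm_exec_connections rather than relying on the SSMS dialog alone:

```python
# Sketch: confirm the network packet size negotiated for the current
# connection, straight from SQL Server's own connection DMV.
import pyodbc  # assumed driver

conn = pyodbc.connect(  # placeholder connection string
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# sys.dm_exec_connections reports the packet size in effect for each
# session; @@SPID identifies the current session.
cursor.execute(
    "SELECT net_packet_size FROM sys.dm_exec_connections "
    "WHERE session_id = @@SPID"
)
print("Negotiated network packet size:", cursor.fetchone()[0])  # typically 4096
conn.close()
```

Worth noting: a client can request a different packet size at connect time, so the negotiated value can differ from the server default, which would in turn change the ReadFile lengths seen in Process Monitor.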
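Similarly, the 1024-byte figure can be confirmed programmatically by parsing the same fsutil output shown in Fig. 4. A sketch:

```python
# Sketch: read Bytes Per FileRecord Segment by parsing fsutil output
# (run from an elevated prompt on Windows).
import subprocess

output = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", "C:"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "Bytes Per FileRecord Segment" in line:
        # Line looks like "Bytes Per FileRecord Segment : 1024";
        # base 0 also copes with hex output on newer Windows builds.
        print("Bytes Per FileRecord Segment:",
              int(line.split(":")[-1].strip(), 0))
```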
Please feel free to comment, especially if my hypothesis is wrong (which is part of the reason for posting).