We have a client who is complaining about the performance of an application that uses an MS SQL Server database. They do not believe the performance issues are the fault of the application itself.
The Smart Array E200i RAID controller has 128 MB of cache, with the cache split set to 75% read / 25% write. Write caching is enabled on the disk array.
Recently we ran a disk performance test using SQLIO (http://www.microsoft.com/downloads/details.aspx?familyid=9a8b005b-84e4-4f24-8d65-cb53442d9e19&displaylang=en), following this guide (http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO). Using a 10 GB test file, we found that the average sequential read rate was ~60 MB/sec (megabytes per second) and the average random read rate was ~30 MB/sec. Are these numbers on par with what the server should be delivering? Better than on par? Horrible? Amazing?
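For reference, the guide linked above drives SQLIO from the command line with a parameter file. A typical pair of invocations looks something like the following; the file name `param.txt` is a placeholder, and the exact flag values (duration, queue depth) are illustrative rather than the ones used in the test:

```shell
# Sequential reads (-fsequential), 64 KB blocks (-b64), 120 seconds (-s120),
# 8 outstanding I/Os (-o8), with latency histograms (-LS).
# param.txt names the test file, thread count, CPU mask, and file size.
sqlio -kR -s120 -fsequential -o8 -b64 -LS -Fparam.txt

# Random reads with otherwise identical settings.
sqlio -kR -s120 -frandom -o8 -b64 -LS -Fparam.txt
```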
Additional information on the server setup and RAID controller configuration:
There are three 146 GB 10k RPM SAS drives (3.0 Gb/sec interface, model HP DG146BABCF), configured in a RAID 5 array. These are the only physical disks available to the server, so both the logs and the data, including the operating system and the paging file, are all on the same physical disk array (there are two logical drives, with the OS data kept separate). The array stripe size is 64 KB. Total usable space is 273 GB.
HP Advanced Data Guarding is turned off. Rebuild and expand priority are set to medium. The surface scan delay is 15 seconds. The controller has a cache board and a battery pack.
Answer
There are too many imponderables. For example, how are the disks laid out? If the logs and data share the same disks, the random I/O from the data areas will disrupt the log traffic, which is mostly sequential I/O and is disproportionately affected by a busy random-access workload on the same disks.
Without some more insight into your
configuration I can't really say what might be causing the problem.
For example, 60 MB/sec off a RAID volume is about right for a 4-disk RAID 5 or RAID 10 with 64 KB stripes and 15k drives. Each drive reads roughly one 64 KB stripe per revolution of the disk (about 250 revolutions/sec for a 15k drive), which gives you around 15 MB/sec per drive.
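The arithmetic behind that estimate can be sketched as follows. This is a back-of-the-envelope model, not a benchmark; it assumes exactly one stripe read per revolution, which is a simplification:

```python
# Estimate per-drive sequential throughput, assuming one stripe
# is read per disk revolution.

def per_drive_seq_mb_per_sec(rpm, stripe_kb):
    revs_per_sec = rpm / 60            # 15,000 RPM -> 250 revolutions/sec
    return revs_per_sec * stripe_kb / 1000  # KB/sec -> MB/sec

per_drive = per_drive_seq_mb_per_sec(15000, 64)  # ~16 MB/sec per drive
array_total = 4 * per_drive                      # ~64 MB/sec for 4 data-bearing drives
print(per_drive, array_total)
```

Four drives at ~15-16 MB/sec each lands right around the ~60 MB/sec sequential figure observed.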
The average seek time for a 15k disk is around 3 ms across the whole disk. On a mostly contiguous 10 GB file on a RAID volume with (say) 146 GB or 300 GB disks, and with a bit of help from the cache, I could see 30 MB/sec being a reasonable figure for a disk array configured as described above. It would mean averaging a data read about every two revolutions of the disks.
That's a thought off the top
of my head for a configuration one might reasonably expect to see on an ML350. However,
I have no idea if that matches your actual configuration, so I can't really comment on
whether the observations are relevant in your case.