
performance - Disk throughput dramatically reduced when using ZFS on OpenSolaris?

I'm building a simple ZFS file server for the small business I work for. The server is a Dell PowerEdge 840 with 1GB RAM. The OS (OpenSolaris 2009.06) is installed on one SATA drive, and there are three other SATA drives installed for storage: 1x1TB, 1x1.5TB, and 1x2TB. When I add the three drives to one raidz zpool, throughput isn't very good:



# zpool create -m /export/pool pool raidz c7d1 c8d0 c8d1
# zfs create pool/fs
# time dd if=/dev/zero of=/export/pool/fs/zerofile bs=1048576 count=1024
1024+0 records in
1024+0 records out

real    0m12.539s
user    0m0.002s
sys     0m0.435s
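For what it's worth, the throughput figures below are just the megabytes written divided by the wall-clock time, e.g.:

# echo "scale=2; 1024 / 12.539" | bc
81.66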



That's about 81.6 MB/s. That's not horrendous, but I tried creating a pool consisting of just one of those drives:



# zpool create -m /export/disk-c7d1 disk-c7d1 c7d1
# zfs create disk-c7d1/fs
# time dd if=/dev/zero of=/export/disk-c7d1/fs/zerofile bs=1048576 count=1024
1024+0 records in
1024+0 records out

real    0m21.251s
user    0m0.002s
sys     0m0.552s


Okay, 48.19 MB/s throughput for a sequential write to one drive? That seems pretty low, especially when I format the drive as UFS and try that same write:



# newfs /dev/dsk/c7d1s2
# mount /dev/dsk/c7d1s2 /mnt/c7d1
# time dd if=/dev/zero of=/mnt/c7d1/zeroes bs=1048576 count=1024
1024+0 records in
1024+0 records out

real    0m10.372s
user    0m0.002s
sys     0m1.720s


That's almost twice the speed, 98.73 MB/s. That's much closer to what I'd expect out of these drives (though they're just cheap SATA drives).
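In case it's relevant: as far as I know I haven't changed any dataset properties from their defaults. This is what I'd check to confirm:

# zfs get compression,checksum,recordsize,atime pool/fs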



What am I doing wrong here? I understand that there's overhead involved in writing parity data with RAIDZ, but making a pool from a single drive shouldn't halve throughput, should it? That seems pretty bad.
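If it would help with diagnosis, I can watch per-disk activity while the dd runs with something like this (zpool iostat for the pool, iostat for the raw devices):

# zpool iostat -v pool 5
# iostat -xn 5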




Thanks, everybody.
