
mdadm raid6 recovery reads more from one drive?

I just replaced a faulty drive in my RAID6 array (consisting of 8 drives). During the recovery I noticed something strange in iostat: all drives are getting the same speeds (as expected) except for one drive, sdi, which is constantly reading faster than the rest.



It is reading about one eighth faster, which might have something to do with there being eight drives in total in the array, but I don't know why. (The stats below bear this out: 26.15 MB/s × 9/8 ≈ 29.42 MB/s, which is almost exactly sdi's 29.43 MB/s.)



This held for the whole recovery (always the same drive reading faster than all of the rest), and looking at the total statistics for the recovery, all drives read/wrote pretty much the same amount, except for sdi, which read one eighth more.
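The array layout and each drive's slot can be confirmed with mdadm (the array name /dev/md0 below is an assumption; substitute your own):

debbie:~# mdadm --detail /dev/md0

Among other things, this reports the Layout (left-symmetric by default) and which device slot each drive occupies.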



Some iostat stats, averaged over 100 seconds:



Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb             444.80        26.15         0.00       2615          0
sdb1            444.80        26.15         0.00       2615          0
sdc             445.07        26.15         0.00       2615          0
sdc1            445.07        26.15         0.00       2615          0
sdd             443.21        26.15         0.00       2615          0
sdd1            443.21        26.15         0.00       2615          0
sde             444.01        26.15         0.00       2615          0
sde1            444.01        26.15         0.00       2615          0
sdf             448.79        26.15         0.00       2615          0
sdf1            448.79        26.15         0.00       2615          0
sdg             521.66         0.00        26.15          0       2615
sdg1            521.66         0.00        26.15          0       2615
sdh             443.32        26.15         0.00       2615          0
sdh1            443.32        26.15         0.00       2615          0
sdi             369.23        29.43         0.00       2942          0
sdi1            369.23        29.43         0.00       2942          0
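For reference, a 100-second MB/s average like the one above can be taken with sysstat's iostat; the interval and count here are just illustrative:

debbie:~# iostat -m 100 2

(the first report is the since-boot average; the second is the 100-second sample).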


Can anyone offer a sensible explanation?
When I discovered that it was almost exactly one eighth faster, I figured it had to do with the parity, but that didn't really make much sense (I don't know the specifics of the RAID6 implementation in mdadm, but for one thing it surely can't store all of the parity on one drive...).
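If I understand mdadm's default left-symmetric RAID6 layout correctly, the P and Q parity blocks rotate one disk per stripe rather than living on a fixed drive. A rough sketch of the rotation, numbering the 8 disks 0-7 (the formula is my reading of the layout, not taken from the mdadm source):

for stripe in $(seq 0 7); do
    p=$(( 7 - stripe % 8 ))    # disk holding P parity for this stripe
    q=$(( (p + 1) % 8 ))       # Q parity sits on the next disk, wrapping around
    echo "stripe $stripe: P on disk $p, Q on disk $q"
done

Over any 8 consecutive stripes, every disk holds P exactly once and Q exactly once, so no single drive is a dedicated parity drive.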




UPDATE:
Well, I just replaced another drive in the same array and I am seeing the exact same results, but this time with a different drive reading faster (in fact, it is the drive I added during the last recovery that has now decided it wants to do more work).
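(I follow the rebuild progress via the kernel's md status file, which shows the recovery percentage and estimated finish time:

debbie:~# cat /proc/mdstat

The per-drive numbers below are again 100-second iostat averages taken during this second recovery.)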



Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb             388.48        24.91         0.00       2490          0
sdb1            388.48        24.91         0.00       2490          0
sdc             388.13        24.91         0.00       2491          0
sdc1            388.13        24.91         0.00       2491          0
sdd             388.32        24.91         0.00       2491          0
sdd1            388.32        24.91         0.00       2491          0
sde             388.81        24.91         0.00       2491          0
sde1            388.81        24.91         0.00       2491          0
sdf             501.07         0.00        24.89          0       2489
sdf1            501.07         0.00        24.89          0       2489
sdg             356.86        28.03         0.00       2802          0
sdg1            356.86        28.03         0.00       2802          0
sdh             387.52        24.91         0.00       2491          0
sdh1            387.52        24.91         0.00       2491          0
sdi             388.79        24.92         0.00       2491          0
sdi1            388.79        24.92         0.00       2491          0



These are 4K drives (though, as all drives do (or at least did), they still report 512-byte logical sectors). So I figured I might have somehow misaligned the partitions (exactly what implications that would have, I don't know; it depends on how mdadm works and on the stripe size, I guess, but it's easy enough to check):



debbie:~# fdisk -l -u /dev/sd[bcdefghi] | grep ^/dev/sd
/dev/sdb1        2048  3906988207  1953493080   fd  Linux raid autodetect
/dev/sdc1        2048  3906988207  1953493080   fd  Linux raid autodetect
/dev/sdd1        2048  3906988207  1953493080   fd  Linux raid autodetect
/dev/sde1        2048  3906988207  1953493080   fd  Linux raid autodetect
/dev/sdf1        2048  3907024064  1953511008+  fd  Linux raid autodetect
/dev/sdg1        2048  3907024064  1953511008+  fd  Linux raid autodetect
/dev/sdh1        2048  3906988207  1953493080   fd  Linux raid autodetect
/dev/sdi1        2048  3906988207  1953493080   fd  Linux raid autodetect


sdf and sdg are the new drives and appear to be slightly bigger, but they all start on the same sector (all drives are of the same make and model, and on the same controller, but the new ones were bought about 6 months later than the rest).
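The start sector itself confirms the alignment: 2048 × 512 B = 1 MiB, a multiple of 4096 B, so the partitions are 4K-aligned regardless of the reported logical sector size. A quick sanity check (the sysfs path assumes a reasonably modern kernel):

debbie:~# echo $(( 2048 * 512 % 4096 ))    # prints 0, i.e. the 1 MiB start offset is 4K-aligned
debbie:~# cat /sys/block/sdb/queue/physical_block_size    # should report 4096 on these 4K drives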
