
mdadm - Currently unreadable sectors on RAID 5 linux drive


Every 30 minutes I get smartd messages in /var/log/messages:

    smartd[3588]: Device: /dev/sdc, 176 Currently unreadable (pending) sectors



This drive (sdc) is part of a RAID 5 array configured with mdadm. The mdadm monitor says the RAID is OK, but I want to know whether I need to replace the drive. Also, is it necessary to mark these sectors as bad, or has the OS already done it?

If I do need to replace the drive, how can I choose the replacement? I can't find the number of blocks in hard drive specifications, so if I choose one with fewer blocks than the original, I will be in trouble.

Thanks.


Answer



Yes, change the drive.




Unreadable (pending) sectors are sectors whose contents could not be read. In a normal non-RAID situation that would result in either a read error, or a long delay while the drive attempts to read the sector again and again until it succeeds (or until it eventually gives up).
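
You can check the raw pending-sector count yourself. A small sketch, assuming smartmontools is installed (attribute 197, Current_Pending_Sector, is the standard counter, though vendors vary in how they report it):

    # Show the pending-sector attribute for the suspect drive
    smartctl -A /dev/sdc | grep -i Current_Pending_Sector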



With RAID, two things happen:




  1. Your disk is probably configured with a short TLER (Time-Limited Error Recovery) value, so it will give up its attempts to read that sector within a reasonable time (thus preventing long hangs). You can inspect and adjust this timeout as shown in the sketch after this list.

  2. Your RAID array notices the failure and reads the data from another disk. This is the advantage of RAID 5; you have a redundant copy.
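
If your drives support SCT Error Recovery Control, you can read and set this timeout with smartctl. A minimal sketch, assuming the drive actually implements SCT ERC (many desktop drives do not, and on many drives the setting does not survive a power cycle):

    # Read the current error-recovery timeouts (reported in tenths of a second)
    smartctl -l scterc /dev/sdc

    # Set read and write recovery timeouts to 7 seconds, a common RAID-friendly value
    smartctl -l scterc,70,70 /dev/sdc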



What you want to do is:





  1. Check your backups. You should not need them if all goes well.

  2. Fetch a replacement disk of equal or larger size. You can check the size with smartctl -a /dev/sdc. Do not assume all drives of size X have the same capacity; manufacturers like round numbers, and one 500 GB drive might well be smaller than another 500 GB drive.

  3. Take the problem disk offline (mdadm --manage /dev/mdX --fail /dev/sdc --remove /dev/sdc; the disk must be marked as failed before it can be removed).

  4. Replace the disk with new hardware and let the array rebuild itself (mdadm --manage /dev/mdX --add /dev/sdc). A combined sketch of steps 2-4 follows this list.
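
Putting steps 2-4 together, here is a minimal sketch of the whole swap, assuming the array is /dev/md0 and the replacement drive appears under the same /dev/sdc name after the physical swap (on your system the names may differ):

    # Compare capacities before buying: the replacement must be at least as large
    blockdev --getsize64 /dev/sdc

    # Mark the failing disk as faulty, then remove it from the array
    mdadm --manage /dev/md0 --fail /dev/sdc
    mdadm --manage /dev/md0 --remove /dev/sdc

    # After physically swapping the drive, add the new disk; the rebuild starts automatically
    mdadm --manage /dev/md0 --add /dev/sdc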



If you use large disks, this will take a lot of time. Sometimes it is faster to just rebuild the RAID array from scratch and restore from backups. (TEST those backups first!)




While the RAID is rebuilding you have no redundancy. Thus if another disk fails (e.g. due to the stress of rebuilding), you have a problem. This sometimes happens with large disks (long rebuild times) and batches of drives from the same date.
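
While waiting, you can watch the rebuild progress (and confirm when redundancy is restored) from /proc/mdstat. A small sketch, with /dev/md0 standing in for your actual array device:

    # Refresh the rebuild status every few seconds
    watch -n 5 cat /proc/mdstat

    # Or query the array state directly
    mdadm --detail /dev/md0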


