
raid - HP Smart Array P410: HDD replaced, but new drive still predicted to fail soon

On an HP ProLiant DL G6, one disk of a RAID 1 array on a Smart Array P410 controller failed. It was an HP EG0300FBDBR; I replaced it with a compatible HP model, an HP EG0300FAWHV (both 300 GB, 10K SAS).




But the HP Array Configuration Utility still shows that the new HP EG0300FAWHV (300 GB 2-Port SAS Drive at Port 1I : Box 1 : Bay 0) is predicted to fail soon.
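For reference, the same status can be read from the command line with the hpacucli tool (later renamed hpssacli/ssacli). This is a minimal sketch, assuming the controller sits in slot 0 (run `hpacucli ctrl all show` to find the actual slot) and using the drive address reported above:

    # Show detail for the physical drive at Port 1I : Box 1 : Bay 0
    # (slot=0 is an assumption; adjust to your controller's slot)
    hpacucli ctrl slot=0 pd 1I:1:0 show detail

On a flagged drive the output includes a line such as `Status: Predictive Failure`, together with the drive's model and serial number, which helps confirm that the flag really belongs to the newly inserted drive.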



The new disk's LED in the server is blinking green, which the documentation describes as: "The drive is rebuilding, erasing, or it is part of an array that is undergoing capacity expansion or stripe migration."



But two days have passed and the status has not changed.
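While a rebuild is running, the CLI reports a completion percentage on the logical drive, which makes it easy to tell whether recovery is progressing at all. A minimal sketch, again assuming the controller is in slot 0:

    # List logical drives with their current state; during a rebuild the
    # status reads e.g. "Recovering, 54% complete" and changes to "OK"
    # once the rebuild finishes.
    hpacucli ctrl slot=0 ld all show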



The most recent entries in the RIS Event Log:



> Event 123  2016-02-08 11:56:54  Hot Plug Physical Drive Change: Removed.
>   Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0.
>   Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
>
> Event 124  2016-02-08 12:16:57  Hot Plug Physical Drive Change: Inserted.
>   Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0.
>   Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
>
> Event 125  2016-02-08 12:16:57  Logical Drive Status: State change, logical drive 0x0000.
>   Previous logical drive state (0x03): Logical drive is degraded.
>   New logical drive state (0x04): Logical drive is ready for recovery operation.
>   Spare status (0x00): No spare configured
>
> Event 126  2016-02-08 12:16:57  Logical Drive Status: State change, logical drive 0x0000.
>   Previous logical drive state (0x04): Logical drive is ready for recovery operation.
>   New logical drive state (0x05): Logical drive is currently recovering.
>   Spare status (0x00): No spare configured
>
> Event 127  2016-02-08 12:51:51  Logical Drive Status: State change, logical drive 0x0000.
>   Previous logical drive state (0x05): Logical drive is currently recovering.
>   New logical drive state (0x00): Logical drive OK.
>   Spare status (0x00): No spare configured
>
> Event 128  2016-02-09 03:23:22  Logical Drive Surface Analysis: Surface Analysis pass information.
>   Block count: 00000000. Drive No: 00. Starting Address: 00000848:00000000.




I have attached the ADU report and a screenshot.



Does this mean the new disk is still recovering? If so, why is its status "predicted to fail", why is the recovery taking so long, and why didn't the HP utility mark it as rebuilding?
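One way to see why the controller keeps flagging the drive is to read its SMART data directly. On Linux, smartmontools can address disks behind a Smart Array controller via the cciss/hpsa driver; in this sketch the drive index 0 and the /dev/sg0 device node are assumptions that depend on the system:

    # Query SMART data for the first physical drive behind the controller;
    # -d cciss,N selects the Nth drive, and /dev/sg0 is whichever SCSI
    # generic node maps to the Smart Array (check with lsscsi or sg_map).
    smartctl -a -d cciss,0 /dev/sg0

For a SAS drive this prints a health summary and the grown defect list; a failing health status or a large defect count would explain why the controller reports the drive as predicted to fail.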



ADU report: https://www.dropbox.com/s/70ucdsiafzdwvfr/ADUReport.zip?dl=0




[Screenshot: HP Array Configuration Utility showing the drive's predictive-failure status]
