failed drive in raid 1 array
on 23.02.2011 17:52:39 by Roberto Nunnari
Hello.
I have a linux box, with two 2TB sata HD in raid 1.
Now, one disk is in failed state and it has no spares:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[2](F) sda4[0]
1910200704 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda2[0]
40957568 blocks [2/2] [UU]
unused devices:
The drives are not hot-plug, so I need to shutdown the box.
My plan is to:
# sfdisk -d /dev/sdb > sdb.sfdisk
# mdadm /dev/md1 -r /dev/sdb4
# mdadm /dev/md0 -r /dev/sdb1
# shutdown -h now
replace the disk and boot (it should come back up, even without one
drive, right?)
# sfdisk /dev/sdb < sdb.sfdisk
# mdadm /dev/md1 -a /dev/sdb4
# mdadm /dev/md0 -a /dev/sdb1
and the drives should start to resync, right?
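Whether a member has failed, and later whether the resync has started, can both be read out of /proc/mdstat. A minimal sketch (hedged: it embeds the snapshot shown above so it runs anywhere; on the live box you would read /proc/mdstat itself):

```shell
#!/bin/sh
# Flag failed members ("(F)") and degraded arrays ("_" in the status
# bitmap) in an mdstat snapshot. The sample below is the snapshot from
# this message; on a live system read /proc/mdstat instead.
mdstat=$(cat <<'EOF'
Personalities : [raid1]
md1 : active raid1 sdb4[2](F) sda4[0]
      1910200704 blocks [2/1] [U_]

md0 : active raid1 sdb1[1] sda2[0]
      40957568 blocks [2/2] [UU]
EOF
)
# Members marked failed:
echo "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)'
# Arrays whose status bitmap contains "_" (a missing mirror half):
echo "$mdstat" | awk '/^md/ {dev=$1} /\[[U_]*_[U_]*\]/ {print dev " is degraded"}'
```

During the resync the blocks line also gains a progress indicator, so watching the same file answers the "did it start?" question.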
This is the first time I've done such a thing, so please correct me
if the above is not correct, or is not best practice for
my configuration.
My last backup of md1 is of mid november, so I need to be
pretty sure I will not lose my data (over 1TB).
A bit about my environment:
# mdadm --version
mdadm - v1.12.0 - 14 June 2005
# cat /etc/redhat-release
CentOS release 4.8 (Final)
# uname -rms
Linux 2.6.9-89.31.1.ELsmp i686
Thank you very much and best regards.
Robi
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: failed drive in raid 1 array
on 23.02.2011 18:56:52 by Roberto Spadim
sata2 without hot plug?
check if your sda sdb sdc names will change after removing it; it depends
on your udev or another /dev filesystem
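One way to guard against names shifting is to record, before shutdown, which serial number udev currently maps to each node under /dev/disk/by-id. A hedged sketch (the directory and serial strings here are mocked so the script is runnable anywhere; on the server, point it at the real /dev/disk/by-id):

```shell
#!/bin/sh
# Print a "by-id name -> sdX" table so the drive being replaced can be
# identified by serial number after reboot. The directory and serial
# strings are made up for illustration; use /dev/disk/by-id for real.
byid=$(mktemp -d)
ln -s ../../sda "$byid/ata-SAMPLE_DISK_SERIALAAA"    # hypothetical entries
ln -s ../../sdb "$byid/ata-SAMPLE_DISK_SERIALBBB"

mapping=$(for link in "$byid"/ata-*; do
    printf '%s -> %s\n' "${link##*/}" "$(readlink "$link" | sed 's|.*/||')"
done)
echo "$mapping"
rm -rf "$byid"
```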
-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: failed drive in raid 1 array
on 23.02.2011 19:20:08 by Albert Pauw
On 02/23/11 06:56 PM, Roberto Spadim wrote:
> sata2 without hot plug?
> check if your sda sdb sdc names will change after removing it; it depends
> on your udev or another /dev filesystem
>
> 2011/2/23 Roberto Nunnari:
>> Hello.
>>
>> I have a linux box, with two 2TB sata HD in raid 1.
>>
>> Now, one disk is in failed state and it has no spares:
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md1 : active raid1 sdb4[2](F) sda4[0]
>> 1910200704 blocks [2/1] [U_]
>>
>> md0 : active raid1 sdb1[1] sda2[0]
>> 40957568 blocks [2/2] [UU]
>>
>> unused devices:
>>
>>
>> The drives are not hot-plug, so I need to shutdown the box.
>>
>> My plan is to:
>> # sfdisk -d /dev/sdb > sdb.sfdisk
>> # mdadm /dev/md1 -r /dev/sdb4
-> removing should be ok, as the partition has failed in md1
>> # mdadm /dev/md0 -r /dev/sdb1
-> In this case, sdb1 hasn't failed according to the output of
/proc/mdstat, so you must fail it first, otherwise you can't remove it:
mdadm /dev/md0 -f /dev/sdb1
mdadm /dev/md0 -r /dev/sdb1
>> # shutdown -h now
>>
>> replace the disk and boot (it should come back up, even without one
>> drive, right?)
>>
>> # sfdisk /dev/sdb < sdb.sfdisk
>> # mdadm /dev/md1 -a /dev/sdb4
>> # mdadm /dev/md0 -a /dev/sdb1
>>
>> and the drives should start to resync, right?
>>
>> This is my first time I do such a thing, so please, correct me
>> if the above is not correct, or is not a best practice for
>> my configuration.
>>
>> My last backup of md1 is of mid november, so I need to be
>> pretty sure I will not lose my data (over 1TB).
>>
>> A bit abount my environment:
>> # mdadm --version
>> mdadm - v1.12.0 - 14 June 2005
>> # cat /etc/redhat-release
>> CentOS release 4.8 (Final)
>> # uname -rms
>> Linux 2.6.9-89.31.1.ELsmp i686
What about sdb2 and sdb3? Are they in use as normal mountpoints, or as swap?
If so, these should be commented out in /etc/fstab
before you change the disk.
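That commenting-out step can be scripted with sed; a sketch against a throwaway copy (the fstab contents and mount points below are illustrative assumptions; on the real box you would back up and edit /etc/fstab):

```shell
#!/bin/sh
# Comment out every fstab line referring to the disk being replaced
# (sdb2/sdb3 here), so the box boots without trying to mount them.
# Works on a temporary copy; the entries below are made up.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/md0   /        ext3   defaults   1 1
/dev/sdb2  /data    ext3   defaults   1 2
/dev/sdb3  swap     swap   defaults   0 0
EOF
# Prefix matching lines with '#'; '&' re-inserts the matched text.
result=$(sed 's|^/dev/sdb[23][[:space:]]|#&|' "$fstab")
echo "$result"
rm -f "$fstab"
```

Remember to uncomment the entries (or point them at the new partitions) once the replacement disk is partitioned and synced.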
Re: failed drive in raid 1 array
on 23.02.2011 20:16:17 by Roberto Nunnari
Roberto Spadim wrote:
> sata2 without hot plug?
Hi Roberto.
I mean that there is no hot-plug bay, with sliding rails etc..
The drives are connected to the mb using standard sata cables.
> check if your sda sdb sdc names will change after removing it; it depends
> on your udev or another /dev filesystem
Ok, thank you.
That means that if I take care to check the above, and
the new drive comes up as sdb, then taking the steps indicated
in my original post will do the job?
Best regards.
Robi
Re: failed drive in raid 1 array
on 23.02.2011 20:20:39 by Roberto Spadim
i don't know how you set up your kernel (with or without raid
autodetect?). do you use the kernel command line to set up raid? autodetect?
here on my test machine i'm using the kernel command line (grub); i don't
have a server with a hotplug bay, so i open the case and remove the wire
with my hands =) after reconnecting it with another device, the kernel
recognizes the new device, rereads the partitions etc., and i can add
it to the array again
my grub is something like:
md=0,/dev/sda,/dev/sdb .....
internal metadata, raid1. i didn't like the autodetect (it's good,
but i prefer a hardcoded kernel command line; it's not good with usb
devices)
-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: failed drive in raid 1 array
on 23.02.2011 22:21:09 by Roberto Nunnari
Albert Pauw wrote:
> On 02/23/11 06:56 PM, Roberto Spadim wrote:
>> sata2 without hot plug?
>> check if your sda sdb sdc names will change after removing it; it depends
>> on your udev or another /dev filesystem
>>
>> 2011/2/23 Roberto Nunnari:
>>> Hello.
>>>
>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>
>>> Now, one disk is in failed state and it has no spares:
>>> # cat /proc/mdstat
>>> Personalities : [raid1]
>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>> 1910200704 blocks [2/1] [U_]
>>>
>>> md0 : active raid1 sdb1[1] sda2[0]
>>> 40957568 blocks [2/2] [UU]
>>>
>>> unused devices:
>>>
>>>
>>> The drives are not hot-plug, so I need to shutdown the box.
>>>
>>> My plan is to:
>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>> # mdadm /dev/md1 -r /dev/sdb4
> -> removing should be ok, as the partition has failed in md1
ok.
>>> # mdadm /dev/md0 -r /dev/sdb1
> -> In this case, sdb1 hasn't failed according to the output of
> /proc/mdstat, so you should fail it otherwise you can't remove it:
> mdadm /dev/md0 -f /dev/sdb1
> mdadm /dev/md0 -r /dev/sdb1
good to know! Thank you.
>
>>> # shutdown -h now
>>>
>>> replace the disk and boot (it should come back up, even without one
>>> drive,
>>> right?)
>>>
>>> # sfdisk /dev/sdb < sdb.sfdisk
>>> # mdadm /dev/md1 -a /dev/sdb4
>>> # mdadm /dev/md0 -a /dev/sdb1
>>>
>>> and the drives should start to resync, right?
>>>
>>> This is my first time I do such a thing, so please, correct me
>>> if the above is not correct, or is not a best practice for
>>> my configuration.
>>>
>>> My last backup of md1 is of mid november, so I need to be
>>> pretty sure I will not lose my data (over 1TB).
>>>
>>> A bit abount my environment:
>>> # mdadm --version
>>> mdadm - v1.12.0 - 14 June 2005
>>> # cat /etc/redhat-release
>>> CentOS release 4.8 (Final)
>>> # uname -rms
>>> Linux 2.6.9-89.31.1.ELsmp i686
> What about sdb2 and sdb3? Are they in use as normal mountpoints, or as swap?
> Then these should be commented out in /etc/fstab
> before you change the disk.
Yes, they're normal mount points, so I'll have to
comment them out before rebooting, especially the swap partition.
Thank you for pointing that out!
Best regards.
Robi
Re: failed drive in raid 1 array
on 23.02.2011 22:24:14 by Roberto Nunnari
Roberto Spadim wrote:
> i don't know how you set up your kernel (with or without raid
I use the official CentOS kernel with no modification and don't
know about raid autodetect, but:
# cat /boot/config-2.6.24-28-server |grep -i raid
CONFIG_BLK_DEV_3W_XXXX_RAID=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_RAID5_RESHAPE=y
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_SAS=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI_AACRAID=m
> autodetect?) do you use kernel command line to setup raid? autodetect?
/dev/md0 in grub
I don't know if that means autodetect, but I guess so..
> here in my test machine i'm using kernel command line (grub), i don't
> have a server with hotplug bay, i open the case and remove the wire
> with my hands =) after reconnecting it with another device the kernel
Is it safe? Isn't there a risk of frying the controller and/or disk?
> recognizes the new device, rereads the partitions etc etc and i can add
> it to array again
> my grub is something like:
>
> md=0,/dev/sda,/dev/sdb .....
>
> internal metadata, raid1, i didn't like the autodetect (it's good)
> but i prefer hardcoded kernel command line (it's not good with usb
> devices)
the relevant part of my grub is:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.9-89.31.1.ELsmp)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb quiet
        initrd /initrd-2.6.9-89.31.1.ELsmp.img
Best regards.
Robi
Re: failed drive in raid 1 array
on 23.02.2011 22:34:59 by Roberto Spadim
hum, maybe you are using mdadm.conf or autodetect. non-autodetect
should be something like this (i don't know the best solution, but it
works hehe):
kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb
quiet md=0,/dev/sda,/dev/sdb md=1,xxxx,yyyy.....
or another md array...
hmm, i read the sata specification and removing isn't a problem: at the
electronic level the sata channel is only data, no power source, and all
channels are differential (like rs422 or rs485), so i don't see any problem
removing it. i tried to hot-plug a revodrive (pci-express ssd) and it
didn't work (reboot) hehe; pci-express isn't hot-plug =P. sata2 doesn't
have problems; the main problem is a short circuit at the power source. if
you remove it with caution, no problems =)
i tried in some other distros and udev created a new device when adding
a different disk: for example, remove sdb, and adding another disk creates
sdc (not sdb). maybe with another udev configuration it would work
-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: failed drive in raid 1 array
on 23.02.2011 23:13:06 by Roberto Nunnari
Roberto Spadim wrote:
> hum, maybe you are using mdadm.conf or autodetect. non-autodetect
> should be something like this (i don't know the best solution, but it
> works hehe):
>
> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb
> quiet md=0,/dev/sda,/dev/sdb md=1,xxxx,yyyy.....
>
> or another md array...
>
> hmm, i read the sata specification and removing isn't a problem: at the
> electronic level the sata channel is only data, no power source, and all
> channels are differential (like rs422 or rs485), so i don't see any
> problem removing it. i tried to hot-plug a revodrive (pci-express ssd)
> and it didn't work (reboot) hehe; pci-express isn't hot-plug =P. sata2
> doesn't have problems; the main problem is a short circuit at the power
> source. if you remove it with caution, no problems =)
>
> i tried in some other distros and udev created a new device when adding
> a different disk: for example, remove sdb, and adding another disk
> creates sdc (not sdb). maybe with another udev configuration it would
> work
Ok. I'll keep all that in mind tomorrow.
Best regards.
Robi
Re: failed drive in raid 1 array
on 24.02.2011 17:05:53 by Iordan Iordanov
Hi guys,
I saw a bunch of discussion of devices changing names when hot-plugged.
If you get the device name right when you add it to the array first, all
is good, since the superblock is used to "discover" the device later.
However, to make things easier/clearer, and to avoid errors, one can
take a look at the set of directories:
/dev/disk/by-id
/dev/disk/by-path
/dev/disk/by-uuid
/dev/disk/by-label
for a predictable, more static view of the drives. The symlinks in these
directories are created by udev, and are simply links to the "real"
device nodes /dev/sd{a-z}*. You can either just use these symlinks as a
way of verifying that you are adding the right device, or add the device
using the symlink.
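Verifying before adding can be as simple as resolving the symlink and checking it lands on the node you expect. A hedged sketch with a mocked-up symlink (on a real system the link would live under /dev/disk/by-id and the name here is invented):

```shell
#!/bin/sh
# Resolve a by-id style symlink and confirm it points at the intended
# node before handing the device to mdadm (e.g. "mdadm /dev/md1 -a ...").
# Everything here is a temporary mock-up.
dir=$(mktemp -d)
: > "$dir/sdb4"                                      # stand-in device node
ln -s "$dir/sdb4" "$dir/ata-NEWDISK_SERIAL-part4"    # hypothetical id link

target=$(readlink -f "$dir/ata-NEWDISK_SERIAL-part4")
expected=$(readlink -f "$dir/sdb4")
if [ "$target" = "$expected" ]; then result=OK; else result=MISMATCH; fi
echo "$result: symlink resolves to $target"
rm -rf "$dir"
```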
At our location, we even augmented udev to add links to labeled GPT
partitions in /dev/disk/by-label, and now our drives/partitions look
like this:
iscsi00-drive00-part00 -> ../../sda1
iscsi00-drive01-part00 -> ../../sdb1
iscsi00-drive02-part00 -> ../../sdc1
iscsi00-drive03-part00 -> ../../sdd1
iscsi00-drive04-part00 -> ../../sde1
This way, we know exactly which bay contains exactly which drive, and it
stays this way. If you guys want, I can share with you the changes to
udev necessary and the script which extracts the GPT label and reports
it to udev for this magic to happen :). Please reply to this thread with
a request if you think it may be useful to you.
Cheers,
Iordan
On 02/23/11 17:13, Roberto Nunnari wrote:
> Roberto Spadim wrote:
>> hum, maybe you are using mdadm.conf or autodetect, non autodetect
>> should be something like this:
>> i don´t know the best solution, but it works ehhehe
>>
>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=3D/dev/md0 rhgb
>> quiet md=3D0,/dev/sda,/dev/sdb md=3D1,xxxx,yyyy.....
>>
>> or another md array...
>>
>> humm i readed the sata specification and removing isn´t a probl=
em, at
>> eletronic level the sata channel is only data, no power source, all
>> channels are diferencial (like rs422 or rs485), i don´t see any=
problem
>> removing it. i tryed hot plug a revodrive (pciexpress ssd) and it
>> don´t work (reboot) hehehe, pci-express isn´t hot plug =3D=
P, sata2 don´t
>> have problems, the main problem is a short circuit at power source, =
if
>> you remove with caution no problems =3D)
>>
>> i tried in some others distros and udev created a new device when ad=
d
>> a diferent disk for example, remove sdb, and add another disk create
>> sdc (not sdb), maybe with another udev configuration should work
>
> Ok. I'll keep all that in mind tomorrow.
> Best regards.
> Robi
>
>
>>
>>
>> 2011/2/23 Roberto Nunnari :
>>> Roberto Spadim wrote:
>>>> i don´t know how you setup your kernel (with or without raid
>>> I use the official CentOS kernel with no modification and don't
>>> know about raid autodetect, but:
>>> # cat /boot/config-2.6.24-28-server |grep -i raid
>>> CONFIG_BLK_DEV_3W_XXXX_RAID=3Dm
>>> CONFIG_MD_RAID0=3Dm
>>> CONFIG_MD_RAID1=3Dm
>>> CONFIG_MD_RAID10=3Dm
>>> CONFIG_MD_RAID456=3Dm
>>> CONFIG_MD_RAID5_RESHAPE=3Dy
>>> CONFIG_MEGARAID_LEGACY=3Dm
>>> CONFIG_MEGARAID_MAILBOX=3Dm
>>> CONFIG_MEGARAID_MM=3Dm
>>> CONFIG_MEGARAID_NEWGEN=3Dy
>>> CONFIG_MEGARAID_SAS=3Dm
>>> CONFIG_RAID_ATTRS=3Dm
>>> CONFIG_SCSI_AACRAID=3Dm
>>>
>>>
>>>> autodetect?) do you use kernel command line to setup raid? autodetect?
>>> /dev/md0 in grub
>>> I don't know if that means autodetect, but I guess so..
>>>
>>>
>>>> here in my test machine i'm using kernel command line (grub), i don't
>>>> have a server with a hotplug bay, i open the case and remove the wire
>>>> with my hands =) after reconnecting it with another device the kernel
>>> Is it safe? Isn't it a blind bet to fry up the controller and/or disk?
>>>
>>>
>>>> recognizes the new device, rereads the partitions etc etc and i can add
>>>> it to array again
>>>> my grub is something like:
>>>>
>>>> md=0,/dev/sda,/dev/sdb .....
>>>>
>>>> internal meta data, raid1, i didn't like the autodetect (it's good)
>>>> but i prefer hardcoded kernel command line (it's not good with usb
>>>> devices)
>>> the relevant part of my grub is:
>>>
>>> default=0
>>> timeout=5
>>> splashimage=(hd0,0)/grub/splash.xpm.gz
>>> hiddenmenu
>>> title CentOS (2.6.9-89.31.1.ELsmp)
>>> root (hd0,0)
>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb quiet
>>> initrd /initrd-2.6.9-89.31.1.ELsmp.img
>>>
>>> Best regards.
>>> Robi
>>>
>>>
>>>> 2011/2/23 Roberto Nunnari :
>>>>> Roberto Spadim wrote:
>>>>>> sata2 without hot plug?
>>>>> Hi Roberto.
>>>>>
>>>>> I mean that there is no hot-plug bay, with sliding rails etc..
>>>>> The drives are connected to the mb using standard sata cables.
>>>>>
>>>>>
>>>>>> check if your sda sdb sdc will change after removing it, it depends
>>>>>> on your udev or another /dev filesystem
>>>>> Ok, thank you.
>>>>> That means that if I take care to check the above, and
>>>>> the new drive will be sdb, then taking the steps indicated
>>>>> in my original post will do the job?
>>>>>
>>>>> Best regards.
>>>>> Robi
>>>>>
>>>>>
>>>>>> 2011/2/23 Roberto Nunnari :
>>>>>>> Hello.
>>>>>>>
>>>>>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>>>>>
>>>>>>> Now, one disk is in failed state and it has no spares:
>>>>>>> # cat /proc/mdstat
>>>>>>> Personalities : [raid1]
>>>>>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>>>>>> 1910200704 blocks [2/1] [U_]
>>>>>>>
>>>>>>> md0 : active raid1 sdb1[1] sda2[0]
>>>>>>> 40957568 blocks [2/2] [UU]
>>>>>>>
>>>>>>> unused devices:
>>>>>>>
>>>>>>>
>>>>>>> The drives are not hot-plug, so I need to shutdown the box.
>>>>>>>
>>>>>>> My plan is to:
>>>>>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>>>>>> # mdadm /dev/md1 -r /dev/sdb4
>>>>>>> # mdadm /dev/md0 -r /dev/sdb1
>>>>>>> # shutdown -h now
>>>>>>>
>>>>>>> replace the disk and boot (it should come back up, even without one
>>>>>>> drive, right?)
>>>>>>>
>>>>>>> # sfdisk /dev/sdb < sdb.sfdisk
>>>>>>> # mdadm /dev/md1 -a /dev/sdb4
>>>>>>> # mdadm /dev/md0 -a /dev/sdb1
>>>>>>>
>>>>>>> and the drives should start to resync, right?
>>>>>>>
>>>>>>> This is my first time I do such a thing, so please, correct me
>>>>>>> if the above is not correct, or is not a best practice for
>>>>>>> my configuration.
>>>>>>>
>>>>>>> My last backup of md1 is of mid november, so I need to be
>>>>>>> pretty sure I will not lose my data (over 1TB).
>>>>>>>
>>>>>>> A bit about my environment:
>>>>>>> # mdadm --version
>>>>>>> mdadm - v1.12.0 - 14 June 2005
>>>>>>> # cat /etc/redhat-release
>>>>>>> CentOS release 4.8 (Final)
>>>>>>> # uname -rms
>>>>>>> Linux 2.6.9-89.31.1.ELsmp i686
>>>>>>>
>>>>>>> Thank you very much and best regards.
>>>>>>> Robi
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
Re: failed drive in raid 1 array
on 24.02.2011 21:08:41 by Roberto Spadim
do you have the udev configuration for this (static)?
2011/2/24 Iordan Iordanov :
> Hi guys,
>
> I saw a bunch of discussion of devices changing names when hot-plugged. If
> you get the device name right when you add it to the array first, all is
> good since the superblock is used to "discover" the device later.
>
> However, to make things easier/clearer, and to avoid errors, one can take a
> look at the set of directories:
>
> /dev/disk/by-id
> /dev/disk/by-path
> /dev/disk/by-uuid
> /dev/disk/by-label
>
> for a predictable, more static view of the drives. The symlinks in these
> directories are created by udev, and are simply links to the "real" device
> nodes /dev/sd{a-z}*. You can either just use these symlinks as a way of
> verifying that you are adding the right device, or add the device using the
> symlink.
>
> At our location, we even augmented udev to add links to labeled GPT
> partitions in /dev/disk/by-label, and now our drives/partitions look like
> this:
>
> iscsi00-drive00-part00 -> ../../sda1
> iscsi00-drive01-part00 -> ../../sdb1
> iscsi00-drive02-part00 -> ../../sdc1
> iscsi00-drive03-part00 -> ../../sdd1
> iscsi00-drive04-part00 -> ../../sde1
>
> This way, we know exactly which bay contains exactly which drive, and it
> stays this way. If you guys want, I can share with you the changes to udev
> necessary and the script which extracts the GPT label and reports it to udev
> for this magic to happen :). Please reply to this thread with a request if
> you think it may be useful to you.
>
> Cheers,
> Iordan
>
>
> On 02/23/11 17:13, Roberto Nunnari wrote:
>>
>> Roberto Spadim wrote:
>>>
>>> hum, maybe you are using mdadm.conf or autodetect, non autodetect
>>> should be something like this:
>>> i don't know the best solution, but it works ehhehe
>>>
>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb
>>> quiet md=0,/dev/sda,/dev/sdb md=1,xxxx,yyyy.....
>>>
>>> or another md array...
>>>
>>> humm i read the sata specification and removing isn't a problem, at
>>> the electronic level the sata channel is only data, no power source, all
>>> channels are differential (like rs422 or rs485), i don't see any problem
>>> removing it. i tried to hot plug a revodrive (pciexpress ssd) and it
>>> didn't work (reboot) hehehe, pci-express isn't hot plug =P, sata2 doesn't
>>> have problems, the main problem is a short circuit at the power source, if
>>> you remove with caution no problems =)
>>>
>>> i tried in some other distros and udev created a new device when adding
>>> a different disk; for example, remove sdb, and add another disk creates
>>> sdc (not sdb), maybe with another udev configuration it should work
>>
>> Ok. I'll keep all that in mind tomorrow.
>> Best regards.
>> Robi
>>
>>
>>>
>>>
>>> 2011/2/23 Roberto Nunnari :
>>>>
>>>> Roberto Spadim wrote:
>>>>>
>>>>> i don't know how you setup your kernel (with or without raid
>>>>
>>>> I use the official CentOS kernel with no modification and don't
>>>> know about raid autodetect, but:
>>>> # cat /boot/config-2.6.24-28-server |grep -i raid
>>>> CONFIG_BLK_DEV_3W_XXXX_RAID=m
>>>> CONFIG_MD_RAID0=m
>>>> CONFIG_MD_RAID1=m
>>>> CONFIG_MD_RAID10=m
>>>> CONFIG_MD_RAID456=m
>>>> CONFIG_MD_RAID5_RESHAPE=y
>>>> CONFIG_MEGARAID_LEGACY=m
>>>> CONFIG_MEGARAID_MAILBOX=m
>>>> CONFIG_MEGARAID_MM=m
>>>> CONFIG_MEGARAID_NEWGEN=y
>>>> CONFIG_MEGARAID_SAS=m
>>>> CONFIG_RAID_ATTRS=m
>>>> CONFIG_SCSI_AACRAID=m
>>>>
>>>>
>>>>> autodetect?) do you use kernel command line to setup raid? autodetect?
>>>>
>>>> /dev/md0 in grub
>>>> I don't know if that means autodetect, but I guess so..
>>>>
>>>>
>>>>> here in my test machine i'm using kernel command line (grub), i don't
>>>>> have a server with a hotplug bay, i open the case and remove the wire
>>>>> with my hands =) after reconnecting it with another device the kernel
>>>>
>>>> Is it safe? Isn't it a blind bet to fry up the controller and/or disk?
>>>>
>>>>
>>>>> recognizes the new device, rereads the partitions etc etc and i can add
>>>>> it to array again
>>>>> my grub is something like:
>>>>>
>>>>> md=0,/dev/sda,/dev/sdb .....
>>>>>
>>>>> internal meta data, raid1, i didn't like the autodetect (it's good)
>>>>> but i prefer hardcoded kernel command line (it's not good with usb
>>>>> devices)
>>>>
>>>> the relevant part of my grub is:
>>>>
>>>> default=0
>>>> timeout=5
>>>> splashimage=(hd0,0)/grub/splash.xpm.gz
>>>> hiddenmenu
>>>> title CentOS (2.6.9-89.31.1.ELsmp)
>>>> root (hd0,0)
>>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb quiet
>>>> initrd /initrd-2.6.9-89.31.1.ELsmp.img
>>>>
>>>> Best regards.
>>>> Robi
>>>>
>>>>
>>>>> 2011/2/23 Roberto Nunnari :
>>>>>>
>>>>>> Roberto Spadim wrote:
>>>>>>>
>>>>>>> sata2 without hot plug?
>>>>>>
>>>>>> Hi Roberto.
>>>>>>
>>>>>> I mean that there is no hot-plug bay, with sliding rails etc..
>>>>>> The drives are connected to the mb using standard sata cables.
>>>>>>
>>>>>>
>>>>>>> check if your sda sdb sdc will change after removing it, it depends
>>>>>>> on your udev or another /dev filesystem
>>>>>>
>>>>>> Ok, thank you.
>>>>>> That means that if I take care to check the above, and
>>>>>> the new drive will be sdb, then taking the steps indicated
>>>>>> in my original post will do the job?
>>>>>>
>>>>>> Best regards.
>>>>>> Robi
>>>>>>
>>>>>>
>>>>>>> 2011/2/23 Roberto Nunnari :
>>>>>>>>
>>>>>>>> Hello.
>>>>>>>>
>>>>>>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>>>>>>
>>>>>>>> Now, one disk is in failed state and it has no spares:
>>>>>>>> # cat /proc/mdstat
>>>>>>>> Personalities : [raid1]
>>>>>>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>>>>>>> 1910200704 blocks [2/1] [U_]
>>>>>>>>
>>>>>>>> md0 : active raid1 sdb1[1] sda2[0]
>>>>>>>> 40957568 blocks [2/2] [UU]
>>>>>>>>
>>>>>>>> unused devices:
>>>>>>>>
>>>>>>>>
>>>>>>>> The drives are not hot-plug, so I need to shutdown the box.
>>>>>>>>
>>>>>>>> My plan is to:
>>>>>>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>>>>>>> # mdadm /dev/md1 -r /dev/sdb4
>>>>>>>> # mdadm /dev/md0 -r /dev/sdb1
>>>>>>>> # shutdown -h now
>>>>>>>>
>>>>>>>> replace the disk and boot (it should come back up, even without one
>>>>>>>> drive, right?)
>>>>>>>>
>>>>>>>> # sfdisk /dev/sdb < sdb.sfdisk
>>>>>>>> # mdadm /dev/md1 -a /dev/sdb4
>>>>>>>> # mdadm /dev/md0 -a /dev/sdb1
>>>>>>>>
>>>>>>>> and the drives should start to resync, right?
>>>>>>>>
>>>>>>>> This is my first time I do such a thing, so please, correct me
>>>>>>>> if the above is not correct, or is not a best practice for
>>>>>>>> my configuration.
>>>>>>>>
>>>>>>>> My last backup of md1 is of mid november, so I need to be
>>>>>>>> pretty sure I will not lose my data (over 1TB).
>>>>>>>>
>>>>>>>> A bit about my environment:
>>>>>>>> # mdadm --version
>>>>>>>> mdadm - v1.12.0 - 14 June 2005
>>>>>>>> # cat /etc/redhat-release
>>>>>>>> CentOS release 4.8 (Final)
>>>>>>>> # uname -rms
>>>>>>>> Linux 2.6.9-89.31.1.ELsmp i686
>>>>>>>>
>>>>>>>> Thank you very much and best regards.
>>>>>>>> Robi
>>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: failed drive in raid 1 array
on 24.02.2011 22:32:58 by Iordan Iordanov
Hi Roberto (Spadim),
I am attaching the two files necessary for this functionality. The first
one (gpt_id) is the script which, given this example input:
gpt_id sdb 1
should give this example output:
PARTITION_LABEL=itest00-drive00-part00
The second file is a udev configuration file which needs to be dropped
into /etc/udev/rules.d/. When a new device is attached, it runs gpt_id
on its partitions, and if a GPT label is found, a link in
/dev/disk/by-label magically appears to the partition in question.
To create a GPT label and name a 100GB partition on /dev/sdb, one would
do something like this (WARNING, WARNING, WARNING THIS IS A
DATA-DESTRUCTIVE PROCESS):
parted /dev/sdb
mklabel y gpt
mkpart primary ext3 0 100GB
name 1 itest00-drive00-part00
print
quit
To trigger udevadm to rescan all the devices and remake all the
symlinks, you can run:
udevadm trigger
The gpt_id and the udev rules file are home-brewed at our department. Enjoy!
Cheers,
Iordan
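[Archive note: the gpt_id attachment (base64 further below) decodes to a short POSIX shell script that runs `parted -sm` on the parent device and extracts field 6 (the GPT partition name) from the matching partition line. The sketch below reproduces that parsing step against canned parted output, so it can be run without a real disk; the sample device and sizes are illustrative only.]

```shell
#!/bin/sh
# Sketch of the attached gpt_id script's core logic. "parted -sm" emits
# one colon-separated record per partition; field 6 is the GPT name.
# Canned sample output stands in for a real "parted -sm /dev/sdb print".
sample='BYT;
/dev/sdb:100GB:scsi:512:512:gpt:Example Disk;
1:17.4kB:100GB:100GB:ext3:itest00-drive00-part00:;'

PARTITION=1    # gpt_id's second argument: the partition number

PARTITION_LABEL=$(printf '%s\n' "$sample" \
    | grep "^$PARTITION:" | awk -F: '{print $6}')

echo "PARTITION_LABEL=$PARTITION_LABEL"
```

Against real hardware the actual script pipes `parted -sm /dev/"$PARENT_DEVICE" print` into the same grep/awk pair.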
On 02/24/11 15:08, Roberto Spadim wrote:
> do you have the udev configuration for this (static)?
>
> 2011/2/24 Iordan Iordanov:
>> Hi guys,
>>
>> I saw a bunch of discussion of devices changing names when hot-plugged. If
>> you get the device name right when you add it to the array first, all is
>> good since the superblock is used to "discover" the device later.
>>
>> However, to make things easier/clearer, and to avoid errors, one can take a
>> look at the set of directories:
>>
>> /dev/disk/by-id
>> /dev/disk/by-path
>> /dev/disk/by-uuid
>> /dev/disk/by-label
>>
>> for a predictable, more static view of the drives. The symlinks in these
>> directories are created by udev, and are simply links to the "real" device
>> nodes /dev/sd{a-z}*. You can either just use these symlinks as a way of
>> verifying that you are adding the right device, or add the device using the
>> symlink.
>>
>> At our location, we even augmented udev to add links to labeled GPT
>> partitions in /dev/disk/by-label, and now our drives/partitions look like
>> this:
>>
>> iscsi00-drive00-part00 -> ../../sda1
>> iscsi00-drive01-part00 -> ../../sdb1
>> iscsi00-drive02-part00 -> ../../sdc1
>> iscsi00-drive03-part00 -> ../../sdd1
>> iscsi00-drive04-part00 -> ../../sde1
>>
>> This way, we know exactly which bay contains exactly which drive, and it
>> stays this way. If you guys want, I can share with you the changes to udev
>> necessary and the script which extracts the GPT label and reports it to udev
>> for this magic to happen :). Please reply to this thread with a request if
>> you think it may be useful to you.
>>
>> Cheers,
>> Iordan
>>
>>
>> On 02/23/11 17:13, Roberto Nunnari wrote:
>>>
>>> Roberto Spadim wrote:
>>>>
>>>> hum, maybe you are using mdadm.conf or autodetect, non autodetect
>>>> should be something like this:
>>>> i don't know the best solution, but it works ehhehe
>>>>
>>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb
>>>> quiet md=0,/dev/sda,/dev/sdb md=1,xxxx,yyyy.....
>>>>
>>>> or another md array...
>>>>
>>>> humm i read the sata specification and removing isn't a problem, at
>>>> the electronic level the sata channel is only data, no power source, all
>>>> channels are differential (like rs422 or rs485), i don't see any problem
>>>> removing it. i tried to hot plug a revodrive (pciexpress ssd) and it
>>>> didn't work (reboot) hehehe, pci-express isn't hot plug =P, sata2 doesn't
>>>> have problems, the main problem is a short circuit at the power source, if
>>>> you remove with caution no problems =)
>>>>
>>>> i tried in some other distros and udev created a new device when adding
>>>> a different disk; for example, remove sdb, and add another disk creates
>>>> sdc (not sdb), maybe with another udev configuration it should work
>>>
>>> Ok. I'll keep all that in mind tomorrow.
>>> Best regards.
>>> Robi
>>>
>>>
>>>>
>>>>
>>>> 2011/2/23 Roberto Nunnari:
>>>>>
>>>>> Roberto Spadim wrote:
>>>>>>
>>>>> i don't know how you setup your kernel (with or without raid
>>>>>
>>>>> I use the official CentOS kernel with no modification and don't
>>>>> know about raid autodetect, but:
>>>>> # cat /boot/config-2.6.24-28-server |grep -i raid
>>>>> CONFIG_BLK_DEV_3W_XXXX_RAID=m
>>>>> CONFIG_MD_RAID0=m
>>>>> CONFIG_MD_RAID1=m
>>>>> CONFIG_MD_RAID10=m
>>>>> CONFIG_MD_RAID456=m
>>>>> CONFIG_MD_RAID5_RESHAPE=y
>>>>> CONFIG_MEGARAID_LEGACY=m
>>>>> CONFIG_MEGARAID_MAILBOX=m
>>>>> CONFIG_MEGARAID_MM=m
>>>>> CONFIG_MEGARAID_NEWGEN=y
>>>>> CONFIG_MEGARAID_SAS=m
>>>>> CONFIG_RAID_ATTRS=m
>>>>> CONFIG_SCSI_AACRAID=m
>>>>>
>>>>>
>>>>>> autodetect?) do you use kernel command line to setup raid? autodetect?
>>>>>
>>>>> /dev/md0 in grub
>>>>> I don't know if that means autodetect, but I guess so..
>>>>>
>>>>>
>>>>>> here in my test machine i'm using kernel command line (grub), i don't
>>>>>> have a server with a hotplug bay, i open the case and remove the wire
>>>>>> with my hands =) after reconnecting it with another device the kernel
>>>>>
>>>>> Is it safe? Isn't it a blind bet to fry up the controller and/or disk?
>>>>>
>>>>>
>>>>>> recognizes the new device, rereads the partitions etc etc and i can add
>>>>>> it to array again
>>>>>> my grub is something like:
>>>>>>
>>>>>> md=0,/dev/sda,/dev/sdb .....
>>>>>>
>>>>>> internal meta data, raid1, i didn't like the autodetect (it's good)
>>>>>> but i prefer hardcoded kernel command line (it's not good with usb
>>>>>> devices)
>>>>>
>>>>> the relevant part of my grub is:
>>>>>
>>>>> default=0
>>>>> timeout=5
>>>>> splashimage=(hd0,0)/grub/splash.xpm.gz
>>>>> hiddenmenu
>>>>> title CentOS (2.6.9-89.31.1.ELsmp)
>>>>> root (hd0,0)
>>>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb quiet
>>>>> initrd /initrd-2.6.9-89.31.1.ELsmp.img
>>>>>
>>>>> Best regards.
>>>>> Robi
>>>>>
>>>>>
>>>>>> 2011/2/23 Roberto Nunnari:
>>>>>>>
>>>>>>> Roberto Spadim wrote:
>>>>>>>>
>>>>>>>> sata2 without hot plug?
>>>>>>>
>>>>>>> Hi Roberto.
>>>>>>>
>>>>>>> I mean that there is no hot-plug bay, with sliding rails etc..
>>>>>>> The drives are connected to the mb using standard sata cables.
>>>>>>>
>>>>>>>
>>>>>>>> check if your sda sdb sdc will change after removing it, it depends
>>>>>>>> on your udev or another /dev filesystem
>>>>>>>
>>>>>>> Ok, thank you.
>>>>>>> That means that if I take care to check the above, and
>>>>>>> the new drive will be sdb, then taking the steps indicated
>>>>>>> in my original post will do the job?
>>>>>>>
>>>>>>> Best regards.
>>>>>>> Robi
>>>>>>>
>>>>>>>
>>>>>>>> 2011/2/23 Roberto Nunnari:
>>>>>>>>>
>>>>>>>>> Hello.
>>>>>>>>>
>>>>>>>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>>>>>>>
>>>>>>>>> Now, one disk is in failed state and it has no spares:
>>>>>>>>> # cat /proc/mdstat
>>>>>>>>> Personalities : [raid1]
>>>>>>>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>>>>>>>> 1910200704 blocks [2/1] [U_]
>>>>>>>>>
>>>>>>>>> md0 : active raid1 sdb1[1] sda2[0]
>>>>>>>>> 40957568 blocks [2/2] [UU]
>>>>>>>>>
>>>>>>>>> unused devices:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The drives are not hot-plug, so I need to shutdown the box.
>>>>>>>>>
>>>>>>>>> My plan is to:
>>>>>>>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>>>>>>>> # mdadm /dev/md1 -r /dev/sdb4
>>>>>>>>> # mdadm /dev/md0 -r /dev/sdb1
>>>>>>>>> # shutdown -h now
>>>>>>>>>
>>>>>>>>> replace the disk and boot (it should come back up, even without one
>>>>>>>>> drive,
>>>>>>>>> right?)
>>>>>>>>>
>>>>>>>>> # sfdisk /dev/sdb < sdb.sfdisk
>>>>>>>>> # mdadm /dev/md1 -a /dev/sdb4
>>>>>>>>> # mdadm /dev/md0 -a /dev/sdb1
>>>>>>>>>
>>>>>>>>> and the drives should start to resync, right?
>>>>>>>>>
>>>>>>>>> This is my first time I do such a thing, so please, correct me
>>>>>>>>> if the above is not correct, or is not a best practice for
>>>>>>>>> my configuration.
>>>>>>>>>
>>>>>>>>> My last backup of md1 is of mid november, so I need to be
>>>>>>>>> pretty sure I will not lose my data (over 1TB).
>>>>>>>>>
>>>>>>>>> A bit about my environment:
>>>>>>>>> # mdadm --version
>>>>>>>>> mdadm - v1.12.0 - 14 June 2005
>>>>>>>>> # cat /etc/redhat-release
>>>>>>>>> CentOS release 4.8 (Final)
>>>>>>>>> # uname -rms
>>>>>>>>> Linux 2.6.9-89.31.1.ELsmp i686
>>>>>>>>>
>>>>>>>>> Thank you very much and best regards.
>>>>>>>>> Robi
>>>
>
>
>
--------------050509000902030401040902
Content-Type: text/plain;
name="gpt_id"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="gpt_id"
IyEvYmluL3NoCgpQQVJFTlRfREVWSUNFPSIkMSIKUEFSVElUSU9OPSIkMiIK CiMgR2V0IHRo
ZSBsYWJlbCBvZiB0aGUgcGFydGl0aW9uIHVzaW5nIHBhcnRlZC4KUEFSVElU SU9OX0xBQkVM
PSJgcGFydGVkIC1zbSAvZGV2LyIkUEFSRU5UX0RFVklDRSIgcHJpbnQgfCBn cmVwICJeJFBB
UlRJVElPTjoiIHwgYXdrIC1GOiAne3ByaW50ICQ2fSdgIgoKZWNobyAiUEFS VElUSU9OX0xB
QkVMPSRQQVJUSVRJT05fTEFCRUwiCg==
--------------050509000902030401040902
Content-Type: text/plain;
name="10-gpt-label.rules"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="10-gpt-label.rules"
IyBUaGlzIGZpbGUgY29udGFpbnMgdGhlIHJ1bGVzIHRvIGNyZWF0ZSBieS1H UFQtbGFiZWwg
c3ltbGlua3MgZm9yIGRldmljZXMKCiMgZm9yd2FyZCBzY3NpIGRldmljZSBl dmVudHMgdG8g
dGhlIGNvcnJlc3BvbmRpbmcgYmxvY2sgZGV2aWNlCkFDVElPTj09ImNoYW5n ZSIsIFNVQlNZ
U1RFTT09InNjc2kiLCBFTlZ7REVWVFlQRX09PSJzY3NpX2RldmljZSIsIFwK CVRFU1Q9PSJi
bG9jayIsCQkJQVRUUntibG9jay8qL3VldmVudH09ImNoYW5nZSIKCiMgd2Ug YXJlIG9ubHkg
aW50ZXJlc3RlZCBpbiBhZGQgYW5kIGNoYW5nZSBhY3Rpb25zIGZvciBibG9j ayBkZXZpY2Vz
CkFDVElPTiE9ImFkZHxjaGFuZ2UiLAkJCUdPVE89ImdwdF9sYWJlbF9lbmQi ClNVQlNZU1RF
TSE9ImJsb2NrIiwJCQlHT1RPPSJncHRfbGFiZWxfZW5kIgoKIyBhbmQgd2Ug Y2FuIHNhZmVs
eSBpZ25vcmUgdGhlc2Uga2luZHMgb2YgZGV2aWNlcwpLRVJORUw9PSJtdGRb MC05XSp8bXRk
YmxvY2tbMC05XSp8cmFtKnxsb29wKnxmZCp8bmJkKnxnbmJkKnxkbS0qfG1k KnxidGlibSoi
LCBHT1RPPSJncHRfbGFiZWxfZW5kIgoKIyBza2lwIHJlbW92YWJsZSBpZGUg ZGV2aWNlcywg
YmVjYXVzZSBvcGVuKDIpIG9uIHRoZW0gY2F1c2VzIGFuIGV2ZW50cyBsb29w CktFUk5FTD09
ImhkKlshMC05XSIsIEFUVFJ7cmVtb3ZhYmxlfT09IjEiLCBEUklWRVJTPT0i aWRlLWNzfGlk
ZS1mbG9wcHkiLCBcCgkJCQkJR09UTz0iZ3B0X2xhYmVsX2VuZCIKS0VSTkVM PT0iaGQqWzAt
OV0iLCBBVFRSU3tyZW1vdmFibGV9PT0iMSIsIFwKCQkJCQlHT1RPPSJncHRf bGFiZWxfZW5k
IgoKIyBza2lwIHhlbiB2aXJ0dWFsIGhhcmQgZGlza3MKRFJJVkVSUz09InZi ZCIsCQkJCUdP
VE89Im5vX2hhcmR3YXJlX2lkIgoKIyBjaGVjayB0aGVzZSBhdHRyaWJ1dGVz IG9mIC9zeXMv
Y2xhc3MvYmxvY2sgbm9kZXMKRU5We0RFVlRZUEV9IT0iPyoiLCBBVFRSe3Jh bmdlfT09Ij8q
IiwJRU5We0RFVlRZUEV9PSJkaXNrIgpFTlZ7REVWVFlQRX0hPSI/KiIsIEFU VFJ7c3RhcnR9
PT0iPyoiLAlFTlZ7REVWVFlQRX09InBhcnRpdGlvbiIKCiMgcHJvYmUgR1BU IHBhcnRpdGlv
biBsYWJlbCBvZiBkaXNrcwpLRVJORUwhPSJzcioiLCBFTlZ7REVWVFlQRX09 PSJwYXJ0aXRp
b24iLCBJTVBPUlR7cHJvZ3JhbX09Ii9zYmluL2dwdF9pZCAkcGFyZW50ICRu dW1iZXIiCgpF
TlZ7UEFSVElUSU9OX0xBQkVMfT09Ij8qIiwgU1lNTElOSys9ImRpc2svYnkt bGFiZWwvJGVu
dntQQVJUSVRJT05fTEFCRUx9IgoKTEFCRUw9ImdwdF9sYWJlbF9lbmQiCg==
--------------050509000902030401040902--
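[Archive note: the second attachment (10-gpt-label.rules) base64-decodes to approximately the udev rules below; tab alignment is approximate. The `GOTO="no_hardware_id"` target is not defined in this file and appears carried over from udev's stock persistent-storage rules.]

```
# This file contains the rules to create by-GPT-label symlinks for devices

# forward scsi device events to the corresponding block device
ACTION=="change", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", \
	TEST=="block", ATTR{block/*/uevent}="change"

# we are only interested in add and change actions for block devices
ACTION!="add|change", GOTO="gpt_label_end"
SUBSYSTEM!="block", GOTO="gpt_label_end"

# and we can safely ignore these kinds of devices
KERNEL=="mtd[0-9]*|mtdblock[0-9]*|ram*|loop*|fd*|nbd*|gnbd*|dm-*|md*|btibm*", GOTO="gpt_label_end"

# skip removable ide devices, because open(2) on them causes an events loop
KERNEL=="hd*[!0-9]", ATTR{removable}=="1", DRIVERS=="ide-cs|ide-floppy", \
	GOTO="gpt_label_end"
KERNEL=="hd*[0-9]", ATTRS{removable}=="1", \
	GOTO="gpt_label_end"

# skip xen virtual hard disks
DRIVERS=="vbd", GOTO="no_hardware_id"

# check these attributes of /sys/class/block nodes
ENV{DEVTYPE}!="?*", ATTR{range}=="?*", ENV{DEVTYPE}="disk"
ENV{DEVTYPE}!="?*", ATTR{start}=="?*", ENV{DEVTYPE}="partition"

# probe GPT partition label of disks
KERNEL!="sr*", ENV{DEVTYPE}=="partition", IMPORT{program}="/sbin/gpt_id $parent $number"

ENV{PARTITION_LABEL}=="?*", SYMLINK+="disk/by-label/$env{PARTITION_LABEL}"

LABEL="gpt_label_end"
```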
Re: failed drive in raid 1 array
on 24.02.2011 22:38:25 by Roberto Spadim
nice thanks !!!
2011/2/24 Iordan Iordanov :
> Hi Roberto (Spadim),
>
> I am attaching the two files necessary for this functionality. The first one
> (gpt_id) is the script which, given this example input:
>
>   gpt_id sdb 1
>
> should give this example output:
>
>   PARTITION_LABEL=itest00-drive00-part00
>
> The second file is a udev configuration file which needs to be dropped into
> /etc/udev/rules.d/. When a new device is attached, it runs gpt_id on its
> partitions, and if a GPT label is found, a link in /dev/disk/by-label
> magically appears to the partition in question.
>
> To create a GPT label and name a 100GB partition on /dev/sdb, one would do
> something like this (WARNING, WARNING, WARNING THIS IS A DATA-DESTRUCTIVE
> PROCESS):
>
> parted /dev/sdb
> mklabel y gpt
> mkpart primary ext3 0 100GB
> name 1 itest00-drive00-part00
> print
> quit
>
> To trigger udevadm to rescan all the devices and remake all the symlinks,
> you can run:
>
> udevadm trigger
>
> The gpt_id and the udev rules file are home-brewed at our department. Enjoy!
>
> Cheers,
> Iordan
>
> On 02/24/11 15:08, Roberto Spadim wrote:
>>
>> do you have the udev configuration for this (static)?
>>
>> 2011/2/24 Iordan Iordanov:
>>>
>>> Hi guys,
>>>
>>> I saw a bunch of discussion of devices changing names when hot-plugged. If
>>> you get the device name right when you add it to the array first, all is
>>> good since the superblock is used to "discover" the device later.
>>>
>>> However, to make things easier/clearer, and to avoid errors, one can take a
>>> look at the set of directories:
>>>
>>> /dev/disk/by-id
>>> /dev/disk/by-path
>>> /dev/disk/by-uuid
>>> /dev/disk/by-label
>>>
>>> for a predictable, more static view of the drives. The symlinks in these
>>> directories are created by udev, and are simply links to the "real" device
>>> nodes /dev/sd{a-z}*. You can either just use these symlinks as a way of
>>> verifying that you are adding the right device, or add the device using the
>>> symlink.
>>>
>>> At our location, we even augmented udev to add links to labeled GPT
>>> partitions in /dev/disk/by-label, and now our drives/partitions look like
>>> this:
>>>
>>> iscsi00-drive00-part00 -> ../../sda1
>>> iscsi00-drive01-part00 -> ../../sdb1
>>> iscsi00-drive02-part00 -> ../../sdc1
>>> iscsi00-drive03-part00 -> ../../sdd1
>>> iscsi00-drive04-part00 -> ../../sde1
>>>
>>> This way, we know exactly which bay contains exactly which drive, and it
>>> stays this way. If you guys want, I can share with you the changes to udev
>>> necessary and the script which extracts the GPT label and reports it to udev
>>> for this magic to happen :). Please reply to this thread with a request if
>>> you think it may be useful to you.
>>>
>>> Cheers,
>>> Iordan
>>>
>>>
>>> On 02/23/11 17:13, Roberto Nunnari wrote:
>>>>
>>>> Roberto Spadim wrote:
>>>>>
>>>>> hum, maybe you are using mdadm.conf or autodetect, non autodetect
>>>>> should be something like this:
>>>>> i don't know the best solution, but it works ehhehe
>>>>>
>>>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb
>>>>> quiet md=0,/dev/sda,/dev/sdb md=1,xxxx,yyyy.....
>>>>>
>>>>> or another md array...
>>>>>
>>>>> humm i read the sata specification and removing isn't a problem, at
>>>>> the electronic level the sata channel is only data, no power source, all
>>>>> channels are differential (like rs422 or rs485), i don't see any problem
>>>>> removing it. i tried to hot plug a revodrive (pciexpress ssd) and it
>>>>> didn't work (reboot) hehehe, pci-express isn't hot plug =P, sata2 doesn't
>>>>> have problems, the main problem is a short circuit at the power source, if
>>>>> you remove with caution no problems =)
>>>>>
>>>>> i tried in some other distros and udev created a new device when adding
>>>>> a different disk; for example, remove sdb, and add another disk creates
>>>>> sdc (not sdb), maybe with another udev configuration it should work
>>>>
>>>> Ok. I'll keep all that in mind tomorrow.
>>>> Best regards.
>>>> Robi
>>>>
>>>>
>>>>>
>>>>>
>>>>> 2011/2/23 Roberto Nunnari:
>>>>>>
>>>>>> Roberto Spadim wrote:
>>>>>>>
>>>>>>> i don't know how you setup your kernel (with or without raid
>>>>>>
>>>>>> I use the official CentOS kernel with no modification and don't
>>>>>> know about raid autodetect, but:
>>>>>> # cat /boot/config-2.6.24-28-server |grep -i raid
>>>>>> CONFIG_BLK_DEV_3W_XXXX_RAID=m
>>>>>> CONFIG_MD_RAID0=m
>>>>>> CONFIG_MD_RAID1=m
>>>>>> CONFIG_MD_RAID10=m
>>>>>> CONFIG_MD_RAID456=m
>>>>>> CONFIG_MD_RAID5_RESHAPE=y
>>>>>> CONFIG_MEGARAID_LEGACY=m
>>>>>> CONFIG_MEGARAID_MAILBOX=m
>>>>>> CONFIG_MEGARAID_MM=m
>>>>>> CONFIG_MEGARAID_NEWGEN=y
>>>>>> CONFIG_MEGARAID_SAS=m
>>>>>> CONFIG_RAID_ATTRS=m
>>>>>> CONFIG_SCSI_AACRAID=m
>>>>>>
>>>>>>
>>>>>>> autodetect?) do you use kernel command line to setup raid?
>>>>>>> autodetect?
>>>>>>
>>>>>> /dev/md0 in grub
>>>>>> I don't know if that means autodetect, but I guess so..
>>>>>>
>>>>>>
>>>>>>> here in my test machine i'm using kernel command line (grub), i don't
>>>>>>> have a server with a hotplug bay, i open the case and remove the wire
>>>>>>> with my hands =) after reconnecting it with another device the kernel
>>>>>>
>>>>>> Is it safe? Isn't it a blind bet to fry up the controller and/or disk?
>>>>>>
>>>>>>
>>>>>>> recognizes the new device, rereads the partitions etc etc and i can add
>>>>>>> it to array again
>>>>>>> my grub is something like:
>>>>>>>
>>>>>>> md=0,/dev/sda,/dev/sdb .....
>>>>>>>
>>>>>>> internal meta data, raid1, i didn't like the autodetect (it's good)
>>>>>>> but i prefer hardcoded kernel command line (it's not good with usb
>>>>>>> devices)
>>>>>>
>>>>>> the relevant part of my grub is:
>>>>>>
>>>>>> default=0
>>>>>> timeout=5
>>>>>> splashimage=(hd0,0)/grub/splash.xpm.gz
>>>>>> hiddenmenu
>>>>>> title CentOS (2.6.9-89.31.1.ELsmp)
>>>>>> root (hd0,0)
>>>>>> kernel /vmlinuz-2.6.9-89.31.1.ELsmp ro root=/dev/md0 rhgb quiet
>>>>>> initrd /initrd-2.6.9-89.31.1.ELsmp.img
>>>>>>
>>>>>> Best regards.
>>>>>> Robi
>>>>>>
>>>>>>
>>>>>>> 2011/2/23 Roberto Nunnari:
>>>>>>>>
>>>>>>>> Roberto Spadim wrote:
>>>>>>>>>
>>>>>>>>> sata2 without hot plug?
>>>>>>>>
>>>>>>>> Hi Roberto.
>>>>>>>>
>>>>>>>> I mean that there is no hot-plug bay, with sliding rails etc..
>>>>>>>> The drives are connected to the mb using standard sata cables.
>>>>>>>>
>>>>>>>>
>>>>>>>>> check if your sda sdb sdc will change after removing it, it depends
>>>>>>>>> on your udev or another /dev filesystem
>>>>>>>>
>>>>>>>> Ok, thank you.
>>>>>>>> That means that if I take care to check the above, and
>>>>>>>> the new drive will be sdb, then taking the steps indicated
>>>>>>>> in my original post will do the job?
>>>>>>>>
>>>>>>>> Best regards.
>>>>>>>> Robi
>>>>>>>>
>>>>>>>>
>>>>>>>>> 2011/2/23 Roberto Nunnari:
>>>>>>>>>>
>>>>>>>>>> Hello.
>>>>>>>>>>
>>>>>>>>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>>>>>>>>
>>>>>>>>>> Now, one disk is in failed state and it has no spares:
>>>>>>>>>> # cat /proc/mdstat
>>>>>>>>>> Personalities : [raid1]
>>>>>>>>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>>>>>>>>> 1910200704 blocks [2/1] [U_]
>>>>>>>>>>
>>>>>>>>>> md0 : active raid1 sdb1[1] sda2[0]
>>>>>>>>>> 40957568 blocks [2/2] [UU]
>>>>>>>>>>
>>>>>>>>>> unused devices:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The drives are not hot-plug, so I need to shutdown the box.
>>>>>>>>>>
>>>>>>>>>> My plan is to:
>>>>>>>>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>>>>>>>>> # mdadm /dev/md1 -r /dev/sdb4
>>>>>>>>>> # mdadm /dev/md0 -r /dev/sdb1
>>>>>>>>>> # shutdown -h now
>>>>>>>>>>
>>>>>>>>>> replace the disk and boot (it should come back up, even without one
>>>>>>>>>> drive, right?)
>>>>>>>>>>
>>>>>>>>>> # sfdisk /dev/sdb < sdb.sfdisk
>>>>>>>>>> # mdadm /dev/md1 -a /dev/sdb4
>>>>>>>>>> # mdadm /dev/md0 -a /dev/sdb1
>>>>>>>>>>
>>>>>>>>>> and the drives should start to resync, right?
>>>>>>>>>>
>>>>>>>>>> This is my first time I do such a thing, so please, correct me
>>>>>>>>>> if the above is not correct, or is not a best practice for
>>>>>>>>>> my configuration.
>>>>>>>>>>
>>>>>>>>>> My last backup of md1 is of mid november, so I need to be
>>>>>>>>>> pretty sure I will not lose my data (over 1TB).
>>>>>>>>>>
>>>>>>>>>> A bit about my environment:
>>>>>>>>>> # mdadm --version
>>>>>>>>>> mdadm - v1.12.0 - 14 June 2005
>>>>>>>>>> # cat /etc/redhat-release
>>>>>>>>>> CentOS release 4.8 (Final)
>>>>>>>>>> # uname -rms
>>>>>>>>>> Linux 2.6.9-89.31.1.ELsmp i686
>>>>>>>>>>
>>>>>>>>>> Thank you very much and best regards.
>>>>>>>>>> Robi
>>>>
>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: failed drive in raid 1 array
on 24.02.2011 22:51:59 by Roberto Nunnari
Roberto Nunnari wrote:
> Albert Pauw wrote:
>> On 02/23/11 06:56 PM, Roberto Spadim wrote:
>>> sata2 without hot plug?
>>> check if your sda sdb sdc will change after removing it; it depends
>>> on your udev or another /dev filesystem
>>>
>>> 2011/2/23 Roberto Nunnari:
>>>> Hello.
>>>>
>>>> I have a linux box, with two 2TB sata HD in raid 1.
>>>>
>>>> Now, one disk is in failed state and it has no spares:
>>>> # cat /proc/mdstat
>>>> Personalities : [raid1]
>>>> md1 : active raid1 sdb4[2](F) sda4[0]
>>>> 1910200704 blocks [2/1] [U_]
>>>>
>>>> md0 : active raid1 sdb1[1] sda2[0]
>>>> 40957568 blocks [2/2] [UU]
>>>>
>>>> unused devices: <none>
>>>>
>>>>
>>>> The drives are not hot-plug, so I need to shutdown the box.
>>>>
>>>> My plan is to:
>>>> # sfdisk -d /dev/sdb > sdb.sfdisk
>>>> # mdadm /dev/md1 -r /dev/sdb4
>> -> removing should be ok, as the partition has failed in md1
>
> ok.
>
>
>>>> # mdadm /dev/md0 -r /dev/sdb1
>> -> In this case, sdb1 hasn't failed according to the output of
>> /proc/mdstat, so you should fail it otherwise you can't remove it:
>> mdadm /dev/md0 -f /dev/sdb1
>> mdadm /dev/md0 -r /dev/sdb1
>
> good to know! Thank you.
>
>
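A minimal sketch of the fail-then-remove sequence Albert describes, using the device names from the thread. The `RUN=echo` preview guard is my addition for illustration, not part of the original advice; these commands are destructive on a real array.

```shell
# sdb1 is still active in md0, so it must be marked failed before mdadm will
# remove it; sdb4 already shows (F) in md1 and can be removed directly.
# RUN=echo prints the commands instead of running them.
remove_sdb_members() {
    $RUN mdadm /dev/md0 -f /dev/sdb1   # fail the still-active member first
    $RUN mdadm /dev/md0 -r /dev/sdb1   # now it can be removed
    $RUN mdadm /dev/md1 -r /dev/sdb4   # already marked (F): remove directly
}
RUN=echo remove_sdb_members            # preview; set RUN= to run for real
```

Previewing first makes it easy to double-check the device names against /proc/mdstat before anything irreversible happens.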
>>
>>>> # shutdown -h now
>>>>
>>>> replace the disk and boot (it should come back up, even without one
>>>> drive, right?)
>>>>
>>>> # sfdisk /dev/sdb < sdb.sfdisk
>>>> # mdadm /dev/md1 -a /dev/sdb4
>>>> # mdadm /dev/md0 -a /dev/sdb1
>>>>
>>>> and the drives should start to resync, right?
>>>>
>>>> This is my first time I do such a thing, so please, correct me
>>>> if the above is not correct, or is not a best practice for
>>>> my configuration.
>>>>
>>>> My last backup of md1 is from mid-November, so I need to be
>>>> pretty sure I will not lose my data (over 1TB).
>>>>
>>>> A bit about my environment:
>>>> # mdadm --version
>>>> mdadm - v1.12.0 - 14 June 2005
>>>> # cat /etc/redhat-release
>>>> CentOS release 4.8 (Final)
>>>> # uname -rms
>>>> Linux 2.6.9-89.31.1.ELsmp i686
>> What about sdb2 and sdb3, are they in use as normal mount points, or
>> swap? Then these should be commented out in /etc/fstab
>> before you change the disk.
>
> Yes. They're normal mount points, so I'll have to
> comment them out before rebooting, especially the swap partition.
> Thank you for pointing that out!
>
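For illustration only, the /etc/fstab edit discussed here might look like the fragment below; the mount point and filesystem type are assumptions, not from the thread.

```
# /etc/fstab excerpt -- sdb2/sdb3 commented out before the reboot, so the boot
# does not hang on the missing disk (mount point and fs type are assumed):
#/dev/sdb2   /data    ext3    defaults    1 2
#/dev/sdb3   swap     swap    defaults    0 0
```

After the new disk is partitioned and the arrays resynced, the lines can be uncommented and the filesystems remounted.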
> Best regards.
> Robi
Thank you very much Roberto and Albert.
I replaced the defective drive.
md0 was rebuilt almost immediately, md1 is still rebuilding
but already completed 77%.
Great linux-raid md!
Best regards.
Robi
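The 77% figure above comes from /proc/mdstat. A minimal sketch of pulling the recovery percentage out of that output; the sample text is hardcoded here so the extraction runs standalone, and on a live system you would read /proc/mdstat itself (for example `watch -n 60 cat /proc/mdstat`).

```shell
# Hardcoded sample of a resyncing array; the counts match the thread, the
# finish/speed figures are invented for illustration.
mdstat='md1 : active raid1 sdb4[2] sda4[0]
      1910200704 blocks [2/1] [U_]
      [===============>.....]  recovery = 77.0% (1470854542/1910200704) finish=120.5min speed=60750K/sec'
printf '%s\n' "$mdstat" | grep -o 'recovery = [0-9.]*%'
```

When the recovery line disappears and the status shows [2/2] [UU], the rebuild is complete.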
Re: failed drive in raid 1 array
on 24.02.2011 23:00:59 by Roberto Spadim
=) nice hehe
2011/2/24 Roberto Nunnari:
> [...]
--
Roberto Spadim
Spadim Technology / SPAEmpresarial