Help - raid not assembling right on boot (was: Resizing a RAID1)
on 27.01.2011 05:02:49 by Hank Barta
I followed the procedure below. Essentially removing one drive from a
RAID1, zeroing the superblock, repartitioning the drive, starting a
new RAID1 in degraded mode, copying over the data and repeating the
process on the second drive.
Everything seemed to be going well with the new RAID mounted and the
second drive syncing right along. However on a subsequent reboot the
RAID did not seem to come up properly. I was no longer able to mount
it. I also noticed that the resync had restarted. I found I could
temporarily resolve this by stopping the RAID1 and reassembling it and
specifying the partitions (e.g. mdadm --assemble /dev/md2 /dev/sdb2
/dev/sdc2). At this point, resync starts again and I can mount
/dev/md2. The problem crops up again on the next reboot. Information
revealed by 'mdadm --detail /dev/md2' changes between "from boot" and
following reassembly. It looks like at boot the entire drives
(/dev/sdb, /dev/sdc) are combined into a RAID1 rather than the desired
partitions.
I do not know where this is coming from. I tried zeroing the
superblock for both /dev/sdb and /dev/sdc and mdadm reported they did
not look like RAID devices.
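A hunch about why that happens, not a certainty: with 0.90 metadata the superblock lives in the last 64 KiB-aligned 64 KiB block of the device. When a partition runs to the very last sector of the disk, its superblock sits at exactly the sector where a whole-disk superblock would sit, so a scan of /proc/partitions can attribute it to /dev/sdc just as easily as to /dev/sdc2. A quick check with the sector counts from the fdisk listing further down this thread:

```shell
# Sector counts taken from the fdisk listing later in this thread.
disk=3907029168      # total sectors on /dev/sdc
start=20973568       # first sector of /dev/sdc2 (runs to the end of the disk)
psize=$((disk - start))

# 0.90 superblock offset: last 64KiB-aligned 64KiB block, i.e.
# (size / 128 * 128) - 128 in 512-byte sectors.
sb_disk=$((disk / 128 * 128 - 128))
sb_part=$((start + psize / 128 * 128 - 128))

echo "whole-disk sb at sector $sb_disk, sdc2 sb at sector $sb_part"
# Both come out to 3907028992 -- one superblock, two plausible owners.
```

If that is indeed the collision, the plausible escapes would be 1.x metadata (superblock near the start of the device) or ending the last partition a little short of the end of the disk; both are untested guesses on my part.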
Results from 'mdadm --detail /dev/md2' before and after are:
====================================================
root@oak:~# mdadm --detail /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Tue Jan 25 10:39:52 2011
Raid Level : raid1
Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jan 26 21:16:04 2011
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 2% complete
UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
Events : 0.13376
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
2 8 16 1 spare rebuilding /dev/sdb
root@oak:~#
root@oak:~# mdadm --detail /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Tue Jan 25 10:39:52 2011
Raid Level : raid1
Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jan 26 21:25:40 2011
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 0% complete
UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
Events : 0.13382
Number Major Minor RaidDevice State
0 8 34 0 active sync /dev/sdc2
2 8 18 1 spare rebuilding /dev/sdb2
====================================================
Contents of /etc/mdadm/mdadm.conf are:
====================================================
hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
#ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
#   spares=2
# This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
# by mkconf $Id$
hbarta@oak:~$
====================================================
(I commented out the two lines following "definitions of existing MD
arrays" because I thought they might be the culprit.)
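For reference, a minimal uncommented stanza rebuilt from the --examine --scan output in this message might look like the following (a sketch using this array's UUIDs; the devices= clause is optional but pins assembly to the partitions rather than the whole disks):

```
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f devices=/dev/sdb2,/dev/sdc2
```

Note that on Debian-flavored systems boot-time assembly reads the copy of mdadm.conf baked into the initramfs, so edits here would only take effect at boot after something like update-initramfs -u (assuming a stock Debian/Ubuntu initramfs setup).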
They seem to match:
====================================================
hbarta@oak:~$ sudo mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
   spares=2
hbarta@oak:~$
====================================================
except for the addition of a second RAID which I added after installing
mdadm.
I have no idea how to fix this (*) and appreciate any help with how to
do so.
(*) All I can think of is to zero both entire drives and start from
the beginning.
On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta wrote:
> My previous experiment with USB flash drives has not gone too far. I
> can install Ubuntu Server 10.04 to a single USB flash drive and boot
> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Intel
> D525MW from it. The Intel board will boot install media on USB flash,
> but not a normal install. (This is an aside.) The desire to use an
> alternate boot was to avoid having to fiddle with a two drive RAID1.
> The drives have a single partition consisting of the entire drive
> which is combined into the RAID1.
>
> My desire to get this system up and running is overrunning my desire
> to get the USB flash raid to boot. My strategy is to
> - remove one drive from the raid,
> - repartition it to allow for a system installation
> - create a new RAID1 with that drive and format the new data
> partition. (both would be RAID1 and now both degraded to one drive)
> - copy data from the existing RAID1 data partition to the new RAID1
> data partition.
> - stop the old RAID1
> - repartition the other drive (most recently the old RAID1) to match
> the new RAID1
> - add the second drive to the new RAID1
> - watch it rebuild and breathe a big sigh of relief.
>
> When convenient I can install Linux to the space I've opened up via
> the above machinations and move this project down the road.
>
> That looks pretty straightforward to me, but I've never let that sort
> of thing prevent me from cobbling things up in the past. (And at this
> moment, I'm making a copy of the RAID1 to an external drive just in
> case.) For anyone interested, I'll share the details of my plan to th=
e
> command level in the case that any of you can spot a problem I have
> overlooked.
>
> A related question is: what are the constraints for partitioning the
> drive to achieve best performance? I plan to create a 10G partition on
> each drive for the system. Likewise, suggestions for tuning the RAID
> and filesystem configurations would be appreciated. Usage for the RAID
> is backup for my home LAN as well as storing pictures and more
> recently my video library so there's a mix of large and small files.
> I'm not obsessed with performance as most clients are on WiFi, but I
> might as well grab the low hanging fruit in this regard.
>
> Feel free to comment on any aspects of the details listed below.
>
> many thanks,
> hank
>
> This is what is presently on the drives:
> ====================================================
> root@oak:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md1 : active raid1 sdc1[0] sda1[1]
>       1953511936 blocks [2/2] [UU]
>
> unused devices: <none>
> root@oak:~# fdisk -l /dev/sda /dev/sdc
>
> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *            1      243201  1953512001   fd  Linux raid autodetect
>
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1                1      243201  1953512001   fd  Linux raid autodetect
> root@oak:~#
> ====================================================
>
> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
> The Samsung is one of those with 4K sectors. (I think the Seagate may
> be too.)
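The 4 KiB-sector concern above is easy to check by arithmetic: a partition is aligned when its start sector is a multiple of 8 (8 x 512 B = 4 KiB). A sketch using start sectors seen in this thread (63 is the classic DOS cylinder-aligned start implied by the old "Start 1" entries; 2048 and 20973568 are the new partition starts):

```shell
# 4KiB alignment check: start_sector * 512 must be a multiple of 4096,
# i.e. start_sector % 8 == 0.
for s in 63 2048 20973568; do
  if [ $((s % 8)) -eq 0 ]; then
    echo "start $s: 4KiB-aligned"
  else
    echo "start $s: NOT 4KiB-aligned"
  fi
done
# 63 is misaligned; 2048 and 20973568 are aligned.
```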
>
> Selecting /dev/sdc to migrate first (and following more or less the
> guide on http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
>
> Fail the drive:
>> mdadm --manage /dev/md1 --fail /dev/sdc1
>
> Remove from the array:
>> mdadm --manage /dev/md1 --remove /dev/sdc1
>
> Zero the superblock:
>> mdadm --zero-superblock /dev/sdc1
>
>
> <Repartition the drive: a 10G primary partition at the start and
> a second primary partition using the remainder of the drive: /dev/sdc1
> and /dev/sdc2>
>
> Create new RAID:
>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
>
> Format:
>> mkfs.ext4 /dev/md2
>
> Mount:
>> mount /dev/md2 /mnt/md2
>
> Copy:
>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/ /mnt/USB/
>
> Stop the old RAID:
>> mdadm --stop /dev/md1
>
> Zero the superblock:
>> mdadm --zero-superblock /dev/sda1
>
> Repartition to match the other drive
>
> Add the second drive to the RAID:
>> mdadm --manage /dev/md2 --add /dev/sda2
>
> Watch the resync complete.
>
> Done! (Except for doing something with the new 10G partition, but
> that's another subject.)
>
> Many thanks for reading this far!
>
> best,
> hank
>
> --
> '03 BMW F650CS - hers
> '98 Dakar K12RS - "BABY K" grew up.
> '93 R100R w/ Velorex 700 (MBD starts...)
> '95 Miata - "OUR LC"
> polish visor: apply squashed bugs, rinse, repeat
> Beautiful Sunny Winfield, Illinois
>
--
'03 BMW F650CS - hers
'98 Dakar K12RS - "BABY K" grew up.
'93 R100R w/ Velorex 700 (MBD starts...)
'95 Miata - "OUR LC"
polish visor: apply squashed bugs, rinse, repeat
Beautiful Sunny Winfield, Illinois
--
the body of a message to majordomo@vger.kernel.org with the line
"unsubscribe linux-raid" in it.
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
on 27.01.2011 12:56:54 by Justin Piszcz
Hi,
Show fdisk -l on both disks: are the partitions type 0xfd (Linux raid
autodetect)? If not, you will have that exact problem.
Justin.
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
on 27.01.2011 13:20:39 by Hank Barta
Thanks for the suggestion:
====================================================
hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
/dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
/dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
hbarta@oak:~$
====================================================
Everything seems OK as far as I can see.
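One detail worth noting in that listing, assuming the usual 0.90 superblock placement (the last 64 KiB-aligned 64 KiB block of the device): sdb1/sdc1 end well before the end of the disk, so their superblocks cannot be misread as whole-disk ones, but sdb2/sdc2 run to the very last sector, where a partition superblock and a whole-disk superblock occupy the same sectors. That would explain why only md2 misassembles at boot even though the partition types are correct:

```shell
# 0.90 superblock offset within a device, in 512-byte sectors.
sb() { echo $(( $1 / 128 * 128 - 128 )); }

disk=3907029168                      # total sectors on /dev/sdc
echo "whole-disk sb: sector $(sb $disk)"
echo "sdc1 sb:       sector $(( 2048 + $(sb 20971520) ))"        # start 2048, 20971520 sectors
echo "sdc2 sb:       sector $(( 20973568 + $(sb $((disk - 20973568)) ) ))"
# sdc2's superblock lands on the same sector a whole-disk superblock
# would use; sdc1's does not.
```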
thanks,
hank
On Thu, Jan 27, 2011 at 5:56 AM, Justin Piszcz
> wrote:
> Hi,
>
> Show fdisk -l on both disks, are the partitions type 0xfd Linux raid =
Auto
> Detect? =A0If not, you will have that exact problem.
>
> Justin.
>
> On Wed, 26 Jan 2011, Hank Barta wrote:
>
>> I followed the procedure below. Essentially removing one drive from =
a
>> RAID1, zeroing the superblock, repartitioning the drive, starting a
>> new RAID1 in degraded mode, copying over the data and repeating the
>> process on the second drive.
>>
>> Everything seemed to be going well with the new RAID mounted and the
>> second drive syncing right along. However on a subsequent reboot the
>> RAID did not seem to come up properly. I was no longer able to mount
>> it. I also noticed that the resync had restarted. I found I could
>> temporarily resolve this by stopping the RAID1 and reassembling it a=
nd
>> specifying the partitions. (e.g. mdadm ---assemble /dev/md2 /dev/sdb=
2
>> /dev/sdc2) At this point, resync starts again and I can mount
>> /dev/md2. The problem crops up again on the next reboot. Information
>> revealed by 'mdadm --detail /dev/md2' changes between "from boot" an=
d
>> following reassembly. It looks like at boot the entire drives
>> (/dev/sdb, /dev/sdc) are combined into a RAID1 rather than the desir=
ed
>> partitions.
>>
>> I do not know where this is coming from. I tried zeroing the
>> superblock for both /dev/sdb and /dev/sdc and mdadm reported they di=
d
>> not look like RAID devices.
>>
>> Results from 'mdadm --detail /dev/md2' before and after is:
>>
>> ==================== ===3D=
======
>> root@oak:~# mdadm --detail /dev/md2
>> /dev/md2:
>> =A0 =A0 =A0 Version : 00.90
>> =A0Creation Time : Tue Jan 25 10:39:52 2011
>> =A0 =A0Raid Level : raid1
>> =A0 =A0Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
>> =A0Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
>> =A0Raid Devices : 2
>> =A0Total Devices : 2
>> Preferred Minor : 2
>> =A0 Persistence : Superblock is persistent
>>
>> =A0 Update Time : Wed Jan 26 21:16:04 2011
>> =A0 =A0 =A0 =A0 State : clean, degraded, recovering
>> Active Devices : 1
>> Working Devices : 2
>> Failed Devices : 0
>> =A0Spare Devices : 1
>>
>> Rebuild Status : 2% complete
>>
>> =A0 =A0 =A0 =A0 =A0UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local=
to host oak)
>> =A0 =A0 =A0 =A0Events : 0.13376
>>
>> =A0 Number =A0 Major =A0 Minor =A0 RaidDevice State
>> =A0 =A0 =A00 =A0 =A0 =A0 8 =A0 =A0 =A0 32 =A0 =A0 =A0 =A00 =A0 =A0 =A0=
active sync =A0 /dev/sdc
>> =A0 =A0 =A02 =A0 =A0 =A0 8 =A0 =A0 =A0 16 =A0 =A0 =A0 =A01 =A0 =A0 =A0=
spare rebuilding =A0 /dev/sdb
>> root@oak:~#
>> root@oak:~# mdadm --detail /dev/md2
>> /dev/md2:
>> =A0 =A0 =A0 Version : 00.90
>> =A0Creation Time : Tue Jan 25 10:39:52 2011
>> =A0 =A0Raid Level : raid1
>> =A0 =A0Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
>> =A0Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
>> =A0Raid Devices : 2
>> =A0Total Devices : 2
>> Preferred Minor : 2
>> =A0 Persistence : Superblock is persistent
>>
>> =A0 Update Time : Wed Jan 26 21:25:40 2011
>> =A0 =A0 =A0 =A0 State : clean, degraded, recovering
>> Active Devices : 1
>> Working Devices : 2
>> Failed Devices : 0
>> =A0Spare Devices : 1
>>
>> Rebuild Status : 0% complete
>>
>> =A0 =A0 =A0 =A0 =A0UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local=
to host oak)
>> =A0 =A0 =A0 =A0Events : 0.13382
>>
>> =A0 Number =A0 Major =A0 Minor =A0 RaidDevice State
>> =A0 =A0 =A00 =A0 =A0 =A0 8 =A0 =A0 =A0 34 =A0 =A0 =A0 =A00 =A0 =A0 =A0=
active sync =A0 /dev/sdc2
>> =A0 =A0 =A02 =A0 =A0 =A0 8 =A0 =A0 =A0 18 =A0 =A0 =A0 =A01 =A0 =A0 =A0=
spare rebuilding =A0 /dev/sdb2
>> ==================== ===3D=
======
>>
>> Contents of /etc/mdadm/mdadm.conf are:
>> ==================== ===3D=
======
>> hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
>> # mdadm.conf
>> #
>> # Please refer to mdadm.conf(5) for information about this file.
>> #
>>
>> # by default, scan all partitions (/proc/partitions) for MD superblo=
cks.
>> # alternatively, specify devices to scan, using wildcards if desired=
>> DEVICE partitions
>>
>> # auto-create devices with Debian standard permissions
>> CREATE owner=3Droot group=3Ddisk mode=3D0660 auto=3Dyes
>>
>> # automatically tag new arrays as belonging to the local system
>> HOMEHOST
>>
>> # instruct the monitoring daemon where to send mail alerts
>> MAILADDR root
>>
>> # definitions of existing MD arrays
>> #ARRAY /dev/md2 level=3Draid1 num-devices=3D2
>> UUID=3D19d72028:63677f91:cd71bfd9:6916a14f
>> =A0#spares=3D2
>>
>> # This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
>> # by mkconf $Id$
>> hbarta@oak:~$
>> ==================== ===3D=
======
>> (I commented out the two lines following "definitions of existing MD
>> arrays" because I thought they might be the culprit.)
>>
>> They seem to match:
>> ==================== ===3D=
======
>> hbarta@oak:~$ sudo mdadm --examine --scan
>> ARRAY /dev/md0 level=3Draid1 num-devices=3D2
>> UUID=3D954a3be2:f23e1239:cd71bfd9:6916a14f
>> ARRAY /dev/md2 level=3Draid1 num-devices=3D2
>> UUID=3D19d72028:63677f91:cd71bfd9:6916a14f
>> =A0spares=3D2
>> hbarta@oak:~$
>> ==================== ===3D=
======
>> except for the addition of a second RAID which I added after install=
ing
>> mdadm.
>>
>> I have no idea how to fix this (*) and appreciate any help with how =
to do
>> so.
>>
>>
>> (*) All I can think of is to zero both entire drives and start from
>> the beginning.
>>
>> On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta wrote=
:
>>>
>>> My previous experiment with USB flash drives has not gone too far. =
I
>>> can install Ubuntu Server 10.04 to a single USB flash drive and boo=
t
>>> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Inte=
l
>>> D525MW from it. The Intel board will boot install media on USB flas=
h,
>>> but not a normal install. (This is an aside.) The desire to use an
>>> alternate boot was to avoid having to fiddle with a two drive RAID1=
>>> The drives have a single partition consisting of the entire drive
>>> which is combined into the RAID1.
>>>
>>> My desire to get this system up and running is overrunning my desir=
e
>>> to get the USB flash raid to boot. My strategy is to
>>> =A0- remove one drive from the raid,
>>> =A0- repartition it to allow for a system installation
>>> =A0- create a new RAID1 with that drive and format the new data
>>> partition. (both would be =A0RAID1 and now both degraded to one dri=
ve)
>>> =A0- copy data from the existing RAID1 data partition to the new RA=
ID1
>>> data partition.
>>> =A0- stop the old RAID1
>>> =A0- repartition the other drive (most recently the old RAID1) to m=
atch
>>> the new RAID1
>>> =A0- add the second drive to the new RAID1
>>> =A0- watch it rebuild and breathe big sigh of relief.
>>>
>>> When convenient I can install Linux to the space I've opened up via
>>> the above machinations and move this project down the road.
>>>
>>> That looks pretty straightforward to me, but I've never let that so=
rt
>>> of thing prevent me from cobbling things up in the past. (And at th=
is
>>> moment, I'm making a copy of the RAID1 to an external drive just in
>>> case.) For anyone interested, I'll share the details of my plan to =
the
>>> command level in the case that any of you can spot a problem I have
>>> overlooked.
>>>
>>> A related question Is what are the constraints for partitioning the
>>> drive to achieve best performance? I plan to create a 10G partition=
on
>>> each drive for the system. Likewise, suggestions for tuning the RAI=
D
>>> and filesystem configurations would be appreciated. Usage for the R=
AID
>>> is backup for my home LAN as well as storing pictures and more
>>> recently my video library so there's a mix of large and small files=
>>> I'm not obsessed with performance as most clients are on WiFi, but =
I
>>> might as well grab the low hanging fruit in this regard.
>>>
>>> Feel free to comment on any aspects of the details listed below.
>>>
>>> many thanks,
>>> hank
>>>
>>> This is what is presently on the drives:
>>> ==================== ===3D=
=3D
>>> root@oak:~# cat /proc/mdstat
>>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5=
]
>>> [raid4] [raid10]
>>> md1 : active raid1 sdc1[0] sda1[1]
>>> =A0 =A0 =A01953511936 blocks [2/2] [UU]
>>>
>>> unused devices:
>>> root@oak:~# fdisk -l /dev/sda /dev/sdc
>>>
>>> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
>>> 255 heads, 63 sectors/track, 243201 cylinders
>>> Units =3D cylinders of 16065 * 512 =3D 8225280 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disk identifier: 0x00000000
>>>
>>> =A0 Device Boot =A0 =A0 =A0Start =A0 =A0 =A0 =A0 End =A0 =A0 =A0Blo=
cks =A0 Id =A0System
>>> /dev/sda1 =A0 * =A0 =A0 =A0 =A0 =A0 1 =A0 =A0 =A0243201 =A019535120=
01 =A0 fd =A0Linux raid
>>> autodetect
>>>
>>> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
>>> 255 heads, 63 sectors/track, 243201 cylinders
>>> Units =3D cylinders of 16065 * 512 =3D 8225280 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disk identifier: 0x00000000
>>>
>>> =A0 Device Boot =A0 =A0 =A0Start =A0 =A0 =A0 =A0 End =A0 =A0 =A0Blo=
cks =A0 Id =A0System
>>> /dev/sdc1 =A0 =A0 =A0 =A0 =A0 =A0 =A0 1 =A0 =A0 =A0243201 =A0195351=
2001 =A0 fd =A0Linux raid
>>> autodetect
>>> root@oak:~#
>>> ==================== ===3D=
=3D
>>>
>>> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
>>> The Samsung is one of those with 4K sectors. (I think the Seagate may
>>> be too.)
>>>
>>> Selecting /dev/sdc to migrate first (and following more or less the
>>> guide on
>>> http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
>>>
>>> Fail the drive:
>>>>
>>>> mdadm --manage /dev/md1 --fail /dev/sdc1
>>>
>>> Remove from the array:
>>>>
>>>> mdadm --manage /dev/md1 --remove /dev/sdc1
>>>
>>> Zero the superblock:
>>>>
>>>> mdadm --zero-superblock /dev/sdc1
>>>
>>> <Repartitioned the drive, creating a 10G primary partition for booting and
>>> a second primary partition using the remainder of the drive: /dev/sdc1
>>> and /dev/sdc2>
>>>
>>> Create new RAID:
>>>>
>>>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
>>>
>>> Format:
>>>>
>>>> mkfs.ext4 /dev/md2
>>>
>>> Mount:
>>>>
>>>> mount /dev/md2 /mnt/md2
>>>
>>> Copy:
>>>>
>>>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/
>>>> /mnt/USB/
>>>
>>> Stop the old RAID:
>>>>
>>>> mdadm --stop /dev/md1
>>>
>>> Zero the superblock:
>>>>
>>>> mdadm --zero-superblock /dev/sda1
>>>
>>> Repartition to match the other drive
>>>
>>> Add the second drive to the RAID:
>>>>
>>>> mdadm --manage /dev/md2 --add /dev/sda2
>>>
>>> Watch the resync complete.
>>>
>>> Done! (Except for doing something with the new 10G partition, but
>>> that's another subject.)
>>>
>>> Many thanks for reading this far!
>>>
>>> best,
>>> hank
>>>
>>> --
>>> '03 BMW F650CS - hers
>>> '98 Dakar K12RS - "BABY K" grew up.
>>> '93 R100R w/ Velorex 700 (MBD starts...)
>>> '95 Miata - "OUR LC"
>>> polish visor: apply squashed bugs, rinse, repeat
>>> Beautiful Sunny Winfield, Illinois
>>>
>>
>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
am 27.01.2011 13:37:57 von Justin Piszcz
On Thu, 27 Jan 2011, Hank Barta wrote:
> Thanks for the suggestion:
>
> =============================
> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
>
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 2048 20973567 10485760 fd Linux raid autodetect
> /dev/sdb2 20973568 3907029167 1943027800 fd Linux raid autodetect
>
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdc1 2048 20973567 10485760 fd Linux raid autodetect
> /dev/sdc2 20973568 3907029167 1943027800 fd Linux raid autodetect
> hbarta@oak:~$
> =============================
>
> Everything seems OK as far as I can see.
>
> thanks,
> hank
Hi,
That looks correct, so you boot from /dev/sdb, /dev/sdc? Normally when I
do a RAID1 it is with /dev/sda, /dev/sdb for SATA systems... It looks
good, if you reboot again does it want to resync again?
Justin.
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
am 27.01.2011 14:39:21 von Hank Barta
The system presently boots from /dev/sda:
==============================
hbarta@oak:~$ sudo fdisk -luc /dev/sda
Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c071b
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39063551    19530752   83  Linux
/dev/sda2        39065598   390721535   175827969    5  Extended
/dev/sda5        39065600    54687743     7811072   82  Linux swap / Solaris
/dev/sda6        54689792   390721535   168015872   83  Linux
hbarta@oak:~$
==============================
Eventually I plan to migrate the RAID to another system where it will
boot from what is now /dev/sd[bc].
At present I have the RAID listed in /etc/fstab so the boot process
stalls when it tries to mount /dev/md2. At that point I can get to a
console and:
- stop a spurious RAID listed in /proc/mdstat. This is named
/dev/md_d0. I copied /proc/mdstat to /tmp at that point, but
apparently that was before /tmp gets cleared on boot.
- stop /dev/md2. At this point in the boot process it has not started to
resync.
- assemble /dev/md2. This time it does not start resync.
- mount /dev/md2
- exit the console and complete the boot process.
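Collected in one place, the manual recovery above looks something like this. This is only a sketch of the steps as described, not something that runs at boot: the device names are the ones from this thread, and the mount point is an assumption (the real one comes from /etc/fstab).

```shell
#!/bin/sh
# Sketch of the manual recovery steps listed above (device names as in
# this thread; mount point assumed).
recover_md2() {
    mdadm --stop /dev/md_d0                        # stop the spurious array
    mdadm --stop /dev/md2                          # stop the mis-assembled md2
    mdadm --assemble /dev/md2 /dev/sdb2 /dev/sdc2  # assemble from the partitions
    mount /dev/md2 /mnt/md2                        # assumed mount point
}
```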
In the output below, I have highlighted some lines of particular
interest using "<<<<<<<<<<<<<<<<<<<<<<<<"
From dmesg I find:
==============================
[    1.777908] udev: starting version 151
[    1.782359] md: linear personality registered for level -1
..
[    1.797816] md: multipath personality registered for level -4
..
[    1.814115] md: raid0 personality registered for level 0
..
[    2.706178] md: raid1 personality registered for level 1
..
[    2.730265] md: bind<sdc>
<<<<<<<<<<<<<<<<<<<<<<<<
[    2.768834] md: bind<sdb>
<<<<<<<<<<<<<<<<<<<<<<<<
[    2.770005] raid1: raid set md2 active with 2 out of 2 mirrors
[    2.770022] md2: detected capacity change from 0 to 1989660377088
[    2.779491]  md2: p1 p2
[    2.810420] md2: p2 size 3886055600 exceeds device capacity,
limited to end of disk
[    2.871677] raid6: int64x1   2414 MB/s
[    3.041683] raid6: int64x2   3306 MB/s
[    3.211675] raid6: int64x4   2498 MB/s
[    3.381687] raid6: int64x8   2189 MB/s
[    3.551687] raid6: sse2x1    3856 MB/s
[    3.721674] raid6: sse2x2    6233 MB/s
[    3.891676] raid6: sse2x4    7434 MB/s
[    3.891678] raid6: using algorithm sse2x4 (7434 MB/s)
[    3.892539] xor: automatically using best checksumming function: generic_sse
[    3.941685]    generic_sse: 11496.800 MB/sec
[    3.941687] xor: using function: generic_sse (11496.800 MB/sec)
[    3.944793] md: raid6 personality registered for level 6
[    3.944795] md: raid5 personality registered for level 5
[    3.944796] md: raid4 personality registered for level 4
[    3.949094] md: raid10 personality registered for level 10
[    4.034790] EXT4-fs (sda1): mounted filesystem with ordered data mode
..
[   15.313074] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   15.322662] md: bind<md2p1>
<<<<<<<<<<<<<<<<<<<<<<<<
[   15.347522] [drm] ring test succeeded in 1 usecs
==============================
and finally where boot process halts and I intervene manually:
==============================
[   16.147562] EXT4-fs (sda6): mounted filesystem with ordered data mode
[   16.532107] EXT4-fs (md2p2): bad geometry: block count 485756928
exceeds size of device (483135232 blocks)
[  212.816279] md: md_d0 stopped.
[  212.816289] md: unbind<md2p1>
[  212.861783] md: export_rdev(md2p1)
[  225.764663] md: md2 stopped.
[  225.764669] md: unbind<sdc>
[  225.811751] md: export_rdev(sdc)
[  225.811779] md: unbind<sdb>
[  225.891748] md: export_rdev(sdb)
[  249.653886] md: md2 stopped.
[  249.655627] md: bind<sdb2>
[  249.655788] md: bind<sdc2>
[  249.679172] raid1: raid set md2 active with 2 out of 2 mirrors
[  249.679194] md2: detected capacity change from 0 to 1989660377088
[  249.680142]  md2: unknown partition table
[  270.774369] EXT4-fs (md2): mounted filesystem with ordered data mode
==============================
(no further pattern match in dmesg for 'md:')
The following command seems to find a RAID superblock on /dev/sdb and
/dev/sdc which would explain why they are assembled at boot:
==============================
root@oak:/var/log# mdadm --examine --scan -vv
mdadm: No md superblock detected on /dev/block/9:2.
/dev/sdc2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Thu Jan 27 07:12:16 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6b4365e0 - correct
         Events : 13448

      Number   Major   Minor   RaidDevice State
this     0       8       34        0      active sync   /dev/sdc2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       18        1      active sync   /dev/sdb2
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 954a3be2:f23e1239:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Wed Jan 26 20:20:06 2011
     Raid Level : raid1
  Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
     Array Size : 10485696 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Wed Jan 26 21:16:05 2011
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 25dccb8 - correct
         Events : 3

      Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       33        1      active sync   /dev/sdc1
/dev/sdc:
<<<<<<<<<<<<<<<<<<<<<<<<
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Thu Jan 27 07:12:16 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6b4365e0 - correct
         Events : 13448

      Number   Major   Minor   RaidDevice State
this     0       8       34        0      active sync   /dev/sdc2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       18        1      active sync   /dev/sdb2
/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Thu Jan 27 07:12:16 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6b4365d2 - correct
         Events : 13448

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       18        1      active sync   /dev/sdb2
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 954a3be2:f23e1239:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Wed Jan 26 20:20:06 2011
     Raid Level : raid1
  Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
     Array Size : 10485696 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Wed Jan 26 21:16:05 2011
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 25dccb8 - correct
         Events : 3

      Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       33        1      active sync   /dev/sdc1
/dev/sdb:
<<<<<<<<<<<<<<<<<<<<<<<<
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Thu Jan 27 07:12:16 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6b4365d2 - correct
         Events : 13448

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       18        1      active sync   /dev/sdb2
mdadm: No md superblock detected on /dev/sda6.
mdadm: No md superblock detected on /dev/sda5.
mdadm: No md superblock detected on /dev/sda2.
mdadm: No md superblock detected on /dev/sda1.
mdadm: No md superblock detected on /dev/sda.
root@oak:/var/log#
==============================
If I try to zero the superblock that seems to be in error, I get:
==============================
root@oak:/var/log# mdadm --zero-superblock /dev/sdb
mdadm: Couldn't open /dev/sdb for write - not zeroing
root@oak:/var/log#
==============================
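For what it's worth, that "Couldn't open /dev/sdb for write" is most likely because the kernel still holds /dev/sdb busy as a member of the running whole-disk array, so mdadm cannot open it exclusively; the array would have to be stopped first. The sketch below shows that ordering only and is deliberately wrapped in a function rather than run, because with this layout the whole-disk superblock and /dev/sdb2's superblock are the same bytes on disk, so zeroing one destroys the other:

```shell
#!/bin/sh
# Sketch only: the device must be released before it can be opened for
# write. With sdb2 ending at the end of the disk, the "whole-disk" 0.90
# superblock and sdb2's superblock occupy the same location.
release_and_zero() {
    mdadm --stop /dev/md2              # releases /dev/sdb and /dev/sdc
    mdadm --zero-superblock /dev/sdb   # CAUTION: also wipes /dev/sdb2's metadata
}
```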
thanks again,
hank
On Thu, Jan 27, 2011 at 6:37 AM, Justin Piszcz wrote:
>
> On Thu, 27 Jan 2011, Hank Barta wrote:
>
>> Thanks for the suggestion:
>>
>> ==============================
>> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
>>
>> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
>> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00000000
>>
>>  Device Boot      Start         End      Blocks   Id  System
>> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
>> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect
>>
>> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
>> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00000000
>>
>>  Device Boot      Start         End      Blocks   Id  System
>> /dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
>> /dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
>> hbarta@oak:~$
>> ==============================
>>
>> Everything seems OK as far as I can see.
>>
>> thanks,
>> hank
>
> Hi,
>
> That looks correct, so you boot from /dev/sdb, /dev/sdc?  Normally when I
> do a RAID1 it is with /dev/sda, /dev/sdb for SATA systems...  It looks
> good, if you reboot again does it want to resync again?
>
> Justin.
>
>
>
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
am 27.01.2011 16:06:25 von Justin Piszcz
On Thu, 27 Jan 2011, Hank Barta wrote:
Hi,
You may just want to dd and start over; or:
If I try to zero the superblock that seems to be in error, I get:
==============================
root@oak:/var/log# mdadm --zero-superblock /dev/sdb
mdadm: Couldn't open /dev/sdb for write - not zeroing
root@oak:/var/log#
==============================
Have you tried using the partition itself?
/dev/sdb1?
Also, any reason for making partitions on a MD raid device?
[ =A0 16.532107] EXT4-fs (md2p2): bad geometry: block count 485756928
exceeds size of device (483135232 blocks)
This is generally not a good idea.
It sounds like you want to make a raid-1 with two disks and pop it into a
new system?

Way I typically do this is insert both drives into the new system, boot
off a system rescue cd and then create the raid there, boot off the cd
again, with root=/dev/md2, then run LILO, make sure to use 0.90
superblocks.
See below:
USE --assume-clean FOR LARGE FILESYSTEMS so you can reboot directly after
array creation and system restore implantation. (you will want to run an
echo repair > /sys/..sync_action though afterwards)
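The sync_action knob mentioned above lives in sysfs; the full path below follows the kernel's md documentation, while the array name is just an example:

```shell
#!/bin/sh
# Sketch: trigger a repair pass on an md array via sysfs and report the
# mismatch count afterwards. The array name is an assumption.
repair_pass() {
    md=${1:-md0}
    echo repair > "/sys/block/$md/md/sync_action"  # rewrite inconsistent stripes
    cat "/sys/block/$md/md/mismatch_cnt"           # blocks found out of sync
}
```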
root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: size set to 8393856K
mdadm: array /dev/md0 started.
root@Knoppix:/t/etc# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      8393856 blocks [2/2] [UU]
      [>....................]  resync =  1.4% (120512/8393856) finish=3.4min speed=40170K/sec

root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: size set to 136448K
mdadm: array /dev/md1 started.
root@Knoppix:/t/etc#
root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: size set to 382178240K
mdadm: array /dev/md2 started.
root@Knoppix:/t/etc#
root@Knoppix:/t/etc# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
      382178240 blocks [2/2] [UU]
        resync=DELAYED

md1 : active raid1 sdb2[1] sda2[0]
      136448 blocks [2/2] [UU]
        resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
      8393856 blocks [2/2] [UU]
      [==========>..........]  resync = 51.0% (4283072/8393856) finish=1.0min speed=62280K/sec

unused devices: <none>
root@Knoppix:/t/etc#
After this, you'll need to set them as 0xfd and make sure the boot is bootable,
your LILO config should look something like this:

boot=/dev/md1
root=/dev/md2
map=/boot/map
prompt
delay=100
timeout=100
lba32
vga=normal
append=""
raid-extra-boot="/dev/sda,/dev/sdb" # make boot blocks on both drives
default=2.6.37-3

image=/boot/2.6.37-3
        label=2.6.37-3
        read-only
        root=/dev/md2
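The "set them as 0xfd" step above can be scripted with sfdisk; --change-id is the spelling in the util-linux of that era (newer releases call it --part-type). The devices and partition numbers below are assumptions to match the three-array layout shown:

```shell
#!/bin/sh
# Sketch: mark partitions as type fd (Linux raid autodetect) on both drives.
mark_fd() {
    for dev in /dev/sda /dev/sdb; do      # assumed devices
        for part in 1 2 3; do             # assumed partition numbers
            sfdisk --change-id "$dev" "$part" fd
        done
    done
}
```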
Justin.
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
am 27.01.2011 21:47:58 von NeilBrown
On Thu, 27 Jan 2011 06:20:39 -0600 Hank Barta wrote:
> Thanks for the suggestion:
>
> ==============================
> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
>
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect
These start numbers are multiples of 64K.
With 0.90 metadata, md thinks that the metadata for a partition that starts
at a multiple of 64K and ends at the end of the device looks just like metadata
for the whole device.
If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.
NeilBrown
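Neil's aliasing can be checked with a little arithmetic from the fdisk output above: a 0.90 superblock sits at the start of the last 64K-aligned 64K block of its device, and because /dev/sdb2 starts on a 64K boundary and runs to the end of the disk, its superblock computes to the very same byte offset as a whole-disk superblock would. A sketch; only the two sector counts are taken from the listing:

```shell
#!/bin/sh
# Where a 0.90 superblock lands: start of the last 64K-aligned 64K chunk.
disk_bytes=$(( 3907029168 * 512 ))   # whole /dev/sdb, from fdisk
part_start=$((   20973568 * 512 ))   # /dev/sdb2 start, a multiple of 64K
part_bytes=$(( disk_bytes - part_start ))
sb_off() { echo $(( ($1 / 65536 - 1) * 65536 )); }
disk_sb=$(sb_off "$disk_bytes")                      # whole-disk superblock offset
part_sb=$(( part_start + $(sb_off "$part_bytes") ))  # sdb2's, relative to the disk
echo "$disk_sb $part_sb"   # both 2000398843904: the very same bytes on disk
```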
>
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
> hbarta@oak:~$
> ==============================
>
> Everything seems OK as far as I can see.
>
> thanks,
> hank
>
> On Thu, Jan 27, 2011 at 5:56 AM, Justin Piszcz wrote:
> > Hi,
> >
> > Show fdisk -l on both disks, are the partitions type 0xfd Linux raid Auto
> > Detect?  If not, you will have that exact problem.
> >
> > Justin.
> >
> > On Wed, 26 Jan 2011, Hank Barta wrote:
> >
> >> I followed the procedure below. Essentially removing one drive from a
> >> RAID1, zeroing the superblock, repartitioning the drive, starting a
> >> new RAID1 in degraded mode, copying over the data and repeating the
> >> process on the second drive.
> >>
> >> Everything seemed to be going well with the new RAID mounted and the
> >> second drive syncing right along. However on a subsequent reboot the
> >> RAID did not seem to come up properly. I was no longer able to mount
> >> it. I also noticed that the resync had restarted. I found I could
> >> temporarily resolve this by stopping the RAID1 and reassembling it and
> >> specifying the partitions. (e.g. mdadm --assemble /dev/md2 /dev/sdb2
> >> /dev/sdc2) At this point, resync starts again and I can mount
> >> /dev/md2. The problem crops up again on the next reboot. Information
> >> revealed by 'mdadm --detail /dev/md2' changes between "from boot" and
> >> following reassembly. It looks like at boot the entire drives
> >> (/dev/sdb, /dev/sdc) are combined into a RAID1 rather than the desired
> >> partitions.
> >>
> >> I do not know where this is coming from. I tried zeroing the
> >> superblock for both /dev/sdb and /dev/sdc and mdadm reported they did
> >> not look like RAID devices.
> >>
> >> Results from 'mdadm --detail /dev/md2' before and after is:
> >>
> >> ==============================
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>         Version : 00.90
> >>   Creation Time : Tue Jan 25 10:39:52 2011
> >>      Raid Level : raid1
> >>      Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>   Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>    Raid Devices : 2
> >>   Total Devices : 2
> >> Preferred Minor : 2
> >>     Persistence : Superblock is persistent
> >>
> >>     Update Time : Wed Jan 26 21:16:04 2011
> >>           State : clean, degraded, recovering
> >>  Active Devices : 1
> >> Working Devices : 2
> >>  Failed Devices : 0
> >>   Spare Devices : 1
> >>
> >>  Rebuild Status : 2% complete
> >>
> >>            UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>          Events : 0.13376
> >>
> >>     Number   Major   Minor   RaidDevice State
> >>        0       8       32        0      active sync   /dev/sdc
> >>        2       8       16        1      spare rebuilding   /dev/sdb
> >> root@oak:~#
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>         Version : 00.90
> >>   Creation Time : Tue Jan 25 10:39:52 2011
> >>      Raid Level : raid1
> >>      Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>   Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>    Raid Devices : 2
> >>   Total Devices : 2
> >> Preferred Minor : 2
> >>     Persistence : Superblock is persistent
> >>
> >>     Update Time : Wed Jan 26 21:25:40 2011
> >>           State : clean, degraded, recovering
> >>  Active Devices : 1
> >> Working Devices : 2
> >>  Failed Devices : 0
> >>   Spare Devices : 1
> >>
> >>  Rebuild Status : 0% complete
> >>
> >>            UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>          Events : 0.13382
> >>
> >>     Number   Major   Minor   RaidDevice State
> >>        0       8       34        0      active sync   /dev/sdc2
> >>        2       8       18        1      spare rebuilding   /dev/sdb2
> >> ==============================
> >>
> >> Contents of /etc/mdadm/mdadm.conf are:
> >> ==============================
> >> hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
> >> # mdadm.conf
> >> #
> >> # Please refer to mdadm.conf(5) for information about this file.
> >> #
> >>
> >> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> >> # alternatively, specify devices to scan, using wildcards if desired.
> >> DEVICE partitions
> >>
> >> # auto-create devices with Debian standard permissions
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> # automatically tag new arrays as belonging to the local system
> >> HOMEHOST <system>
> >>
> >> # instruct the monitoring daemon where to send mail alerts
> >> MAILADDR root
> >>
> >> # definitions of existing MD arrays
> >> #ARRAY /dev/md2 level=raid1 num-devices=2
> >> UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  #spares=2
> >>
> >> # This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
> >> # by mkconf $Id$
> >> hbarta@oak:~$
> >> ==============================
> >> (I commented out the two lines following "definitions of existing MD
> >> arrays" because I thought they might be the culprit.)
> >>
> >> They seem to match:
> >> ==============================
> >> hbarta@oak:~$ sudo mdadm --examine --scan
> >> ARRAY /dev/md0 level=raid1 num-devices=2
> >> UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
> >> ARRAY /dev/md2 level=raid1 num-devices=2
> >> UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  spares=2
> >> hbarta@oak:~$
> >> ==============================
> >> except for the addition of a second RAID which I added after installing
> >> mdadm.
> >>
> >> I have no idea how to fix this (*) and appreciate any help with how
> >> to do so.
> >>
> >> (*) All I can think of is to zero both entire drives and start from
> >> the beginning.
> >>
> >>
> >> On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta wrote:
> >>>
> >>> My previous experiment with USB flash drives has not gone too far. I
> >>> can install Ubuntu Server 10.04 to a single USB flash drive and boot
> >>> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Intel
> >>> D525MW from it. The Intel board will boot install media on USB flash,
> >>> but not a normal install. (This is an aside.) The desire to use an
> >>> alternate boot was to avoid having to fiddle with a two drive RAID1.
> >>> The drives have a single partition consisting of the entire drive
> >>> which is combined into the RAID1.
> >>>
> >>> My desire to get this system up and running is overrunning my desire
> >>> to get the USB flash raid to boot. My strategy is to
> >>>  - remove one drive from the raid,
> >>>  - repartition it to allow for a system installation
> >>>  - create a new RAID1 with that drive and format the new data
> >>> partition. (both would be RAID1 and now both degraded to one drive)
> >>>  - copy data from the existing RAID1 data partition to the new RAID1
> >>> data partition.
> >>>  - stop the old RAID1
> >>>  - repartition the other drive (most recently the old RAID1) to match
> >>> the new RAID1
> >>>  - add the second drive to the new RAID1
> >>>  - watch it rebuild and breathe big sigh of relief.
> >>>
> >>> When convenient I can install Linux to the space I've opened up via
> >>> the above machinations and move this project down the road.
> >>>
> >>> That looks pretty straightforward to me, but I've never let that sort
> >>> of thing prevent me from cobbling things up in the past. (And at this
> >>> moment, I'm making a copy of the RAID1 to an external drive just in
> >>> case.) For anyone interested, I'll share the details of my plan to the
> >>> command level in the case that any of you can spot a problem I have
> >>> overlooked.
> >>>
> >>> A related question is what are the constraints for partitioning the
> >>> drive to achieve best performance? I plan to create a 10G partition on
> >>> each drive for the system. Likewise, suggestions for tuning the RAID
> >>> and filesystem configurations would be appreciated. Usage for the RAID
> >>> is backup for my home LAN as well as storing pictures and more
> >>> recently my video library so there's a mix of large and small files.
> >>> I'm not obsessed with performance as most clients are on WiFi, but I
> >>> might as well grab the low hanging fruit in this regard.
> >>>
> >>> Feel free to comment on any aspects of the details listed below.
> >>>
> >>> many thanks,
> >>> hank
> >>>
> >>> This is what is presently on the drives:
> >>> ==================== ===
==
> >>> root@oak:~# cat /proc/mdstat
> >>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [rai=
d5]
> >>> [raid4] [raid10]
> >>> md1 : active raid1 sdc1[0] sda1[1]
> >>> =A0 =A0 =A01953511936 blocks [2/2] [UU]
> >>>
> >>> unused devices:
> >>> root@oak:~# fdisk -l /dev/sda /dev/sdc
> >>>
> >>> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units =3D cylinders of 16065 * 512 =3D 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>    Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sda1   *           1      243201  1953512001   fd  Linux raid
> >>> autodetect
> >>>
> >>> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>    Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sdc1               1      243201  1953512001   fd  Linux raid
> >>> autodetect
> >>> root@oak:~#
> >>> =========================
> >>>
> >>> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
> >>> The Samsung is one of those with 4K sectors. (I think the Seagate may
> >>> be too.)
> >>>
> >>> Selecting /dev/sdc to migrate first (and following more or less the
> >>> guide on
> >>> http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
> >>>
> >>> Fail the drive:
> >>>>
> >>>> mdadm --manage /dev/md1 --fail /dev/sdc1
> >>>
> >>> Remove from the array:
> >>>>
> >>>> mdadm --manage /dev/md1 --remove /dev/sdc1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sdc1
> >>>
> >>>
> >>> <Repartition the drive with a 10G primary partition for the system and
> >>> a second primary partition using the remainder of the drive: /dev/sdc1
> >>> and /dev/sdc2>
> >>>
> >>> Create new RAID:
> >>>>
> >>>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
> >>>
> >>> Format:
> >>>>
> >>>> mkfs.ext4 /dev/md2
> >>>
> >>> Mount:
> >>>>
> >>>> mount /dev/md2 /mnt/md2
> >>>
> >>> Copy:
> >>>>
> >>>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/
> >>>> /mnt/md2/
> >>>
> >>> Stop the old RAID:
> >>>>
> >>>> mdadm --stop /dev/md1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sda1
> >>>
> >>> Repartition to match the other drive
> >>>
> >>> Add the second drive to the RAID:
> >>>>
> >>>> mdadm --manage /dev/md2 --add /dev/sda2
> >>>
> >>> Watch the resync complete.
> >>>
> >>> Done! (Except for doing something with the new 10G partition, but
> >>> that's another subject.)
> >>>
> >>> Many thanks for reading this far!
> >>>
> >>> best,
> >>> hank
> >>>
> >>> --
> >>> '03 BMW F650CS - hers
> >>> '98 Dakar K12RS - "BABY K" grew up.
> >>> '93 R100R w/ Velorex 700 (MBD starts...)
> >>> '95 Miata - "OUR LC"
> >>> polish visor: apply squashed bugs, rinse, repeat
> >>> Beautiful Sunny Winfield, Illinois
> >>>
> >>
> >>
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> >> the body of a message to majordomo@vger.kernel.org
> >> More majordomo info at http://vger.kernel.org/majordomo-info.html
> >
>
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
on 27.01.2011 22:14:45 by jeromepoulin
Sorry if it is a double post, I forgot to switch to plain text.
On Thu, Jan 27, 2011 at 3:47 PM, NeilBrown wrote:
>
> These start numbers are multiples of 64K.
>
> With 0.90 metadata, md thinks that the metadata for a partition that starts
> at a multiple of 64K and ends at the end of the device looks just like the
> metadata for the whole device.
>
I have a similar problem with GRUB2: shouldn't md check for partitions
first, then whole disks?
I've got the same problem at home with my RAID5 on GPT partitions:
GRUB sees the whole disk as RAID even though I have 3 partitions on each
drive. Because of mdadm.conf it is OK, but I guess type 0xFD on a standard
MBR would fail too.
I had a discussion with the GRUB team about checking partitions first and
then the whole disk, and was referred to this list to ask whether that is
how we should proceed.
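The mdadm.conf workaround mentioned above can be made explicit: restrict
DEVICE scanning to partitions and pin each array by UUID. A sketch of such
a fragment (the UUID is the one mdadm --detail reported for /dev/md2
earlier in this thread; the DEVICE glob is illustrative):

```
# /etc/mdadm/mdadm.conf (fragment)
# Only scan partitions, never whole disks:
DEVICE /dev/sd*[0-9]
# Pin the array by UUID so assembly cannot grab the wrong device:
ARRAY /dev/md2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
```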
>
> If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.
>
> NeilBrown
>
Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
on 28.01.2011 03:50:28 by Hank Barta
On Thu, Jan 27, 2011 at 2:47 PM, NeilBrown wrote:
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/sdb1           2048    20973567    10485760   fd  Linux raid autodetect
>> /dev/sdb2       20973568  3907029167  1943027800   fd  Linux raid autodetect
>
> These start numbers are multiples of 64K.
>
> With 0.90 metadata, md thinks that the metadata for a partition that
> starts at a multiple of 64K and ends at the end of the device looks just
> like the metadata for the whole device.
>
> If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.
Many thanks for the tip.
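The coincidence is easy to check with the numbers from the fdisk listing
above. A 0.90 superblock lives in the last 64K-aligned 64K block of its
device, so for a partition that starts on a 64K boundary and runs to the
end of the disk, the whole-disk and partition superblocks land on the very
same sectors. A back-of-the-envelope sketch in shell (sizes in KiB, taken
from the sdb output; the 64K placement rule is the md 0.90 convention):

```shell
# 0.90 puts its superblock in the last 64K-aligned 64K chunk of a device.
# Offset from the start of that device, in KiB:
sb() { echo $(( ($1 & ~63) - 64 )); }

disk_kib=1953514584       # whole disk: 3907029168 sectors / 2
part_start_kib=10486784   # /dev/sdb2 starts at sector 20973568, a 64K multiple
part_kib=$((disk_kib - part_start_kib))

# Whole-disk superblock, measured from the start of the disk:
echo "whole disk: $(sb $disk_kib) KiB"
# Partition superblock, also measured from the start of the disk:
echo "partition:  $(( part_start_kib + $(sb $part_kib) )) KiB"
# Both print offset 1953514496: the same location on disk.
```

Since the partition start is itself a multiple of 64K, rounding the
partition size down to 64K gives the same result as rounding the disk size
down, so the two candidate superblocks always coincide in this layout.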
============
1, 1.0, 1.1, 1.2
       Use the new version-1 format superblock. This has few
       restrictions. The different sub-versions store the superblock
       at different locations on the device, either at the end (for
       1.0), at the start (for 1.1) or 4K from the start (for 1.2).
============
I went with 1.1 and that seems to work w/out this problem.
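For anyone repeating this, the fix amounts to redoing the create step with
an explicit metadata version. A sketch with the same device names as above,
not a transcript of exactly what I typed (recreating destroys the array
contents, so the data has to be copied back before re-adding the mirror):

```
mdadm --stop /dev/md2
mdadm --zero-superblock /dev/sdc2
# Version-1.1 superblocks sit at the start of the partition, so they
# cannot be mistaken for whole-disk metadata:
mdadm --create /dev/md2 --metadata=1.1 --level=1 -n 2 /dev/sdc2 missing
mkfs.ext4 /dev/md2
# ...restore the data, then bring in the second half:
mdadm --manage /dev/md2 --add /dev/sdb2
```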
thanks,
hank