Bootable Raid-1

Bootable Raid-1

on 01.02.2011 21:20:59 by Naira Kaieski

Hi,

I have read several articles on the internet and researched the mailing
list archives, but I'm still having trouble configuring a bootable RAID
level 1 array.

I configured a server some time ago with Gentoo Linux, kernel
2.6.28-hardened-r9, mdadm v3.0, and 2 IDE hard drives; it is working
correctly. For that installation I followed the article
http://en.gentoo-wiki.com/wiki/Migrate_to_RAID

Now I want to use two SATA drives in RAID level 1.

I now have Gentoo Linux with kernel 2.6.36-hardened-r6 and mdadm v3.1.4,
and the instructions in the article don't work. The kernel is configured
with RAID autodetect support and RAID level 1 support, but dmesg shows
that auto-detection of the array's member disks does not run, so at boot,
when mounting the root device /dev/md2, the system cannot find the device.

When I run mdadm --auto-detect, the arrays are found, but the system
still reports that the RAID device does not contain a valid partition
table.

How can I configure a bootable RAID level 1 array on disks /dev/sda and
/dev/sdb?
I want three partitions:
/dev/md1 - swap - /dev/sda1, /dev/sdb1
/dev/md2 - boot - /dev/sda2, /dev/sdb2
/dev/md3 - / - /dev/sda3, /dev/sdb3

I am using grub as bootloader.

Thanks,
Naira Kaieski

RE: Bootable Raid-1

on 01.02.2011 21:43:13 by Leslie Rhorer

> I have read several articles on the internet and researched the mailing
> list archives, but I'm still having trouble configuring a bootable RAID
> level 1 array.
>
> I configured a server some time ago with Gentoo Linux, kernel
> 2.6.28-hardened-r9, mdadm v3.0, and 2 IDE hard drives; it is working
> correctly. For that installation I followed the article
> http://en.gentoo-wiki.com/wiki/Migrate_to_RAID
>
> Now I want to use two SATA drives in RAID level 1.
>
> I now have Gentoo Linux with kernel 2.6.36-hardened-r6 and mdadm v3.1.4,
> and the instructions in the article don't work. The kernel is configured
> with RAID autodetect support and RAID level 1 support, but dmesg shows
> that auto-detection of the array's member disks does not run, so at boot,
> when mounting the root device /dev/md2, the system cannot find the device.
>
> When I run mdadm --auto-detect, the arrays are found, but the system
> still reports that the RAID device does not contain a valid partition
> table.
>
> How can I configure a bootable RAID level 1 array on disks /dev/sda and
> /dev/sdb?
> I want three partitions:
> /dev/md1 - swap - /dev/sda1, /dev/sdb1
> /dev/md2 - boot - /dev/sda2, /dev/sdb2
> /dev/md3 - / - /dev/sda3, /dev/sdb3
>
> I am using grub as bootloader.

This is very similar to my boot configuration on my two servers. I
suspect your problem is the metadata. What version of superblock are you
using for /dev/md2? GRUB2 does not recognize a version 1.x superblock.
Since the boot images are quite small, and don't require an array of many
disks, there is nothing wrong with the 0.90 superblock, however. If your
/dev/md2 array is not using a 0.90 superblock, try converting it. Here is
my configuration from one of the servers:

ARRAY /dev/md0 level=raid6 num-devices=10 metadata=01.2 name=Backup:0
UUID=431244d6:45d9635a:e88b3de5:92f30255
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90
UUID=4cde286c:0687556a:4d9996dd:dd23e701
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=01.2 name=Backup:2
UUID=d45ff663:9e53774c:6fcf9968:21692025
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=01.2 name=Backup:3
UUID=51d22c47:10f58974:0b27ef04:5609d357

Where md0 is a large (11T) data array, md1 is boot, md2 is root, and
md3 is swap. The partitioning layout of the boot drives is the same as
yours.
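
If you do end up re-creating /dev/md2 with the old superblock, the rough
sequence is something like this (device names taken from your layout; copy
the contents of /boot somewhere safe first, since --create rewrites the
array metadata):

mdadm --stop /dev/md2
mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 \
    /dev/sda2 /dev/sdb2
# restore the /boot contents, then refresh the ARRAY lines:
mdadm --detail --scan >> /etc/mdadm.conf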


Re: Bootable Raid-1

on 01.02.2011 22:35:32 by Naira Kaieski

Hi,

My metadata is 0.90...

My Partitions:
/dev/sda1 1 122 979933+ fd Linux raid
autodetect
/dev/sda2 * 123 134 96390 fd Linux raid
autodetect
/dev/sda3 135 19457 155211997+ fd Linux raid
autodetect

Disk /dev/md1: 1003 MB, 1003356160 bytes
2 heads, 4 sectors/track, 244960 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 98 MB, 98631680 bytes
2 heads, 4 sectors/track, 24080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md3: 158.9 GB, 158936989696 bytes
2 heads, 4 sectors/track, 38802976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc4036374

Disk /dev/md3 doesn't contain a valid partition table


I created the array with the command:
mdadm --create --verbose --assume-clean --metadata=0.90 /dev/md3
--level=1 --raid-devices=2 /dev/sda3 missing

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda3[0]
155211904 blocks [2/1] [U_]

md2 : active raid1 sda2[0]
96320 blocks [2/1] [U_]

md1 : active raid1 sda1[0]
979840 blocks [2/1] [U_]

# mdadm -D --scan
ARRAY /dev/md1 metadata=0.90 UUID=e905069f:43e2eaa4:e090bcab:b1d9c206
ARRAY /dev/md2 metadata=0.90 UUID=d259ec4f:1c63d0b1:e090bcab:b1d9c206
ARRAY /dev/md3 metadata=0.90 UUID=030d5ded:82314c21:e090bcab:b1d9c206

On dmesg:
[ 2349.760155] md: bind<sda1>
[ 2349.762677] md/raid1:md1: active with 1 out of 2 mirrors
[ 2349.762720] md1: detected capacity change from 0 to 1003356160
[ 2349.765307] md1: unknown partition table
[ 2363.059235] md: bind<sda2>
[ 2363.061089] md/raid1:md2: active with 1 out of 2 mirrors
[ 2363.061129] md2: detected capacity change from 0 to 98631680
[ 2363.065812] md2: unknown partition table
[ 2372.302358] md: bind<sda3>
[ 2372.304614] md/raid1:md3: active with 1 out of 2 mirrors
[ 2372.304663] md3: detected capacity change from 0 to 158936989696
[ 2372.308395] md3: unknown partition table

My kernel config:
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_RAID1=y

# mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 0.90.00
UUID : 030d5ded:82314c21:e090bcab:b1d9c206 (local to host dns)
Creation Time : Tue Feb 1 19:03:30 2011
Raid Level : raid1
Used Dev Size : 155211904 (148.02 GiB 158.94 GB)
Array Size : 155211904 (148.02 GiB 158.94 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 3

Update Time : Tue Feb 1 19:18:56 2011
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Checksum : 64a5bec0 - correct
Events : 7


Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3

0 0 8 3 0 active sync /dev/sda3
1 1 0 0 1 faulty removed

I formatted the md* devices, copied the files over with rsync, and altered
grub and fstab to boot from the md devices, but at boot it fails to mount
md3 as the root filesystem.
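
For reference, the relevant fstab and grub entries look roughly like this
(the kernel file name and filesystem types are only examples):

# /etc/fstab
/dev/md2   /boot   ext2    noauto,noatime   1 2
/dev/md3   /       ext3    noatime          0 1
/dev/md1   none    swap    sw               0 0

# /boot/grub/menu.lst -- (hd0,1) is /dev/sda2, the /boot partition
default 0
timeout 5
title Gentoo Linux (RAID1)
root (hd0,1)
kernel /vmlinuz-2.6.36-hardened-r6 root=/dev/md3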

Regards,
Naira Kaieski




RE: Bootable Raid-1

on 01.02.2011 22:53:27 by Leslie Rhorer

> -----Original Message-----
> From: Naira Kaieski [mailto:naira@faccat.br]
> Sent: Tuesday, February 01, 2011 3:36 PM
> To: lrhorer@satx.rr.com
> Cc: linux-raid@vger.kernel.org
> Subject: Re: Bootable Raid-1
>
> Hi,
>
> My metadata is 0.90...
>
> My Partitions:
> /dev/sda1 1 122 979933+ fd Linux raid
> autodetect
> /dev/sda2 * 123 134 96390 fd Linux raid
> autodetect
> /dev/sda3 135 19457 155211997+ fd Linux raid
> autodetect

I recall reading very recently (it might have even been today) that
Linux RAID Autodetect partitions can cause problems. I have mine set to
simply "Linux":

Disk /dev/sda: 500 GB, 500105249280 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 50 401593 83 Linux
/dev/sda2 51 40000 320890342 83 Linux
/dev/sda3 40001 60801 167076000 83 Linux



Re: Bootable Raid-1

on 02.02.2011 16:56:59 by hansBKK

On Wed, Feb 2, 2011 at 4:53 AM, Leslie Rhorer wrote:
> I recall reading very recently (it might have even been today) that Linux RAID Autodetect partitions can cause problems. I have mine set to simply "Linux".

I haven't come across this at all, and have never had any problem
booting even older Linux kernels in RAID1 arrays using grub2.

>> >> I am using grub as bootloader.

You didn't specify grub2, and if you're using grub legacy, that's a
real pain, not so much to get working, but to keep working if you have
to reconfigure later on. I recommend going with grub 2, IMO it's very
much ready for production use now - keeping in mind it's "just a
bootloader", and you can always use Super Grub2 live CD to get things
going in a pinch.

I've found that many OS installation routines mess up the
partitioning/RAID creation, so I'll often set things up ahead of time
with a sysadmin Live CD (see below) so the block devices I want to use
are all available before I start the target OS installation itself.

You're not actually partitioning the mdX block device are you? I've
always set up my component partitions, created the array using those
partitions, and then created the filesystem on the mdX.
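
In other words, the order is roughly this (device names are just
placeholders):

# 1. create identical partitions on each drive
# 2. build the array from the component partitions, not the whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# 3. put the filesystem on the md device itself
mkfs.ext3 /dev/md0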

All of my systems now boot via grub2 into RAID1s. I usually set up my
partitioning so that every drive is exactly the same, the first
primary is used as the boot RAID1, and the RAID5/6 component
partitions used for the main storage LVs are often logical inside
extended for flexibility. This allows me to have multiple
"rescue/recovery/maintenance" OSs available to boot from (System
Rescue, Grml, Ubuntu for the grub2) installed right on the HDD -
lately I've been able to get grub2 to boot directly from on-disk ISO
images rather than having to do any actual install for these. Another
advantage is that I can boot from any component drive and get the same
config/menu, don't have to worry about drive order when swapping out
hardware, or even moving an array to another box.
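
For the on-disk ISO booting, a grub.cfg stanza looks roughly like the
following - the ISO path, the kernel/initrd paths inside the ISO and the
isoloop parameter all depend on the particular image, so treat them as
placeholders:

menuentry "System Rescue (ISO on disk)" {
    set iso=/isos/sysrescue.iso
    loopback loop $iso
    linux (loop)/isolinux/rescue64 isoloop=$iso
    initrd (loop)/isolinux/initram.igz
}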

Since most current "production-level" server OSs still use legacy
grub, I let it go ahead and do whatever it wants to do to setup
booting from its RAID1 array, and when it's all done if necessary I
then restore grub2 to the MBR(s) and adapt the grub1/lilo/whatever
code from the target OS to my grub2 configuration. Lately I've been
dedicating a partition/array to grub2 itself, so I'm not dependent on
a particular OS for maintenance.

I used to chainload into the partition boot sector to load the
grub legacy (or lilo or whatever) menu, but found it better to just
boot the production OS directly from grub2 using regular
linux-kernel/initrd-image statements. The only downside to the latter
is that when the production OS gets a kernel upgrade, I have to
remember to update the grub2 config myself, again adapting the new
lines generated by the upgrade process.
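
The direct-boot entries I adapt are roughly of this form (module names and
the kernel/initrd file names vary by grub2 version and distribution, so
treat these as placeholders too):

menuentry "Production OS (direct boot from RAID1)" {
    insmod raid
    insmod mdraid          # split into mdraid09/mdraid1x on newer grub2
    set root=(md0)         # the /boot array; written (md/0) on newer grub2
    linux /vmlinuz-2.6.32-5-amd64 root=/dev/md1 ro
    initrd /initrd.img-2.6.32-5-amd64
}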

I hope that helps, let me know if you need more detail, learning links etc.

Re: Bootable Raid-1

on 02.02.2011 17:24:13 by Roberto Spadim

grub2 works with a RAID1 boot partition and rootfs over mdadm RAID1;
I tested it today (03:00 AM) on my Ubuntu desktop.




--
Roberto Spadim
Spadim Technology / SPAEmpresarial

RE: Bootable Raid-1

on 02.02.2011 18:36:12 by Leslie Rhorer

> -----Original Message-----
> From: hansbkk@gmail.com [mailto:hansbkk@gmail.com]
> Sent: Wednesday, February 02, 2011 9:57 AM
> To: lrhorer@satx.rr.com
> Cc: naira@faccat.br; linux-raid@vger.kernel.org
> Subject: Re: Bootable Raid-1
>
> On Wed, Feb 2, 2011 at 4:53 AM, Leslie Rhorer wrote:
> > I recall reading very recently (it might have even been today) that
> Linux RAID Autodetect partitions can cause problems. I have mine set to
> simply "Linux".
>
> I haven't come across this at all, and have never had any problem
> booting even older Linux kernels in RAID1 arrays using grub2.
>
> >> >> I am using grub as bootloader.
>
> You didn't specify grub2

I didn't specify anything. I'm not having a problem.

> You're not actually partitioning the mdX block device are you? I've
> always set up my component partitions, created the array using those
> partitions, and then created the filesystem on the mdX.

Judging by his original message, I don't think he is partitioning
the arrays, no.

> All of my systems now boot via grub2 into RAID1s. I usually set up my
> partitioning so that every drive is exactly the same, the first
> primary is used as the boot RAID1, and the RAID5/6 component
> partitions used for the main storage LVs are often logical inside
> extended for flexibility.

I prefer booting from relatively small partitioned drives and
keeping a separate target (an array, if the system is large) for data. My
servers have RAID1 arrays for booting.



RE: Bootable Raid-1

on 02.02.2011 18:44:38 by Leslie Rhorer

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Roberto Spadim
> Sent: Wednesday, February 02, 2011 10:24 AM
> To: hansbkk@gmail.com
> Cc: lrhorer@satx.rr.com; naira@faccat.br; linux-raid@vger.kernel.org
> Subject: Re: Bootable Raid-1
>
> grub2 works with a RAID1 boot partition and rootfs over mdadm RAID1;
> I tested it today (03:00 AM) on my Ubuntu desktop.

There wasn't really any question about that. I've been running
RAID1 boots under GRUB2 for months. Prior to that, it was RAID1 under GRUB
legacy. RAID1 under GRUB legacy takes some finagling. Under GRUB2 it
should be straightforward as long as the partitioning is correct and one
employs a 0.90 superblock for the /boot array.


Re: Bootable Raid-1

on 09.09.2011 07:17:40 by hansBKK

On Fri, Sep 9, 2011 at 3:28 AM, CoolCold wrote:
> how does Grub2 react to a degraded RAID? Does it respect md's view of which disk is bad and which is not? Does it cooperate well with mdadm in general?

Without being able to answer that specific question (my guesses: "very
well", "most likely" and "it better" 8-)

What I do is use a relatively small (and possibly slower) physical
hard drive for boot purposes, which doesn't contain any important
data. Easy enough to keep a RAID1 member on a shelf or offsite for
disaster recovery, image it or whatever.

The arrays that keep important userland data are on completely
separate (in my case RAID6) arrays, and wouldn't have anything to do
with the boot process. In fact they are easily moved from one host to
another as needed - many hosts run in VMs anyway.

Such a principle can be implemented even in simpler/smaller
environments with very little added cost.

I'd advise playing/learning with a couple of scratch drives, perhaps
using a recent Debian or Ubuntu LiveCD. Don't use any GUI tools; just
follow the local man pages and online resources.

The new tool strives to be user-friendly and idiot-proof, and is
therefore inevitably more complex if you actually want to understand
the internal details of what it's doing. But it's not rocket science;
maybe a half-day's worth of research and testing and you'll be a GRUB2
guru. . .

Re: Bootable Raid-1

on 09.09.2011 10:59:01 by CoolCold

On Wed, Feb 2, 2011 at 9:44 PM, Leslie Rhorer wrote:

Resending because the first message was not delivered.


>>
>> grub2 works with a RAID1 boot partition and rootfs over mdadm RAID1;
>> I tested it today (03:00 AM) on my Ubuntu desktop.
>
>        There wasn't really any question about that.  I've been running
> RAID1 boots under GRUB2 for months.  Prior to that, it was RAID1 under GRUB
> legacy.  RAID1 under GRUB legacy takes some finagling.  Under GRUB2 it
> should be straightforward as long as the partitioning is correct and one
> employs a 0.90 superblock for the /boot array.
>
Guys, based on your experience, can you tell us how Grub2 reacts
to a degraded RAID? Does it respect md's view of which disk is bad
and which is not? Does it cooperate well with mdadm in general?

Grub legacy was way easy: just set it up on a disk and read that disk, no
RAID knowledge, no problems. I guess grub2 can be configured in that
manner too, but as it has raid/lvm/whatever support, maybe give it a try...

I've got Debian Squeeze servers with grub2, without any mirroring, so
I'm converting them into RAID1 systems and want to do it right.

--
Best regards,
[COOLCOLD-RIPN]


Re: Bootable Raid-1

on 09.09.2011 21:09:52 by Bill Davidsen

Naira Kaieski wrote:
> Hi,
>
> My metadata is 0.90...
>
> My Partitions:
> /dev/sda1 1 122 979933+ fd Linux raid
> autodetect
> /dev/sda2 * 123 134 96390 fd Linux raid
> autodetect
> /dev/sda3 135 19457 155211997+ fd Linux raid
> autodetect

When you use RAID1 you can do it one of two ways: as a whole-drive array
or as a series of arrays based on partitions. Some versions of GRUB will
only understand 0.90 metadata, so at least the boot partition should use
that. In addition, if you don't use a whole-drive RAID, you really
should be sure you have written a useful boot sector to each drive.
Other than that I can't think of any particular issues you might have.

Depending on your distribution you may have to create a new boot image
using mkinitrd or whatever your distribution has decided is better. If
you change your setup after install this is often needed.
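
Roughly, depending on the distribution, that means something like:

# Gentoo (build an initramfs with mdadm support)
genkernel --mdadm initramfs
# Debian/Ubuntu
update-initramfs -u -k all
# older Red Hat style
mkinitrd /boot/initrd-$(uname -r).img $(uname -r)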

--
Bill Davidsen
We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination. -me, 2010




RE: Bootable Raid-1

on 10.09.2011 19:56:00 by Leslie Rhorer

> -----Original Message-----
> From: CoolCold [mailto:coolthecold@gmail.com]
> Sent: Thursday, September 08, 2011 3:29 PM
> To: lrhorer@satx.rr.com
> Cc: Roberto Spadim; hansbkk@gmail.com; naira@faccat.br; linux-
> raid@vger.kernel.org
> Subject: Re: Bootable Raid-1
>
> On Wed, Feb 2, 2011 at 9:44 PM, Leslie Rhorer wrote:
>
> >>
> >> grub2 works with a RAID1 boot partition and rootfs over mdadm RAID1;
> >> I tested it today (03:00 AM) on my Ubuntu desktop.
> >
> >        There wasn't really any question about that.  I've been running
> > RAID1 boots under GRUB2 for months.  Prior to that, it was RAID1 under GRUB
> > legacy.  RAID1 under GRUB legacy takes some finagling.  Under GRUB2 it
> > should be straightforward as long as the partitioning is correct and one
> > employs a 0.90 superblock for the /boot array.
> >
> Guys, based on your experience, can you tell us how Grub2 reacts
> to a degraded RAID? Does it respect md's view of which disk is bad
> and which is not? Does it cooperate well with mdadm in general?

I can't say from experience. Presumably it will.

> Grub legacy was way easy: just set it up on a disk and read that disk, no
> RAID knowledge, no problems. I guess grub2 can be configured in that
> manner too, but as it has raid/lvm/whatever support, maybe give it a try...
>
> I've got Debian Squeeze servers with grub2, without any mirroring, so
> I'm converting them into RAID1 systems and want to do it right.

That conversion shouldn't be difficult. Going from legacy GRUB
RAID1 to GRUB2 RAID1 was a bit of a challenge.
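
The usual approach is a degraded-mirror migration, roughly along these
lines (sdb here is the empty disk, only the first partition is shown, and
the /boot array at least should use a 0.90 superblock):

# copy sda's partition table to sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb
# build a degraded mirror on sdb and put a filesystem on it
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
    missing /dev/sdb1
mkfs.ext3 /dev/md0
# copy the system over, point fstab and grub at the md devices,
# run grub-install on both disks, reboot onto the mirror, then:
mdadm --add /dev/md0 /dev/sda1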


Re: Bootable Raid-1

on 15.09.2011 21:33:45 by Robert L Mathews

>> RAID1 under GRUB legacy takes some finagling. Under GRUB2 it
>> should be straightforward as long as the partitioning is correct and one
>> employs a 0.90 superblock for the /boot array.

I'm using three-disk RAID-1 arrays ("it's not paranoia when they really
are out to get you") with Debian squeeze and grub-pc.

My strategy:

* Use version 1.0 metadata (0.9 is also okay; 1.1 and 1.2 aren't
because the partition no longer looks identical to a non-RAID
partition as far as grub is concerned).
* Have a small (512 MB) separate ext3 /boot partition.
* Wait until the new /boot partition fully syncs before messing
with the MBR on the additional disks.
* After it syncs, add the grub record to the other two disks with
"grub-install /dev/sdb" and "grub-install /dev/sdc".

This definitely works correctly in that I can remove any one or two of
the disks and it boots properly with the RAID degraded, just as it should.

What I haven't been able to fully test is how it copes with one of the
disks being present but non-working. For example, a potential problem is
that the MBR on /dev/sda is readable, but the /dev/sda1 "/boot"
partition is not readable due to sector read errors. I assume that in
that case, the boot will fail. Our disaster plan to deal with that
situation is to physically pull /dev/sda out of the machine.

As an aside, using "grub-install" on the additional disks is much
simpler than what I used to do under grub-legacy, which was something like:

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)

The reason I did that for years instead of using "grub-install" was due
to a belief that the "grub-install" method somehow made grub always rely
on the original, physical /dev/sda drive even if it booted from the MBR
of a different one. I'm not sure if this was ever true, but if it was,
it doesn't seem to be true any more, at least on the hardware I'm using.
Perhaps the fact that grub-pc now recognizes the /boot partition by file
system UUID (which is the same across all members of the array) helps it
find any of them that are working.


> Guys, based on your experience, can you tell us how Grub2 reacts
> to a degraded RAID? Does it respect md's view of which disk is bad
> and which is not? Does it cooperate well with mdadm in general?

Keep in mind that grub is booting the system before the RAID array
starts. It wouldn't know anything about arrays (degraded or otherwise)
at boot time. It simply picks a single disk partition to start using the
files from. This is why booting from RAID only works with RAID 1, where
the RAIDed partitions can appear identical to non-RAID partitions.

For that reason, you want the metadata to be either 0.90 or 1.0 so that
the partition looks like a normal, non-RAID ext3 /boot partition to grub
(the beginning of the partition is the same). Your goal is to make sure
that your system would be bootable even if you didn't assemble the RAID
array. That's possible because the file system UUIDs listed in the
/boot/grub/grub.cfg file will match the file system UUIDs shown on the
raw, non-mdadm output of "dumpe2fs /dev/sda1", etc.

For example, on one of my servers, grub.cfg contains:

search --no-floppy --fs-uuid --set 0fa13d65-7e83-4e87-a348-52f77a51b3d5

And that UUID is the same one I see from:

dumpe2fs /dev/sda1 | grep UUID
Filesystem UUID: 0fa13d65-7e83-4e87-a348-52f77a51b3d5

dumpe2fs /dev/sdb1 | grep UUID
Filesystem UUID: 0fa13d65-7e83-4e87-a348-52f77a51b3d5

dumpe2fs /dev/sdc1 | grep UUID
Filesystem UUID: 0fa13d65-7e83-4e87-a348-52f77a51b3d5

Of course, mdadm was the magical thing that originally made an identical
file system exist on all three of those partitions, but the point is
that grub doesn't know or care about that: it simply searches for the
matching UUID of the file system, finds it on a single one of the
physical disks, and uses it without any RAID/mdadm stuff being involved
at all. It wouldn't know if it had somehow found a degraded copy --
hence my philosophy of "if /dev/sda is messed up, it's better to just
pull it out and rely on grub RAID 1 booting from a different physical disk".

--
Robert L Mathews, Tiger Technologies, http://www.tigertech.net/