How to recreate a dmraid RAID array with mdadm (was: no subject)

on 14.11.2010 07:50:42 by Mike Viau

> On Sun, 14 Nov 2010 06:36:00 +1100 wrote:
>> cat /proc/mdstat (showing what mdadm shows/discovers)
>>
>> Personalities :
>> md127 : inactive sda[1](S) sdb[0](S)
>> 4514 blocks super external:imsm
>>
>> unused devices:
>
> As imsm can have several arrays described by one set of metadata, mdadm
> creates an inactive array just like this which just holds the set of
> devices, and then should create other arrays made from different regions
> of those devices.
> It looks like mdadm hasn't done that for you. You can ask it to with:
>
> mdadm -I /dev/md/imsm0
>
> That should create the real raid1 array in /dev/md/something.
>
> NeilBrown
>
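Neil's point, that the container holds the devices while member arrays must be started separately, can be checked mechanically from /proc/mdstat. A minimal sketch using the sample output quoted above; the helper function is hypothetical, not part of mdadm:

```shell
# Sample /proc/mdstat text from this thread: an inactive imsm container
# with no started member arrays.
mdstat_sample='Personalities :
md127 : inactive sda[1](S) sdb[0](S)
      4514 blocks super external:imsm

unused devices:'

# Hypothetical helper: true when an imsm container is present but no
# member array (an "active" mdNNN line) has been started yet.
container_without_members() {
    printf '%s\n' "$1" | grep -q 'super external:imsm' || return 1
    members=$(printf '%s\n' "$1" | grep -c '^md[0-9]* : active')
    [ "$members" -eq 0 ]
}

if container_without_members "$mdstat_sample"; then
    echo "container only; member arrays need: mdadm -I /dev/md/imsm0"
fi
```

Seeing only the inactive container in this state is exactly the situation where `mdadm -I /dev/md/imsm0` is needed.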

Thanks for this information, I feel like I am getting closer to getting this working properly. After running the command above (mdadm -I /dev/md/imsm0), the real raid1 array did appear as /dev/md/*

ls -al /dev/md
total 0
drwxr-xr-x  2 root root   80 Nov 14 00:53 .
drwxr-xr-x 21 root root 3480 Nov 14 00:53 ..
lrwxrwxrwx  1 root root    8 Nov 14 00:50 imsm0 -> ../md127
lrwxrwxrwx  1 root root    8 Nov 14 00:53 OneTB-RAID1-PV -> ../md126

---------------

And the kernel messages:

[ 4652.315650] md: bind
[ 4652.315866] md: bind
[ 4652.341862] raid1: md126 is not clean -- starting background reconstruction
[ 4652.341958] raid1: raid set md126 active with 2 out of 2 mirrors
[ 4652.342025] md126: detected capacity change from 0 to 1000202043392
[ 4652.342400]  md126: p1
[ 4652.528448] md: md126 switched to read-write mode.
[ 4652.529387] md: resync of RAID array md126
[ 4652.529424] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 4652.529464] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 4652.529525] md: using 128k window, over a total of 976759940 blocks.
---------------

fdisk -ul /dev/md/OneTB-RAID1-PV

Disk /dev/md/OneTB-RAID1-PV: 1000.2 GB, 1000202043392 bytes
255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

                 Device Boot      Start         End      Blocks   Id  System
/dev/md/OneTB-RAID1-PV1               63  1953503999   976751968+  8e  Linux LVM

---------------

pvscan

  PV /dev/sdc7      VG XENSTORE-VG      lvm2 [46.56 GiB / 0    free]
  PV /dev/md126p1   VG OneTB-RAID1-VG   lvm2 [931.50 GiB / 0    free]
  Total: 2 [978.06 GiB] / in use: 2 [978.06 GiB] / in no VG: 0 [0   ]

---------------

pvdisplay

  --- Physical volume ---
  PV Name               /dev/md126p1
  VG Name               OneTB-RAID1-VG
  PV Size               931.50 GiB / not usable 3.34 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238464
  Free PE               0
  Allocated PE          238464
  PV UUID               hvxXR3-tV9B-CMBW-nZn2-N2zH-N1l6-sC9m9i

----------------

vgscan

  Reading all physical volumes.  This may take a while...
  Found volume group "XENSTORE-VG" using metadata type lvm2
  Found volume group "OneTB-RAID1-VG" using metadata type lvm2

-------------

vgdisplay

--- Volume group ---
  VG Name               OneTB-RAID1-VG
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               4.00 MiB
  Total PE              238464
  Alloc PE / Size       238464 / 931.50 GiB
  Free  PE / Size       0 / 0
  VG UUID               nCBsU2-VpgR-EcZj-lA15-oJGL-rYOw-YxXiC8

--------------------

vgchange -a y OneTB-RAID1-VG

  1 logical volume(s) in volume group "OneTB-RAID1-VG" now active

--------------------

lvdisplay

--- Logical volume ---
  LV Name                /dev/OneTB-RAID1-VG/OneTB-RAID1-LV
  VG Name                OneTB-RAID1-VG
  LV UUID                R3TYWb-PJo1-Xzbm-vJwu-YpgP-ohZW-Vf1kHJ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                931.50 GiB
  Current LE             238464
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

------------------------

fdisk -ul /dev/OneTB-RAID1-VG/OneTB-RAID1-LV

Disk /dev/OneTB-RAID1-VG/OneTB-RAID1-LV: 1000.2 GB, 1000190509056 bytes
255 heads, 63 sectors/track, 121599 cylinders, total 1953497088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbda8e40b

                         =
     Device Boot    =A0 Start         E=
nd    =A0 Blocks   Id=A0 System
/dev/OneTB-RAID1-VG/OneTB-RAID1-LV1        = A0  =A0=
=A0 63=A0 1953487934   976743936   83=A0 Linux

-----------------------

mount -t ext4 /dev/OneTB-RAID1-VG/OneTB-RAID1-LV /mnt
mount
/dev/sdc5 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sdc1 on /boot type ext2 (rw)
xenfs on /proc/xen type xenfs (rw)
/dev/mapper/OneTB--RAID1--VG-OneTB--RAID1--LV on /mnt type ext4 (rw)

-----------------

ls /mnt (and files are visible)

-------------------

Also, when the array is running after manually running the command above, the error when updating the init ramdisk for kernels is gone.

update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64


-----------------

But the issue that remains now is that mdadm is not starting the real raid1 array on reboots; the init ramdisk errors come right back, unfortunately (verbosity enabled):

1) update-initramfs -u -k all

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.


2) dpkg-reconfigure --priority=low mdadm [leaving all defaults]

Stopping MD monitoring service: mdadm --monitor.
Generating array device nodes... done.
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
Starting MD monitoring service: mdadm --monitor.
Generating udev events for MD arrays...done.


3) update-initramfs -u -k all [again]

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
-----------------

ls -al /dev/md/
total 0
drwxr-xr-x  2 root root   60 Nov 14 01:22 .
drwxr-xr-x 21 root root 3440 Nov 14 01:23 ..
lrwxrwxrwx  1 root root    8 Nov 14 01:23 imsm0 -> ../md127

-----------------


How does one fix the problem of the array not starting at boot?

The files/configuration I have now:

find /etc -type f | grep mdadm
/etc/logcheck/ignore.d.server/mdadm
/etc/logcheck/violations.d/mdadm
/etc/default/mdadm
/etc/init.d/mdadm
/etc/init.d/mdadm-raid
/etc/cron.daily/mdadm
/etc/cron.d/mdadm
/etc/mdadm/mdadm.conf

find /etc/rc?.d/ | grep mdadm
/etc/rc0.d/K01mdadm
/etc/rc0.d/K10mdadm-raid
/etc/rc1.d/K01mdadm
/etc/rc2.d/S02mdadm
/etc/rc3.d/S02mdadm
/etc/rc4.d/S02mdadm
/etc/rc5.d/S02mdadm
/etc/rc6.d/K01mdadm
/etc/rc6.d/K10mdadm-raid
/etc/rcS.d/S03mdadm-raid


cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks

# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
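As a sanity check on a config like this, the member ARRAY line must reference the container line's UUID. A small sketch over the two ARRAY lines above; this is pure text processing and does not touch mdadm itself:

```shell
# The two ARRAY lines from the mdadm.conf in this thread
conf='ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a'

# UUID on the container line vs. the container= reference on the member line
container_uuid=$(printf '%s\n' "$conf" | sed -n 's/^ARRAY metadata=imsm UUID=\([0-9a-f:]*\).*/\1/p')
member_ref=$(printf '%s\n' "$conf" | sed -n 's/.*container=\([0-9a-f:]*\).*/\1/p')
if [ "$container_uuid" = "$member_ref" ]; then
    echo "member line references the container UUID: OK"
fi
```

Here the two values do match, which is why Neil later says this file "should" assemble the RAID1.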

--------------------


Again, how does one fix the problem of the array not starting at boot?



Thanks.

-M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 15.11.2010 06:21:22 by NeilBrown

On Sun, 14 Nov 2010 01:50:42 -0500
Mike Viau wrote:

> Again, how does one fix the problem of the array not starting at boot?
>

To be able to answer that one would need to know exactly what is in the
initramfs. And unfortunately all distros are different and I'm not
particularly familiar with Ubuntu.

Maybe if you
mkdir /tmp/initrd
cd /tmp/initrd
zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv

and then have a look around and particularly report etc/mdadm/mdadm.conf
and anything else that might be interesting.

If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
*should* work.


NeilBrown

RE: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 02:02:17 by Mike Viau

> On Mon, 15 Nov 2010 16:21:22 +1100 wrote:
> > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> >
> > How does one fix the problem of the array not starting at boot?
> >
>
> To be able to answer that one would need to know exactly what is in the
> initramfs. And unfortunately all distros are different and I'm not
> particularly familiar with Ubuntu.
>
> Maybe if you
> mkdir /tmp/initrd
> cd /tmp/initrd
> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>
> and then have a look around and particularly report etc/mdadm/mdadm.conf
> and anything else that might be interesting.
>
> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> *should* work.
>

Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.

The initramfs's copy contains:

DEVICE partitions
HOMEHOST
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but

CREATE owner=root group=disk mode=0660 auto=yes

and

MAILADDR root

were not carried over on the update-initramfs command.


Given your clearly better understanding of all this: does the CREATE stanza need to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?


My diff findings between the local copy of mdadm.conf and the initramfs's copy pasted at:
http://debian.pastebin.com/5VNnd9g1


Thanks for your help.


-M

Re: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 02:26:47 by NeilBrown

On Tue, 16 Nov 2010 20:02:17 -0500
Mike Viau wrote:

>
> > On Mon, 15 Nov 2010 16:21:22 +1100 wrote:
> > > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> > >
> > > How does one fix the problem of the array not starting at boot?
> > >
> >
> > To be able to answer that one would need to know exactly what is in the
> > initramfs. And unfortunately all distros are different and I'm not
> > particularly familiar with Ubuntu.
> >
> > Maybe if you
> > mkdir /tmp/initrd
> > cd /tmp/initrd
> > zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >
> > and then have a look around and particularly report etc/mdadm/mdadm.conf
> > and anything else that might be interesting.
> >
> > If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> > *should* work.
> >
>
> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
>
> The initramfs's copy contains:
>
> DEVICE partitions
> HOMEHOST
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>
> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
>
> CREATE owner=root group=disk mode=0660 auto=yes
>
> and
>
> MAILADDR root
>
> were not carried over on the update-initramfs command.
>
>
> Given your clearly better understanding of all this: does the CREATE stanza need to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?

No, those differences couldn't explain it not working.

I would really expect that mdadm.conf file to successfully assemble the
RAID1.

As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
by:

mdadm -Ss

to stop all md arrays, then

mdadm -Asvv

to auto-start everything in mdadm.conf and be verbose about what is happening.

If that fails to start the raid1, then the messages it produces will be
helpful in understanding why.
If it succeeds, then there must be something wrong with the initrd...
Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
mdadm -As
(or equivalently: mdadm --assemble --scan)
but does something else. To determine what, you would need to search for
'mdadm' in all the scripts in the initrd and see what turns up.

NeilBrown




>
>
> My diff findings between the local copy of mdadm.conf and the initramfs's copy pasted at:
> http://debian.pastebin.com/5VNnd9g1
>
>
> Thanks for your help.
>
>
> -M
>

Re: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 02:39:39 by John Robinson

On 17/11/2010 01:26, Neil Brown wrote:
> On Tue, 16 Nov 2010 20:02:17 -0500
> Mike Viau wrote:
[...]
>> DEVICE partitions
>> HOMEHOST
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
[...]
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.

The only thing that strikes me is that "DEVICE partitions" line - surely
imsm containers don't live in partitions?

Cheers,

John.


Re: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 02:53:37 by NeilBrown

On Wed, 17 Nov 2010 01:39:39 +0000
John Robinson wrote:

> On 17/11/2010 01:26, Neil Brown wrote:
> > On Tue, 16 Nov 2010 20:02:17 -0500
> > Mike Viau wrote:
> [...]
> >> DEVICE partitions
> >> HOMEHOST
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> [...]
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
>
> The only thing that strikes me is that "DEVICE partitions" line - surely
> imsm containers don't live in partitions?

No, they don't.

But "DEVICE partitions" actually means "any devices listed
in /proc/partitions", and that includes whole devices.
:-(

NeilBrown


>
> Cheers,
>
> John.


RE: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 03:27:19 by Mike Viau

> On Wed, 17 Nov 2010 12:53:37 +1100 wrote:
> On Wed, 17 Nov 2010 01:39:39 +0000
> John Robinson wrote:
>
> > On 17/11/2010 01:26, Neil Brown wrote:
> > > On Tue, 16 Nov 2010 20:02:17 -0500
> > > Mike Viau wrote:
> > [...]
> > >> DEVICE partitions
> > >> HOMEHOST
> > >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> > >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> > [...]
> > > I would really expect that mdadm.conf file to successfully assemble the
> > > RAID1.
> >
> > The only thing that strikes me is that "DEVICE partitions" line - surely
> > imsm containers don't live in partitions?
>
> No, they don't.
>
> But "DEVICE partitions" actually means "any devices listed
> in /proc/partitions", and that includes whole devices.
> :-(
>

I noticed both /dev/sda and /dev/sdb (the drives which make up the raid1 array) do not appear to be recognized as having a valid container when one is required. The output of mdadm -Asvv shows:

mdadm -Asvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
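The telling lines in that transcript are the "Device or resource busy" failures on /dev/sda and /dev/sdb: something (here, the still-assembled container) holds the disks, so the assemble scan cannot read them. A small sketch that pulls those devices out of the text, trimmed to the relevant sample lines:

```shell
# Sample lines from the mdadm -Asvv transcript above
asvv_sample='mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: no recogniseable superblock on /dev/loop0'

# Extract the device names that could not even be opened
busy=$(printf '%s\n' "$asvv_sample" | sed -n 's/^mdadm: cannot open device \([^:]*\):.*/\1/p')
echo "busy devices:" $busy
```

When the RAID member disks themselves show up in this busy list, the subsequent "has wrong uuid" lines are a side effect of not being able to read them, not a real metadata mismatch.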


and cat /proc/partitions shows:

major minor  #blocks  name

   8        0  976762584 sda
   8       16  976762584 sdb
   8       32   78125000 sdc
   8       33     487424 sdc1
   8       34          1 sdc2
   8       37   20995072 sdc5
   8       38    7811072 sdc6
   8       39   48826368 sdc7
   7        0    4388218 loop0
 254        0   10485760 dm-0
 254        1   10485760 dm-1
 254        2   10485760 dm-2
 254        3   17367040 dm-3
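Per Neil's note, "DEVICE partitions" covers every name in that listing, including the whole disks sda and sdb that carry no partition table. A sketch that splits a trimmed sample of the listing into whole sd-devices and their partitions; the trailing-digit rule is specific to sdX naming, and loop/dm devices would need different handling:

```shell
# Trimmed sample of the /proc/partitions listing above
partitions_sample='major minor  #blocks  name

   8        0  976762584 sda
   8       16  976762584 sdb
   8       32   78125000 sdc
   8       33     487424 sdc1
   8       37   20995072 sdc5'

# For sdX names, a trailing digit marks a partition of the parent disk.
whole=$(printf '%s\n' "$partitions_sample" | awk 'NR>2 && $4 ~ /^sd[a-z]+$/ {print $4}')
parts=$(printf '%s\n' "$partitions_sample" | awk 'NR>2 && $4 ~ /^sd[a-z]+[0-9]+$/ {print $4}')
echo "whole disks:" $whole
echo "partitions:" $parts
```

This is why the unpartitioned sda and sdb are still candidates for the imsm container scan under "DEVICE partitions".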


RE: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 03:44:10 by Mike Viau

> On Wed, 17 Nov 2010 12:26:47 +1100 wrote:
>>
>>> On Mon, 15 Nov 2010 16:21:22 +1100 wrote:
>>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
>>>>
>>>> How does one fix the problem of the array not starting at boot?
>>>>
>>>
>>> To be able to answer that one would need to know exactly what is in the
>>> initramfs. And unfortunately all distros are different and I'm not
>>> particularly familiar with Ubuntu.
>>>
>>> Maybe if you
>>> mkdir /tmp/initrd
>>> cd /tmp/initrd
>>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>>>
>>> and then have a look around and particularly report etc/mdadm/mdadm.conf
>>> and anything else that might be interesting.
>>>
>>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
>>> *should* work.
>>>
>>
>> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
>>
>> The initramfs's copy contains:
>>
>> DEVICE partitions
>> HOMEHOST
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>>
>> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
>>
>> CREATE owner=root group=disk mode=0660 auto=yes
>>
>> and
>>
>> MAILADDR root
>>
>> were not carried over on the update-initramfs command.
>>
>>
>> Given your clearly better understanding of all this: does the CREATE stanza need to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
>
> No, those differences couldn't explain it not working.
>
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.
>
> As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> by:
>
> mdadm -Ss
>
> to stop all md arrays, then
>
> mdadm -Asvv
>
> to auto-start everything in mdadm.conf and be verbose about what is happening.
>
> If that fails to start the raid1, then the messages it produces will be
> helpful in understanding why.
> If it succeeds, then there must be something wrong with the initrd...
> Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> mdadm -As
> (or equivalently: mdadm --assemble --scan)
> but does something else. To determine what, you would need to search for
> 'mdadm' in all the scripts in the initrd and see what turns up.
>

Using mdadm -Ss stops the array:

mdadm: stopped /dev/md127


Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.


Then executing mdadm -Asvv shows:

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


So I am not really sure if that succeeded or not, but it doesn't look like it has, because there is no /dev/md/OneTB-RAID1-PV:

ls -al /dev/md/

total 0
drwxr-xr-x  2 root root   60 Nov 16 21:08 .
drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127


But after mdadm -Ivv /dev/md/imsm0:


mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
mdadm: match found for member 0
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices


Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:

total 0
drwxr-xr-x  2 root root   80 Nov 16 21:40 .
drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
lrwxrwxrwx  1 root root    8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126



Regardless, some initramfs findings:

pwd

/tmp/initrd

Then:

find . -type f | grep md | grep -v amd

/lib/udev/rules.d/64-md-raid.rules
/scripts/local-top/mdadm
/etc/mdadm/mdadm.conf
/conf/conf.d/md
/sbin/mdadm




/lib/udev/rules.d/64-md-raid.rules
http://paste.debian.net/100016/

/scripts/local-top/mdadm
http://paste.debian.net/100017/

/etc/mdadm/mdadm.conf
http://paste.debian.net/100018/

/conf/conf.d/md
http://paste.debian.net/100019/

/sbin/mdadm
{of course is a binary}
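Following Neil's suggestion to see how the initrd actually invokes mdadm, one can grep the unpacked scripts. The sketch below fabricates a stand-in scripts/local-top/mdadm so the search itself is runnable; the stand-in's contents are a plausible guess, not the real Debian hook:

```shell
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p scripts/local-top

# Hypothetical stand-in for the real initramfs hook script
cat > scripts/local-top/mdadm <<'EOF'
#!/bin/sh
# assemble arrays listed in the initrd copy of mdadm.conf
/sbin/mdadm --assemble --scan --run --auto=yes
EOF

# The actual check: which scripts mention mdadm, and with what arguments
grep -rn 'mdadm --assemble' scripts/
```

Run against the real unpack directory in /tmp/initrd, this shows whether the boot path runs a plain `--assemble --scan` (which should start the member array) or something else entirely.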


-M


Re: How to recreate a dmraid RAID array with mdadm (was: no subject)

on 17.11.2010 04:15:14 by NeilBrown

On Tue, 16 Nov 2010 21:44:10 -0500
Mike Viau wrote:

>=20
> > On Wed, 17 Nov 2010 12:26:47 +1100 wrote:
> >>
> >>> On Mon, 15 Nov 2010 16:21:22 +1100 wrote:
> >>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> >>>>
> >>>> How does one fix the problem of not having the array not startin=
g at boot?
> >>>>
> >>>
> >>> To be able to answer that one would need to know exactly what is =
in the
> >>> initramfs. And unfortunately all distros are different and I'm no=
t
> >>> particularly familiar with Ubuntu.
> >>>
> >>> Maybe if you
> >>> mkdir /tmp/initrd
> >>> cd /tmp/initrd
> >>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >>>
> >>> and then have a look around and particularly report etc/mdadm/mda=
dm.conf
> >>> and anything else that might be interesting.
> >>>
> >>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, the=
n it
> >>> *should* work.
> >>>
> >>
> >> Thanks again Neil. I got a chance to examine my systems initramfs =
to discover two differences in the local copy of mdadm.conf and the ini=
tramfs's copy.
> >>
> >> The initramfs's copy contains:
> >>
> >> DEVICE partitions
> >> HOMEHOST
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> >>
> >> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
> >>
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> and
> >>
> >> MAILADDR root
> >>
> >> were not carried over on the update-initramfs command.
> >>
> >>
> >> To your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
> >
> > No, those differences couldn't explain it not working.
> >
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
> >
> > As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> > by:
> >
> > mdadm -Ss
> >
> > to stop all md arrays, then
> >
> > mdadm -Asvv
> >
> > to auto-start everything in mdadm.conf and be verbose about what is happening.
> >
> > If that fails to start the raid1, then the messages it produces will be
> > helpful in understanding why.
> > If it succeeds, then there must be something wrong with the initrd.
> > Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> > mdadm -As
> > (or equivalently: mdadm --assemble --scan)
> > but does something else. To determine what you would need to search for
> > 'mdadm' in all the scripts in the initrd and see what turns up.
> >
>
> Using mdadm -Ss stops the array:
>
> mdadm: stopped /dev/md127
>
>
> Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.
>
>
> Then executing mdadm -Asvv shows:
>
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-2
> mdadm: /dev/dm-2 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-1
> mdadm: /dev/dm-1 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-0
> mdadm: /dev/dm-0 has wrong uuid.
> mdadm: no RAID superblock on /dev/loop0
> mdadm: /dev/loop0 has wrong uuid.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm: /dev/sdc7 has wrong uuid.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm: /dev/sdc6 has wrong uuid.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm: /dev/sdc5 has wrong uuid.
> mdadm: no RAID superblock on /dev/sdc2
> mdadm: /dev/sdc2 has wrong uuid.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 has wrong uuid.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm: /dev/sdc has wrong uuid.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb has wrong uuid.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda has wrong uuid.

This looks wrong. mdadm should be looking for the container as listed in
mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.

Can you:

mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf

so I can compare the uuids?

Thanks,

NeilBrown
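The comparison Neil asks for can be scripted. A hedged sketch using stand-in text for the `mdadm -E` output and the mdadm.conf ARRAY line (a real run would capture the command output instead of the literals below):

```shell
# Stand-in for the first lines of `mdadm -E /dev/sda` output.
examine_output='/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
           UUID : 084b969a:0808f5b8:6c784fb7:62659383'

# Stand-in for the container ARRAY line from /etc/mdadm/mdadm.conf.
conf_line='ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383'

# Pull the container UUID out of each source and compare.
examine_uuid=$(printf '%s\n' "$examine_output" | sed -n 's/^ *UUID : //p' | head -n 1)
conf_uuid=$(printf '%s\n' "$conf_line" | sed -n 's/^ARRAY metadata=imsm UUID=//p')

if [ "$examine_uuid" = "$conf_uuid" ]; then
    echo "container UUID matches: $conf_uuid"
else
    echo "MISMATCH: -E reports '$examine_uuid', mdadm.conf has '$conf_uuid'"
fi
```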




> mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> mdadm: no recogniseable superblock on /dev/dm-3
> mdadm/dev/dm-3 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-2
> mdadm/dev/dm-2 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-1
> mdadm/dev/dm-1 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-0
> mdadm/dev/dm-0 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/loop0
> mdadm/dev/loop0 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm/dev/sdc7 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm/dev/sdc6 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm/dev/sdc5 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/sdc2
> mdadm/dev/sdc2 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm/dev/sdc1 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm/dev/sdc is not a container, and one is required.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm/dev/sdb is not a container, and one is required.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm/dev/sda is not a container, and one is required.
>
>
> So I am not really sure if that succeeded or not, but it doesn't look like it has because there is no /dev/md/OneTB-RAID1-PV:
>
> ls -al /dev/md/
>
> total 0
> drwxr-xr-x  2 root root   60 Nov 16 21:08 .
> drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
> lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
>
>
> But after mdadm -Ivv /dev/md/imsm0:
>
>
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
>
>
> Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:
>
> total 0
> drwxr-xr-x  2 root root   80 Nov 16 21:40 .
> drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
> lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
> lrwxrwxrwx  1 root root    8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126
>
>
>
> Regardless, some initram disk findings:
>
> pwd
>
> /tmp/initrd
>
> Then:
>
> find . -type f | grep md | grep -v amd
>
> ./lib/udev/rules.d/64-md-raid.rules
> ./scripts/local-top/mdadm
> ./etc/mdadm/mdadm.conf
> ./conf/conf.d/md
> ./sbin/mdadm
>
>
>
>
> ./lib/udev/rules.d/64-md-raid.rules
> http://paste.debian.net/100016/
>=20
> ./scripts/local-top/mdadm
> http://paste.debian.net/100017/
>=20
> ./etc/mdadm/mdadm.conf
> http://paste.debian.net/100018/
>=20
> ./conf/conf.d/md
> http://paste.debian.net/100019/
>=20
> ./sbin/mdadm
> {of course is a binary}
>
>
> -M
>

RE: How to recreate a dmraid RAID array with mdadm

am 17.11.2010 23:36:23 von Mike Viau

> On Wed, 17 Nov 2010 14:15:14 +1100 wrote:
>
> This looks wrong. mdadm should be looking for the container as listed in
> mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
>
> Can you:
>
> mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
>
> so I can compare the uuids?
>

Sure.

# definitions of existing MD arrays ( So you don't have to scroll down :P )


ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383

ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

mdadm -E /dev/sda /dev/sdb

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 0
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 1
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

----------------------------------
cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1


-M
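The two ARRAY stanzas above can be cross-checked mechanically: the member array's container= UUID must match the UUID declared by the metadata=imsm container line. A small sketch over the conf text shown (the parsing below is an illustration, not mdadm's own validation):

```shell
# The two ARRAY lines from the mdadm.conf above, as a stand-in string.
conf='ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a'

# UUID declared by the imsm container line.
container=$(printf '%s\n' "$conf" | sed -n 's/.*metadata=imsm UUID=\([0-9a-f:]*\).*/\1/p')
# UUID referenced by the member array's container= field.
referenced=$(printf '%s\n' "$conf" | sed -n 's/.*container=\([0-9a-f:]*\).*/\1/p')

if [ "$container" = "$referenced" ]; then
    echo "OK: member array references the declared container $container"
else
    echo "BROKEN: member references $referenced, declared container is $container"
fi
```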

Re: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 01:11:49 von NeilBrown

On Wed, 17 Nov 2010 17:36:23 -0500
Mike Viau wrote:

>
> > On Wed, 17 Nov 2010 14:15:14 +1100 wrote:
> >
> > This looks wrong. mdadm should be looking for the container as listed in
> > mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
> >
> > Can you:
> >
> > mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
> >
> > so I can compare the uuids?
> >
>
> Sure.
>
> # definitions of existing MD arrays ( So you don't have to scroll down :P )
>
>
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>
...
>            UUID : 084b969a:0808f5b8:6c784fb7:62659383
> [OneTB-RAID1-PV]:
>            UUID : ae4a1598:72267ed7:3b34867b:9c56497a
...
> # definitions of existing MD arrays
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

Yes, the uuids are definitely all correct.
This really should work. I just tested a similar config and it worked
exactly as expected.
Weird.

What version of mdadm are you running?
Can you try getting the latest (3.1.4) from
http://www.kernel.org/pub/linux/utils/raid/mdadm/

and see how that works.
Just
make
./mdadm -Asvv

NeilBrown

RE: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 01:56:10 von Mike Viau


> On Thu, 18 Nov 2010 11:11:49 +1100 wrote:
>
> > On Wed, 17 Nov 2010 17:36:23 -0500 Mike Viau wrote:
> >
> > > On Wed, 17 Nov 2010 14:15:14 +1100
> > >
> > > This looks wrong. mdadm should be looking for the container as listed in
> > > mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
> > >
> > > Can you:
> > >
> > > mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
> > >
> > > so I can compare the uuids?
> > >
> >
> > Sure.
> >
> > # definitions of existing MD arrays ( So you don't have to scroll down :P )
> >
> > ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >
> > ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> >
> ....
> > UUID : 084b969a:0808f5b8:6c784fb7:62659383
> > [OneTB-RAID1-PV]:
> > UUID : ae4a1598:72267ed7:3b34867b:9c56497a
> ....
>
> Yes, the uuids are definitely all correct.
> This really should work. I just tested a similar config and it worked
> exactly as expected.
> Weird.
>
> What version of mdadm are you running?
> Can you try getting the latest (3.1.4) from
> http://www.kernel.org/pub/linux/utils/raid/mdadm/

I am running the same version, from a Debian Squeeze package, which I presume is the same.

mdadm -V

mdadm - v3.1.4 - 31st August 2010
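When comparing the packaged binary against a locally built ./mdadm, the banner can be reduced to a bare version string for a programmatic check. A sketch assuming the `mdadm -V` banner format shown above:

```shell
# Stand-in for: banner=$(mdadm -V 2>&1)
banner='mdadm - v3.1.4 - 31st August 2010'

# Extract just the version number from the banner.
version=$(printf '%s\n' "$banner" | sed -n 's/^mdadm - v\([0-9.]*\) - .*/\1/p')
echo "$version"
```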

>
> and see how that works.
> Just
> make
> ./mdadm -Asvv

Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:

./mdadm -Asvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/md126p1
mdadm: /dev/md126p1 has wrong uuid.
mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid.
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/md126p1
mdadm/dev/md126p1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/md/OneTB-RAID1-PV
mdadm/dev/md/OneTB-RAID1-PV is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


So what could this mean???


-M

[Attachment: mdadm_compile.txt (base64-encoded gcc build log for mdadm 3.1.4; no errors) omitted.]
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 02:28:47 von NeilBrown

On Wed, 17 Nov 2010 19:56:10 -0500
Mike Viau wrote:


> I am running the same version, from a Debian Squeeze package which I presume is the same.
>
> mdadm -V
>
> mdadm - v3.1.4 - 31st August 2010

Yes, should be identical to what I am running.
>
> >
> > and see how that works.
> > Just
> > make
> > ./mdadm -Asvv
>
> Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:
>
> ./mdadm -Asvv
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/md126p1
> mdadm: /dev/md126p1 has wrong uuid.
> mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
> mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid
.....
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb has wrong uuid.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda has wrong uuid.

The arrays are clearly currently assembled. Trying to assemble them again is
not likely to produce a good result :-) I should have said to "./mdadm -Ss"
first.

Could you apply this patch and then test again with:

./mdadm -Ss
./mdadm -Asvvv

Thanks,
NeilBrown

diff --git a/Assemble.c b/Assemble.c
index afd4e60..11323fa 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -344,9 +344,14 @@ int Assemble(struct supertype *st, char *mddev,
if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
(!tst || !tst->sb ||
same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
- if (report_missmatch)
+ if (report_missmatch) {
+ char buf[200];
fprintf(stderr, Name ": %s has wrong uuid.\n",
devname);
+ fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
+ fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
+ fprintf(stderr, " metadata=%s\n", tst->ss->name);
+ }
goto loop;
}
if (ident->name[0] && (!update || strcmp(update, "name")!= 0) &&

RE: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 03:05:40 von Mike Viau

> On Thu, 18 Nov 2010 12:28:47 +1100 wrote:
> >
> > I am running the same version, from a Debian Squeeze package which I presume is the same.
> >
> > mdadm -V
> >
> > mdadm - v3.1.4 - 31st August 2010
>
> Yes, should be identical to what I am running.
> >
> > >
> > > and see how that works.
> > > Just
> > > make
> > > ./mdadm -Asvv
> >
> > Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:
> >
> > ./mdadm -Asvv
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/md126p1
> > mdadm: /dev/md126p1 has wrong uuid.
> > mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
> > mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid
> ....
> > mdadm: cannot open device /dev/sdb: Device or resource busy
> > mdadm: /dev/sdb has wrong uuid.
> > mdadm: cannot open device /dev/sda: Device or resource busy
> > mdadm: /dev/sda has wrong uuid.
>
> The arrays are clearly currently assembled. Trying to assemble them again is
> not likely to produce a good result :-) I should have said to "./mdadm -Ss"
> first.
>
> Could you apply this patch and then test again with:
>
> ./mdadm -Ss
> ./mdadm -Asvvv
>

Applied the patch:

if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
    (!tst || !tst->sb ||
     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
        if (report_missmatch) {
                char buf[200];
                fprintf(stderr, Name ": %s has wrong uuid.\n",
                        devname);
                fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
                fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
                fprintf(stderr, " metadata=%s\n", tst->ss->name);
        }
        goto loop;
}


And got:

./mdadm -Ss

mdadm: stopped /dev/md127


./mdadm -Asvvv

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
Segmentation fault


I took the liberty of extending the char buffer to 2000 bytes and then to 64K (1<<16), but got the same segfaults.


-M

Re: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 03:32:47 von NeilBrown

On Wed, 17 Nov 2010 21:05:40 -0500
Mike Viau wrote:


> ./mdadm -Ss
>
> mdadm: stopped /dev/md127
>
>
> ./mdadm -Asvvv
>
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
>  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> Segmentation fault

Try this patch instead please.

NeilBrown

diff --git a/Assemble.c b/Assemble.c
index afd4e60..11e6238 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -344,9 +344,17 @@ int Assemble(struct supertype *st, char *mddev,
if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
(!tst || !tst->sb ||
same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
- if (report_missmatch)
+ if (report_missmatch) {
+ char buf[200];
fprintf(stderr, Name ": %s has wrong uuid.\n",
devname);
+ fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
+ fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
+ if (tst) {
+ fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
+ fprintf(stderr, " metadata=%s\n", tst->ss->name);
+ }
+ }
goto loop;
}
if (ident->name[0] && (!update || strcmp(update, "name")!= 0) &&


RE: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 04:03:41 von Mike Viau

> On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > ./mdadm -Ss
> >
> > mdadm: stopped /dev/md127
> >
> >
> > ./mdadm -Asvvv
> >
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/dm-3
> > mdadm: /dev/dm-3 has wrong uuid.
> > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > Segmentation fault
>
> Try this patch instead please.

Applied new patch and got:

./mdadm -Ss

mdadm: stopped /dev/md127


./mdadm -Asvvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x10dd010 sb=(nil)
Segmentation fault


Again tried various buffer sizes (segfault above was with char buf[200];)


if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
    (!tst || !tst->sb ||
     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
        if (report_missmatch) {
                char buf[1<<16];
                fprintf(stderr, Name ": %s has wrong uuid.\n",
                        devname);
                fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
                fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
                if (tst) {
                        fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
                        fprintf(stderr, " metadata=%s\n", tst->ss->name);
                }
        }
        goto loop;
}


Re: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 04:17:18 von NeilBrown

On Wed, 17 Nov 2010 22:03:41 -0500
Mike Viau wrote:

>
> > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > ./mdadm -Ss
> > >
> > > mdadm: stopped /dev/md127
> > >
> > >
> > > ./mdadm -Asvvv
> > >
> > > mdadm: looking for devices for further assembly
> > > mdadm: no RAID superblock on /dev/dm-3
> > > mdadm: /dev/dm-3 has wrong uuid.
> > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > Segmentation fault
> >
> > Try this patch instead please.
>
> Applied new patch and got:
>
> ./mdadm -Ss
>
> mdadm: stopped /dev/md127
>
>
> ./mdadm -Asvvv
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
>  want UUID-084b969a:0808f5b8:6c784fb7:62659383
>  tst=0x10dd010 sb=(nil)
> Segmentation fault

Sorry... I guess I should have tested it myself..

The
if (tst) {

Should be

if (tst && content) {

NeilBrown


>
>
> Again tried various buffer sizes (segfault above was with char buf[200];)
>
>
> if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
>     (!tst || !tst->sb ||
>      same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
>         if (report_missmatch) {
>                 char buf[1<<16];
>                 fprintf(stderr, Name ": %s has wrong uuid.\n",
>                         devname);
>                 fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
>                 fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
>                 if (tst) {
>                         fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
>                         fprintf(stderr, " metadata=%s\n", tst->ss->name);
>                 }
>         }
>         goto loop;
> }
>

RE: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 06:10:50 von Mike Viau

> On Thu, 18 Nov 2010 14:17:18 +1100 wrote:
> >
> > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > ./mdadm -Ss
> > > >
> > > > mdadm: stopped /dev/md127
> > > >
> > > >
> > > > ./mdadm -Asvvv
> > > >
> > > > mdadm: looking for devices for further assembly
> > > > mdadm: no RAID superblock on /dev/dm-3
> > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > Segmentation fault
> > >
> > > Try this patch instead please.
> >
> > Applied new patch and got:
> >
> > ./mdadm -Ss
> >
> > mdadm: stopped /dev/md127
> >
> >
> > ./mdadm -Asvvv
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/dm-3
> > mdadm: /dev/dm-3 has wrong uuid.
> > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > tst=0x10dd010 sb=(nil)
> > Segmentation fault
>
> Sorry... I guess I should have tested it myself..
>
> The
> if (tst) {
>
> Should be
>
> if (tst && content) {
>

Applied the update and got:

mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV


Full output at: http://paste.debian.net/100103/ (expires 2010-11-21 06:07:30)
-M



Re: How to recreate a dmraid RAID array with mdadm

am 18.11.2010 06:38:49 von NeilBrown

On Thu, 18 Nov 2010 00:10:50 -0500
Mike Viau wrote:

>
> > On Thu, 18 Nov 2010 14:17:18 +1100 wrote:
> > >
> > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > ./mdadm -Ss
> > > > >
> > > > > mdadm: stopped /dev/md127
> > > > >
> > > > >
> > > > > ./mdadm -Asvvv
> > > > >
> > > > > mdadm: looking for devices for further assembly
> > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > Segmentation fault
> > > >
> > > > Try this patch instead please.
> > >
> > > Applied new patch and got:
> > >
> > > ./mdadm -Ss
> > >
> > > mdadm: stopped /dev/md127
> > >
> > >
> > > ./mdadm -Asvvv
> > > mdadm: looking for devices for further assembly
> > > mdadm: no RAID superblock on /dev/dm-3
> > > mdadm: /dev/dm-3 has wrong uuid.
> > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > tst=0x10dd010 sb=(nil)
> > > Segmentation fault
> >
> > Sorry... I guess I should have tested it myself..
> >
> > The
> > if (tst) {
> >
> > Should be
> >
> > if (tst && content) {
> >
>
> Apply update and got:
>
> mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> mdadm: added /dev/sda to /dev/md/imsm0 as -1
> mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> mdadm: looking for devices for /dev/md/OneTB-RAID1-PV

So just to clarify.

With the Debian mdadm, which is 3.1.4, if you

mdadm -Ss
mdadm -Asvv

it says (among other things) that /dev/sda has wrong uuid.
and doesn't start the array.

But with the mdadm you compiled yourself, which is also 3.1.4,
if you

mdadm -Ss
mdadm -Asvv

then it doesn't give that message, and it works.

That is very strange. It seems that the Debian mdadm is broken somehow, but
I'm fairly sure Debian hardly changes anything - they are *very* good at
getting their changes upstream first.

I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf
do you? If you did and the two were different, the Debian's mdadm would
behave a bit differently to upstream (they prefer different config files) but
I very much doubt that is the problem.

But I guess if the self-compiled one works (even when you take the patch
out), then just
make install

and be happy.

NeilBrown
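
The config-file question above is easy to answer from a shell; a minimal sketch (only the two paths mentioned in this thread are checked, and which is preferred depends on the build, as discussed):

```shell
# Check which mdadm config file(s) this system actually has; the Debian
# package and a self-built mdadm may prefer different locations.
for f in /etc/mdadm.conf /etc/mdadm/mdadm.conf; do
    if [ -e "$f" ]; then
        echo "present: $f"
    else
        echo "absent:  $f"
    fi
done
```

If only /etc/mdadm/mdadm.conf exists (as Mike confirms below), both binaries end up reading the same file.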


>
>
> > Full output at: http://paste.debian.net/100103/ (expires 2010-11-21 06:07:30)
> -M
>
>
>

RE: How to recreate a dmraid RAID array with mdadm

am 22.11.2010 19:07:10 von Mike Viau



> On Thu, 18 Nov 2010 16:38:49 +1100 wrote:
> > > On Thu, 18 Nov 2010 14:17:18 +1100 wrote:
> > > >
> > > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > > ./mdadm -Ss
> > > > > >
> > > > > > mdadm: stopped /dev/md127
> > > > > >
> > > > > >
> > > > > > ./mdadm -Asvvv
> > > > > >
> > > > > > mdadm: looking for devices for further assembly
> > > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > > Segmentation fault
> > > > >
> > > > > Try this patch instead please.
> > > >
> > > > Applied new patch and got:
> > > >
> > > > ./mdadm -Ss
> > > >
> > > > mdadm: stopped /dev/md127
> > > >
> > > >
> > > > ./mdadm -Asvvv
> > > > mdadm: looking for devices for further assembly
> > > > mdadm: no RAID superblock on /dev/dm-3
> > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > tst=0x10dd010 sb=(nil)
> > > > Segmentation fault
> > >
> > > Sorry... I guess I should have tested it myself..
> > >
> > > The
> > > if (tst) {
> > >
> > > Should be
> > >
> > > if (tst && content) {
> > >
> >
> > Apply update and got:
> >
> > mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> > mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> > mdadm: added /dev/sda to /dev/md/imsm0 as -1
> > mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> > mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> > mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
>
> So just to clarify.
>
> With the Debian mdadm, which is 3.1.4, if you
>
> mdadm -Ss
> mdadm -Asvv
>
> it says (among other things) that /dev/sda has wrong uuid.
> and doesn't start the array.

Actually both the compiled and the Debian mdadm do not start the array, or at least do not create the /dev/md/OneTB-RAID1-PV device the way running mdadm -I /dev/md/imsm0 does.

You are right about seeing a message somewhere that /dev/sda has a wrong uuid, though. I went back to look at my output from the Debian mailing list and saw that the mdadm output has changed slightly since this thread began.

The old output was copied verbatim at http://lists.debian.org/debian-user/2010/11/msg01234.html and says (among other things) that /dev/sda has wrong uuid.

The "/dev/sd[ab] has wrong uuid" messages are missing from the mdadm -Asvv output, but....

./mdadm -Ivv /dev/md/imsm0
mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
mdadm: match found for member 0
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices


I still get this UUID message when using the mdadm -I command.


I'll attach the output of both mdadm commands above as they run now on the system. I also noticed, in the same thread linked above with the old output, that I was asking why both /dev/sda and /dev/sdb (the drives which make up the raid1 array) do not appear to be recognized as having a valid container when one is required.

What is your take on GeraldCC's (gcsgcatling@bigpond.com) suggestion about /dev/sd[ab] carrying an 8e (LVM) partition type rather than the fd type that denotes raid autodetect? If that were the magical fix (which I am not saying it can't be), why is mdadm -I /dev/md/imsm0 able to bring up the array for use as a physical volume for LVM?



>
> But with the mdadm you compiled yourself, which is also 3.1.4,
> if you
>
> mdadm -Ss
> mdadm -Asvv
>
> then it doesn't give that message, and it works.

Again, actually both the compiled and the Debian mdadm do not start the array, or at least do not create the /dev/md/OneTB-RAID1-PV device the way running mdadm -I /dev/md/imsm0 does.

>
> That is very strange. It seems that the Debian mdadm is broken somehow, but
> I'm fairly sure Debian hardly changes anything - they are *very* good at
> getting their changes upstream first.
>
> I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf
> do you? If you did and the two were different, the Debian's mdadm would
> behave a bit differently to upstream (they prefer different config files) but
> I very much doubt that is the problem.
>

There is no /etc/mdadm.conf on the filesystem, only /etc/mdadm/mdadm.conf.


> But I guess if the self-compiled one works (even when you take the patch
> out), then just
> make install

I wish this was the case...

>
> and be happy.
>
> NeilBrown
>
>
> >
> >
> > Full output at: http://paste.debian.net/100103/
> > expires:
> >
> > 2010-11-21 06:07:30

Thanks

-M

[Attachment: "Compiled version.txt" -- output of ./mdadm -Ss and ./mdadm -Asvv from the self-compiled mdadm: each non-member device is reported with "has wrong uuid" plus the patch's "want UUID-084b969a:0808f5b8:6c784fb7:62659383" and "tst=... sb=(nil)" lines; /dev/sd[ab] are added to the container /dev/md/imsm0, which is assembled with 2 drives, but every device checked while looking for devices for /dev/md/OneTB-RAID1-PV is rejected with "is not a container, and one is required".]

[Attachment: "DEBIAN version.txt" -- output of mdadm -Ss and mdadm -Asvv from the Debian mdadm: the same "has wrong uuid" messages (without the patch's extra detail), the container /dev/md/imsm0 assembled with 2 drives, and the same "is not a container, and one is required" rejections while looking for devices for /dev/md/OneTB-RAID1-PV.]

--_29b761cb-2fd0-4c65-b0af-e851a6d608d6_--
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: How to recreate a dmraid RAID array with mdadm

am 23.11.2010 00:11:29 von NeilBrown

I see the problem now. And John Robinson was nearly there.

The problem is that after assembling the container /dev/md/imsm,
mdadm needs to assemble the RAID1, but doesn't find the
container /dev/md/imsm to assemble it from.
That is because of the
DEVICE partitions
line.
A container is not a partition - it does not appear in /proc/partitions.

You need

DEVICE partitions containers

which is the default if you don't have a DEVICE line (and I didn't have a
DEVICE line in my testing).

I think all the "wrong uuid" messages were because the device was busy (and
so it didn't read a uuid), probably because you didn't "mdadm -Ss" first.

So just remove the "DEVICE partitions" line, or add " containers" to it, and
all should be happy.
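
For reference, a minimal /etc/mdadm/mdadm.conf fragment with this fix might
look like the following (a sketch only; any ARRAY lines produced by
"mdadm --detail --scan" would follow it):

```conf
# Consider block devices listed in /proc/partitions AND assembled
# containers (e.g. imsm) when auto-assembling arrays.
DEVICE partitions containers

# Equivalently, omit the DEVICE line entirely:
# "partitions containers" is the built-in default.
```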

NeilBrown



On Mon, 22 Nov 2010 13:07:10 -0500
Mike Viau wrote:

>
> > On Thu, 18 Nov 2010 16:38:49 +1100 wrote:
> > > > On Thu, 18 Nov 2010 14:17:18 +1100 wrote:
> > > > >
> > > > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > > > ./mdadm -Ss
> > > > > > >
> > > > > > > mdadm: stopped /dev/md127
> > > > > > >
> > > > > > >
> > > > > > > ./mdadm -Asvvv
> > > > > > >
> > > > > > > mdadm: looking for devices for further assembly
> > > > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > > > Segmentation fault
> > > > > >
> > > > > > Try this patch instead please.
> > > > >
> > > > > Applied new patch and got:
> > > > >
> > > > > ./mdadm -Ss
> > > > >
> > > > > mdadm: stopped /dev/md127
> > > > >
> > > > >
> > > > > ./mdadm -Asvvv
> > > > > mdadm: looking for devices for further assembly
> > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > tst=0x10dd010 sb=(nil)
> > > > > Segmentation fault
> > > >
> > > > Sorry... I guess I should have tested it myself..
> > > >
> > > > The
> > > > if (tst) {
> > > >
> > > > Should be
> > > >
> > > > if (tst && content) {
> > > >
> > >
> > > Apply update and got:
> > >
> > > mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: added /dev/sda to /dev/md/imsm0 as -1
> > > mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> > > mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> > > mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> >
> > So just to clarify.
> >
> > With the Debian mdadm, which is 3.1.4, if you
> >
> > mdadm -Ss
> > mdadm -Asvv
> >
> > it says (among other things) that /dev/sda has wrong uuid.
> > and doesn't start the array.
>
> Actually, neither the compiled nor the Debian mdadm starts the array, or
> at least creates the /dev/md/OneTB-RAID1-PV device, the way running
> mdadm -I /dev/md/imsm0 does.
>
> You are right about seeing a message somewhere about /dev/sda having a
> wrong uuid, though. I went back to look at my output on the Debian mailing
> list and saw that the mdadm output has changed slightly since this thread
> began.
>
> The old output was copied verbatim at
> http://lists.debian.org/debian-user/2010/11/msg01234.html and says (among
> other things) that /dev/sda has wrong uuid.
>
> The "/dev/sd[ab] has wrong uuid" messages are missing from the mdadm -Asvv
> output now, but....
>
> ./mdadm -Ivv /dev/md/imsm0
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
>
>
> I still see this UUID message when using the mdadm -I command.
>
>
> I'll attach the output of both mdadm commands above as they run now on the
> system. I also noticed, in the same thread linked above, that with the old
> output I was inquiring why both /dev/sda and /dev/sdb (the drives which
> make up the raid1 array) were reported as not being a container when one
> is required.
>
> What is your take on GeraldCC's (gcsgcatling@bigpond.com) suggestion about
> /dev/sd[ab] carrying an 8e (Linux LVM) partition type, rather than the fd
> type that denotes raid autodetect? If that were the magical fix (and I am
> not saying it can't be), why is mdadm -I /dev/md/imsm0 able to bring up
> the array for use as a physical volume for LVM?
>
>
>
> >
> > But with the mdadm you compiled yourself, which is also 3.1.4,
> > if you
> >
> > mdadm -Ss
> > mdadm -Asvv
> >
> > then it doesn't give that message, and it works.
>
> Again, neither the compiled nor the Debian mdadm starts the array, or at
> least creates the /dev/md/OneTB-RAID1-PV device, the way running
> mdadm -I /dev/md/imsm0 does.
>
> >
> > That is very strange. It seems that the Debian mdadm is broken somehow, but
> > I'm fairly sure Debian hardly changes anything - they are *very* good at
> > getting their changes upstream first.
> >
> > I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf
> > do you? If you did and the two were different, then Debian's mdadm would
> > behave a bit differently to upstream (they prefer different config files),
> > but I very much doubt that is the problem.
> >
>
> There is no /etc/mdadm.conf on the filesystem, only /etc/mdadm/mdadm.conf.
>
>
> > But I guess if the self-compiled one works (even when you take the patch
> > out), then just
> > make install
>
> I wish this was the case...
>
> >
> > and be happy.
> >
> > NeilBrown
> >
> >
> > >
> > >
> > > Full output at: http://paste.debian.net/100103/
> > > expires:
> > >
> > > 2010-11-21 06:07:30
>
> Thanks
>
> -M
>

RE: How to recreate a dmraid RAID array with mdadm

am 23.11.2010 17:07:13 von Mike Viau

> On Tue, 23 Nov 2010 10:11:29 +1100 wrote:
>
> I see the problem now. And John Robinson was nearly there.
>
> The problem is that after assembling the container /dev/md/imsm,
> mdadm needs to assemble the RAID1, but doesn't find the
> container /dev/md/imsm to assemble it from.
> That is because of the
> DEVICE partitions
> line.
> A container is not a partition - it does not appear in /proc/partitions.
> You need
>
> DEVICE partitions containers
>
> which is the default if you don't have a DEVICE line (and I didn't have a
> device line in my testing).
>
> I think all the "wrong uuid" messages were because the device was busy (and
> so it didn't read a uuid), probably because you didn't "mdadm -Ss" first.
>
> So just remove the "DEVICE partitions" line, or add " containers" to it, and
> all should be happy.
>
> NeilBrown
>

Yes thank you, that seems to be the correct fix.

mdadm -Asvv

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
mdadm: looking in container /dev/md127
mdadm: found match on member /md127/0 in /dev/md127
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices <-- The line I was looking for


ls -al /dev/md/

total 0
drwxr-xr-x 2 root root 80 Nov 23 09:47 .
drwxr-xr-x 21 root root 3480 Nov 23 09:47 ..
lrwxrwxrwx 1 root root 8 Nov 23 09:47 imsm0 -> ../md127
lrwxrwxrwx 1 root root 8 Nov 23 09:47 OneTB-RAID1-PV -> ../md126
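
The "mdadm: Started" line above is the one that matters. As a quick sanity
check, the same test can be scripted (a sketch only; the assembly_ok helper
name is made up for illustration, and the log text is just the output quoted
above):

```shell
# Scan mdadm's verbose assembly output for the "Started" line that
# marks a successfully assembled member array.
assembly_ok() {
    grep -q '^mdadm: Started /dev/md/'
}

# Sample log text, taken from the mdadm -Asvv output above.
log='mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices'

if printf '%s\n' "$log" | assembly_ok; then
    echo "array started"
fi
```

On a healthy run this prints "array started". Checking /proc/mdstat or
running mdadm --detail /dev/md126 afterwards would confirm the mirror state.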


I filed a bug[1], since I was just going along with the default configuration, to see what is said about it.

Thanks so much for your help Neil :)

[1] - http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=604702