Impact of missing parameter during mdadm create

on 01.03.2011 04:19:58 by Mike Viau

Hello mdadm hackers,

I was wondering what impact (if any) creating an array with the missing parameter has on subsequent assemblies of an mdadm array?

When the array was created, I used a command like:

mdadm --create -l5 -n3 /dev/md0 /dev/sda1 missing /dev/sdb1

I then loaded some initial data onto the md0 array from /dev/sdd1, after which I zeroed out /dev/sdd1 and added it to the array.

Details on each drive seem to show they all belong to the same Array UUID, but when the array is (re)assembled (on boot or manually), only /dev/sd{a,b}1 are added to the array automatically; /dev/sdd1 must be re-added manually.


> mdadm --examine /dev/sd{a,b,d}1
> /dev/sda1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
>
> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 25f4baf0:9a378d2c:16a87f0c:ff89b2c8
>
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : 37383bee - correct
> Events : 32184
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Device Role : Active device 0
> Array State : AAA ('A' == active, '.' == missing)
> /dev/sdb1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
>
> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : f20ab5fd:1f141cae:e0547278:d6cf063e
>
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : a70821e2 - correct
> Events : 32184
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Device Role : Active device 1
> Array State : AAA ('A' == active, '.' == missing)
> /dev/sdd1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x2
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
>
> Avail Dev Size : 1953521072 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> Recovery Offset : 610474280 sectors
> State : clean
> Device UUID : 33d70114:ffdc4fcc:2c8d65ba:ab50bab2
>
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : b692957e - correct
> Events : 32184
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Device Role : Active device 2
> Array State : AAA ('A' == active, '.' == missing)






-M


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

RE: Impact of missing parameter during mdadm create

on 01.03.2011 04:59:23 by Mike Viau

Manual re-assembly output is as follows:


mdadm -Ss

mdadm: stopped /dev/md0

---

mdadm -Asvvv

mdadm: looking for devices for /dev/md/0
mdadm: no RAID superblock on /dev/dm-6
mdadm: /dev/dm-6 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-5
mdadm: /dev/dm-5 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-4
mdadm: /dev/dm-4 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: /dev/sdd1 is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda1 is identified as a member of /dev/md/0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot 1.
mdadm: added /dev/sdb1 to /dev/md/0 as 1
mdadm: added /dev/sdd1 to /dev/md/0 as 2
mdadm: looking for devices for /dev/md/0
mdadm: no RAID superblock on /dev/dm-6
mdadm: /dev/dm-6 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-5
mdadm: /dev/dm-5 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-4
mdadm: /dev/dm-4 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: /dev/sdd1 is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda1 is identified as a member of /dev/md/0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot 1.
mdadm: added /dev/sdb1 to /dev/md/0 as 1
mdadm: added /dev/sdd1 to /dev/md/0 as 2
mdadm: added /dev/sda1 to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 2 drives (out of 3).
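Assembly ending with "2 drives (out of 3)" means the array came up degraded. A small sketch of spotting this from /proc/mdstat-style text (the sample line is hardcoded here; a live check would read /proc/mdstat itself):

```shell
# Sample /proc/mdstat entry for a degraded 3-device RAID5; on a real
# system this text would come from: cat /proc/mdstat
mdstat='md0 : active raid5 sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]'

# "[3/2]" means 3 configured devices but only 2 active; any "_" in the
# status brackets marks a missing member.
if printf '%s\n' "$mdstat" | grep -Eq '\[[0-9]+/[0-9]+\] \[[U_]*_[U_]*\]'; then
  echo "md0 is degraded"
fi
```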

---

mdadm --examine /dev/sd{a,b,d}1

/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
           Name : XEN-HOST:0  (local to host XEN-HOST)
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 25f4baf0:9a378d2c:16a87f0c:ff89b2c8

    Update Time : Mon Feb 28 23:35:20 2011
       Checksum : 3745d2b9 - correct
         Events : 33374

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing)

/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
           Name : XEN-HOST:0  (local to host XEN-HOST)
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : f20ab5fd:1f141cae:e0547278:d6cf063e

    Update Time : Mon Feb 28 23:35:20 2011
       Checksum : a715b8ad - correct
         Events : 33374

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing)

/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
           Name : XEN-HOST:0  (local to host XEN-HOST)
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953521072 (931.51 GiB 1000.20 GB)
     Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 33d70114:ffdc4fcc:2c8d65ba:ab50bab2

    Update Time : Mon Feb 28 23:29:05 2011
       Checksum : 923d11a2 - correct
         Events : 33368

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing)

---
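Note the Events counters in the output above: sda1 and sdb1 are at 33374 while sdd1 lags at 33368. md treats a member whose event counter has fallen behind as stale, which is consistent with sdd1 being left out of automatic assembly. A sketch of that comparison (values hardcoded from the output above; on a live system they would come from `mdadm --examine /dev/sd{a,b,d}1 | grep Events`):

```shell
# Per-device Events counters, as reported by mdadm --examine above.
examine='/dev/sda1 33374
/dev/sdb1 33374
/dev/sdd1 33368'

# Find the highest counter, then report any device that lags it.
max=$(printf '%s\n' "$examine" | awk '$2 > m { m = $2 } END { print m }')
printf '%s\n' "$examine" |
  awk -v max="$max" '$2 < max { print $1, "lags by", max - $2, "events" }'
```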


Any ideas or tips? I suspect this might be a bug, but I have only had this problem on my Debian Squeeze system.

Thanks in advance :)


-M


----------------------------------------
> Subject: Impact of missing parameter during mdadm create
> Date: Mon, 28 Feb 2011 22:19:58 -0500

RE: Impact of missing parameter during mdadm create

on 01.03.2011 19:38:22 by Mike Viau

> On Tue, 1 Mar 2011 17:13:09 +1000 wrote:
>
>
> What do cat /proc/mdstat and mdadm -D /dev/md0 show you? Also have you
> updated your mdadm.conf (and the mdadm.conf in the initramfs if you use
> one)?
>

After a reboot I see

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>


But sometimes I see

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>


QUESTION: What does '(auto-read-only)' mean?

The --detail output is the same in both cases.

mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
     Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Mar  1 13:50:53 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : XEN-HOST:0  (local to host XEN-HOST)
           UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
         Events : 33422

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed


Hmm, so the array is aware that it is missing drive number/RaidDevice 2, but I am not sure what the implication of a major/minor of 0 is.
QUESTION: Must the Major/Minor information in the array metadata exactly match what the system detects (I presume it must)?

If that is the case, it looks like I need to make drive number/RaidDevice 2 have a major/minor of 8/49.

ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Mar  1 14:17 /dev/sda1

ls -l /dev/sdb1
brw-rw---- 1 root disk 8, 17 Mar  1 14:17 /dev/sdb1

ls -l /dev/sdd1
brw-rw---- 1 root floppy 8, 49 Mar  1 14:17 /dev/sdd1


Until I find a solution I am manually running:

mdadm --re-add /dev/md0 /dev/sdd1 -vvv
mdadm: re-added /dev/sdd1

or

mdadm --add /dev/md0 /dev/sdd1 -vvv
mdadm: re-added /dev/sdd1


Which then gives me:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.1% (1222156/976758784) finish=622.3min speed=26126K/sec

unused devices: <none>

QUESTION: Here it seems sdd1 is given drive number 3, not 2; is that a problem? (e.g. sdd1[2] vs sdd1[3])


I am also certain that the mdadm.conf on my file system is in sync with the one in my initramfs, for all kernels.
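That claim can be checked directly. On Debian, the initramfs copy of mdadm.conf can be listed with `lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm` and refreshed with `update-initramfs -u` after editing /etc/mdadm/mdadm.conf. A minimal sketch of the comparison, using temp files in place of the real paths so it can run anywhere (the ARRAY line is the one from this thread):

```shell
# Stand-ins for /etc/mdadm/mdadm.conf and the copy inside the initramfs;
# on a real Debian system you would extract the latter from the initramfs
# image rather than copying it like this.
tmpdir=$(mktemp -d)
printf 'ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0\n' \
  > "$tmpdir/etc-mdadm.conf"
cp "$tmpdir/etc-mdadm.conf" "$tmpdir/initramfs-mdadm.conf"

# If these differ, the initramfs needs regenerating (update-initramfs -u).
if diff -q "$tmpdir/etc-mdadm.conf" "$tmpdir/initramfs-mdadm.conf" >/dev/null; then
  echo "configs match"
else
  echo "configs differ"
fi
```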


cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0



In trying to fix the problem, I attempted to change the preferred minor of the MD array by following these instructions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# you need to manually assemble the array to change the preferred minor
# if you manually assemble, the superblock will be updated to reflect
# the preferred minor as you indicate with the assembly.
# for example, to set the preferred minor to 4:
mdadm --assemble /dev/md4 /dev/sd[abc]1

# this only works on 2.6 kernels, and only for RAID levels of 1 and above.


mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.


So because I specified all the drives, I assume this is the same thing as assembling the RAID degraded and then manually re-adding the last one (/dev/sdd1).


-M






RE: Impact of missing parameter during mdadm create

on 03.03.2011 12:06:50 by Ken Drummond

On Tue, 2011-03-01 at 13:38 -0500, Mike Viau wrote:
> > On Tue, 1 Mar 2011 17:13:09 +1000 wrote:
> >
> >> Any ideas or tips? I am considering this might be a bug, but I have only
> >> had this problem in my Debian Squeeze system.
> >>
> >
> > What do cat /proc/mdstat and mdadm -D /dev/md0 show you? Also have you
> > updated your mdadm.conf (and the mdadm.conf in the initramfs if you use
> > one)?
> >
>
> After a reboot I see
>
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sda1[0] sdb1[1]
> 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
>
> unused devices:
>
>
> But sometimes I see
>
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active (auto-read-only) raid5 sda1[0] sdb1[1]
> 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
>
> unused devices:
>
>
> QUESTION: What does '(auto-read-only)' mean?

auto-read-only means the array is read-only until the first write is
attempted, at which point it will become read-write.
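As a side note, an array in this state can also be switched to read-write explicitly with `mdadm --readwrite /dev/md0` rather than waiting for the first write. The state is easy to spot in /proc/mdstat; a sketch against the sample line from earlier in the thread:

```shell
# Sample /proc/mdstat line; a live check would read /proc/mdstat itself.
line='md0 : active (auto-read-only) raid5 sda1[0] sdb1[1]'

case "$line" in
  *'(auto-read-only)'*) echo "md0 is auto-read-only" ;;
  *)                    echo "md0 is read-write" ;;
esac
```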

>
> In either case --detail output is the same for both cases.
>
> mdadm -D /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
> Raid Devices : 3
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Update Time : Tue Mar 1 13:50:53 2011
> State : clean, degraded
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Name : XEN-HOST:0 (local to host XEN-HOST)
> UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Events : 33422
>
> Number Major Minor RaidDevice State
> 0 8 1 0 active sync /dev/sda1
> 1 8 17 1 active sync /dev/sdb1
> 2 0 0 2 removed
>
>
> Hmm, so the array is aware that it is missing drive number/RaidDevice of 2, I am not sure what implication of having a major/minor of 0.
> QUESTION: Must the Major/Minor information exactly match what the system detect vs the meta data on the array (I presume)?
>
> If that is the case it looks like I need to make drive number/RaidDevice 2 have a major/minor 8/49.
>
> ls -l /dev/sda1
> brw-rw---- 1 root disk 8, 1 Mar 1 14:17 /dev/sda1
>
> ls -l /dev/sdb1
> brw-rw---- 1 root disk 8, 17 Mar 1 14:17 /dev/sdb1
>
> ls -l /dev/sdd1
> brw-rw---- 1 root floppy 8, 49 Mar 1 14:17 /dev/sdd1
>
>
> Until I find a solution I am manually running:
>
> mdadm --re-add /dev/md0 /dev/sdd1 -vvv
> mdadm: re-added /dev/sdd1
>
> or
>
> mdadm --add /dev/md0 /dev/sdd1 -vvv
> mdadm: re-added /dev/sdd1
>
>
> Which then gives me:
>
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sdd1[3] sda1[0] sdb1[1]
> 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
> [>....................] recovery = 0.1% (1222156/976758784) finish=622.3min speed=26126K/sec
>
> unused devices:
>

So has the array ever completed a sync?

If it has, and it still comes up degraded on reboot, it may pay to add a
bitmap to make resyncs much quicker while you work this out.
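Adding the write-intent bitmap suggested here is a one-liner on a live array (requires root), shown commented out below, together with a check of the resulting /proc/mdstat marker against sample text (the bitmap line's values are illustrative):

```shell
# On the live array (not run here):
#   mdadm --grow --bitmap=internal /dev/md0
# Afterwards /proc/mdstat gains a "bitmap:" line; sample check:
mdstat='md0 : active raid5 sdd1[3] sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk'

if printf '%s\n' "$mdstat" | grep -q 'bitmap:'; then
  echo "write-intent bitmap active"
fi
```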

> QUESTION: Here is seems sdd1 is given drive number 3 not 2, is that a problem? (e.g: sdd1[2] vs sdd1[3])
>
> I am also certain my mdadm.conf on my file system is in sync/updated with the one in my initramfs for all kernels actually.
>
>
> cat /etc/mdadm/mdadm.conf
> # mdadm.conf
> #
> # Please refer to mdadm.conf(5) for information about this file.
> #
>
> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> # alternatively, specify devices to scan, using wildcards if desired.
> DEVICE partitions containers
>
> # auto-create devices with Debian standard permissions
> CREATE owner=root group=disk mode=0660 auto=yes
>
> # automatically tag new arrays as belonging to the local system
> HOMEHOST
>
> # definitions of existing MD arrays
> ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0
>

I'm not sure if specifying /dev/md/0 is the same as /dev/md0, but I use
the /dev/mdX format and things seem to work for me.

>
>
> In trying to fix the problem I attempted to change the preferred minor of an MD array (RAID) by follow these instructions.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> # you need to manually assemble the array to change the preferred minor
> # if you manually assemble, the superblock will be updated to reflect
> # the preferred minor as you indicate with the assembly.
> # for example, to set the preferred minor to 4:
> mdadm --assemble /dev/md4 /dev/sd[abc]1
>
> # this only works on 2.6 kernels, and only for RAID levels of 1 and above.
>
>
> mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
> mdadm: looking for devices for /dev/md0
> mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> mdadm: added /dev/sdb1 to /dev/md0 as 1
> mdadm: added /dev/sdd1 to /dev/md0 as 2
> mdadm: added /dev/sda1 to /dev/md0 as 0
> mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.
>
>
> So because I specified all the drives, I assume this is the same things as assembling the RAID degraded and then manually re-adding the last one (/dev/sdd1).
>

So if you wait for the resync to complete, what happens if you:

mdadm -S /dev/md0
mdadm -Av /dev/md0

--
Ken.


RE: Impact of missing parameter during mdadm create

on 04.03.2011 05:55:58 by Mike Viau

> On Thu, 3 Mar 2011 21:06:50 +1000 wrote:
> > On Tue, 2011-03-01 at 13:38 -0500, Mike Viau wrote:
> > QUESTION: What does '(auto-read-only)' mean?
>
> auto-read-only means the array is read-only until the first write is
> attempted at which point it will become read-write.
>

Thanks for the info.

> > cat /etc/mdadm/mdadm.conf
> > # mdadm.conf
> > #
> > # Please refer to mdadm.conf(5) for information about this file.
> > #
> >
> > # by default, scan all partitions (/proc/partitions) for MD superblocks.
> > # alternatively, specify devices to scan, using wildcards if desired.
> > DEVICE partitions containers
> >
> > # auto-create devices with Debian standard permissions
> > CREATE owner=root group=disk mode=0660 auto=yes
> >
> > # automatically tag new arrays as belonging to the local system
> > HOMEHOST
> >
> > # definitions of existing MD arrays
> > ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0
> >
>
> I'm not sure if specifying /dev/md/0 is the same as /dev/md0, but I use
> the /dev/mdX format and things seem to work for me.
>



Thanks, I updated my config to use the /dev/mdX format and updated my kernel's initramfs as well.


> >
> >
> > In trying to fix the problem I attempted to change the preferred minor of an MD array (RAID) by following these instructions.
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > # you need to manually assemble the array to change the preferred minor
> > # if you manually assemble, the superblock will be updated to reflect
> > # the preferred minor as you indicate with the assembly.
> > # for example, to set the preferred minor to 4:
> > mdadm --assemble /dev/md4 /dev/sd[abc]1
> >
> > # this only works on 2.6 kernels, and only for RAID levels of 1 and above.
> >
> >
> > mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
> > mdadm: looking for devices for /dev/md0
> > mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> > mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> > mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> > mdadm: added /dev/sdb1 to /dev/md0 as 1
> > mdadm: added /dev/sdd1 to /dev/md0 as 2
> > mdadm: added /dev/sda1 to /dev/md0 as 0
> > mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.
> >
> >
>
> So because I specified all the drives, I assume this is the same thing
> as assembling the RAID degraded and then manually re-adding the
> last one (/dev/sdd1).
> >
>
> So if you wait for the resync to complete, what happens if you:
>
> mdadm -S /dev/md0
> mdadm -Av /dev/md0

I allowed the resync to complete; after stopping the array and then assembling, all three drives assembled again.

After a system reboot though, the mdadm raid 5 array was only automatically assembled with /dev/sd{a,b}1.

mdadm -Av /dev/md0 would also start the array degraded with /dev/sd{a,b}1 only unless all three drives were manually specified when assembling the array, so this doesn't help :(


Backtracking a bit... re-wording one of my previous questions:

Where does the mdadm -D /dev/md0 command get the Major/Minor information for each drive that is a member of the array from?

Does this information have to _exactly_ match the Major/Minor of the block devices on the system in order for the array to be built automatically on system start up? When I created the raid 5 array I passed 'missing' in place of the block-device/partition that is now /dev/sdd1 (the third drive in the array).

I searched through the hexdump of my array drives (starting at 0x1000 where the Superblock began), but I could not detect where the major/minor were stored on the drive.


Without knowing exactly what information or where the information is updated for the Major/Minor information, I ran:

mdadm --assemble /dev/md0 --update=homehost (To change the homehost as recorded in the superblock. For version-1 superblocks, this involves updating the name.)

and

mdadm --assemble /dev/md0 --update=super-minor (To update the preferred minor field on each superblock to match the minor number of the array being assembled)


Now the system still unfortunately reboots with 2 of 3 drives in the array (degraded), but manual assembly now _works_ by running mdadm -Av /dev/md0 (which produces):

mdadm: looking for devices for /dev/md0
mdadm: no RAID superblock on /dev/dm-6
mdadm: /dev/dm-6 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-5
mdadm: /dev/dm-5 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-4
mdadm: /dev/dm-4 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 3 drives.


Additionally the tail of mdadm -D /dev/md0 has changed and now shows:

   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
      3       8       49        2      active sync   /dev/sdd1


Rather than (previously):

   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
      2       0        0        2      removed


QUESTION: Is it normal that the details output has incremented a Number as indicated in the first column? (e.g: 2 changing to 3 on a raid 5 array of only 5 drives with no spares)


When the array is manually assembled the state is now considered 'clean'.

mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
     Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Mar  3 23:12:23 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0


cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sda1[0] sdd1[3] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>



> If it has, and still comes up as degraded on reboot it may pay to add a
> bitmap; to make resyncs much quicker while working this out.
>

Could you please explain what you mean further?

I have a feeling I am not going to be so lucky in identifying this degraded-array-after-reboot problem in the near future, but would like to make my efforts more efficient if possible.

I am very determined to find the solution :)


-M

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

RE: Impact of missing parameter during mdadm create

am 04.03.2011 06:01:04 von Mike Viau

> > On Thu, 3 Mar 2011 21:06:50 +1000 wrote:
> > > On Tue, 2011-03-01 at 13:38 -0500, Mike Viau wrote:
> > > QUESTION: What does '(auto-read-only)' mean?
> >
> > auto-read-only means the array is read-only until the first write is
> > attempted at which point it will become read-write.
> >
>
> Thanks for the info.
>
> > > cat /etc/mdadm/mdadm.conf
> > > # mdadm.conf
> > > #
> > > # Please refer to mdadm.conf(5) for information about this file.
> > > #
> > >
> > > # by default, scan all partitions (/proc/partitions) for MD superblocks.
> > > # alternatively, specify devices to scan, using wildcards if desired.
> > > DEVICE partitions containers
> > >
> > > # auto-create devices with Debian standard permissions
> > > CREATE owner=root group=disk mode=0660 auto=yes
> > >
> > > # automatically tag new arrays as belonging to the local system
> > > HOMEHOST
> > >
> > > # definitions of existing MD arrays
> > > ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0
> > >
> >
> > I'm not sure if specifying /dev/md/0 is the same as /dev/md0, but I use
> > the /dev/mdX format and things seem to work for me.
> >
>
>
>
> Thanks I updated my config to use the /dev/mdX format and updated my kernel's initramfs as well.
>
>
> > >
> > >
> > > In trying to fix the problem I attempted to change the preferred minor of an MD array (RAID) by following these instructions.
> > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > # you need to manually assemble the array to change the preferred minor
> > > # if you manually assemble, the superblock will be updated to reflect
> > > # the preferred minor as you indicate with the assembly.
> > > # for example, to set the preferred minor to 4:
> > > mdadm --assemble /dev/md4 /dev/sd[abc]1
> > >
> > > # this only works on 2.6 kernels, and only for RAID levels of 1 and above.
> > >
> > >
> > > mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
> > > mdadm: looking for devices for /dev/md0
> > > mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> > > mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> > > mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> > > mdadm: added /dev/sdb1 to /dev/md0 as 1
> > > mdadm: added /dev/sdd1 to /dev/md0 as 2
> > > mdadm: added /dev/sda1 to /dev/md0 as 0
> > > mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.
> > >
> > >
> >
> > So because I specified all the drives, I assume this is the same thing
> > as assembling the RAID degraded and then manually re-adding the
> > last one (/dev/sdd1).
> > >
> >
> > So if you wait for the resync to complete, what happens if you:
> >
> > mdadm -S /dev/md0
> > mdadm -Av /dev/md0
>
> I allowed the resync to complete; after stopping the array and then assembling, all three drives assembled again.
>
> After a system reboot though, the mdadm raid 5 array was only automatically assembled with /dev/sd{a,b}1.
>
> mdadm -Av /dev/md0 would also start the array degraded with /dev/sd{a,b}1 only unless all three drives were manually specified when assembling the array, so this doesn't help :(
>
>
> Backtracking a bit... re-wording one of my previous questions:
>
> Where does the mdadm -D /dev/md0 command get the Major/Minor information for each drive that is a member of the array from?
>
> Does this information have to _exactly_ match the Major/Minor of the block devices on the system in order for the array to be built automatically on system start up? When I created the raid 5 array I passed 'missing' in place of the block-device/partition that is now /dev/sdd1 (the third drive in the array).
>
> I searched through the hexdump of my array drives (starting at 0x1000 where the Superblock began), but I could not detect where the major/minor were stored on the drive.
>
>
> Without knowing exactly what information or where the information is updated for the Major/Minor information, I ran:
>
> mdadm --assemble /dev/md0 --update=homehost (To change the homehost as recorded in the superblock. For version-1 superblocks, this involves updating the name.)
>
> and
>
> mdadm --assemble /dev/md0 --update=super-minor (To update the preferred minor field on each superblock to match the minor number of the array being assembled)
>
>
> Now the system still unfortunately reboots with 2 of 3 drives in the array (degraded), but manual assembly now _works_ by running mdadm -Av /dev/md0 (which produces):
>
> mdadm: looking for devices for /dev/md0
> mdadm: no RAID superblock on /dev/dm-6
> mdadm: /dev/dm-6 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-5
> mdadm: /dev/dm-5 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-4
> mdadm: /dev/dm-4 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
> mdadm: cannot open device /dev/dm-2: Device or resource busy
> mdadm: /dev/dm-2 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-1
> mdadm: /dev/dm-1 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-0
> mdadm: /dev/dm-0 has wrong uuid.
> mdadm: no RAID superblock on /dev/sde
> mdadm: /dev/sde has wrong uuid.
> mdadm: no RAID superblock on /dev/sdd
> mdadm: /dev/sdd has wrong uuid.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm: /dev/sdc7 has wrong uuid.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm: /dev/sdc6 has wrong uuid.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm: /dev/sdc5 has wrong uuid.
> mdadm: no RAID superblock on /dev/sdc2
> mdadm: /dev/sdc2 has wrong uuid.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 has wrong uuid.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm: /dev/sdc has wrong uuid.
> mdadm: no RAID superblock on /dev/sdb
> mdadm: /dev/sdb has wrong uuid.
> mdadm: no RAID superblock on /dev/sda
> mdadm: /dev/sda has wrong uuid.
> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> mdadm: added /dev/sdb1 to /dev/md0 as 1
> mdadm: added /dev/sdd1 to /dev/md0 as 2
> mdadm: added /dev/sda1 to /dev/md0 as 0
> mdadm: /dev/md0 has been started with 3 drives.
>
>
> Additionally the tail of mdadm -D /dev/md0 has changed and now shows:
>
>    Number   Major   Minor   RaidDevice State
>       0       8        1        0      active sync   /dev/sda1
>       1       8       17        1      active sync   /dev/sdb1
>       3       8       49        2      active sync   /dev/sdd1
>
>
> Rather than (previously):
>
>    Number   Major   Minor   RaidDevice State
>       0       8        1        0      active sync   /dev/sda1
>       1       8       17        1      active sync   /dev/sdb1
>       2       0        0        2      removed
>
>
> QUESTION: Is it normal that the details output has incremented a Number as indicated in the first column? (e.g: 2 changing to 3 on a raid 5 array of only 5 drives with no spares)

EDIT: that should have read "on a raid 5 array of only 3 drives with no spares"

>
>
> When the array is manually assembled the state is now considered 'clean.'
>
> mdadm -D /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
> Raid Devices : 3
> Total Devices : 3
> Persistence : Superblock is persistent
>
> Update Time : Thu Mar 3 23:12:23 2011
> State : clean
> Active Devices : 3
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 0
>
>
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active (auto-read-only) raid5 sda1[0] sdd1[3] sdb1[1]
> 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
>
> unused devices:
>
>
>
> > If it has, and still comes up as degraded on reboot it may pay to add a
> > bitmap; to make resyncs much quicker while working this out.
> >
>
> Could you please explain what you mean further?
>
> I have a feeling I am not going to be so lucky in identifying this degraded-array-after-reboot problem in the near future, but would like to make my efforts more efficient if possible.
>
> I am very determined to find the solution :)
>
>
> -M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

RE: Impact of missing parameter during mdadm create

am 04.03.2011 08:36:13 von Ken Drummond

On Fri, 2011-03-04 at 00:01 -0500, Mike Viau wrote:
> > > On Thu, 3 Mar 2011 21:06:50 +1000 wrote:
> > > > On Tue, 2011-03-01 at 13:38 -0500, Mike Viau wrote:
> >
> > > > cat /etc/mdadm/mdadm.conf
> > > > # mdadm.conf
> > > > #
> > > > # Please refer to mdadm.conf(5) for information about this file.
> > > > #
> > > >
> > > > # by default, scan all partitions (/proc/partitions) for MD superblocks.
> > > > # alternatively, specify devices to scan, using wildcards if desired.
> > > > DEVICE partitions containers
> > > >
> > > > # auto-create devices with Debian standard permissions
> > > > CREATE owner=root group=disk mode=0660 auto=yes
> > > >
> > > > # automatically tag new arrays as belonging to the local system
> > > > HOMEHOST
> > > >
> > > > # definitions of existing MD arrays
> > > > ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0
> > > >
> > >
> > > I'm not sure if specifying /dev/md/0 is the same as /dev/md0, but I use
> > > the /dev/mdX format and things seem to work for me.
> > >
> >
> >
> >
> > Thanks I updated my config to use the /dev/mdX format and updated my kernel's initramfs as well.
> >

Can you post the output of "dmesg | grep md" after a reboot?

Alternatively you might like to review your system log for what is
happening during the boot.

> >
> > > >
> > > >
> > > > In trying to fix the problem I attempted to change the preferred minor of an MD array (RAID) by following these instructions.
> > > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > > # you need to manually assemble the array to change the preferred minor
> > > > # if you manually assemble, the superblock will be updated to reflect
> > > > # the preferred minor as you indicate with the assembly.
> > > > # for example, to set the preferred minor to 4:
> > > > mdadm --assemble /dev/md4 /dev/sd[abc]1
> > > >
> > > > # this only works on 2.6 kernels, and only for RAID levels of 1 and above.
> > > >
> > > >
> > > > mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
> > > > mdadm: looking for devices for /dev/md0
> > > > mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> > > > mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> > > > mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> > > > mdadm: added /dev/sdb1 to /dev/md0 as 1
> > > > mdadm: added /dev/sdd1 to /dev/md0 as 2
> > > > mdadm: added /dev/sda1 to /dev/md0 as 0
> > > > mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.
> > > >
> > > >
> > >
> > > So because I specified all the drives, I assume this is the same thing
> > > as assembling the RAID degraded and then manually re-adding the
> > > last one (/dev/sdd1).
> > > >
> > >
> > > So if you wait for the resync to complete, what happens if you:
> > >
> > > mdadm -S /dev/md0
> > > mdadm -Av /dev/md0
> >
> > I allowed the resync to complete and when stopping the array and then assembling all three drives assembled again.
> >
> > After a system reboot though, the mdadm raid 5 array was only automatically assembled with /dev/sd{a,b}1.
> >
> > mdadm -Av /dev/md0 would also start the array degraded with /dev/sd{a,b}1 only
> > unless all three drives were manually specified when assembling the array, so this doesn't help :(
> >

So you're saying that "mdadm -Av /dev/md0" assembled the array
completely before a reboot but didn't do the same after a reboot?

> >
> > Back tracking a bit... by re-worded one of my previous questions:
> >
> > Where does the mdadm -D /dev/md0 command get the Major/Minor information
> > for each drive that is a member of the array from?

I might be mistaken, but I think you are confusing the Major/Minor of
the devices making up the array with the Major/Minor of the md device
itself. You can specify the actual component device names in the
mdadm.conf file, but this is not a good option because those device
names can change between reboots, which is why I specify the UUID of
the array, as you have also done. mdadm then scans all devices looking
for that UUID. From what you have provided it seems that all three
devices have the array UUID set correctly.

> >
> > Does this information have to _exactly_ match the Major/Minor of the block devices on the system in order for the array to be built automatically on system start up? When I created the raid 5 array I passed 'missing' in place of the block-device/partition that is now /dev/sdd1 (the third drive in the array).
> >
> > I searched through the hexdump of my array drives (starting at 0x1000 where the Superblock began), but I could not detect where the major/minor were stored on the drive.
> >
> >
> > Without knowing exactly what information or where the information is updated for the Major/Minor information, I ran:
> >
> > mdadm --assemble /dev/md0 --update=homehost (To change the homehost as recorded in the superblock. For version-1 superblocks, this involves updating the name.)
> >

So can you post the output of

mdadm -D /dev/md0
mdadm -E /dev/sd{b,d,a}1

and the contents of mdadm.conf as they currently stand, along with the
"dmesg | grep md" output straight after a reboot as requested above.

> > and
> >
> > mdadm --assemble /dev/md0 --update=super-minor (To update the preferred minor field on each superblock to match the minor number of the array being assembled)
> >
> >
> > Now the system still unfortunately reboots with 2 of 3 drives in the array
> > (degraded), but manually assembly now _works_ by running mdadm -Av /dev/md0
> > (which produces):
> >
> > mdadm: looking for devices for /dev/md0
> > mdadm: no RAID superblock on /dev/dm-6
> > mdadm: /dev/dm-6 has wrong uuid.
> > mdadm: no RAID superblock on /dev/dm-5
> > mdadm: /dev/dm-5 has wrong uuid.
> > mdadm: no RAID superblock on /dev/dm-4
> > mdadm: /dev/dm-4 has wrong uuid.
> > mdadm: no RAID superblock on /dev/dm-3
> > mdadm: /dev/dm-3 has wrong uuid.
> > mdadm: cannot open device /dev/dm-2: Device or resource busy
> > mdadm: /dev/dm-2 has wrong uuid.
> > mdadm: no RAID superblock on /dev/dm-1
> > mdadm: /dev/dm-1 has wrong uuid.
> > mdadm: no RAID superblock on /dev/dm-0
> > mdadm: /dev/dm-0 has wrong uuid.
> > mdadm: no RAID superblock on /dev/sde
> > mdadm: /dev/sde has wrong uuid.
> > mdadm: no RAID superblock on /dev/sdd
> > mdadm: /dev/sdd has wrong uuid.
> > mdadm: cannot open device /dev/sdc7: Device or resource busy
> > mdadm: /dev/sdc7 has wrong uuid.
> > mdadm: cannot open device /dev/sdc6: Device or resource busy
> > mdadm: /dev/sdc6 has wrong uuid.
> > mdadm: cannot open device /dev/sdc5: Device or resource busy
> > mdadm: /dev/sdc5 has wrong uuid.
> > mdadm: no RAID superblock on /dev/sdc2
> > mdadm: /dev/sdc2 has wrong uuid.
> > mdadm: cannot open device /dev/sdc1: Device or resource busy
> > mdadm: /dev/sdc1 has wrong uuid.
> > mdadm: cannot open device /dev/sdc: Device or resource busy
> > mdadm: /dev/sdc has wrong uuid.
> > mdadm: no RAID superblock on /dev/sdb
> > mdadm: /dev/sdb has wrong uuid.
> > mdadm: no RAID superblock on /dev/sda
> > mdadm: /dev/sda has wrong uuid.
> > mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
> > mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
> > mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
> > mdadm: added /dev/sdb1 to /dev/md0 as 1
> > mdadm: added /dev/sdd1 to /dev/md0 as 2
> > mdadm: added /dev/sda1 to /dev/md0 as 0
> > mdadm: /dev/md0 has been started with 3 drives.
> >

So the array does assemble without specifying the component devices on
the mdadm command line? I thought you said this didn't happen above?

> > > If it has, and still comes up as degraded on reboot it may pay to add a
> > > bitmap; to make resyncs much quicker while working this out.
> > >
> >
> > Could you please explain what you mean further?
> >
> > I have a feeling I am not going to be so lucky in identifying this degraded-array-after-reboot
> > problem in the near future, but would like to make my efforts more efficient if possible.

A bitmap stores a data structure where each bit represents a "section"
of the array (the size of that section is determined by the size of the
bitmap) and when any data in a "section" is updated the corresponding
bit is turned on. So if an array is written to while degraded mdadm can
check the bitmap and only resync the changed "sections" when a missing
device is added back.
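
As a sketch of what that looks like in practice (an internal bitmap,
stored next to the superblock, is the simplest option; the command
assumes the array is assembled and clean):

mdadm --grow /dev/md0 --bitmap=internal

/proc/mdstat should then show a "bitmap:" line under md0, and the
bitmap can later be removed with --bitmap=none if you no longer want
it.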


Ken.

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

RE: Impact of missing parameter during mdadm create

am 05.03.2011 07:37:35 von Ken Drummond

On Fri, 2011-03-04 at 23:22 -0500, Mike Viau wrote:

> Attached HOST_SYSLOG.txt is the _complete_ syslog after a system reboot showing only 2 drives are assembled automatically on boot, and the array is degraded.
>
>

OK, I guess I should have thought of this before, your syslog shows that
sdd is attached as a USB device and is not discovered until after md0
has been assembled (using all the devices available at the time). I'm
not sure how you would make md auto assembly wait until after your USB
device has been discovered. Hopefully someone else here can provide
some advice; although it's not strictly a linux-raid issue, there are
quite a few helpful people here.

I'm pretty sure if you replaced the USB device with another SATA device
things would work as you expected. Another alternative may be to not
have that array auto assembled but wait and assemble after the boot is
finished, this could probably be scripted.
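
Such a script could be as simple as the following sketch, run late in
the boot (the member device name and the 30-second timeout are just
examples for your particular setup):

#!/bin/sh
# Wait for the USB member to be discovered, then assemble the array.
MEMBER=/dev/sdd1
for i in $(seq 1 30); do
    [ -b "$MEMBER" ] && break
    sleep 1
done
mdadm --assemble /dev/md0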

Ken.

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

RE: Impact of missing parameter during mdadm create

am 05.03.2011 15:47:58 von Mike Viau

> On Sat, 5 Mar 2011 16:37:35 +1000 wrote:
>
> > On Fri, 2011-03-04 at 23:22 -0500, Mike Viau wrote:
> >
> > Attached HOST_SYSLOG.txt is the _complete_ syslog after a system reboot showing only 2 drives are assembled automatically on boot, and the array is degraded.
> >
> >
>
> OK, I guess I should have thought of this before, your syslog shows that
> sdd is attached as a USB device and is not discovered until after md0
> has been assembled (using all the devices available at the time). I'm
> not sure how you would make md auto assembly wait until after your USB
> device has been discovered. Hopefully someone else here can provide
> some advice, although it's not strictly a linux-raid issue there are
> quite a few helpful people here.
>
> I'm pretty sure if you replaced the USB device with another SATA device
> things would work as you expected. Another alternative may be to not
> have that array auto assembled but wait and assemble after the boot is
> finished, this could probably be scripted.
>
> Ken.
>

Well that about explains it, thanks for your help and time :)
I am going to mark this thread as [solved] for now; the issue was not with the missing parameter, but with the missing drive being connected to the USB bus. When I get a moment later today I'll search around to see what others in this situation have done. If all else fails and I don't find anything, or I just find the quick and dirty script fixes, then I will post a new thread looking for suggestions.
I guess I could open up my hard drive enclosure too, but in doing so I'd be voiding the warranty I have left on it.
Regardless, thanks again Ken!

-M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Impact of missing parameter during mdadm create

am 05.03.2011 16:02:44 von John Robinson

On 05/03/2011 14:47, Mike Viau wrote:
[...]
> I am going to mark this thread as [solved] for now, the issue was not
> with the missing parameter, but with the drive missing being
> connected to the usb bus. When I get a moment later today I'll search
> around to see what others in this situations have done. If all else
> fails and I don't find anything or I just find the quick and dirty
> script fixes, then I will post a new thread looking for suggestions.

You need to get the USB drivers and usb-storage modules into your
initramfs. On CentOS 5 you do that with mkinitrd --with-usb, but I don't
know what other distros use.
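
On Debian-based systems (which your mdadm.conf comments suggest you are
running) something like the following should do it, though the exact
module list may need adjusting for your USB controller:

# Ask initramfs-tools to include USB storage support, then rebuild.
echo usb-storage >> /etc/initramfs-tools/modules
update-initramfs -u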

Cheers,

John.

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html