Help with recovering resized raid where machine crashed while PENDING

Help with recovering resized raid where machine crashed while PENDING

On 02.07.2011 10:54:05 by Petter Reinholdtsen

I could use some help with recovering a RAID5. I had two RAID5 arrays
using three disks. The 1T disks were partitioned into two halves, and
each RAID5 used one partition from each disk (created this way to be
able to add my 500G disks into the raids).
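
For reference, the original arrays were created roughly like this (the
device names here are from memory, so treat them as illustrative):

  # each 1T disk holds two ~500G partitions; one partition from each
  # disk goes into each of the two RAID5 arrays
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdh1
  mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sdd2 /dev/sde2 /dev/sdh2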

Then I added two more disks to the setup, partitioned the same way,
added their partitions to the two raids, and asked the raids to grow.
The first raid started growing and got around 60% done before the
machine crashed and had to be rebooted. The second raid did not start
growing and was PENDING. As far as I know, it was still PENDING when
the machine crashed. When I ran mdadm to start the second grow, the
mdadm command hung waiting for the first grow operation to finish. I
ended up killing it after a few hours, hoping to continue the grow
operation once the first raid was done growing, roughly 15 days later.
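
For reference, the grow was started with commands roughly like the
following (again from memory, so the device names may not be exact):

  # add the new partitions as spares to both arrays
  mdadm /dev/md2 --add /dev/sda1 /dev/sdb1
  mdadm /dev/md3 --add /dev/sda2 /dev/sdb2
  # then ask each array to reshape from 3 to 5 devices
  mdadm --grow /dev/md2 --raid-devices=5
  mdadm --grow /dev/md3 --raid-devices=5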

After the crash and first reboot, the first RAID5 is activated and
shows up as auto-read-only, and the second raid fails to assemble. I
did not specify a backup file when growing, as the recipe I found did
not mention that it would be smart to do so. Now I wish I had.
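
In hindsight the grow step should probably have been done with a backup
file on a separate disk, something like (the path is only an example):

  mdadm --grow /dev/md3 --raid-devices=5 --backup-file=/root/md3-grow.backup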

Any ideas how I can recover my raid? After reading
, I
suspect creating it again is the solution, but am unsure if I should
recreate it with 3 or 5 partitions. Trying to assemble results in this:

meta:~# mdadm --assemble /dev/md3 /dev/sdd2 /dev/sde2 /dev/sdh2 /dev/sda2 /dev/sdb2
mdadm: Failed to restore critical section for reshape, sorry.
Possibly you needed to specify the --backup-file
meta:~#

How can I know which disks to use when recreating if I want to
recreate using only three disks? Is it the three with the active
state?

This is the content of /proc/mdstat. The md0 and md1 RAIDs can be
ignored as they are on two different disks:

Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active (auto-read-only) raid5 sdd1[0] sda1[4] sdb1[3] sde1[2] sdh1[1]
976558976 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid1 sdc2[0] sdf2[1]
976510912 blocks [2/2] [UU]

md0 : active raid1 sdc1[0] sdf1[1]
248896 blocks [2/2] [UU]

unused devices:

Based on the disks used by md2, I ran "mdadm --examine
/dev/sd[dabeh]2" to get the status of the problematic partitions:

/dev/sda2:
Magic : a92b4efc
Version : 00.91.00
UUID : 6dcd10c1:39d083f9:e49659ac:48e50bf6
Creation Time : Sun Oct 26 17:29:27 2008
Raid Level : raid5
Used Dev Size : 488279488 (465.66 GiB 500.00 GB)
Array Size : 1953117952 (1862.64 GiB 1999.99 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 3

Reshape pos'n : 0
Delta Devices : 2 (3->5)

Update Time : Thu Jun 30 11:00:18 2011
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 3274a54b - correct
Events : 193913

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 4 8 98 4 active sync

0 0 8 18 0 active sync /dev/sdb2
1 1 8 34 1 active sync /dev/sdc2
2 2 8 82 2 active sync /dev/sdf2
3 3 8 114 3 active sync /dev/sdh2
4 4 8 98 4 active sync
/dev/sdb2:
Magic : a92b4efc
Version : 00.91.00
UUID : 6dcd10c1:39d083f9:e49659ac:48e50bf6
Creation Time : Sun Oct 26 17:29:27 2008
Raid Level : raid5
Used Dev Size : 488279488 (465.66 GiB 500.00 GB)
Array Size : 1953117952 (1862.64 GiB 1999.99 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 3

Reshape pos'n : 0
Delta Devices : 2 (3->5)

Update Time : Thu Jun 30 11:00:18 2011
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 3274a559 - correct
Events : 193913

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 114 3 active sync /dev/sdh2

0 0 8 18 0 active sync /dev/sdb2
1 1 8 34 1 active sync /dev/sdc2
2 2 8 82 2 active sync /dev/sdf2
3 3 8 114 3 active sync /dev/sdh2
4 4 8 98 4 active sync
/dev/sdd2:
Magic : a92b4efc
Version : 00.91.00
UUID : 6dcd10c1:39d083f9:e49659ac:48e50bf6
Creation Time : Sun Oct 26 17:29:27 2008
Raid Level : raid5
Used Dev Size : 488279488 (465.66 GiB 500.00 GB)
Array Size : 1953117952 (1862.64 GiB 1999.99 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 3

Reshape pos'n : 0
Delta Devices : 2 (3->5)

Update Time : Thu Jun 30 10:59:48 2011
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 32779a4d - correct
Events : 193912

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 18 0 active sync /dev/sdb2

0 0 8 18 0 active sync /dev/sdb2
1 1 8 34 1 active sync /dev/sdc2
2 2 8 82 2 active sync /dev/sdf2
3 3 8 114 3 active sync /dev/sdh2
4 4 8 98 4 active sync
/dev/sde2:
Magic : a92b4efc
Version : 00.91.00
UUID : 6dcd10c1:39d083f9:e49659ac:48e50bf6
Creation Time : Sun Oct 26 17:29:27 2008
Raid Level : raid5
Used Dev Size : 488279488 (465.66 GiB 500.00 GB)
Array Size : 1953117952 (1862.64 GiB 1999.99 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 3

Reshape pos'n : 0
Delta Devices : 2 (3->5)

Update Time : Thu Jun 30 11:00:18 2011
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 3274a505 - correct
Events : 193913

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 34 1 active sync /dev/sdc2

0 0 8 18 0 active sync /dev/sdb2
1 1 8 34 1 active sync /dev/sdc2
2 2 8 82 2 active sync /dev/sdf2
3 3 8 114 3 active sync /dev/sdh2
4 4 8 98 4 active sync
/dev/sdh2:
Magic : a92b4efc
Version : 00.91.00
UUID : 6dcd10c1:39d083f9:e49659ac:48e50bf6
Creation Time : Sun Oct 26 17:29:27 2008
Raid Level : raid5
Used Dev Size : 488279488 (465.66 GiB 500.00 GB)
Array Size : 1953117952 (1862.64 GiB 1999.99 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 3

Reshape pos'n : 0
Delta Devices : 2 (3->5)

Update Time : Thu Jun 30 10:59:48 2011
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 32779a91 - correct
Events : 193912

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 82 2 active sync /dev/sdf2

0 0 8 18 0 active sync /dev/sdb2
1 1 8 34 1 active sync /dev/sdc2
2 2 8 82 2 active sync /dev/sdf2
3 3 8 114 3 active sync /dev/sdh2
4 4 8 98 4 active sync

Happy hacking,
--
Petter Reinholdtsen

Re: Help with recovering resized raid where machine crashed while PENDING

On 04.07.2011 16:21:27 by Petter Reinholdtsen

[Petter Reinholdtsen]
> Any ideas how I can recover my raid? After reading
> , I
> suspect creating it again is the solution, but am unsure if I should
> recreate it with 3 or 5 partitions. Trying to assemble results in this:

>
> meta:~# mdadm --assemble /dev/md3 /dev/sdd2 /dev/sde2 /dev/sdh2 /dev/sda2 /dev/sdb2
> mdadm: Failed to restore critical section for reshape, sorry.
> Possibly you needed to specify the --backup-file
> meta:~#
>
> How can I know which disks to use when recreating if I want to
> recreate using only three disks? Is it the three with the active
> state?

This is still a problem. I got some help from JyZyXEL on #linux-raid,
and managed to get md2 to continue its reshaping with mdadm
--readwrite /dev/md2, but the raid that was pending its reshaping is
still failing to assemble with the above message.
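
For reference, this is roughly what got md2 moving again:

  mdadm --readwrite /dev/md2   # clear the auto-read-only state so the reshape resumes
  cat /proc/mdstat             # check that the reshape counter is advancing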

Is this the only approach to recover?

mdadm --create --assume-clean --level=5 --raid-devices=5 /dev/md3 /dev/sd?2

Should I use --raid-devices=3 or --raid-devices=5, and if I should use
--raid-devices=3, how do I figure out which devices to use?

Happy hacking,
--
Petter Reinholdtsen

Re: Help with recovering resized raid where machine crashed while PENDING

On 05.07.2011 02:49:23 by NeilBrown

On Sat, 02 Jul 2011 10:54:05 +0200 Petter Reinholdtsen wrote:

>
> I could use some help with recovering a RAID5. I had two RAID5 arrays
> using three disks. The 1T disks were partitioned into two halves, and
> each RAID5 used one partition from each disk (created this way to be
> able to add my 500G disks into the raids).
>
> Then I added two more disks to the setup, partitioned the same way,
> added their partitions to the two raids, and asked the raids to grow.
> The first raid started growing and got around 60% done before the
> machine crashed and had to be rebooted. The second raid did not start
> growing and was PENDING. As far as I know, it was still PENDING when
> the machine crashed. When I ran mdadm to start the second grow, the
> mdadm command hung waiting for the first grow operation to finish. I
> ended up killing it after a few hours, hoping to continue the grow
> operation once the first raid was done growing, roughly 15 days later.
>
> After the crash and first reboot, the first RAID5 is activated and
> shows up as auto-read-only, and the second raid fails to assemble. I
> did not specify a backup file when growing, as the recipe I found did
> not mention that it would be smart to do so. Now I wish I had.

It probably wouldn't have helped. It is supposed to write backup stuff to
the spares, and if it didn't do that, it probably wouldn't have written it
to a file either.

The easiest fix for now is to recreate the array.

mdadm -CR /dev/md3 --metadata=0.90 -n3 -l5 -c64 /dev/sdb2 /dev/sdc2 /dev/sdf2 --assume-clean

should do it.

Then if that looks good, add the extra devices and grow the array again.
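
Something like this, for example (adjust the device names to your setup;
the backup-file path is only an example):

  mdadm /dev/md3 --add /dev/sdX2 /dev/sdY2
  mdadm --grow /dev/md3 --raid-devices=5 --backup-file=/root/md3-grow.backup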

NeilBrown




Re: Help with recovering resized raid where machine crashed while PENDING

On 05.07.2011 18:24:33 by Petter Reinholdtsen

[Neil Brown]
> It probably wouldn't have helped. It is supposed to write backup
> stuff to the spares, and if it didn't do that, it probably wouldn't
> have written it to a file either.

Right. Is this a bug or the way it is supposed to work?

> The easiest fix for now is to recreate the array.
>
> mdadm -CR /dev/md3 --metadata=0.90 -n3 -l5 -c64 /dev/sdb2 /dev/sdc2
> /dev/sdf2 --assume-clean
>
> should do it.

Thank you. How did you determine which devices to use and which order
to list them in? I've since rebooted and want to make sure I do not
pick the wrong devices.

> Then if that looks good, add the extra devices and grow the array
> again.

Will try when md2 is done growing tomorrow. :)

Happy hacking,
--
Petter Reinholdtsen

Re: Help with recovering resized raid where machine crashed while PENDING

On 06.07.2011 01:39:17 by NeilBrown

On Tue, 5 Jul 2011 18:24:33 +0200 Petter Reinholdtsen wrote:

> [Neil Brown]
> > It probably wouldn't have helped. It is supposed to write backup
> > stuff to the spares, and if it didn't do that, it probably wouldn't
> > have written it to a file either.
>
> Right. Is this a bug or the way it is supposed to work?

Bug I expect... though actually it might be a different one to what I was
thinking.

Try to assemble the array normally with --verbose.
i.e.

mdadm --assemble /dev/md3 --verbose /dev/sd[abcde]2
(or whatever the right list of devices is).

If this fails with something like

mdadm: too-old timestamp on backup-metadata on ....

then you can assemble the array by

export MDADM_GROW_ALLOW_OLD=1
mdadm --assemble ....(same command as above).

This requires mdadm-3.1.2 or newer.

>
> > The easiest fix for now is to recreate the array.
> >
> > mdadm -CR /dev/md3 --metadata=0.90 -n3 -l5 -c64 /dev/sdb2 /dev/sdc2
> > /dev/sdf2 --assume-clean
> >
> > should do it.
>
> Thank you. How did you determine which devices to use and which order
> to list them in? I've since rebooted and want to make sure I do not
> pick the wrong devices.

You only need this if the above doesn't work:

In the --examine output for a particular device, look at the 'RaidDevice'
column of the 'this' row.
That number tells you the position in the array. The three devices to list
are the three that have the numbers '0', '1', and '2', and you want to list
them in that order.
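
For example, something like this will show the slot of each device at a
glance (adjust the device list to match yours):

  for d in /dev/sd[a-h]2; do
      printf '%s: ' "$d"
      mdadm --examine "$d" 2>/dev/null | grep '^this'
  done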

NeilBrown


>
> > Then if that looks good, add the extra devices and grow the array
> > again.
>
> Will try when md2 is done growing tomorrow. :)
>
> Happy hacking,


Re: Help with recovering resized raid where machine crashed while PENDING

On 06.07.2011 22:41:02 by Petter Reinholdtsen

[Neil Brown]
> Bug I expect... though actually it might be a different one to what
> I was thinking.

Good to hear that this was not the way it was supposed to work. The
failing RAID really had me worried that my files were lost. :)

> Try to assemble the array normally with --verbose.
> i.e.
>
> mdadm --assemble /dev/md3 --verbose /dev/sd[abcde]2
> (or whatever the right list of devices is).
>
> If this fails with something like
>
> mdadm: too-old timestamp on backup-metadata on ....

This is the message I saw:

meta:/dev# mdadm --assemble --verbose /dev/md3 sdb2 sdg2 sdh2 sdc2 sdf2
mdadm: looking for devices for /dev/md3
mdadm: sdb2 is identified as a member of /dev/md3, slot 0.
mdadm: sdg2 is identified as a member of /dev/md3, slot 4.
mdadm: sdh2 is identified as a member of /dev/md3, slot 3.
mdadm: sdc2 is identified as a member of /dev/md3, slot 1.
mdadm: sdf2 is identified as a member of /dev/md3, slot 2.
mdadm:/dev/md3 has an active reshape - checking if critical section needs to be restored
mdadm: too-old timestamp on backup-metadata on device-3
mdadm: too-old timestamp on backup-metadata on device-4
mdadm: Failed to find backup of critical section
mdadm: Failed to restore critical section for reshape, sorry.
Possibly you needed to specify the --backup-file
meta:/dev#

> then you can assemble the array by
>
> export MDADM_GROW_ALLOW_OLD=1
> mdadm --assemble ....(same command as above).
>
> This requires mdadm-3.1.2 or newer.

And this worked great!

meta:/dev# MDADM_GROW_ALLOW_OLD=1 mdadm --assemble --verbose /dev/md3 sdb2 sdg2 sdh2 sdc2 sdf2
mdadm: looking for devices for /dev/md3
mdadm: sdb2 is identified as a member of /dev/md3, slot 0.
mdadm: sdg2 is identified as a member of /dev/md3, slot 4.
mdadm: sdh2 is identified as a member of /dev/md3, slot 3.
mdadm: sdc2 is identified as a member of /dev/md3, slot 1.
mdadm: sdf2 is identified as a member of /dev/md3, slot 2.
mdadm:/dev/md3 has an active reshape - checking if critical section needs to be restored
mdadm: accepting backup with timestamp 1308555399 for array with timestamp 1309424388
mdadm: restoring critical section
mdadm: added sdc2 to /dev/md3 as 1
mdadm: added sdf2 to /dev/md3 as 2
mdadm: added sdh2 to /dev/md3 as 3
mdadm: added sdg2 to /dev/md3 as 4
mdadm: added sdb2 to /dev/md3 as 0
mdadm: /dev/md3 has been started with 5 drives.
meta:/dev#
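
Now I am just keeping an eye on the reshape with something like:

  watch -n 60 cat /proc/mdstat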

Happy hacking,
--
Petter Reinholdtsen