[PATCH 0/2] reshape fixlets for 2.6.37

on 22.10.2010 20:03:49 by dan.j.williams

Hi Neil,

Here are two fixes that we discovered while implementing the external
reshape code. The second one is tagged for -stable as current versions
of mdadm may not assemble a native array that is being expanded by more
than 1 disk. Incremental assembly, however, does work in this case.

---

Dan Williams (2):
md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case
md/raid5: initialize ->recovery_offset when growing raid_disks

drivers/md/raid5.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

[PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in

am 22.10.2010 20:03:54 von dan.j.williams

mdmon is orchestrating the reshape progress and we rely on it to manage the
reshape position and metadata updates.

Reported-by: Adam Kwolek
Signed-off-by: Dan Williams
---
drivers/md/raid5.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 69b0a16..9e8ecd5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
set_bit(MD_CHANGE_DEVS, &mddev->flags);
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait, mddev->flags == 0 ||
- kthread_should_stop());
+ !mddev->persistent || kthread_should_stop());
spin_lock_irq(&conf->device_lock);
conf->reshape_safe = mddev->reshape_position;
spin_unlock_irq(&conf->device_lock);
@@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
set_bit(MD_CHANGE_DEVS, &mddev->flags);
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait,
- !test_bit(MD_CHANGE_DEVS, &mddev->flags)
- || kthread_should_stop());
+ !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
+ !mddev->persistent || kthread_should_stop());
spin_lock_irq(&conf->device_lock);
conf->reshape_safe = mddev->reshape_position;
spin_unlock_irq(&conf->device_lock);


[PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks

on 22.10.2010 20:03:59 by dan.j.williams

We mark the disk in-sync, and also need to init ->recovery_offset lest
we confuse older versions of mdadm that don't consider this case at
assembly (i.e. that when growing we assume the disk is in-sync).

mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.

Cc:
Signed-off-by: Dan Williams
---
drivers/md/raid5.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9e8ecd5..f8a27d4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
if (raid5_add_disk(mddev, rdev) == 0) {
char nm[20];
if (rdev->raid_disk >= conf->previous_raid_disks) {
+ rdev->recovery_offset = MaxSector;
set_bit(In_sync, &rdev->flags);
added_devices++;
} else


Re: [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case

on 26.10.2010 07:22:43 by NeilBrown

On Fri, 22 Oct 2010 11:03:54 -0700
Dan Williams wrote:

> mdmon is orchestrating the reshape progress and we rely on it to manage the
> reshape position and metadata updates.
>
> Reported-by: Adam Kwolek
> Signed-off-by: Dan Williams
> ---
> drivers/md/raid5.c | 6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 69b0a16..9e8ecd5 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
> set_bit(MD_CHANGE_DEVS, &mddev->flags);
> md_wakeup_thread(mddev->thread);
> wait_event(mddev->sb_wait, mddev->flags == 0 ||
> - kthread_should_stop());
> + !mddev->persistent || kthread_should_stop());
> spin_lock_irq(&conf->device_lock);
> conf->reshape_safe = mddev->reshape_position;
> spin_unlock_irq(&conf->device_lock);
> @@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
> set_bit(MD_CHANGE_DEVS, &mddev->flags);
> md_wakeup_thread(mddev->thread);
> wait_event(mddev->sb_wait,
> - !test_bit(MD_CHANGE_DEVS, &mddev->flags)
> - || kthread_should_stop());
> + !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
> + !mddev->persistent || kthread_should_stop());
> spin_lock_irq(&conf->device_lock);
> conf->reshape_safe = mddev->reshape_position;
> spin_unlock_irq(&conf->device_lock);


Do we really need this?

If we set MD_CHANGE_DEVS and wake up the thread, the thread will call
md_update_sb which, in the !persistent case, will clear the bit and wake up
sb_wait. So we shouldn't block on those wait_events anyway...

Am I missing something?

And as ->persistent doesn't change, I would rather have that test outside the
wait_event() to make the meaning more explicit. So if I have missed
something, I'll make that change.

Thanks,
NeilBrown

Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks

on 26.10.2010 07:35:24 by NeilBrown

On Fri, 22 Oct 2010 11:03:59 -0700
Dan Williams wrote:

> We mark the disk in-sync, and also need to init ->recovery_offset lest
> we confuse older versions of mdadm that don't consider this case at
> assembly (i.e. that when growing we assume the disk is in-sync).
>
> mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.
>
> Cc:
> Signed-off-by: Dan Williams
> ---
> drivers/md/raid5.c | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 9e8ecd5..f8a27d4 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
> if (raid5_add_disk(mddev, rdev) == 0) {
> char nm[20];
> if (rdev->raid_disk >= conf->previous_raid_disks) {
> + rdev->recovery_offset = MaxSector;
> set_bit(In_sync, &rdev->flags);
> added_devices++;
> } else


Sorry, but I'm not getting this one....

rdev->recovery_offset is only ever used when In_sync is clear. So it makes
no sense to give it a value when In_sync is set.

Maybe there are some places where we clear In_sync that need to have
recovery_offset set to zero, but it isn't obvious to me that that would
explain your symptom.

Can you give a bit more detail of the problem you are seeing please?

Thanks,
NeilBrown

Re: [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case

on 26.10.2010 09:20:25 by dan.j.williams

On 10/25/2010 10:22 PM, Neil Brown wrote:
> On Fri, 22 Oct 2010 11:03:54 -0700
> Dan Williams wrote:
>
>> mdmon is orchestrating the reshape progress and we rely on it to manage the
>> reshape position and metadata updates.
>>
>> Reported-by: Adam Kwolek
>> Signed-off-by: Dan Williams
>> ---
>> drivers/md/raid5.c | 6 +++---
>> 1 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 69b0a16..9e8ecd5 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>> set_bit(MD_CHANGE_DEVS, &mddev->flags);
>> md_wakeup_thread(mddev->thread);
>> wait_event(mddev->sb_wait, mddev->flags == 0 ||
>> - kthread_should_stop());
>> + !mddev->persistent || kthread_should_stop());
>> spin_lock_irq(&conf->device_lock);
>> conf->reshape_safe = mddev->reshape_position;
>> spin_unlock_irq(&conf->device_lock);
>> @@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>> set_bit(MD_CHANGE_DEVS, &mddev->flags);
>> md_wakeup_thread(mddev->thread);
>> wait_event(mddev->sb_wait,
>> - !test_bit(MD_CHANGE_DEVS, &mddev->flags)
>> - || kthread_should_stop());
>> + !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
>> + !mddev->persistent || kthread_should_stop());
>> spin_lock_irq(&conf->device_lock);
>> conf->reshape_safe = mddev->reshape_position;
>> spin_unlock_irq(&conf->device_lock);
>
>
> Do we really need this?

Sorry, not anymore, this patch predated the md_update_sb() rework and is
no longer needed as far as I can see.

--
Dan

Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks

on 26.10.2010 09:26:33 by dan.j.williams

On 10/25/2010 10:35 PM, Neil Brown wrote:
> On Fri, 22 Oct 2010 11:03:59 -0700
> Dan Williams wrote:
>
>> We mark the disk in-sync, and also need to init ->recovery_offset lest
>> we confuse older versions of mdadm that don't consider this case at
>> assembly (i.e. that when growing we assume the disk is in-sync).
>>
>> mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.
>>
>> Cc:
>> Signed-off-by: Dan Williams
>> ---
>> drivers/md/raid5.c | 1 +
>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 9e8ecd5..f8a27d4 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>> if (raid5_add_disk(mddev, rdev) == 0) {
>> char nm[20];
>> if (rdev->raid_disk >= conf->previous_raid_disks) {
>> + rdev->recovery_offset = MaxSector;
>> set_bit(In_sync, &rdev->flags);
>> added_devices++;
>> } else
>
>
> Sorry, but I'm not getting this one....
>
> rdev->recovery_offset is only ever used when In_sync is clear. So it makes
> no sense to give it a value when In_sync is set.
>
> Maybe there are some places where we clear In_sync that need to have
> recovery_offset set to zero, but it isn't obvious to me that that would
> explain your symptom.
>
> Can you give a bit more detail of the problem you are seeing please?

mdadm -A of an array that is growing raid_disks by more than max_degraded.

Here is the related mdadm fix, but I figured it was worthwhile to also
address this in the kernel for the corner case of a new kernel + old mdadm.

> commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
> Author: Dan Williams
> Date: Mon Jul 12 12:03:13 2010 -0700
>
> Assemble: fix assembly in the delta_disks > max_degraded case
>
> Incremental assembly works on such an array because the kernel sees the
> disk as in-sync and that the array is reshaping. Teach Assemble() the
> same assumptions.
>
> This is only needed on kernels that do not initialize ->recovery_offset
> when activating spares for reshape.
>
> Signed-off-by: Dan Williams
>
> diff --git a/Assemble.c b/Assemble.c
> index afd4e60..409f0d7 100644
> --- a/Assemble.c
> +++ b/Assemble.c
> @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
> devices[most_recent].i.events) {
> devices[j].uptodate = 1;
> if (i < content->array.raid_disks) {
> - if (devices[j].i.recovery_start == MaxSector) {
> + if (devices[j].i.recovery_start == MaxSector ||
> + (content->reshape_active &&
> + j >= content->array.raid_disks - content->delta_disks)) {
> okcnt++;
> avail[i]=1;
> } else

--
Dan

Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks

on 26.10.2010 09:54:24 by NeilBrown

On Tue, 26 Oct 2010 00:26:33 -0700
Dan Williams wrote:

> On 10/25/2010 10:35 PM, Neil Brown wrote:
> > On Fri, 22 Oct 2010 11:03:59 -0700
> > Dan Williams wrote:
> >
> >> We mark the disk in-sync, and also need to init ->recovery_offset lest
> >> we confuse older versions of mdadm that don't consider this case at
> >> assembly (i.e. that when growing we assume the disk is in-sync).
> >>
> >> mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.
> >>
> >> Cc:
> >> Signed-off-by: Dan Williams
> >> ---
> >> drivers/md/raid5.c | 1 +
> >> 1 files changed, 1 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> >> index 9e8ecd5..f8a27d4 100644
> >> --- a/drivers/md/raid5.c
> >> +++ b/drivers/md/raid5.c
> >> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
> >> if (raid5_add_disk(mddev, rdev) == 0) {
> >> char nm[20];
> >> if (rdev->raid_disk >= conf->previous_raid_disks) {
> >> + rdev->recovery_offset = MaxSector;
> >> set_bit(In_sync, &rdev->flags);
> >> added_devices++;
> >> } else
> >
> >
> > Sorry, but I'm not getting this one....
> >
> > rdev->recovery_offset is only ever used when In_sync is clear. So it makes
> > no sense to give it a value when In_sync is set.
> >
> > Maybe there are some places where we clear In_sync that need to have
> > recovery_offset set to zero, but it isn't obvious to me that that would
> > explain your symptom.
> >
> > Can you give a bit more detail of the problem you are seeing please?
>
> mdadm -A an array growing raid disks by more than max_degraded.
>
> Here is the related mdadm fix, but I figured it was worthwhile to also
> address this in the kernel for the corner case of new kernel + old mdadm.
>
> > commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
> > Author: Dan Williams
> > Date: Mon Jul 12 12:03:13 2010 -0700
> >
> > Assemble: fix assembly in the delta_disks > max_degraded case
> >
> > Incremental assembly works on such an array because the kernel sees the
> > disk as in-sync and that the array is reshaping. Teach Assemble() the
> > same assumptions.
> >
> > This is only needed on kernels that do not initialize ->recovery_offset
> > when activating spares for reshape.
> >
> > Signed-off-by: Dan Williams
> >
> > diff --git a/Assemble.c b/Assemble.c
> > index afd4e60..409f0d7 100644
> > --- a/Assemble.c
> > +++ b/Assemble.c
> > @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
> > devices[most_recent].i.events) {
> > devices[j].uptodate = 1;
> > if (i < content->array.raid_disks) {
> > - if (devices[j].i.recovery_start == MaxSector) {
> > + if (devices[j].i.recovery_start == MaxSector ||
> > + (content->reshape_active &&
> > + j >= content->array.raid_disks - content->delta_disks)) {
> > okcnt++;
> > avail[i]=1;
> > } else
>
> --
> Dan


I think the issue here is probably that the metadata handler isn't setting
recovery_start "properly" for devices that are in-sync just as far as
the current reshape has progressed.

I really don't think the kernel has anything to do with it as it doesn't
report any recovery_start value at all when the device is 'in_sync'.

So I think this patch is wrong and you need to fix the getinfo_super method
to set recovery_start to MaxSector when the device is a new device in an
array being reshaped.

Could that make sense?

NeilBrown


Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks

on 26.10.2010 10:43:10 by dan.j.williams

On 10/26/2010 12:54 AM, Neil Brown wrote:
> On Tue, 26 Oct 2010 00:26:33 -0700
> Dan Williams wrote:
>
>> On 10/25/2010 10:35 PM, Neil Brown wrote:
>>> On Fri, 22 Oct 2010 11:03:59 -0700
>>> Dan Williams wrote:
>>>
>>>> We mark the disk in-sync, and also need to init ->recovery_offset lest
>>>> we confuse older versions of mdadm that don't consider this case at
>>>> assembly (i.e. that when growing we assume the disk is in-sync).
>>>>
>>>> mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.
>>>>
>>>> Cc:
>>>> Signed-off-by: Dan Williams
>>>> ---
>>>> drivers/md/raid5.c | 1 +
>>>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>>> index 9e8ecd5..f8a27d4 100644
>>>> --- a/drivers/md/raid5.c
>>>> +++ b/drivers/md/raid5.c
>>>> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>>>> if (raid5_add_disk(mddev, rdev) == 0) {
>>>> char nm[20];
>>>> if (rdev->raid_disk >= conf->previous_raid_disks) {
>>>> + rdev->recovery_offset = MaxSector;
>>>> set_bit(In_sync, &rdev->flags);
>>>> added_devices++;
>>>> } else
>>>
>>>
>>> Sorry, but I'm not getting this one....
>>>
>>> rdev->recovery_offset is only ever used when In_sync is clear. So it makes
>>> no sense to give it a value when In_sync is set.
>>>
>>> Maybe there are some places where we clear In_sync that need to have
>>> recovery_offset set to zero, but it isn't obvious to me that that would
>>> explain your symptom.
>>>
>>> Can you give a bit more detail of the problem you are seeing please?
>>
>> mdadm -A an array growing raid disks by more than max_degraded.
>>
>> Here is the related mdadm fix, but I figured it was worthwhile to also
>> address this in the kernel for the corner case of new kernel + old mdadm.
>>
>>> commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
>>> Author: Dan Williams
>>> Date: Mon Jul 12 12:03:13 2010 -0700
>>>
>>> Assemble: fix assembly in the delta_disks > max_degraded case
>>>
>>> Incremental assembly works on such an array because the kernel sees the
>>> disk as in-sync and that the array is reshaping. Teach Assemble() the
>>> same assumptions.
>>>
>>> This is only needed on kernels that do not initialize ->recovery_offset
>>> when activating spares for reshape.
>>>
>>> Signed-off-by: Dan Williams
>>>
>>> diff --git a/Assemble.c b/Assemble.c
>>> index afd4e60..409f0d7 100644
>>> --- a/Assemble.c
>>> +++ b/Assemble.c
>>> @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
>>> devices[most_recent].i.events) {
>>> devices[j].uptodate = 1;
>>> if (i < content->array.raid_disks) {
>>> - if (devices[j].i.recovery_start == MaxSector) {
>>> + if (devices[j].i.recovery_start == MaxSector ||
>>> + (content->reshape_active &&
>>> + j >= content->array.raid_disks - content->delta_disks)) {
>>> okcnt++;
>>> avail[i]=1;
>>> } else
>>
>> --
>> Dan
>
>
> I think the issue here is probably that the metadata handler isn't setting
> recovery_start "properly" for devices that are in-sync just as far as the
> the current reshape has progressed.
>
> I really don't think the kernel has anything to do with it as it doesn't
> report any recovery_start value at all when the device is 'in_sync'.

This was a few months back, but it looks like I started from the mdadm fix
and thought I could derive a kernel fix from it. Yes, the real problem is
in the mdadm metadata handler.

> So I think this patch is wrong

The kernel patch, not the mdadm patch.

> and you need to fix the getinfo_super method
> to set recovery_start to MaxSector when the device is a new device in an
> array being reshaped.
>
> Could that make sense?

Yes, I can push this fix down to super1.c to avoid:

# mdadm -A /dev/md1 /dev/loop[0-3]
mdadm: /dev/md1 assembled from 3 drives and 1 rebuilding - not enough
to start the array.

--
Dan