[PATCH 0/7] Online Capacity Expansion (raid5/0,

on 26.01.2011 16:03:09 by adam.kwolek

This patch series implements raid5 and raid0 Online Capacity Expansion (OLCE)
for all arrays in a container using imsm metadata.
The series was rebased onto the new mdadm version (2011.01.26).

It replaces the previously sent series for OLCE continuation on a container.

Compared to the previously sent series, the changes are as follows:
1. imsm: Update metadata for second array
   The patch is changed. There are no interface changes
   to imsm_progress_container_reshape(), as Neil suggested.
   This patch implements the reshape of the next arrays in a container
   for imsm metadata.
   The reshape is no longer limited to 2 arrays per container
   (as it was previously);
   if imsm ever supports more than 2 arrays in a container, the reshape
   code is ready.
2. FIX: monitor doesn't handshake with md
   The patch is changed. ping_monitor() is called for external metadata only.
3. Added bug fixes:
   - imsm: FIX: do not allow for container operation for the same disks number
     (OLCE with no change in the number of raid disks was accepted)
   - FIX: Array after takeover has to be frozen
     (for a single-disk raid0, OLCE fails due to the recovery process
     started by md during reshape configuration)
   - FIX: Container can be left frozen
     (when reshape_container() fails, the container was left frozen)
   - FIX: start_reshape status should be checked
     (if md fails to start the reshape, mdadm did not know about it
     and proceeded with check-pointing)
4. The patch "imsm: FIX: spare cannot be added"
   remains in the series, as the issue is still observed.

BR
Adam

---

Adam Kwolek (7):
FIX: start_reshape status should be checked
FIX: Container can be left frozen
imsm: FIX: spare cannot be added
FIX: Array after takeover has to be frozen
imsm: FIX: do not allow for container operation for the same disks number
FIX: monitor doesn't handshake with md
imsm: Update metadata for second array


Grow.c | 14 ++++++++++---
super-intel.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 69 insertions(+), 5 deletions(-)


[PATCH 1/7] imsm: Update metadata for second array

on 26.01.2011 16:03:17 by adam.kwolek

When the reshape of a second array is about to start, the external metadata
should be updated by mdmon in imsm_set_array_state().
For this purpose, imsm_progress_container_reshape() is reused.
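
For illustration, a minimal sketch (pseudo-C, not the verbatim mdmon
flow) of the transition this patch hooks into; the names match the
diff below:

    /* in imsm_set_array_state(): the array has just left reshape */
    if ((a->curr_action != reshape) && (a->prev_action == reshape)) {
            /* 1. finalize the finished reshape (per-disk state) */
            /* 2. let imsm_progress_container_reshape() start the next
             *    not-yet-expanded array in the container */
    }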

Signed-off-by: Adam Kwolek
---

super-intel.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 48 insertions(+), 1 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 8d1f0ad..4e96196 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -5052,6 +5052,46 @@ static void imsm_progress_container_reshape(struct intel_super *super)
struct imsm_super *mpb = super->anchor;
int prev_disks = -1;
int i;
+ int any_migration_in_progress = 0;
+ int disks_count_max = 0;
+ struct imsm_dev *dev_next = NULL;
+
+ /* find maximum number of disks used in any array
+ * and check if any migration is in progress
+ */
+ for (i = 0; i < mpb->num_raid_devs; i++) {
+ struct imsm_dev *dev = get_imsm_dev(super, i);
+ struct imsm_map *map = get_imsm_map(dev, 0);
+ struct imsm_map *migr_map = get_imsm_map(dev, 1);
+ if (migr_map)
+ any_migration_in_progress = 1;
+ if (map->num_members > disks_count_max)
+ disks_count_max = map->num_members;
+ }
+
+ if (any_migration_in_progress == 0) {
+ /* no migration in progress
+ * so we can check if next migration in container
+ * should be started
+ */
+ int next_inst = -1;
+
+ for (i = 0; i < mpb->num_raid_devs; i++) {
+ struct imsm_dev *dev = get_imsm_dev(super, i);
+ struct imsm_map *map = get_imsm_map(dev, 0);
+ if (map->num_members < disks_count_max) {
+ next_inst = i;
+ break;
+ }
+ }
+ if (next_inst >= 0) {
+ /* found array with smaller number of disks in array,
+ * this array should be expanded
+ */
+ dev_next = get_imsm_dev(super, next_inst);
+ prev_disks = disks_count_max;
+ }
+ }

for (i = 0; i < mpb->num_raid_devs; i++) {
struct imsm_dev *dev = get_imsm_dev(super, i);
@@ -5063,6 +5103,9 @@ static void imsm_progress_container_reshape(struct intel_super *super)
if (dev->vol.migr_state)
return;

+ if ((dev_next != NULL) && (dev_next != dev))
+ continue;
+
if (prev_disks == -1)
prev_disks = map->num_members;
if (prev_disks == map->num_members)
@@ -5244,13 +5287,17 @@ static int imsm_set_array_state(struct active_array *a, int consistent)
super->updates_pending++;
}

- /* finalize online capacity expansion/reshape */
+ /* manage online capacity expansion/reshape */
if ((a->curr_action != reshape) &&
(a->prev_action == reshape)) {
struct mdinfo *mdi;

+ /* finalize online capacity expansion/reshape */
for (mdi = a->info.devs; mdi; mdi = mdi->next)
imsm_set_disk(a, mdi->disk.raid_disk, mdi->curr_state);
+
+ /* check next volume reshape */
+ imsm_progress_container_reshape(super);
}

return consistent;


[PATCH 2/7] FIX: monitor doesn't handshake with md

on 26.01.2011 16:03:25 by adam.kwolek

When a container holds both raid0 and raid5 arrays, and the reshape order is:
1. raid0 array
2. raid5 array

mdadm cannot set the new raid_disks value for the raid0 array. For this
action md has to handshake with mdmon. We have the following conditions:
1. The raid0 array is not monitored.
2. The raid0 array has just been taken over to raid4/5 (it has to be
   monitored).
3. mdmon has to start monitoring the new raid4/5 array.
4. mdmon is not restarted (it is already running for the second raid5 array).
In such a situation ping_monitor() is required to tell mdmon about the new
array, not only in the case where the monitor has just been started.
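
For illustration, a minimal sketch of what setting raid_disks means at
the sysfs level (the helper and the md_dir path are illustrative, not
mdadm code; per the description above, with external metadata such a
write needs the mdmon handshake to succeed):

    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* e.g. md_dir = "/sys/block/md127/md" */
    static int set_raid_disks(const char *md_dir, int n)
    {
            char path[PATH_MAX], val[16];
            int fd, len, ok;

            snprintf(path, sizeof(path), "%s/raid_disks", md_dir);
            len = snprintf(val, sizeof(val), "%d", n);
            fd = open(path, O_WRONLY);
            if (fd < 0)
                    return -1;
            ok = write(fd, val, len) == len;
            close(fd);
            return ok ? 0 : -1;
    }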

Signed-off-by: Adam Kwolek
---

Grow.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/Grow.c b/Grow.c
index f79616c..df352f8 100644
--- a/Grow.c
+++ b/Grow.c
@@ -1662,8 +1662,9 @@ static int reshape_array(char *container, int fd, char *devname,
if (reshape.level > 0 && st->ss->external &&
!mdmon_running(st->container_dev)) {
start_mdmon(st->container_dev);
- ping_monitor(container);
}
+ if (st->ss->external)
+ ping_monitor(container);

/* ->reshape_super might have chosen some spares from the
* container that it wants to be part of the new array.


[PATCH 3/7] imsm: FIX: do not allow for container operation for the same disks number

on 26.01.2011 16:03:33 by adam.kwolek

imsm_reshape_super() currently allows expansion when the requested
raid_disks number is the same as the current one.
This is wrong; the existing condition in the code is too weak.
We should allow expansion only when the new number of disks is
greater than the current one.

Signed-off-by: Adam Kwolek
---

super-intel.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 4e96196..5a8886d 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6747,7 +6747,7 @@ static int imsm_reshape_is_allowed_on_container(struct supertype *st,
dprintf("imsm: checking device_num: %i\n",
member->container_member);

- if (geo->raid_disks < member->array.raid_disks) {
+ if (geo->raid_disks <= member->array.raid_disks) {
/* we work on container for Online Capacity Expansion
* only so raid_disks has to grow
*/


[PATCH 4/7] FIX: Array after takeover has to be frozen

on 26.01.2011 16:03:41 by adam.kwolek

The problem occurs when we want to expand a single-disk raid0 array.
This is done via a degraded 2-disk raid4 array. When a new spare
is added to the array, md immediately initiates recovery before
mdadm can configure and start the reshape. This is due to the fact
that a 2-disk raid4/5 array is a special case in md. Mdmon does
nothing here because the container is blocked.
Putting the array into the frozen state allows mdadm to finish
configuration before the reshape is executed in md.
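
For illustration, a minimal sketch of what the frozen state amounts to
at the md sysfs level (the helper name is hypothetical; mdadm's real
sysfs_freeze_array() is assumed to have this effect):

    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Writing "frozen" to md/sync_action stops md from starting
     * recovery/resync on its own, so mdadm can finish configuring
     * the reshape first. */
    static int freeze_array_sketch(const char *md_dir)
    {
            char path[PATH_MAX];
            int fd, rc = -1;

            snprintf(path, sizeof(path), "%s/sync_action", md_dir);
            fd = open(path, O_WRONLY);
            if (fd < 0)
                    return -1;
            if (write(fd, "frozen", 6) == 6)
                    rc = 0;
            close(fd);
            return rc;
    }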

Signed-off-by: Adam Kwolek
---

Grow.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/Grow.c b/Grow.c
index df352f8..8b9625d 100644
--- a/Grow.c
+++ b/Grow.c
@@ -1657,6 +1657,7 @@ static int reshape_array(char *container, int fd, char *devname,
fprintf(stderr, Name ": level of %s changed to %s\n",
devname, c);
orig_level = info->array.level;
+ sysfs_freeze_array(info);
}

if (reshape.level > 0 && st->ss->external &&


[PATCH 5/7] imsm: FIX: spare cannot be added

on 26.01.2011 16:03:49 by adam.kwolek

When Manage.c adds a spare, it calls write_init_super(), and this function
is responsible for closing the disk handle (Manage.c:812).
In the imsm case this handle is reused for managing the disk, so it is
not closed there.
As the handle was opened with the O_EXCL flag, adding the disk to md fails
on the write to new_dev (md cannot take the exclusive lock on the device).
To resolve the situation, close the current handle and open a new one
without the O_EXCL flag.
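
For illustration, a standalone sketch of the O_EXCL behaviour described
above (on Linux, opening a block device with O_EXCL takes an exclusive
claim on it, so a second exclusive opener gets EBUSY; the device path
is just an example):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *dev = "/dev/sdb";           /* example device */
            int fd1 = open(dev, O_RDONLY | O_EXCL); /* first holder */
            int fd2 = open(dev, O_RDONLY | O_EXCL); /* expected: -1, EBUSY */

            printf("first=%d second=%d errno=%d\n",
                   fd1, fd2, fd2 < 0 ? errno : 0);
            if (fd1 >= 0)
                    close(fd1);
            return 0;
    }

The same conflict hits md's exclusive claim when the disk is added while
mdadm still holds its O_EXCL handle, hence the reopen without O_EXCL below.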

Signed-off-by: Anna Czarnowska
Signed-off-by: Adam Kwolek
---

super-intel.c | 9 +++++++++
1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 5a8886d..16093e1 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -3899,6 +3899,7 @@ static int mgmt_disk(struct supertype *st)
struct intel_super *super = st->sb;
size_t len;
struct imsm_update_add_remove_disk *u;
+ struct dl *d;

if (!super->disk_mgmt_list)
return 0;
@@ -3914,6 +3915,14 @@ static int mgmt_disk(struct supertype *st)
u->type = update_add_remove_disk;
append_metadata_update(st, u, len);

+ for (d = super->disk_mgmt_list; d ; d = d->next) {
+ char buf[PATH_MAX];
+
+ close(d->fd);
+ sprintf(buf, "%d:%d", d->major, d->minor);
+ d->fd = dev_open(buf, O_RDWR | O_DIRECT);
+ }
+
return 0;
}



[PATCH 6/7] FIX: Container can be left frozen

on 26.01.2011 16:03:57 by adam.kwolek

When reshape_super() fails for a container operation, the child process
is not started, so it cannot unfreeze the container.
Grow_reshape() doesn't unfreeze the container either, because the frozen
flag has already been cleared unconditionally.
To fix the problem, clear the frozen variable only when
reshape_container() returns a success status.
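
A minimal sketch of the rule this fix enforces (illustrative, not
verbatim Grow.c; it assumes Grow_reshape()'s exit path unfreezes the
container whenever frozen is still set):

    rv = reshape_container(container, fd, devname, st, &info,
                           force, backup_file, quiet);
    if (rv == 0)
            frozen = 0; /* success: the child process owns unfreezing */
    /* on failure frozen stays set, so the exit path unfreezes */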

Signed-off-by: Adam Kwolek
---

Grow.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/Grow.c b/Grow.c
index 8b9625d..dae1b32 100644
--- a/Grow.c
+++ b/Grow.c
@@ -1554,7 +1554,8 @@ int Grow_reshape(char *devname, int fd, int quiet, char *backup_file,
*/
rv = reshape_container(container, fd, devname, st, &info,
force, backup_file, quiet);
- frozen = 0;
+ if (rv == 0)
+ frozen = 0;
} else {
/* Impose these changes on a single array. First
* check that the metadata is OK with the change. */


[PATCH 7/7] FIX: start_reshape status should be checked

on 26.01.2011 16:04:06 by adam.kwolek

mdadm should verify that the reshape has started before it enters
the check-pointing state machine.
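
A hedged sketch of what start_reshape() is assumed to boil down to: a
sysfs write the kernel is free to refuse, so the return value must not
be dropped (illustrative only; the real helper lives in Grow.c):

    static int start_reshape_sketch(struct mdinfo *sra)
    {
            /* sysfs_set_str() returns non-zero on failure */
            return sysfs_set_str(sra, NULL, "sync_action", "reshape");
    }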

Signed-off-by: Adam Kwolek
---

Grow.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/Grow.c b/Grow.c
index dae1b32..a60129a 100644
--- a/Grow.c
+++ b/Grow.c
@@ -1934,7 +1934,12 @@ started:
}
}

- start_reshape(sra);
+ err = start_reshape(sra);
+ if (err) {
+ fprintf(stderr, Name ": Cannot start reshape for %s\n",
+ devname);
+ goto release;
+ }
if (restart)
sysfs_set_str(sra, NULL, "array_state", "active");



Re: [PATCH 1/7] imsm: Update metadata for second array

on 28.01.2011 01:24:59 by NeilBrown

On Wed, 26 Jan 2011 16:03:17 +0100
Adam Kwolek wrote:

> When the reshape of a second array is about to start, the external
> metadata should be updated by mdmon in imsm_set_array_state().
> For this purpose, imsm_progress_container_reshape() is reused.

We seem to be failing to communicate...

I told you that you didn't need extra arguments to
imsm_progress_container_reshape because IT ALREADY DOES THE RIGHT THING.
It finds the array that needs to be reshaped next, and it starts the
reshape.

You removed the extra arguments from your previous patch, but put extra
code in imsm_progress_container_reshape to do what it ALREADY DOES.

Have you read and understood the code?

If something isn't working the way you expect, please detail exactly
what.
It is quite possible that the code isn't quite right, but trying to
change it when you don't appear to understand it isn't going to
achieve anything useful.

NeilBrown




Re: [PATCH 3/7] imsm: FIX: do not allow for container operation for the same disks number

on 28.01.2011 01:25:52 by NeilBrown

On Wed, 26 Jan 2011 16:03:33 +0100
Adam Kwolek wrote:

> imsm_reshape_super() currently allows expansion when the requested
> raid_disks number is the same as the current one.
> This is wrong; the existing condition in the code is too weak.
> We should allow expansion only when the new number of disks is
> greater than the current one.

Applied, thanks.

NeilBrown



Re: [PATCH 4/7] FIX: Array after takeover has to be frozen

on 28.01.2011 01:40:16 by NeilBrown

On Wed, 26 Jan 2011 16:03:41 +0100
Adam Kwolek wrote:

> The problem occurs when we want to expand a single-disk raid0 array.
> This is done via a degraded 2-disk raid4 array. When a new spare
> is added to the array, md immediately initiates recovery before
> mdadm can configure and start the reshape. This is due to the fact
> that a 2-disk raid4/5 array is a special case in md. Mdmon does
> nothing here because the container is blocked.
> Putting the array into the frozen state allows mdadm to finish
> configuration before the reshape is executed in md.
>
> Signed-off-by: Adam Kwolek
> ---
>
> Grow.c | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/Grow.c b/Grow.c
> index df352f8..8b9625d 100644
> --- a/Grow.c
> +++ b/Grow.c
> @@ -1657,6 +1657,7 @@ static int reshape_array(char *container, int fd, char *devname,
> fprintf(stderr, Name ": level of %s changed to %s\n",
> devname, c);
> orig_level = info->array.level;
> + sysfs_freeze_array(info);
> }
>
> if (reshape.level > 0 && st->ss->external &&

This looks like it is in the wrong place. Surely we should freeze the
array *before* we set the level - though as long as it is frozen before
we add spares I suspect it works OK.

I have moved the 'freeze' earlier, but I think I need to rethink all
the freezing - it doesn't seem very tidy at the moment.

Thanks,
NeilBrown

Re: [PATCH 7/7] FIX: start_reshape status should be checked

on 28.01.2011 01:41:06 by NeilBrown

On Wed, 26 Jan 2011 16:04:06 +0100
Adam Kwolek wrote:

> mdadm should verify that the reshape has started before it enters
> the check-pointing state machine.
>
> Signed-off-by: Adam Kwolek
> ---
>
> Grow.c | 7 ++++++-
> 1 files changed, 6 insertions(+), 1 deletions(-)
>
> diff --git a/Grow.c b/Grow.c
> index dae1b32..a60129a 100644
> --- a/Grow.c
> +++ b/Grow.c
> @@ -1934,7 +1934,12 @@ started:
> }
> }
>
> - start_reshape(sra);
> + err = start_reshape(sra);
> + if (err) {
> + fprintf(stderr, Name ": Cannot start reshape for %s\n",
> + devname);
> + goto release;
> + }
> if (restart)
> sysfs_set_str(sra, NULL, "array_state", "active");
>

Applied, thanks.

NeilBrown

RE: [PATCH 1/7] imsm: Update metadata for second array

on 28.01.2011 09:11:47 by adam.kwolek

> -----Original Message-----
> From: Neil Brown [mailto:neilb@suse.de]
> Sent: Friday, January 28, 2011 1:25 AM
> To: Kwolek, Adam
> Cc: linux-raid@vger.kernel.org; Williams, Dan J; Ciechanowski, Ed;
> Neubauer, Wojciech
> Subject: Re: [PATCH 1/7] imsm: Update metadata for second array
>
> On Wed, 26 Jan 2011 16:03:17 +0100
> Adam Kwolek wrote:
>
> > When the reshape of a second array is about to start, the external
> > metadata should be updated by mdmon in imsm_set_array_state().
> > For this purpose, imsm_progress_container_reshape() is reused.
>
> We seem to be failing to communicate...
>
> I told you that you didn't need extra arguments to
> imsm_progress_container_reshape because IT ALREADY DOES THE RIGHT
> THING.
> It finds the array that needs to be reshaped next, and it starts the
> reshape.
>
> You removed the extra arguments from your previous patch, but put extra
> code in imsm_progress_container_reshape to do what it ALREADY DOES.
>
> Have you read and understood the code?

Yes

>
> If something isn't working the way you expect, please detail exactly
> what.

prev_disks can be set to a wrong value.
This leads not to the reshape continuation of the next array, but to a
metadata rollback of the already reshaped array.

> It is quite possible that the code isn't quite right, but trying to
> change it when you don't appear to understand it isn't going to
> achieve anything useful.
>
> NeilBrown

Not exactly, but I'll remove the double check of the reshape condition
from the patch, and leave in imsm_progress_container_reshape() only the
part that matters to me (the proper prev_disks initialization).
You can expect the changed patch today.

BR
Adam




RE: [PATCH 4/7] FIX: Array after takeover has to be frozen

on 28.01.2011 09:16:56 by adam.kwolek

> -----Original Message-----
> From: Neil Brown [mailto:neilb@suse.de]
> Sent: Friday, January 28, 2011 1:40 AM
> To: Kwolek, Adam
> Cc: linux-raid@vger.kernel.org; Williams, Dan J; Ciechanowski, Ed;
> Neubauer, Wojciech
> Subject: Re: [PATCH 4/7] FIX: Array after takeover has to be frozen
>
> On Wed, 26 Jan 2011 16:03:41 +0100
> Adam Kwolek wrote:
>
> > The problem occurs when we want to expand a single-disk raid0 array.
> > This is done via a degraded 2-disk raid4 array. When a new spare
> > is added to the array, md immediately initiates recovery before
> > mdadm can configure and start the reshape. This is due to the fact
> > that a 2-disk raid4/5 array is a special case in md. Mdmon does
> > nothing here because the container is blocked.
> > Putting the array into the frozen state allows mdadm to finish
> > configuration before the reshape is executed in md.
> >
> > Signed-off-by: Adam Kwolek
> > ---
> >
> > Grow.c | 1 +
> > 1 files changed, 1 insertions(+), 0 deletions(-)
> >
> > diff --git a/Grow.c b/Grow.c
> > index df352f8..8b9625d 100644
> > --- a/Grow.c
> > +++ b/Grow.c
> > @@ -1657,6 +1657,7 @@ static int reshape_array(char *container, int fd, char *devname,
> > fprintf(stderr, Name ": level of %s changed to %s\n",
> > devname, c);
> > orig_level = info->array.level;
> > + sysfs_freeze_array(info);
> > }
> >
> > if (reshape.level > 0 && st->ss->external &&
>
> This looks like it is in the wrong place. Surely we should freeze the
> array *before* we set the level - though as long as it is frozen before
> we add spares I suspect it works OK.
>
> I have moved the 'freeze' earlier, but I think I need to rethink all
> the freezing - it doesn't seem very tidy at the moment.
>
> Thanks,
> NeilBrown

If the array is not taken over, it is frozen as expected.
After the array takeover the personality changes and the array becomes
unfrozen. That is the situation my patch addressed.
The way you applied the array freezing changes nothing in this scenario:
the already frozen array is frozen again and becomes unfrozen after the
takeover.
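
To make the ordering concrete, a sketch of the two sequences as seen
through sysfs (illustrative only):

    /* applied version (freeze moved before the takeover):
     *     sync_action <- "frozen"
     *     level       <- "raid4"    takeover: the new personality
     *                               comes up unfrozen, freeze is lost
     *
     * original patch (freeze after the takeover):
     *     level       <- "raid4"
     *     sync_action <- "frozen"   the array stays frozen while mdadm
     *                               configures the reshape
     */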

I think my original patch has to be applied.

BR
Adam

