#1: [PATCH 0/5] misc cleanups for RAID5

Posted on 2011-06-22 06:50:25 by Namhyung Kim

Hello,

These are assorted cleanup patches for the RAID5 code.
Please take a look. Any comments are welcome.

Thanks.


Namhyung Kim (5):
md/raid5: use kmem_cache_zalloc()
md/raid5: factor out dev_need_read()
md/raid5: factor out dev_need_for_write()
md/raid5: use r5_for_each_bio()
md/raid5: get rid of duplicated call to bio_data_dir()

drivers/md/raid5.c | 146 ++++++++++++++++++++++++++--------------------------
1 files changed, 73 insertions(+), 73 deletions(-)

--
1.7.5.2

#2: [PATCH 1/5] md/raid5: use kmem_cache_zalloc()

Posted on 2011-06-22 06:50:26 by Namhyung Kim

Replace kmem_cache_alloc() + memset(,0,) with kmem_cache_zalloc().
I think it's not harmful since @conf->slab_cache already knows the
actual size of struct stripe_head, so the whole object is still zeroed.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid5.c | 8 +++-----
1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index b72edf35ec54..0f71aa9a07c5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1315,10 +1315,10 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
static int grow_one_stripe(raid5_conf_t *conf)
{
struct stripe_head *sh;
- sh = kmem_cache_alloc(conf->slab_cache, GFP_KERNEL);
+ sh = kmem_cache_zalloc(conf->slab_cache, GFP_KERNEL);
if (!sh)
return 0;
- memset(sh, 0, sizeof(*sh) + (conf->pool_size-1)*sizeof(struct r5dev));
+
sh->raid_conf = conf;
spin_lock_init(&sh->lock);
#ifdef CONFIG_MULTICORE_RAID456
@@ -1435,12 +1435,10 @@ static int resize_stripes(raid5_conf_t *conf, int newsize)
return -ENOMEM;

for (i = conf->max_nr_stripes; i; i--) {
- nsh = kmem_cache_alloc(sc, GFP_KERNEL);
+ nsh = kmem_cache_zalloc(sc, GFP_KERNEL);
if (!nsh)
break;

- memset(nsh, 0, sizeof(*nsh) + (newsize-1)*sizeof(struct r5dev));
-
nsh->raid_conf = conf;
spin_lock_init(&nsh->lock);
#ifdef CONFIG_MULTICORE_RAID456
--
1.7.5.2
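
For reference, the reason the explicit memset() can simply be dropped:
kmem_cache_zalloc() zeroes the full object size the cache was created
with, and conf->slab_cache is created with the stripe_head size plus
the trailing r5dev array. A minimal sketch of the equivalence (the
function name and shape are illustrative only, not the raid5 code;
struct definitions are assumed from raid5.h):

#include <linux/slab.h>

/* Sketch: 'sc' stands in for conf->slab_cache, which was created with
 * the full object size, struct plus variable-length r5dev tail. */
static struct stripe_head *alloc_stripe_sketch(struct kmem_cache *sc)
{
        struct stripe_head *sh;

        sh = kmem_cache_zalloc(sc, GFP_KERNEL); /* alloc + zero whole object */
        if (!sh)
                return NULL;

        /* ...same effect as the old pattern:
         *      sh = kmem_cache_alloc(sc, GFP_KERNEL);
         *      memset(sh, 0, sizeof(*sh) +
         *             (pool_size - 1) * sizeof(struct r5dev));
         * because 'sc' already knows that extended size. */
        return sh;
}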

#3: [PATCH 2/5] md/raid5: factor out dev_need_read()

Posted on 2011-06-22 06:50:27 by Namhyung Kim

Factor out the common condition-checking code into dev_need_read()
in the hope that it improves readability somewhat.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid5.c | 20 ++++++++++++--------
1 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0f71aa9a07c5..892a95fe6e8f 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2313,6 +2313,15 @@ handle_failed_stripe(raid5_conf_t *conf, struct stripe_head *sh,
md_wakeup_thread(conf->mddev->thread);
}

+static bool dev_need_read(struct r5dev *dev)
+{
+ if (dev->toread ||
+ (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags)))
+ return true;
+
+ return false;
+}
+
/* fetch_block5 - checks the given member device to see if its data needs
* to be read or computed to satisfy a request.
*
@@ -2328,13 +2337,9 @@ static int fetch_block5(struct stripe_head *sh, struct stripe_head_state *s,
/* is the data in this block needed, and can we get it? */
if (!test_bit(R5_LOCKED, &dev->flags) &&
!test_bit(R5_UPTODATE, &dev->flags) &&
- (dev->toread ||
- (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags)) ||
+ (dev_need_read(dev) ||
s->syncing || s->expanding ||
- (s->failed &&
- (failed_dev->toread ||
- (failed_dev->towrite &&
- !test_bit(R5_OVERWRITE, &failed_dev->flags)))))) {
+ (s->failed && dev_need_read(failed_dev)))) {
/* We would like to get this block, possibly by computing it,
* otherwise read it if the backing disk is insync
*/
@@ -2401,8 +2406,7 @@ static int fetch_block6(struct stripe_head *sh, struct stripe_head_state *s,

if (!test_bit(R5_LOCKED, &dev->flags) &&
!test_bit(R5_UPTODATE, &dev->flags) &&
- (dev->toread ||
- (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags)) ||
+ (dev_need_read(dev) ||
s->syncing || s->expanding ||
(s->failed >= 1 &&
(fdev[0]->toread || s->to_write)) ||
--
1.7.5.2
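
For readers skimming the hunk: the helper captures "this device's
current data is wanted" - either a read is queued against the block,
or a pending write is partial (no full-block overwrite) and therefore
needs the old contents first. A commented restatement (a sketch, not
a literal copy of the patch):

/* Sketch: what dev_need_read() is asking about a stripe member. */
static bool dev_need_read(struct r5dev *dev)
{
        /* a read request is waiting on this block */
        if (dev->toread)
                return true;

        /* a write is pending but does not cover the whole block, so
         * the old data must be read (or computed) before it can go out */
        if (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags))
                return true;

        return false;
}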

#4: [PATCH 3/5] md/raid5: factor out dev_need_for_write()

Posted on 2011-06-22 06:50:28 by Namhyung Kim

Factor out the common condition-checking code into dev_need_for_write()
to improve readability - please suggest a better name :)

Besides, a couple of weird whitespace fixes are included as well.
Maybe checkpatch will complain about some of them, but IMHO the
result looks more natural.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid5.c | 47 ++++++++++++++++++++++++-----------------------
1 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 892a95fe6e8f..12f3b939e56d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2542,6 +2542,18 @@ static void handle_stripe_clean_event(raid5_conf_t *conf,
md_wakeup_thread(conf->mddev->thread);
}

+static bool dev_need_for_write(struct r5dev *dev)
+{
+ if (test_bit(R5_LOCKED, &dev->flags))
+ return false;
+
+ if (!test_bit(R5_UPTODATE, &dev->flags) &&
+ !test_bit(R5_Wantcompute, &dev->flags))
+ return true;
+
+ return false;
+}
+
static void handle_stripe_dirtying5(raid5_conf_t *conf,
struct stripe_head *sh, struct stripe_head_state *s, int disks)
{
@@ -2550,20 +2562,18 @@ static void handle_stripe_dirtying5(raid5_conf_t *conf,
/* would I have to read this buffer for read_modify_write */
struct r5dev *dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
- !test_bit(R5_LOCKED, &dev->flags) &&
- !(test_bit(R5_UPTODATE, &dev->flags) ||
- test_bit(R5_Wantcompute, &dev->flags))) {
+ dev_need_for_write(dev)) {
if (test_bit(R5_Insync, &dev->flags))
rmw++;
else
rmw += 2*disks; /* cannot read it */
}
/* Would I have to read this buffer for reconstruct_write */
- if (!test_bit(R5_OVERWRITE, &dev->flags) && i != sh->pd_idx &&
- !test_bit(R5_LOCKED, &dev->flags) &&
- !(test_bit(R5_UPTODATE, &dev->flags) ||
- test_bit(R5_Wantcompute, &dev->flags))) {
- if (test_bit(R5_Insync, &dev->flags)) rcw++;
+ if (!test_bit(R5_OVERWRITE, &dev->flags) &&
+ i != sh->pd_idx &&
+ dev_need_for_write(dev)) {
+ if (test_bit(R5_Insync, &dev->flags))
+ rcw++;
else
rcw += 2*disks;
}
@@ -2576,12 +2586,9 @@ static void handle_stripe_dirtying5(raid5_conf_t *conf,
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
- !test_bit(R5_LOCKED, &dev->flags) &&
- !(test_bit(R5_UPTODATE, &dev->flags) ||
- test_bit(R5_Wantcompute, &dev->flags)) &&
+ dev_need_for_write(dev) &&
test_bit(R5_Insync, &dev->flags)) {
- if (
- test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
+ if (test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
pr_debug("Read_old block "
"%d for r-m-w\n", i);
set_bit(R5_LOCKED, &dev->flags);
@@ -2599,12 +2606,9 @@ static void handle_stripe_dirtying5(raid5_conf_t *conf,
struct r5dev *dev = &sh->dev[i];
if (!test_bit(R5_OVERWRITE, &dev->flags) &&
i != sh->pd_idx &&
- !test_bit(R5_LOCKED, &dev->flags) &&
- !(test_bit(R5_UPTODATE, &dev->flags) ||
- test_bit(R5_Wantcompute, &dev->flags)) &&
+ dev_need_for_write(dev) &&
test_bit(R5_Insync, &dev->flags)) {
- if (
- test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
+ if (test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
pr_debug("Read_old block "
"%d for Reconstruct\n", i);
set_bit(R5_LOCKED, &dev->flags);
@@ -2645,15 +2649,12 @@ static void handle_stripe_dirtying6(raid5_conf_t *conf,
/* check if we haven't enough data */
if (!test_bit(R5_OVERWRITE, &dev->flags) &&
i != pd_idx && i != qd_idx &&
- !test_bit(R5_LOCKED, &dev->flags) &&
- !(test_bit(R5_UPTODATE, &dev->flags) ||
- test_bit(R5_Wantcompute, &dev->flags))) {
+ dev_need_for_write(dev)) {
rcw++;
if (!test_bit(R5_Insync, &dev->flags))
continue; /* it's a failed drive */

- if (
- test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
+ if (test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) {
pr_debug("Read_old stripe %llu "
"block %d for Reconstruct\n",
(unsigned long long)sh->sector, i);
--
1.7.5.2
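
With the helper in place, the rmw/rcw accounting at the top of
handle_stripe_dirtying5() condenses roughly to the following (a
condensed sketch of the hunks above, not the literal code):

for (i = disks; i--; ) {
        struct r5dev *dev = &sh->dev[i];

        /* read-modify-write: would need this block's old data or parity */
        if ((dev->towrite || i == sh->pd_idx) && dev_need_for_write(dev))
                rmw += test_bit(R5_Insync, &dev->flags) ? 1 : 2 * disks;

        /* reconstruct-write: would need every block not being overwritten */
        if (!test_bit(R5_OVERWRITE, &dev->flags) && i != sh->pd_idx &&
            dev_need_for_write(dev))
                rcw += test_bit(R5_Insync, &dev->flags) ? 1 : 2 * disks;
}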

#5: [PATCH 4/5] md/raid5: use r5_for_each_bio()

Posted on 2011-06-22 06:50:29 by Namhyung Kim

There are common bio loops which could be factored out in the usual
for_each_xxx way. Add r5_for_each_bio() and use it for those.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid5.c | 67 ++++++++++++++++++++++++---------------------------
1 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 12f3b939e56d..6b92e8549e9b 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -81,6 +81,18 @@
* of the current stripe+device
*/
#define r5_next_bio(bio, sect) ( ( (bio)->bi_sector + ((bio)->bi_size>>9) < sect + STRIPE_SECTORS) ? (bio)->bi_next : NULL)
+
+/*
+ * Iterates through all attached bio's to the current stripe+device.
+ * The given bio must be initialized before using this macro.
+ */
+#define r5_for_each_bio(bio, nbio, dev) \
+ for ( ; \
+ ({ if (bio) nbio = r5_next_bio(bio, (dev)->sector); \
+ (bio && (bio)->bi_sector < (dev)->sector + STRIPE_SECTORS);}); \
+ bio = nbio \
+ )
+
/*
* The following can be used to debug the driver
*/
@@ -647,15 +659,12 @@ static void ops_complete_biofill(void *stripe_head_ref)
BUG_ON(!dev->read);
rbi = dev->read;
dev->read = NULL;
- while (rbi && rbi->bi_sector <
- dev->sector + STRIPE_SECTORS) {
- rbi2 = r5_next_bio(rbi, dev->sector);
+
+ r5_for_each_bio(rbi, rbi2, dev)
if (!raid5_dec_bi_phys_segments(rbi)) {
rbi->bi_next = return_bi;
return_bi = rbi;
}
- rbi = rbi2;
- }
}
}
spin_unlock_irq(&conf->device_lock);
@@ -680,17 +689,15 @@ static void ops_run_biofill(struct stripe_head *sh)
for (i = sh->disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if (test_bit(R5_Wantfill, &dev->flags)) {
- struct bio *rbi;
+ struct bio *rbi, *rbi2;
spin_lock_irq(&conf->device_lock);
dev->read = rbi = dev->toread;
dev->toread = NULL;
spin_unlock_irq(&conf->device_lock);
- while (rbi && rbi->bi_sector <
- dev->sector + STRIPE_SECTORS) {
+
+ r5_for_each_bio(rbi, rbi2, dev)
tx = async_copy_data(0, rbi, dev->page,
dev->sector, tx);
- rbi = r5_next_bio(rbi, dev->sector);
- }
}
}

@@ -1018,7 +1025,7 @@ ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
struct bio *chosen;

if (test_and_clear_bit(R5_Wantdrain, &dev->flags)) {
- struct bio *wbi;
+ struct bio *wbi, *wbi2;

spin_lock(&sh->lock);
chosen = dev->towrite;
@@ -1027,13 +1034,11 @@ ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
wbi = dev->written = chosen;
spin_unlock(&sh->lock);

- while (wbi && wbi->bi_sector <
- dev->sector + STRIPE_SECTORS) {
+ r5_for_each_bio(wbi, wbi2, dev) {
if (wbi->bi_rw & REQ_FUA)
set_bit(R5_WantFUA, &dev->flags);
tx = async_copy_data(1, wbi, dev->page,
dev->sector, tx);
- wbi = r5_next_bio(wbi, dev->sector);
}
}
}
@@ -2228,7 +2233,7 @@ handle_failed_stripe(raid5_conf_t *conf, struct stripe_head *sh,
{
int i;
for (i = disks; i--; ) {
- struct bio *bi;
+ struct bio *bi, *bi2;
int bitmap_end = 0;

if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
@@ -2252,31 +2257,27 @@ handle_failed_stripe(raid5_conf_t *conf, struct stripe_head *sh,
if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
wake_up(&conf->wait_for_overlap);

- while (bi && bi->bi_sector <
- sh->dev[i].sector + STRIPE_SECTORS) {
- struct bio *nextbi = r5_next_bio(bi, sh->dev[i].sector);
+ r5_for_each_bio(bi, bi2, &sh->dev[i]) {
clear_bit(BIO_UPTODATE, &bi->bi_flags);
if (!raid5_dec_bi_phys_segments(bi)) {
md_write_end(conf->mddev);
bi->bi_next = *return_bi;
*return_bi = bi;
}
- bi = nextbi;
}
/* and fail all 'written' */
bi = sh->dev[i].written;
sh->dev[i].written = NULL;
- if (bi) bitmap_end = 1;
- while (bi && bi->bi_sector <
- sh->dev[i].sector + STRIPE_SECTORS) {
- struct bio *bi2 = r5_next_bio(bi, sh->dev[i].sector);
+ if (bi)
+ bitmap_end = 1;
+
+ r5_for_each_bio(bi, bi2, &sh->dev[i]) {
clear_bit(BIO_UPTODATE, &bi->bi_flags);
if (!raid5_dec_bi_phys_segments(bi)) {
md_write_end(conf->mddev);
bi->bi_next = *return_bi;
*return_bi = bi;
}
- bi = bi2;
}

/* fail any reads if this device is non-operational and
@@ -2289,17 +2290,15 @@ handle_failed_stripe(raid5_conf_t *conf, struct stripe_head *sh,
sh->dev[i].toread = NULL;
if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
wake_up(&conf->wait_for_overlap);
- if (bi) s->to_read--;
- while (bi && bi->bi_sector <
- sh->dev[i].sector + STRIPE_SECTORS) {
- struct bio *nextbi =
- r5_next_bio(bi, sh->dev[i].sector);
+ if (bi)
+ s->to_read--;
+
+ r5_for_each_bio(bi, bi2, &sh->dev[i]) {
clear_bit(BIO_UPTODATE, &bi->bi_flags);
if (!raid5_dec_bi_phys_segments(bi)) {
bi->bi_next = *return_bi;
*return_bi = bi;
}
- bi = nextbi;
}
}
spin_unlock_irq(&conf->device_lock);
@@ -2515,16 +2514,14 @@ static void handle_stripe_clean_event(raid5_conf_t *conf,
spin_lock_irq(&conf->device_lock);
wbi = dev->written;
dev->written = NULL;
- while (wbi && wbi->bi_sector <
- dev->sector + STRIPE_SECTORS) {
- wbi2 = r5_next_bio(wbi, dev->sector);
+
+ r5_for_each_bio(wbi, wbi2, dev)
if (!raid5_dec_bi_phys_segments(wbi)) {
md_write_end(conf->mddev);
wbi->bi_next = *return_bi;
*return_bi = wbi;
}
- wbi = wbi2;
- }
+
if (dev->towrite == NULL)
bitmap_end = 1;
spin_unlock_irq(&conf->device_lock);
--
1.7.5.2
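
To make the macro's behaviour explicit: each converted loop is
equivalent to the open-coded pattern it replaces, with the successor
fetched before the body runs so the body is free to hand the current
bio back (e.g. via return_bi). A sketch of what
r5_for_each_bio(bio, nbio, dev) expands to for one device:

/* Sketch: open-coded equivalent of r5_for_each_bio(bio, nbio, dev).
 * 'bio' must already point at the head of the chain for this device. */
while (bio && bio->bi_sector < dev->sector + STRIPE_SECTORS) {
        nbio = r5_next_bio(bio, dev->sector);   /* grab successor first */

        /* ... loop body ... */

        bio = nbio;
}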

#6: [PATCH 5/5] md/raid5: get rid of duplicated call to bio_data_dir()

Posted on 2011-06-22 06:50:30 by Namhyung Kim

In raid5::make_request(), once bio_data_dir(@bi) has been determined
it never changes (and couldn't). Always use the cached result.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid5.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 6b92e8549e9b..ccaa1102057d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4016,7 +4016,7 @@ static int make_request(mddev_t *mddev, struct bio * bi)
}
}

- if (bio_data_dir(bi) == WRITE &&
+ if (rw == WRITE &&
logical_sector >= mddev->suspend_lo &&
logical_sector < mddev->suspend_hi) {
release_stripe(sh);
@@ -4034,7 +4034,7 @@ static int make_request(mddev_t *mddev, struct bio * bi)
}

if (test_bit(STRIPE_EXPANDING, &sh->state) ||
- !add_stripe_bio(sh, bi, dd_idx, (bi->bi_rw&RW_MASK))) {
+ !add_stripe_bio(sh, bi, dd_idx, rw)) {
/* Stripe is busy expanding or
* add failed due to overlap. Flush everything
* and wait a while
--
1.7.5.2
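
The pattern here is simply to derive the direction once at the top of
make_request() and test the cached local everywhere below, since a
bio's direction cannot change over its lifetime. A minimal sketch
(hypothetical function, not the raid5 code):

/* Sketch: cache the direction instead of re-deriving it per check. */
static void handle_bio_sketch(struct bio *bi)
{
        const int rw = bio_data_dir(bi);        /* READ or WRITE, fixed for this bio */

        if (rw == WRITE) {
                /* write-only handling, e.g. the suspend_lo/hi window check */
        }

        /* later tests also use 'rw' rather than calling bio_data_dir(bi)
         * or masking bi->bi_rw again */
}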

#7: Re: [PATCH 0/5] misc cleanups for RAID5

Posted on 2011-06-23 02:55:54 by NeilBrown

On Wed, 22 Jun 2011 13:50:25 +0900 Namhyung Kim <namhyung@gmail.com> wrote:

> Hello,
>
> These are assorted cleanup patches for RAID5 code.
> Please take a look. Any comments are welcomed.
>
> Thanks.
>
>
> Namhyung Kim (5):
> md/raid5: use kmem_cache_zalloc()
> md/raid5: factor out dev_need_read()
> md/raid5: factor out dev_need_for_write()
> md/raid5: use r5_for_each_bio()
> md/raid5: get rid of duplicated call to bio_data_dir()
>
> drivers/md/raid5.c | 146 ++++++++++++++++++++++++++--------------------------
> 1 files changed, 73 insertions(+), 73 deletions(-)
>

Hi,
thanks for these.

I have applied the first and the last.

The two "factor out" patches conflict with some much more substantial
refactoring I have been doing in raid5.c. I have just pushed all of that
into my for-next branch:

git://neil.brown.name/md for-next

so you can use that as a basis for any further review.

The r5_for_each_bio() patch I'm not 100% sure I'm happy with, and in any
case it would have conflicted with my other changes too.
I'm not fond of macros that hide details that could be important. A
"for_each" macro that purely and simply walks through a list is fine. A
"for_each" macro that does anything more complicated I start to have doubts
about...
However if you really do like it and want to rebase it on the for-next
branch I'll have another look and think harder about it. Maybe I'll end up
liking it after all, but no promises.

Thanks,
NeilBrown
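
(The contrast drawn above, sketched: list_for_each_entry() only
advances a cursor over a plain list, whereas r5_for_each_bio() hides
a termination condition and a successor computation inside the macro.
Hypothetical struct, purely illustrative.)

#include <linux/list.h>
#include <linux/printk.h>

struct item {
        int val;
        struct list_head node;
};

/* A plain cursor walk: nothing happens besides advancing 'pos'. */
static void print_items(struct list_head *items)
{
        struct item *pos;

        list_for_each_entry(pos, items, node)
                pr_info("val=%d\n", pos->val);
}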

#8: Re: [PATCH 0/5] misc cleanups for RAID5

Posted on 2011-06-23 09:30:37 by Namhyung Kim

2011-06-23 (Thu), 10:55 +1000, NeilBrown:
> Hi,
> thanks for these.
>
> I have applied the first and the last.
>
> The two "factor out" patches conflict with some much more substantial
> refactoring I have been doing in raid5.c. I have just pushed all of that
> into my for-next branch:
>
> git://neil.brown.name/md for-next
>
> so you can use that as a basis for any further review.

Thanks. I'll have a look at that.


>
> The r5_for_each_bio() patch I'm not 100% sure I'm happy with, and in any
> case it would have conflicted with my other changes too.
> I'm not fond of macros that hide details that could be important. A
> "for_each" macro that purely and simply walks through a list is fine. A
> "for_each" macro that does anything more complicated I start to have doubts
> about...
> However if you really do like it and want to rebase it on the for-next
> branch I'll have another look and think harder about it. Maybe I'll end up
> liking it after all, but no promises.

I don't have a strong opinion on it. It was just a suggestion that I
thought would help make the code cleaner, but ...

Anyway, thanks for your comment.


--
Regards,
Namhyung Kim

