author | Lukas Czerner <lczerner@redhat.com> | 2012-11-30 10:42:40 (GMT)
---|---|---
committer | Jens Axboe <axboe@kernel.dk> | 2012-11-30 10:47:57 (GMT)
commit | eed8c02e680c04cd737e0a9cef74e68d8eb0cefa |
tree | 8bd2bd10b0c02bb8a579ca3fd4f1482e5335c747 | drivers/md/raid5.c
parent | d33b98fc82b0908e91fb05ae081acaed7323f9d2 |
download | linux-eed8c02e680c04cd737e0a9cef74e68d8eb0cefa.tar.xz |
wait: add wait_event_lock_irq() interface
New wait_event{_interruptible}_lock_irq{_cmd} macros are added. This
commit moves the private wait_event_lock_irq() macro from MD into the
common wait includes, introduces wait_event_lock_irq_cmd() in place of
the old, ugly convention of omitting the cmd parameter, converts MD to
the new macros, and also introduces the _interruptible_ variant.
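For orientation, the new family consists of wait_event_lock_irq(),
wait_event_lock_irq_cmd(), wait_event_interruptible_lock_irq() and
wait_event_interruptible_lock_irq_cmd(), each taking the wait queue, the
condition, the spinlock that protects the condition, and (for the _cmd
forms) a statement to run before sleeping. The following is only a
simplified sketch of what such an uninterruptible _cmd core can look
like; the exact text added to include/linux/wait.h may differ:

```c
/*
 * Simplified sketch, not the literal wait.h implementation: the
 * condition is rechecked with the lock held, and the lock is dropped
 * only around "cmd" and the actual sleep.
 */
#define wait_event_lock_irq_cmd(wq, condition, lock, cmd)		\
do {									\
	DEFINE_WAIT(__wait);						\
									\
	for (;;) {							\
		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
		if (condition)						\
			break;						\
		spin_unlock_irq(&lock);					\
		cmd;							\
		schedule();						\
		spin_lock_irq(&lock);					\
	}								\
	finish_wait(&wq, &__wait);					\
} while (0)
```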
The new interface is intended for callers that need a special lock to
protect the data structures used in the condition, or that also need to
invoke "cmd" before putting the task to sleep.
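As a rough illustration of the "cmd" case, a caller might look like this
(the wait_for_work queue, pending_list and kick_pending() helper are
hypothetical, invented for this example rather than taken from the
patch):

```c
/* Hypothetical caller: kick queued work before each sleep, then wait,
 * under conf->device_lock, until the pending list drains. */
spin_lock_irq(&conf->device_lock);
wait_event_lock_irq_cmd(conf->wait_for_work,
			list_empty(&conf->pending_list),
			conf->device_lock,
			kick_pending(conf));
spin_unlock_irq(&conf->device_lock);
```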
All new macros are expected to be called with the lock held. The lock
is released before sleeping and reacquired afterwards, so the caller
still holds the lock when the macro returns.
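Schematically, a converted caller follows the same pattern as the
chunk_aligned_read() hunk in the diff below, entering and leaving the
waiting section with conf->device_lock held:

```c
spin_lock_irq(&conf->device_lock);
/* device_lock is held here; the macro drops it while sleeping and
 * returns with it held again once the condition is true. */
wait_event_lock_irq(conf->wait_for_stripe,
		    conf->quiesce == 0,
		    conf->device_lock);
atomic_inc(&conf->active_aligned_reads);
spin_unlock_irq(&conf->device_lock);
```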
Note to DM: IMO this should also fix a theoretical race on the waitqueue
when wait_event_lock_irq() and wait_event() are used simultaneously,
caused by the lack of locking around setting the current task state and
removing the task from the wait queue.
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Cc: Neil Brown <neilb@suse.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'drivers/md/raid5.c')
-rw-r--r-- | drivers/md/raid5.c | 12
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c5439dc..2bf617d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -466,7 +466,7 @@ get_active_stripe(struct r5conf *conf, sector_t sector,
 	do {
 		wait_event_lock_irq(conf->wait_for_stripe,
 				    conf->quiesce == 0 || noquiesce,
-				    conf->device_lock, /* nothing */);
+				    conf->device_lock);
 		sh = __find_stripe(conf, sector, conf->generation - previous);
 		if (!sh) {
 			if (!conf->inactive_blocked)
@@ -480,8 +480,7 @@ get_active_stripe(struct r5conf *conf, sector_t sector,
 				    (atomic_read(&conf->active_stripes)
 				     < (conf->max_nr_stripes *3/4)
 				     || !conf->inactive_blocked),
-				    conf->device_lock,
-				    );
+				    conf->device_lock);
 			conf->inactive_blocked = 0;
 		} else
 			init_stripe(sh, sector, previous);
@@ -1646,8 +1645,7 @@ static int resize_stripes(struct r5conf *conf, int newsize)
 		spin_lock_irq(&conf->device_lock);
 		wait_event_lock_irq(conf->wait_for_stripe,
 				    !list_empty(&conf->inactive_list),
-				    conf->device_lock,
-				    );
+				    conf->device_lock);
 		osh = get_free_stripe(conf);
 		spin_unlock_irq(&conf->device_lock);
 		atomic_set(&nsh->count, 1);
@@ -4000,7 +3998,7 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
 		spin_lock_irq(&conf->device_lock);
 		wait_event_lock_irq(conf->wait_for_stripe,
 				    conf->quiesce == 0,
-				    conf->device_lock, /* nothing */);
+				    conf->device_lock);
 		atomic_inc(&conf->active_aligned_reads);
 		spin_unlock_irq(&conf->device_lock);
 
@@ -6088,7 +6086,7 @@ static void raid5_quiesce(struct mddev *mddev, int state)
 		wait_event_lock_irq(conf->wait_for_stripe,
 				    atomic_read(&conf->active_stripes) == 0 &&
 				    atomic_read(&conf->active_aligned_reads) == 0,
-				    conf->device_lock, /* nothing */);
+				    conf->device_lock);
 		conf->quiesce = 1;
 		spin_unlock_irq(&conf->device_lock);
 		/* allow reshape to continue */