author	Dmitry Torokhov <dmitry.torokhov@gmail.com>	2011-02-23 16:51:28 (GMT)
committer	Dmitry Torokhov <dmitry.torokhov@gmail.com>	2011-02-23 16:53:07 (GMT)
commit	1d64b655dc083df5c5ac39945ccbbc6532903bf1 (patch)
tree	a61e2c1d368d4e1e76fc12e95092df81db0cc3ea
parent	9bb794ae0509f39abad6593793ec86d490bad31b (diff)
Input: serio/gameport - use 'long' system workqueue
Commit 8ee294cd9def0004887da7f44b80563493b0a097 converted serio subsystem event handling from using a dedicated thread to using the common workqueue. Unfortunately, this regressed our boot times, because serio jobs take a long time to execute. While the new concurrency-managed workqueue code handles long-running works just fine and schedules additional workers as needed, such works wreak havoc among the remaining users of flush_scheduled_work(). To solve this problem, move the serio/gameport works from system_wq to system_long_wq, which nobody tries to flush.

Reported-and-tested-by: Hernando Torque <pantherchen@versanet.de>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
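For context, the pattern the patch switches to looks roughly like the sketch below, written in kernel C. The names example_event_work_fn, example_event_work and example_queue_event are hypothetical illustrations, not identifiers from this patch; only queue_work(), schedule_work(), DECLARE_WORK(), system_long_wq and flush_scheduled_work() are real workqueue API.

#include <linux/workqueue.h>

/* Hypothetical long-running event handler (it may sleep for a while). */
static void example_event_work_fn(struct work_struct *work)
{
	/* e.g. walk an event list and bind drivers; this can take seconds */
}

static DECLARE_WORK(example_event_work, example_event_work_fn);

static void example_queue_event(void)
{
	/*
	 * schedule_work(&example_event_work) would put the item on
	 * system_wq, so any caller of flush_scheduled_work() would have
	 * to wait for this slow handler to finish.  Queueing it on
	 * system_long_wq instead keeps it off the workqueue that other
	 * code flushes.
	 */
	queue_work(system_long_wq, &example_event_work);
}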
-rw-r--r--	drivers/input/gameport/gameport.c	2
-rw-r--r--	drivers/input/serio/serio.c	2
2 files changed, 2 insertions, 2 deletions
diff --git a/drivers/input/gameport/gameport.c b/drivers/input/gameport/gameport.c
index dbf741c..ce57ede 100644
--- a/drivers/input/gameport/gameport.c
+++ b/drivers/input/gameport/gameport.c
@@ -360,7 +360,7 @@ static int gameport_queue_event(void *object, struct module *owner,
 	event->owner = owner;
 
 	list_add_tail(&event->node, &gameport_event_list);
-	schedule_work(&gameport_event_work);
+	queue_work(system_long_wq, &gameport_event_work);
 
 out:
 	spin_unlock_irqrestore(&gameport_event_lock, flags);
diff --git a/drivers/input/serio/serio.c b/drivers/input/serio/serio.c
index 7c38d1f..ba70058 100644
--- a/drivers/input/serio/serio.c
+++ b/drivers/input/serio/serio.c
@@ -299,7 +299,7 @@ static int serio_queue_event(void *object, struct module *owner,
 	event->owner = owner;
 
 	list_add_tail(&event->node, &serio_event_list);
-	schedule_work(&serio_event_work);
+	queue_work(system_long_wq, &serio_event_work);
 
 out:
 	spin_unlock_irqrestore(&serio_event_lock, flags);