author    | Tejun Heo <tj@kernel.org> | 2013-04-01 18:23:35 (GMT)
committer | Tejun Heo <tj@kernel.org> | 2013-04-01 18:23:35 (GMT)
commit    | df2d5ae4995b3fb9392b6089b9623d20b6c3a542 (patch)
tree      | e3ff31bbf78f19b1094b78d8039a898466643f27 /crypto/crypto_user.c
parent    | 2728fd2f098c3cc5efaf3f0433855e579d5e4f28 (diff)
download  | linux-fsl-qoriq-df2d5ae4995b3fb9392b6089b9623d20b6c3a542.tar.xz
workqueue: map an unbound workqueue to multiple per-node pool_workqueues
Currently, an unbound workqueue has only one "current" pool_workqueue
associated with it. It may have multiple pool_workqueues, but only the
first pool_workqueue serves new work items. For NUMA affinity, we
want to change this so that there are multiple current pool_workqueues
serving different NUMA nodes.
Introduce workqueue->numa_pwq_tbl[], which is indexed by NUMA node and
points to the pool_workqueue to use for each possible node. This
replaces first_pwq() in __queue_work() and workqueue_congested().
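As a rough sketch (not the verbatim patch), the pool_workqueue selection in
__queue_work() then goes through a per-node lookup helper (the
unbound_pwq_by_node() mentioned in the v2 note below) instead of first_pwq();
the per-cpu branch and its field names are shown only for context and may
differ slightly from the actual code:

	if (!(wq->flags & WQ_UNBOUND))
		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);	/* per-cpu wq: unchanged */
	else
		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));	/* per-node table lookup */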
numa_pwq_tbl[] is currently initialized to point to the same
pool_workqueue as first_pwq(), so this patch doesn't make any behavior
changes.
v2: Use rcu_dereference_raw() in unbound_pwq_by_node() as the function
may be called with only wq->mutex held and no RCU read lock.
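A minimal sketch of that helper, consistent with the description above (any
lockdep/sparse assertions in the real function are omitted here):

	static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
							  int node)
	{
		/*
		 * Callers may hold wq->mutex rather than the RCU read lock,
		 * so plain rcu_dereference() would complain; use the _raw
		 * variant and rely on the caller to provide exclusion.
		 */
		return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
	}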
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Diffstat (limited to 'crypto/crypto_user.c')
0 files changed, 0 insertions, 0 deletions