path: root/lib/percpu_counter.c

2015-05-28  percpu_counter: batch size aware __percpu_counter_compare()  (Dave Chinner)

XFS uses non-standard batch sizes to avoid frequent global counter updates on its allocated-inode counters, as they increment or decrement in batches of 64 inodes. With the standard percpu counter batch of 32, the counter is effectively a global counter, so XFS currently uses a batch size of 128 to avoid taking the global lock on every single modification. However, XFS also needs to compare accurately against zero, which means using percpu_counter_compare(); that function has a hard-coded batch size of 32 and hence spuriously fails to detect when it is supposed to use precise comparisons, so the accounting goes wrong. Add __percpu_counter_compare(), which takes a custom batch size so we can use it sanely in XFS, and refactor percpu_counter_compare() on top of it.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dave Chinner <david@fromorbit.com>
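As a sketch of the factoring this describes, the old entry point becomes a thin wrapper passing the default batch (signature as named in the commit; the comparison logic itself is sketched under the 2010-08-10 entry further down this log):

    int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);

    int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
    {
            return __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
    }
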
2014-09-08  percpu_counter: add @gfp to percpu_counter_init()  (Tejun Heo)

The percpu allocator now supports an allocation mask. Add @gfp to percpu_counter_init() so that !GFP_KERNEL allocation masks can be used with percpu_counters too. We could have left percpu_counter_init() alone and added percpu_counter_init_gfp(); however, the number of users isn't that high, and introducing _gfp variants to all percpu data structures would be quite ugly, so let's just do the conversion. This is the one with the most users; other percpu data structures are a lot easier to convert. This patch doesn't make any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Jan Kara <jack@suse.cz>
Acked-by: "David S. Miller" <davem@davemloft.net>
Cc: x86@kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
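A minimal usage sketch of the new signature; nr_items is a hypothetical counter:

    struct percpu_counter nr_items;
    int err;

    /* process context, as before */
    err = percpu_counter_init(&nr_items, 0, GFP_KERNEL);

    /* atomic context: a !GFP_KERNEL mask now works too */
    err = percpu_counter_init(&nr_items, 0, GFP_NOWAIT);
    if (err)
            return err;     /* the per-cpu allocation failed */
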
2014-09-08  percpu_counter: make percpu_counters_lock irq-safe  (Tejun Heo)

percpu_counter is scheduled to grow @gfp support to allow atomic initialization. This patch makes percpu_counters_lock irq-safe so that it can be safely used from atomic contexts.

Signed-off-by: Tejun Heo <tj@kernel.org>
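The conversion follows the usual pattern; a sketch of the list manipulation inside __percpu_counter_init() under the now irq-safe lock:

    static DEFINE_SPINLOCK(percpu_counters_lock);

    unsigned long flags;

    spin_lock_irqsave(&percpu_counters_lock, flags);
    list_add(&fbc->list, &percpu_counters);
    spin_unlock_irqrestore(&percpu_counters_lock, flags);
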
2014-04-08  lib/percpu_counter.c: fix bad percpu counter state during suspend  (Jens Axboe)

I got a bug report yesterday from Laszlo Ersek in which he states that his kvm instance fails to suspend. Laszlo bisected it down to commit 1cf7e9c68fe8 ("virtio_blk: blk-mq support"), where virtio-blk is converted to use the blk-mq infrastructure. After digging a bit, it became clear that the issue was with the queue drain. blk-mq tracks queue usage in a percpu counter, which is incremented on request alloc and decremented when the request is freed. The initial hunt was for an inconsistency in blk-mq, but everything seemed fine. In fact, the counter only returned crazy values while suspend was in progress.

When a CPU is unplugged, the percpu counter code merges that CPU's state with the general state. blk-mq takes care to register a hotcpu notifier with the appropriate priority, so we know it runs after the percpu counter notifier. However, the percpu counter notifier only merges the state when the CPU is fully gone. This leaves a state transition window in which the CPU going away is no longer in the online mask, yet still holds private values. In this state, percpu_counter_sum() returns invalid results, and the suspend then hangs waiting for abs(dead-cpu-value) requests to complete, which of course will never happen.

Fix this by clearing the state earlier, so we never have a case where the CPU isn't in the online mask but still holds private state. This bug has been there since forever; I guess we don't have a lot of users where percpu counters need to be reliable during the suspend cycle.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-01-17  percpu_counter: unbreak __percpu_counter_add()  (Hugh Dickins)

Commit 74e72f894d56 ("lib/percpu_counter.c: fix __percpu_counter_add()") looked very plausible, but its arithmetic was badly wrong: obvious once you see the fix, but maddening to get there from the weird tmpfs ENOSPCs.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
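For reference, a sketch of the function roughly as it reads after both patches. The subtle point is that when a batch overflows into the global count, the per-cpu slot still holds its old value (count - amount), so exactly that much must be subtracted to leave the slot at zero:

    void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
    {
            s64 count;

            preempt_disable();
            count = __this_cpu_read(*fbc->counters) + amount;
            if (count >= batch || count <= -batch) {
                    unsigned long flags;

                    raw_spin_lock_irqsave(&fbc->lock, flags);
                    fbc->count += count;
                    /* slot still holds count - amount; zero it out */
                    __this_cpu_sub(*fbc->counters, count - amount);
                    raw_spin_unlock_irqrestore(&fbc->lock, flags);
            } else {
                    this_cpu_add(*fbc->counters, amount);   /* irq-safe */
            }
            preempt_enable();
    }
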
2014-01-15  lib/percpu_counter.c: fix __percpu_counter_add()  (Ming Lei)

__percpu_counter_add() may be called from softirq/hardirq handlers (for example, blk_mq_queue_exit() is typically called from a hardirq/softirq handler), so we need to call this_cpu_add() (the irq-safe helper) to update the percpu counter; otherwise counts may be lost. This fixes the problem that 'rmmod null_blk' hangs in blk_cleanup_queue() because of miscounting of request_queue->mq_usage_counter. This patch is the v1 of the previous "lib/percpu_counter.c: disable local irq when updating percpu counter", and takes Andrew's approach, which may be more efficient for ARCHs (x86, s390) that have an optimized this_cpu_add().

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-10-25  percpu_counter: make APIs irq safe  (Shaohua Li)

In my usage, the percpu APIs are sometimes called with irqs disabled and sometimes not, and lockdep complains about a potential deadlock. Let's always take the percpu_counter lock in an irq-safe way. There should be no performance penalty, as these are all slow paths.

Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2013-07-14  kernel: delete __cpuinit usage from all core kernel files  (Paul Gortmaker)

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that __cpuinit should go the way of __devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. This removes all the uses of the __cpuinit macros from C files in the core kernel directories (kernel, init, lib, mm, and include) that don't really have a specific maintainer.

[1] https://lkml.org/lkml/2013/5/20/589

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-07-03  lib/percpu_counter.c: __this_cpu_write() doesn't need to be protected by spinlock  (Fan Du)

__this_cpu_write() doesn't need to be protected by the spinlock, as we are doing a per-cpu write with preemption disabled. Another reason to move __this_cpu_write() outside the spinlock: __percpu_counter_sum() is not an accurate counter anyway.

Signed-off-by: Fan Du <fan.du@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-07-31  switch the protection of percpu_counter list to spinlock  (Al Viro)

... making percpu_counter_destroy() non-blocking.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-11-01  lib/percpu_counter.c: enclose hotplug only variables in hotplug ifdef  (Glauber Costa)

These variables are only used when CONFIG_HOTPLUG_CPU is enabled; they are ifdef'ed everywhere else. So don't define them when CONFIG_HOTPLUG_CPU is not enabled.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-09-13  locking, percpu_counter: Annotate ::lock as raw  (Thomas Gleixner)

The percpu_counter::lock can be taken in atomic context and therefore cannot be preempted on -rt; annotate it. In mainline this change documents the low-level nature of the lock; otherwise there is no functional difference. Lockdep and Sparse checking will work as usual.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-17  percpu_counter: Optimize __percpu_counter_add a bit through the use of this_cpu() operations  (Christoph Lameter)

The this_cpu_* operations can be used to optimize __percpu_counter_add a bit. This avoids some address arithmetic and saves 12 bytes.

Before:

    00000000000001d3 <__percpu_counter_add>:
     1d3: 55                      push   %rbp
     1d4: 48 89 e5                mov    %rsp,%rbp
     1d7: 41 55                   push   %r13
     1d9: 41 54                   push   %r12
     1db: 53                      push   %rbx
     1dc: 48 89 fb                mov    %rdi,%rbx
     1df: 48 83 ec 08             sub    $0x8,%rsp
     1e3: 4c 8b 67 30             mov    0x30(%rdi),%r12
     1e7: 65 4c 03 24 25 00 00    add    %gs:0x0,%r12
     1ee: 00 00
     1f0: 4d 63 2c 24             movslq (%r12),%r13
     1f4: 48 63 c2                movslq %edx,%rax
     1f7: 49 01 f5                add    %rsi,%r13
     1fa: 49 39 c5                cmp    %rax,%r13
     1fd: 7d 0a                   jge    209 <__percpu_counter_add+0x36>
     1ff: f7 da                   neg    %edx
     201: 48 63 d2                movslq %edx,%rdx
     204: 49 39 d5                cmp    %rdx,%r13
     207: 7f 1e                   jg     227 <__percpu_counter_add+0x54>
     209: 48 89 df                mov    %rbx,%rdi
     20c: e8 00 00 00 00          callq  211 <__percpu_counter_add+0x3e>
     211: 4c 01 6b 18             add    %r13,0x18(%rbx)
     215: 48 89 df                mov    %rbx,%rdi
     218: 41 c7 04 24 00 00 00    movl   $0x0,(%r12)
     21f: 00
     220: e8 00 00 00 00          callq  225 <__percpu_counter_add+0x52>
     225: eb 04                   jmp    22b <__percpu_counter_add+0x58>
     227: 45 89 2c 24             mov    %r13d,(%r12)
     22b: 5b                      pop    %rbx
     22c: 5b                      pop    %rbx
     22d: 41 5c                   pop    %r12
     22f: 41 5d                   pop    %r13
     231: c9                      leaveq
     232: c3                      retq

After:

    00000000000001d3 <__percpu_counter_add>:
     1d3: 55                      push   %rbp
     1d4: 48 63 ca                movslq %edx,%rcx
     1d7: 48 89 e5                mov    %rsp,%rbp
     1da: 41 54                   push   %r12
     1dc: 53                      push   %rbx
     1dd: 48 89 fb                mov    %rdi,%rbx
     1e0: 48 8b 47 30             mov    0x30(%rdi),%rax
     1e4: 65 44 8b 20             mov    %gs:(%rax),%r12d
     1e8: 4d 63 e4                movslq %r12d,%r12
     1eb: 49 01 f4                add    %rsi,%r12
     1ee: 49 39 cc                cmp    %rcx,%r12
     1f1: 7d 0a                   jge    1fd <__percpu_counter_add+0x2a>
     1f3: f7 da                   neg    %edx
     1f5: 48 63 d2                movslq %edx,%rdx
     1f8: 49 39 d4                cmp    %rdx,%r12
     1fb: 7f 21                   jg     21e <__percpu_counter_add+0x4b>
     1fd: 48 89 df                mov    %rbx,%rdi
     200: e8 00 00 00 00          callq  205 <__percpu_counter_add+0x32>
     205: 4c 01 63 18             add    %r12,0x18(%rbx)
     209: 48 8b 43 30             mov    0x30(%rbx),%rax
     20d: 48 89 df                mov    %rbx,%rdi
     210: 65 c7 00 00 00 00 00    movl   $0x0,%gs:(%rax)
     217: e8 00 00 00 00          callq  21c <__percpu_counter_add+0x49>
     21c: eb 04                   jmp    222 <__percpu_counter_add+0x4f>
     21e: 65 44 89 20             mov    %r12d,%gs:(%rax)
     222: 5b                      pop    %rbx
     223: 41 5c                   pop    %r12
     225: c9                      leaveq
     226: c3                      retq

Reviewed-by: Pekka Enberg <penberg@kernel.org>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2010-10-26  percpu_counter: use this_cpu_ptr() instead of per_cpu_ptr()  (Christoph Lameter)

this_cpu_ptr() avoids an array lookup and can use the percpu offset of the local cpu directly.

Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-10-26  percpu_counter: add debugobj support  (Tejun Heo)

All percpu counters are linked to a global list on initialization and removed from it on destruction. The list is walked during CPU up/down. If a percpu counter is freed without being properly destroyed, the system will oops only on the next CPU up/down, making it pretty nasty to track down. This patch adds debugobj support for percpu counters so that such problems can be found easily.

As percpu counters don't make sense on the stack and can't be statically initialized, debugobj support is pretty simple. It's initialized and activated on counter initialization, and deactivated and destroyed on counter destruction.

With this patch applied, the bug fixed by commit 602586a83b719df0fbd94196a1359ed35aeb2df3 ("shmem: put_super must percpu_counter_destroy") triggers the following warning on tmpfs unmount, and the system won't oops on the next cpu up/down operation:

    ------------[ cut here ]------------
    WARNING: at lib/debugobjects.c:259 debug_print_object+0x5c/0x70()
    Hardware name: Bochs
    ODEBUG: free active (active state 0) object type: percpu_counter
    Modules linked in:
    Pid: 3999, comm: umount Not tainted 2.6.36-rc2-work+ #5
    Call Trace:
     [<ffffffff81083f7f>] warn_slowpath_common+0x7f/0xc0
     [<ffffffff81084076>] warn_slowpath_fmt+0x46/0x50
     [<ffffffff813b45cc>] debug_print_object+0x5c/0x70
     [<ffffffff813b50e5>] debug_check_no_obj_freed+0x125/0x210
     [<ffffffff811577d3>] kfree+0xb3/0x2f0
     [<ffffffff81132edd>] shmem_put_super+0x1d/0x30
     [<ffffffff81162e96>] generic_shutdown_super+0x56/0xe0
     [<ffffffff81162f86>] kill_anon_super+0x16/0x60
     [<ffffffff81162ff7>] kill_litter_super+0x27/0x30
     [<ffffffff81163295>] deactivate_locked_super+0x45/0x60
     [<ffffffff81163cfa>] deactivate_super+0x4a/0x70
     [<ffffffff8117d446>] mntput_no_expire+0x86/0xe0
     [<ffffffff8117df7f>] sys_umount+0x6f/0x360
     [<ffffffff8103f01b>] system_call_fastpath+0x16/0x1b
    ---[ end trace cce2a341ba3611a7 ]---

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
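A sketch of the hookup, assuming the standard lib/debugobjects.c descriptor pattern (helper names illustrative; fixup callbacks elided):

    static struct debug_obj_descr percpu_counter_debug_descr = {
            .name = "percpu_counter",
    };

    static void debug_percpu_counter_activate(struct percpu_counter *fbc)
    {
            debug_object_init(fbc, &percpu_counter_debug_descr);
            debug_object_activate(fbc, &percpu_counter_debug_descr);
    }

    static void debug_percpu_counter_deactivate(struct percpu_counter *fbc)
    {
            debug_object_deactivate(fbc, &percpu_counter_debug_descr);
            debug_object_free(fbc, &percpu_counter_debug_descr);
    }
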
2010-10-26  percpu: fix list_head init bug in __percpu_counter_init()  (Masanori ITOH)

    WARNING: at lib/list_debug.c:26 __list_add+0x3f/0x81()
    Hardware name: Express5800/B120a [N8400-085]
    list_add corruption. next->prev should be prev (ffffffff81a7ea00), but was dead000000200200. (next=ffff88080b872d58).
    Modules linked in: aoe ipt_MASQUERADE iptable_nat nf_nat autofs4 sunrpc bridge 8021q garp stp llc ipv6 cpufreq_ondemand acpi_cpufreq freq_table dm_round_robin dm_multipath kvm_intel kvm uinput lpfc scsi_transport_fc igb ioatdma scsi_tgt i2c_i801 i2c_core dca iTCO_wdt iTCO_vendor_support pcspkr shpchp megaraid_sas [last unloaded: aoe]
    Pid: 54, comm: events/3 Tainted: G W 2.6.34-vanilla1 #1
    Call Trace:
     [<ffffffff8104bd77>] warn_slowpath_common+0x7c/0x94
     [<ffffffff8104bde6>] warn_slowpath_fmt+0x41/0x43
     [<ffffffff8120fd2e>] __list_add+0x3f/0x81
     [<ffffffff81212a12>] __percpu_counter_init+0x59/0x6b
     [<ffffffff810d8499>] bdi_init+0x118/0x17e
     [<ffffffff811f2c50>] blk_alloc_queue_node+0x79/0x143
     [<ffffffff811f2d2b>] blk_alloc_queue+0x11/0x13
     [<ffffffffa02a931d>] aoeblk_gdalloc+0x8e/0x1c9 [aoe]
     [<ffffffffa02aa655>] aoecmd_sleepwork+0x25/0xa8 [aoe]
     [<ffffffff8106186c>] worker_thread+0x1a9/0x237
     [<ffffffffa02aa630>] ? aoecmd_sleepwork+0x0/0xa8 [aoe]
     [<ffffffff81065827>] ? autoremove_wake_function+0x0/0x39
     [<ffffffff810616c3>] ? worker_thread+0x0/0x237
     [<ffffffff810653ad>] kthread+0x7f/0x87
     [<ffffffff8100aa24>] kernel_thread_helper+0x4/0x10
     [<ffffffff8106532e>] ? kthread+0x0/0x87
     [<ffffffff8100aa20>] ? kernel_thread_helper+0x0/0x10

This happens because there is no initialization code for the list_head contained in the struct backing_dev_info under CONFIG_HOTPLUG_CPU, and the bug comes up when block device drivers calling blk_alloc_queue() are used. In my case, I hit it while using aoe.

Signed-off-by: Masanori Itoh <itoumsn@nttdata.co.jp>
Cc: Tejun Heo <tj@kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
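The fix amounts to initializing the list_head before it is ever linked; a sketch of the hotplug branch of __percpu_counter_init() (the list was mutex-protected at the time):

    #ifdef CONFIG_HOTPLUG_CPU
            INIT_LIST_HEAD(&fbc->list);     /* the missing initialization */
            mutex_lock(&percpu_counters_lock);
            list_add(&fbc->list, &percpu_counters);
            mutex_unlock(&percpu_counters_lock);
    #endif
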
2010-08-10  tmpfs: add accurate compare function to percpu_counter library  (Tim Chen)

Add percpu_counter_compare(), which allows for a quick but accurate comparison of a percpu_counter with a given value. A rough count is provided by the count field in the percpu_counter structure, without accounting for the values stored in the individual cpu counters; the actual count is the sum of count and all the cpu counters. However, the count field never differs from the actual value by more than batch*num_online_cpus(). So if the rough count differs from the given value by more than that bound, we do not need the actual count for the comparison, which allows for a quick comparison without summing up all the per-cpu counters.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
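A sketch of that logic: fall back to the expensive precise sum only when the rough count is within the maximum possible per-cpu error of the comparison value:

    int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
    {
            s64 count = percpu_counter_read(fbc);   /* rough central value */

            /* the rough count is off by at most batch * num_online_cpus() */
            if (abs(count - rhs) > percpu_counter_batch * num_online_cpus())
                    return count > rhs ? 1 : -1;

            /* too close to call: take the precise sum */
            count = percpu_counter_sum(fbc);
            if (count > rhs)
                    return 1;
            if (count < rhs)
                    return -1;
            return 0;
    }
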
2009-01-07  Merge branch 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)

* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rcu: fix rcutorture bug
  rcu: eliminate synchronize_rcu_xxx macro
  rcu: make treercu safe for suspend and resume
  rcu: fix rcutree grace-period-latency bug on small systems
  futex: catch certain assymetric (get|put)_futex_key calls
  futex: make futex_(get|put)_key() calls symmetric
  locking, percpu counters: introduce separate lock classes
  swiotlb: clean up EXPORT_SYMBOL usage
  swiotlb: remove unnecessary declaration
  swiotlb: replace architecture-specific swiotlb.h with linux/swiotlb.h
  swiotlb: add support for systems with highmem
  swiotlb: store phys address in io_tlb_orig_addr array
  swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()
2009-01-06  percpu_counter: FBC_BATCH should be a variable  (Eric Dumazet)

For NR_CPUS >= 16, FBC_BATCH is 2*NR_CPUS. Considering that more and more distros are using high NR_CPUS values, it makes sense to use a more sensible value for FBC_BATCH and get rid of NR_CPUS. A sensible value is 2*num_online_cpus(), with a minimum of 32 (this minimum helps branch prediction in __percpu_counter_add()). We already have a hotcpu notifier, so we can adjust FBC_BATCH dynamically. Rename FBC_BATCH to percpu_counter_batch, since it's not a constant anymore.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
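Roughly, the batch becomes a variable that the hotcpu notifier recomputes:

    int percpu_counter_batch __read_mostly = 32;    /* replaces FBC_BATCH */

    static void compute_batch_value(void)
    {
            int nr = num_online_cpus();

            /* the floor of 32 helps branch prediction in __percpu_counter_add() */
            percpu_counter_batch = max(32, nr * 2);
    }
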
2009-01-06  Merge branches 'core/futexes', 'core/locking', 'core/rcu' and 'linus' into core/urgent  (Ingo Molnar)
2008-12-29  locking, percpu counters: introduce separate lock classes  (Peter Zijlstra)

Impact: fix lockdep false positives

Classify percpu_counter instances similarly to regular lock objects, that is, per instantiation site. The networking code has increased its use of percpu_counters, which leads to false positives if they are all treated as a single class.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
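This uses the usual lockdep trick, sketched below: a static lock_class_key per instantiation site, smuggled in through the init macro and applied to fbc->lock inside __percpu_counter_init():

    #define percpu_counter_init(fbc, value)                         \
            ({                                                      \
                    static struct lock_class_key __key;             \
                                                                    \
                    __percpu_counter_init(fbc, value, &__key);      \
            })
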
2008-12-10revert "percpu_counter: new function percpu_counter_sum_and_set"Andrew Morton
Revert commit e8ced39d5e8911c662d4d69a342b9d053eaaac4e Author: Mingming Cao <cmm@us.ibm.com> Date: Fri Jul 11 19:27:31 2008 -0400 percpu_counter: new function percpu_counter_sum_and_set As described in revert "percpu counter: clean up percpu_counter_sum_and_set()" the new percpu_counter_sum_and_set() is racy against updates to the cpu-local accumulators on other CPUs. Revert that change. This means that ext4 will be slow again. But correct. Reported-by: Eric Dumazet <dada1@cosmosbay.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mingming Cao <cmm@us.ibm.com> Cc: <linux-ext4@vger.kernel.org> Cc: <stable@kernel.org> [2.6.27.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-12-10revert "percpu counter: clean up percpu_counter_sum_and_set()"Andrew Morton
Revert commit 1f7c14c62ce63805f9574664a6c6de3633d4a354 Author: Mingming Cao <cmm@us.ibm.com> Date: Thu Oct 9 12:50:59 2008 -0400 percpu counter: clean up percpu_counter_sum_and_set() Before this patch we had the following: percpu_counter_sum(): return the percpu_counter's value percpu_counter_sum_and_set(): return the percpu_counter's value, copying that value into the central value and zeroing the per-cpu counters before returning. After this patch, percpu_counter_sum_and_set() has gone, and percpu_counter_sum() gets the old percpu_counter_sum_and_set() functionality. Problem is, as Eric points out, the old percpu_counter_sum_and_set() functionality was racy and wrong. It zeroes out counters on "other" cpus, without holding any locks which will prevent races agaist updates from those other CPUS. This patch reverts 1f7c14c62ce63805f9574664a6c6de3633d4a354. This means that percpu_counter_sum_and_set() still has the race, but percpu_counter_sum() does not. Note that this is not a simple revert - ext4 has since started using percpu_counter_sum() for its dirty_blocks counter as well. Note that this revert patch changes percpu_counter_sum() semantics. Before the patch, a call to percpu_counter_sum() will bring the counter's central counter mostly up-to-date, so a following percpu_counter_read() will return a close value. After this patch, a call to percpu_counter_sum() will leave the counter's central accumulator unaltered, so a subsequent call to percpu_counter_read() can now return a significantly inaccurate result. If there is any code in the tree which was introduced after e8ced39d5e8911c662d4d69a342b9d053eaaac4e was merged, and which depends upon the new percpu_counter_sum() semantics, that code will break. Reported-by: Eric Dumazet <dada1@cosmosbay.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mingming Cao <cmm@us.ibm.com> Cc: <linux-ext4@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
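For context, the reverted helper looked roughly like this; the race is the zeroing of other CPUs' slots, which can silently discard a concurrent update made on those CPUs without fbc->lock:

    s64 percpu_counter_sum_and_set(struct percpu_counter *fbc)
    {
            s64 ret;
            int cpu;

            spin_lock(&fbc->lock);
            ret = fbc->count;
            for_each_online_cpu(cpu) {
                    s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                    ret += *pcount;
                    *pcount = 0;    /* racy: owners update without fbc->lock */
            }
            fbc->count = ret;
            spin_unlock(&fbc->lock);
            return ret;
    }
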
2008-12-10  percpu_counter: fix CPU unplug race in percpu_counter_destroy()  (Eric Dumazet)

We should first delete the counter from the percpu_counters list before freeing its memory, or percpu_counter_hotcpu_callback() could dereference a NULL pointer.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
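The fixed ordering, sketched (the list was mutex-protected at the time): unlink before freeing, so the hotplug callback can never see a half-destroyed counter:

    void percpu_counter_destroy(struct percpu_counter *fbc)
    {
            if (!fbc->counters)
                    return;
    #ifdef CONFIG_HOTPLUG_CPU
            mutex_lock(&percpu_counters_lock);
            list_del(&fbc->list);           /* first: hide from the notifier */
            mutex_unlock(&percpu_counters_lock);
    #endif
            free_percpu(fbc->counters);     /* only then release the storage */
            fbc->counters = NULL;
    }
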
2008-10-09  percpu counter: clean up percpu_counter_sum_and_set()  (Mingming Cao)

percpu_counter_sum_and_set() and percpu_counter_sum() are the same except that the former updates the global counter after accounting. Since we are already taking fbc->lock to calculate the precise value of the counter in percpu_counter_sum(), it should simply set fbc->count too, as percpu_counter_sum_and_set() does. This patch merges the two interfaces into one.

Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2008-07-11  percpu_counter: new function percpu_counter_sum_and_set  (Mingming Cao)

Delayed allocation needs to check free blocks at every write. percpu_counter_read_positive() is not quite accurate enough; delayed allocation needs more accurate accounting, but calling percpu_counter_sum_positive() frequently is quite expensive. This patch adds a new function that updates the central counter while summing the per-cpu counters, to improve the accuracy of subsequent percpu_counter_read() calls and require fewer calls to the expensive percpu_counter_sum().

Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2008-04-30  mm: bdi: export BDI attributes in sysfs  (Peter Zijlstra)

Provide a place in sysfs (/sys/class/bdi) for the backing_dev_info object. This allows us to see and set the various BDI-specific variables. In particular, this properly exposes the read-ahead window for all relevant users, and /sys/block/<block>/queue/read_ahead_kb should be deprecated. With patient help from Kay Sievers and Greg KH.

[mszeredi@suse.cz]
- split off NFS and FUSE changes into separate patches
- document new sysfs attributes under Documentation/ABI
- do bdi_class_init as a core_initcall, otherwise the "default" BDI won't be initialized
- remove bdi_init_fmt macro, it's not used very much

[akpm@linux-foundation.org: fix ia64 warning]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Acked-by: Greg KH <greg@kroah.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-19  Add irq protection in the percpu-counters cpu-hotplug-callback path  (Gautham R Shenoy)

Some of the per-cpu counters, and thus their locks, are accessed from IRQ contexts. This can cause a deadlock if the IRQ interrupts a cpu-offline thread which is transferring a dead cpu's counts to the global counter. Add appropriate IRQ protection in the cpu-hotplug callback path.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
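A sketch of the changed loop in the hotplug callback, assuming the pre-existing spill logic (cpu is the id of the CPU being offlined):

    list_for_each_entry(fbc, &percpu_counters, list) {
            unsigned long flags;
            s32 *pcount;

            /* fbc->lock may also be taken from IRQ context */
            spin_lock_irqsave(&fbc->lock, flags);
            pcount = per_cpu_ptr(fbc->counters, cpu);
            fbc->count += *pcount;
            *pcount = 0;
            spin_unlock_irqrestore(&fbc->lock, flags);
    }
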
2007-10-17  lib: percpu_counter_init_irq  (Peter Zijlstra)

Provide a way to tell lockdep about percpu_counters that are supposed to be used from irq-safe contexts.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17  lib: percpu_counter_init error handling  (Peter Zijlstra)

alloc_percpu() can fail; propagate that error.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
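A sketch of the now-fallible init (hotplug list bookkeeping elided):

    int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
    {
            spin_lock_init(&fbc->lock);
            fbc->count = amount;
            fbc->counters = alloc_percpu(s32);
            if (!fbc->counters)
                    return -ENOMEM;         /* propagated, not ignored */
            return 0;
    }
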
2007-10-17  lib: percpu_counter_sum()  (Peter Zijlstra)

Provide an accurate version of percpu_counter_read().

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17  lib: percpu_counter_sum_positive  (Peter Zijlstra)

s/percpu_counter_sum/&_positive/

Because it's consistent with percpu_counter_read*.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17  lib: percpu_counter_set  (Peter Zijlstra)

Provide a method to set a percpu counter to a specified value.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
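Roughly, setting means zeroing every per-cpu delta and loading the value centrally, all under the counter's lock:

    void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
    {
            int cpu;

            spin_lock(&fbc->lock);
            for_each_possible_cpu(cpu) {
                    s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                    *pcount = 0;
            }
            fbc->count = amount;
            spin_unlock(&fbc->lock);
    }
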
2007-10-17  lib: make percpu_counter_add take s64  (Peter Zijlstra)

percpu_counter is an s64 counter; make _add consistent.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17  lib: percpu_counter variable batch  (Peter Zijlstra)

Because the current batch setup has a quadratic error bound on the counter, allow for an alternative setup.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17  lib: percpu_counter_add  (Peter Zijlstra)

s/percpu_counter_mod/percpu_counter_add/

Because it's a better name; _mod implies modulo.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16  percpu_counters: use for_each_online_cpu()  (Andrew Morton)

Now that we have implemented hotunplug-time counter spilling, percpu_counter_sum() only needs to look at online CPUs.

Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
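The resulting sum, sketched; only the iterator changes:

    s64 percpu_counter_sum(struct percpu_counter *fbc)
    {
            s64 ret;
            int cpu;

            spin_lock(&fbc->lock);
            ret = fbc->count;
            for_each_online_cpu(cpu) {      /* was for_each_possible_cpu() */
                    s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                    ret += *pcount;
            }
            spin_unlock(&fbc->lock);
            return ret;
    }
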
2007-07-16  percpu_counters(): use cpu notifiers  (Andrew Morton)

Per-cpu counters presently must iterate over all possible CPUs in the exhaustive percpu_counter_sum(). But it can be much better to only iterate over the presently-online CPUs. To do this, we must arrange for an offlined CPU's count to be spilled into the counter's central count. We can do this for all percpu_counters in the machine by linking them into a single global list and walking that list at CPU_DEAD time. (I hope. Might have race windows in which the percpu_counter_sum() count is inaccurate?)

Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
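A sketch of the CPU_DEAD spill this describes (notifier registration and the global list assumed as in the text; the list is mutex-protected):

    static int percpu_counter_hotcpu_callback(struct notifier_block *nb,
                                              unsigned long action, void *hcpu)
    {
            unsigned int cpu = (unsigned long)hcpu;
            struct percpu_counter *fbc;

            if (action != CPU_DEAD)
                    return NOTIFY_OK;

            mutex_lock(&percpu_counters_lock);
            list_for_each_entry(fbc, &percpu_counters, list) {
                    s32 *pcount;

                    spin_lock(&fbc->lock);
                    pcount = per_cpu_ptr(fbc->counters, cpu);
                    fbc->count += *pcount;  /* spill the dead CPU's delta */
                    *pcount = 0;
                    spin_unlock(&fbc->lock);
            }
            mutex_unlock(&percpu_counters_lock);
            return NOTIFY_OK;
    }
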
2006-06-23  [PATCH] percpu counter data type changes to support more than 2**31 ext3 free blocks counter  (Mingming Cao)

The percpu counter data types are changed in this set of patches to support users, like ext3, that need more than 32 bits to store the free blocks total in the filesystem.

- Generic percpu counter data type changes: the sizes of the global counter and local counters are now explicitly specified using s64 and s32. The global counter is changed from long to s64, while the local counters are changed from long to s32, so we avoid doing 64-bit updates in most cases.

- Users of the percpu counters are updated to make use of the new percpu_counter_init() routine, which now takes an additional parameter so users can pass the initial value of the global counter.

Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-23  [PATCH] percpu_counters: create lib/percpu_counter.c  (Ravikiran G Thirumalai)

Move the percpu_counter routines from mm/swap.c to lib/percpu_counter.c.

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>