author    Peter Zijlstra <a.p.zijlstra@chello.nl>  2011-12-10 10:43:44 (GMT)
committer Greg Kroah-Hartman <gregkh@suse.de>      2011-12-13 17:11:19 (GMT)
commit    3c8ed88974472b928489e3943616500ce2ad0cd8 (patch)
tree      f13010d3417f86b3909cd55858ee589b7de366c9
parent    47dbd7d90ad80edb67822f327241edcab8f3f46f (diff)
download  linux-3c8ed88974472b928489e3943616500ce2ad0cd8.tar.xz
kref: Remove the memory barriers
Commit 1b0b3b9980e ("kref: fix CPU ordering with respect to krefs") wrongly adds memory barriers to kref. It states:

    some atomic operations are only atomic, not ordered. Thus a CPU is
    allowed to reorder memory references to an object to before the
    reference is obtained. This fixes it.

While true, it fails to show why this is a problem. I say it is not a problem, because if there is a race with kref_put() such that we could end up referencing a freed object without this memory barrier, we would still have that race with the memory barrier. The kref_put() in question could complete (and free the object) before the atomic_inc() and we'd still be up shit creek.

The kref_init() case is even worse: if your object is published at this time, you're so wrong the memory barrier won't make a difference whatsoever. If it's not published, the act of publishing should include the needed barriers/locks to make sure all writes prior to the act of publishing are complete, such that others will only observe a complete object.

Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-rw-r--r--  include/linux/kref.h | 2 --
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/linux/kref.h b/include/linux/kref.h
index fa9907a..d66c88a 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -29,7 +29,6 @@ struct kref {
static inline void kref_init(struct kref *kref)
{
atomic_set(&kref->refcount, 1);
- smp_mb();
}
/**
@@ -40,7 +39,6 @@ static inline void kref_get(struct kref *kref)
{
WARN_ON(!atomic_read(&kref->refcount));
atomic_inc(&kref->refcount);
- smp_mb__after_atomic_inc();
}
/**