author     Will Deacon <will.deacon@arm.com>          2014-02-07 18:12:32 (GMT)
committer  Jiri Slaby <jslaby@suse.cz>                2014-03-05 16:13:40 (GMT)
commit     103a9391af1e014f6cc1c1e224a4f51bf831768f (patch)
tree       e05111546ba0bc104a49c509e34dc9ab68b7de97 /arch/arm
parent     160d1d210a8cc5b29722580484f5256882dc275e (diff)
download   linux-fsl-qoriq-103a9391af1e014f6cc1c1e224a4f51bf831768f.tar.xz
ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev
commit 7c8746a9eb287642deaad0e7c2cdf482dce5e4be upstream.

When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

	<barrier>	/* dmb */
	<unlock>
	<barrier>	/* dsb */
	<sev>

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, therefore the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case, a
waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.

This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
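To make the hazard concrete, here is a minimal stand-alone sketch (not the
kernel code: the function names and the lock_word variable are invented for
illustration, and an ARMv7 toolchain is assumed) of how an inline asm without
a "memory" clobber allows the compiler to sink the unlock store past the
dsb/sev, and how adding the clobber restores the intended order:

/* Illustrative sketch only; hypothetical names, assumes an ARMv7 target. */
static unsigned int lock_word;		/* stands in for the lock owner field */

static inline void unlock_then_sev_broken(unsigned int next)
{
	lock_word = next;		/* the <unlock> store */
	/*
	 * No "memory" clobber: the asm declares no dependency on memory,
	 * so the compiler may legally sink the store to lock_word below
	 * it.  A CPU waiting in WFE can then be woken by SEV before the
	 * unlock is visible, which is the problem described above.
	 */
	__asm__ __volatile__ ("dsb ishst\n\tsev");
}

static inline void unlock_then_sev_fixed(unsigned int next)
{
	lock_word = next;		/* the <unlock> store */
	/* The "memory" clobber keeps the store ordered before the dsb. */
	__asm__ __volatile__ ("dsb ishst" : : : "memory");
	__asm__ ("sev");
}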
Diffstat (limited to 'arch/arm')
-rw-r--r--	arch/arm/include/asm/spinlock.h	15
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index 4f2c280..05f8066 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -44,18 +44,9 @@
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }
 
 /*
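For context, the sketch below shows how the ticket-unlock path in the same
file is expected to drive dsb_sev() after this change, mapping onto the
ordered sequence quoted in the commit message. The arch_spin_unlock() body is
not part of the diff above and is reconstructed here for illustration only;
details of the real function may differ.

/* Reconstructed for illustration; not part of this patch. */
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	smp_mb();			/* <barrier> -- dmb                     */
	lock->tickets.owner++;		/* <unlock>                             */
	dsb_sev();			/* <barrier> -- dsb, then <sev>;
					 * dsb(ishst) is assumed to expand to an
					 * asm carrying a "memory" clobber, so
					 * the owner store above can no longer
					 * be reordered past it.               */
}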