author     Andi Kleen <andi@firstfloor.org>  2009-01-12 22:01:15 (GMT)
committer  Ingo Molnar <mingo@elte.hu>       2009-01-13 17:56:30 (GMT)
commit     c8399943bdb70fef78798b97f975506ecc99e039 (patch)
tree       3d7cdf853bdfc012b9ea8513ab775238b94d6f75 /include/asm-generic/bitops/__fls.h
parent     4a922a969cb0190ce4580d4b064e2ac35f3ac9bf (diff)
download   linux-fsl-qoriq-c8399943bdb70fef78798b97f975506ecc99e039.tar.xz
x86, generic: mark complex bitops.h inlines as __always_inline
Impact: reduce kernel image size

Hugh Dickins noticed that older gcc versions, when the kernel is built for
code size, didn't inline some of the bitops.

Mark all complex x86 bitops that have more than a single asm statement or
two as always inline to avoid this problem.

Probably should be done for other architectures too.

Ingo then found a better fix that only requires a single line change, but
it unfortunately only works on gcc 4.3. On older gccs the original patch
still makes a ~0.3% defconfig difference with CONFIG_OPTIMIZE_INLINING=y.

With gcc 4.1 and a defconfig like build:

  6116998 1138540  883788 8139326  7c323e vmlinux-oi-with-patch
  6137043 1138540  883788 8159371  7c808b vmlinux-optimize-inlining

~20k / 0.3% difference.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
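(For context, not part of the patch: in the kernel, __always_inline is roughly
gcc's always_inline function attribute, which forces inlining even when
size-based heuristics such as -Os or CONFIG_OPTIMIZE_INLINING=y would
otherwise emit an out-of-line copy. A minimal standalone sketch, with a
hypothetical helper function:)

    /* Roughly the definition the kernel's compiler headers provide: */
    #define __always_inline inline __attribute__((__always_inline__))

    /* Hypothetical helper: with plain "inline", an older gcc building
     * for size may still emit an out-of-line copy of this function;
     * the always_inline attribute forbids that. */
    static __always_inline unsigned long clear_lowest_bit(unsigned long word)
    {
            return word & (word - 1);   /* clear the lowest set bit */
    }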
Diffstat (limited to 'include/asm-generic/bitops/__fls.h')
-rw-r--r--  include/asm-generic/bitops/__fls.h  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/bitops/__fls.h b/include/asm-generic/bitops/__fls.h
index be24465..a60a7cc 100644
--- a/include/asm-generic/bitops/__fls.h
+++ b/include/asm-generic/bitops/__fls.h
@@ -9,7 +9,7 @@
*
* Undefined if no set bit exists, so code should check against 0 first.
*/
-static inline unsigned long __fls(unsigned long word)
+static __always_inline unsigned long __fls(unsigned long word)
{
int num = BITS_PER_LONG - 1;
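The hunk display is truncated after the first line of the function body. For
orientation, the generic __fls() in this file finds the index of the most
significant set bit with a branchy binary search over halves of the word,
which is exactly the kind of multi-statement inline the commit message says
older gccs would refuse to inline when building for size. A sketch of the
idea (assuming a 64-bit unsigned long; not the verbatim file contents):

    /* Sketch of the binary-search idea behind the generic __fls():
     * returns the bit number of the highest set bit in word.
     * Undefined for word == 0, as the comment in the hunk notes.
     * Assumes a 64-bit unsigned long for brevity; the real function
     * is the one marked __always_inline by this patch. */
    static inline unsigned long __fls_sketch(unsigned long word)
    {
            unsigned long num = 63;

            if (!(word & 0xffffffff00000000ul)) { num -= 32; word <<= 32; }
            if (!(word & 0xffff000000000000ul)) { num -= 16; word <<= 16; }
            if (!(word & 0xff00000000000000ul)) { num -=  8; word <<=  8; }
            if (!(word & 0xf000000000000000ul)) { num -=  4; word <<=  4; }
            if (!(word & 0xc000000000000000ul)) { num -=  2; word <<=  2; }
            if (!(word & 0x8000000000000000ul)) { num -=  1; }
            return num;
    }

    /* e.g. __fls_sketch(0x10) == 4, __fls_sketch(1ul << 63) == 63 */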