path: root/arch/arm64
Age  Commit message  Author
2017-06-29  arm64/vdso: Fix nsec handling for CLOCK_MONOTONIC_RAW  (Will Deacon)
commit dbb236c1ceb697a559e0694ac4c9e7b9131d0b16 upstream.

Recently vDSO support for CLOCK_MONOTONIC_RAW was added in 49eea433b326 ("arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime() vDSO"). Noticing that the core timekeeping code never set tkr_raw.xtime_nsec, the vDSO implementation didn't bother exposing it via the data page and instead took the unshifted tk->raw_time.tv_nsec value, which was then immediately shifted left in the vDSO code.

Unfortunately, by accelerating the MONOTONIC_RAW clockid, this uncovered potential 1ns time inconsistencies caused by the timekeeping core not handling sub-ns resolution.

Now that the core code has been fixed and is actually setting tkr_raw.xtime_nsec, we need to take that into account in the vDSO by adding it to the shifted raw_time value, in order to fix the user-visible inconsistency. Rather than do that at each use (and expand the data page in the process), instead perform the shift/addition operation when populating the data page and remove the shift from the vDSO code entirely.

[jstultz: minor whitespace tweak, tried to improve commit message to make it more clear this fixes a regression]

Reported-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Tested-by: Daniel Mentz <danielmentz@google.com>
Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Stephen Boyd <stephen.boyd@linaro.org>
Cc: Miroslav Lichvar <mlichvar@redhat.com>
Link: http://lkml.kernel.org/r/1496965462-20003-4-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
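A minimal sketch of the populate-time fix, in C (the vdso_data field names here are illustrative, not the exact 4.9 layout): the shift and the tkr_raw.xtime_nsec addition happen once when the data page is updated, so the vDSO fast path no longer shifts at all.

  /* sketch: pre-shift the raw time when filling the vDSO data page */
  static void update_vdso_raw_time(struct vdso_data *vdata,
				   struct timekeeper *tk)
  {
	vdata->raw_time_sec  = tk->raw_time.tv_sec;
	/* fold in the sub-ns remainder the core code now maintains */
	vdata->raw_time_nsec = (tk->raw_time.tv_nsec << tk->tkr_raw.shift) +
			       tk->tkr_raw.xtime_nsec;
  }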
2017-06-14  arm64: entry: improve data abort handling of tagged pointers  (Kristina Martsenko)
commit 276e93279a630657fff4b086ba14c95955912dfa upstream.

This backport has a minor difference from the upstream commit: it adds the asm-uaccess.h file, which is not present in 4.9, because 4.9 does not have commit b4b8664d291a ("arm64: don't pull uaccess.h into *.S").

Original patch description:

When handling a data abort from EL0, we currently zero the top byte of the faulting address, as we assume the address is a TTBR0 address, which may contain a non-zero address tag. However, the address may be a TTBR1 address, in which case we should not zero the top byte. This patch fixes that. The effect is that the full TTBR1 address is passed to the task's signal handler (or printed out in the kernel log).

When handling a data abort from EL1, we leave the faulting address intact, as we assume it's either a TTBR1 address or a TTBR0 address with tag 0x00. This is true as far as I'm aware: we don't seem to access a tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to forget about address tags, and code added in the future may not always remember to remove tags from addresses before accessing them. So add tag handling to the EL1 data abort handler as well. This also makes it consistent with the EL0 data abort handler.

Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0")
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
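The way to strip a tag without breaking TTBR1 addresses is to sign-extend from bit 55 instead of zeroing the top byte: bit 55 is clear for TTBR0 addresses (so the tag becomes 0x00) and set for TTBR1 addresses (so the top byte becomes 0xff, the untagged kernel form). A hedged C sketch of that idea (the entry code itself does this in assembly; the helper name is illustrative):

  #include <linux/bitops.h>

  /* sketch: untag an address while preserving TTBR1 addresses intact */
  static inline unsigned long untag_fault_address(unsigned long addr)
  {
	/* replicate bit 55 into bits 56-63 */
	return (unsigned long)sign_extend64(addr, 55);
  }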
2017-06-14  arm64: hw_breakpoint: fix watchpoint matching for tagged pointers  (Kristina Martsenko)
commit 7dcd9dd8cebe9fa626af7e2358d03a37041a70fb upstream. This backport has a small difference from the upstream commit: - The address tag is removed in watchpoint_handler() instead of get_distance_from_watchpoint(), because 4.9 does not have commit fdfeff0f9e3d ("arm64: hw_breakpoint: Handle inexact watchpoint addresses"). Original patch description: When we take a watchpoint exception, the address that triggered the watchpoint is found in FAR_EL1. We compare it to the address of each configured watchpoint to see which one was hit. The configured watchpoint addresses are untagged, while the address in FAR_EL1 will have an address tag if the data access was done using a tagged address. The tag needs to be removed to compare the address to the watchpoints. Currently we don't remove it, and as a result can report the wrong watchpoint as being hit (specifically, always either the highest TTBR0 watchpoint or lowest TTBR1 watchpoint). This patch removes the tag. Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0") Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14  arm64: traps: fix userspace cache maintenance emulation on a tagged pointer  (Kristina Martsenko)
commit 81cddd65b5c82758ea5571a25e31ff6f1f89ff02 upstream. This backport has a minor difference from the upstream commit, as v4.9 did not yet have the refactoring done by commit 8b6e70fccff2 ("arm64: traps: correctly handle MRS/MSR with XZR"). Original patch description: When we emulate userspace cache maintenance in the kernel, we can currently send the task a SIGSEGV even though the maintenance was done on a valid address. This happens if the address has a non-zero address tag, and happens to not be mapped in. When we get the address from a user register, we don't currently remove the address tag before performing cache maintenance on it. If the maintenance faults, we end up in either __do_page_fault, where find_vma can't find the VMA if the address has a tag, or in do_translation_fault, where the tagged address will appear to be above TASK_SIZE. In both cases, the address is not mapped in, and the task is sent a SIGSEGV. This patch removes the tag from the address before using it. With this patch, the fault is handled correctly, the address gets mapped in, and the cache maintenance succeeds. As a second bug, if cache maintenance (correctly) fails on an invalid tagged address, the address gets passed into arm64_notify_segfault, where find_vma fails to find the VMA due to the tag, and the wrong si_code may be sent as part of the siginfo_t of the segfault. With this patch, the correct si_code is sent. Fixes: 7dd01aef0557 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14  arm64: KVM: Allow unaligned accesses at EL2  (Marc Zyngier)
commit 78fd6dcf11468a5a131b8365580d0c613bcc02cb upstream. We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses at EL2, but we're not really prepared to deal with it. So far, this has been unnoticed, until GCC 7 started emitting those (in particular 64bit writes on a 32bit boundary). Since the rest of the kernel is pretty happy about that, let's follow its example and set SCTLR_EL2.A to zero. Modern CPUs don't really care. Reported-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14  arm64: KVM: Preserve RES1 bits in SCTLR_EL2  (Marc Zyngier)
commit d68c1f7fd1b7148dab5fe658321d511998969f2d upstream.

__do_hyp_init has the rather bad habit of ignoring RES1 bits and writing them back as zero. On a v8.0-8.2 CPU, this doesn't do anything bad, but may end up being pretty nasty on future revisions of the architecture. Let's preserve those bits so that we don't have to fix this later on.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
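The fix amounts to a read-modify-write of SCTLR_EL2 instead of writing a constant. Sketched here in C for clarity (the real __do_hyp_init is assembly, and SCTLR_EL2_MASK/SCTLR_EL2_FLAGS stand in for whichever masks the patch defines):

  /* sketch: initialise SCTLR_EL2 without clobbering RES1 bits */
  static void init_sctlr_el2(void)
  {
	u64 val = read_sysreg(sctlr_el2);

	val &= SCTLR_EL2_MASK;   /* touch only the bits we manage */
	val |= SCTLR_EL2_FLAGS;  /* required bits, RES1 included */
	write_sysreg(val, sctlr_el2);
  }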
2017-06-07  bpf, arm64: fix faulty emission of map access in tail calls  (Daniel Borkmann)
[ Upstream commit d8b54110ee944de522ccd3531191f39986ec20f9 ]

Shubham was recently asking on netdev why in the arm64 JIT we don't multiply the index for accessing the tail call map by 8. That led me into testing out the arm64 JIT wrt tail calls, and it turned out I got a NULL pointer dereference on the tail call. The buggy access is at:

  prog = array->ptrs[index];
  if (prog == NULL)
      goto out;

  [...]
  00000060:  d2800e0a  mov  x10, #0x70 // #112
  00000064:  f86a682a  ldr  x10, [x1,x10]
  00000068:  f862694b  ldr  x11, [x10,x2]
  0000006c:  b40000ab  cbz  x11, 0x00000080
  [...]

The code triggering the crash is f862694b. x1 at the time contains the address of the bpf array, x10 offsetof(struct bpf_array, ptrs). Meaning, above we load the pointer to the program at map slot 0 into x10. x10 can then be NULL if the slot is not occupied, which we later on try to access with a user-given offset in x2 that is the map index.

Fix this by emitting the following instead:

  [...]
  00000060:  d2800e0a  mov  x10, #0x70 // #112
  00000064:  8b0a002a  add  x10, x1, x10
  00000068:  d37df04b  lsl  x11, x2, #3
  0000006c:  f86b694b  ldr  x11, [x10,x11]
  00000070:  b40000ab  cbz  x11, 0x00000084
  [...]

This basically adds the offset of ptrs to the base address of the bpf array we got, and we later on access the map with an index * 8 offset relative to that. The tail call map itself is basically one large area with meta data at the head followed by the array of prog pointers. This makes tail calls work again, tested on Cavium ThunderX ARMv8.

Fixes: ddb55992b04d ("arm64: bpf: implement bpf_tail_call() helper")
Reported-by: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
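In C terms, the buggy sequence loaded array->ptrs[0] and then indexed off that (possibly NULL) pointer, while the fix forms the slot address first and scales the index by the pointer size. A hedged model of what the corrected instructions compute (the helper name is illustrative):

  /* sketch: the address computation the fixed JIT emits, in C */
  static struct bpf_prog *tail_call_slot(struct bpf_array *array, u64 index)
  {
	u64 base = (u64)array + offsetof(struct bpf_array, ptrs);

	/* lsl #3: ptrs[] holds 64-bit pointers, so scale the index by 8 */
	return *(struct bpf_prog **)(base + (index << 3));
  }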
2017-05-25  arm64: uaccess: ensure extension of access_ok() addr  (Mark Rutland)
commit a06040d7a791a9177581dcf7293941bd92400856 upstream.

Our access_ok() simply hands its arguments over to __range_ok(), which implicitly assumes that the addr parameter is 64 bits wide. This isn't necessarily true for compat code, which might pass down a 32-bit address parameter. In these cases, we don't have a guarantee that the address has been zero-extended to 64 bits, and the upper bits of the register may contain unknown values, potentially resulting in a spurious failure.

Avoid this by explicitly casting the addr parameter to an unsigned long (as is done on other architectures), ensuring that the parameter is widened appropriately.

Fixes: 0aea86a2176c ("arm64: User access library functions")
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
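A hedged sketch of the widened check (simplified from the 4.9-era macro, which also takes a type argument): the cast forces the compiler to zero-extend a 32-bit compat address before the 64-bit range comparison.

  /* sketch: widen addr before handing it to the 64-bit range check */
  #define access_ok(type, addr, size) \
	__range_ok((unsigned long)(addr), size)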
2017-05-25  arm64: armv8_deprecated: ensure extension of addr  (Mark Rutland)
commit 55de49f9aa17b0b2b144dd2af587177b9aadf429 upstream. Our compat swp emulation holds the compat user address in an unsigned int, which it passes to __user_swpX_asm(). When a 32-bit value is passed in a register, the upper 32 bits of the register are unknown, and we must extend the value to 64 bits before we can use it as a base address. This patch casts the address to unsigned long to ensure it has been suitably extended, avoiding the potential issue, and silencing a related warning from clang. Fixes: bd35a4adc413 ("arm64: Port SWP/SWPB emulation support from arm") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25  arm64: ensure extension of smp_store_release value  (Mark Rutland)
commit 994870bead4ab19087a79492400a5478e2906196 upstream. When an inline assembly operand's type is narrower than the register it is allocated to, the least significant bits of the register (up to the operand type's width) are valid, and any other bits are permitted to contain any arbitrary value. This aligns with the AAPCS64 parameter passing rules. Our __smp_store_release() implementation does not account for this, and implicitly assumes that operands have been zero-extended to the width of the type being stored to. Thus, we may store unknown values to memory when the value type is narrower than the pointer type (e.g. when storing a char to a long). This patch fixes the issue by casting the value operand to the same width as the pointer operand in all cases, which ensures that the value is zero-extended as we expect. We use the same union trickery as __smp_load_acquire and {READ,WRITE}_ONCE() to avoid GCC complaining that pointers are potentially cast to narrower width integers in unreachable paths. A whitespace issue at the top of __smp_store_release() is also corrected. No changes are necessary for __smp_load_acquire(). Load instructions implicitly clear any upper bits of the register, and the compiler will only consider the least significant bits of the register as valid regardless. Fixes: 47933ad41a86 ("arch: Introduce smp_load_acquire(), smp_store_release()") Fixes: 878a84d5a8a1 ("arm64: add missing data types in smp_load_acquire/smp_store_release") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
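The byte-sized case after the fix looks roughly like this, a hedged sketch of one arm of the real switch statement: casting the value operand to unsigned long makes the compiler zero-extend it before it lands in the register used by STLRB, instead of leaving bits 8-63 undefined per the AAPCS64.

  /* sketch: one size case of __smp_store_release() after the fix */
  static inline void store_release_u8(u8 *p, u8 v)
  {
	asm volatile ("stlrb %w1, %0"
		      : "=Q" (*p)
		      /* the cast forces zero-extension of the narrow value */
		      : "r" ((unsigned long)v)
		      : "memory");
  }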
2017-05-25  arm64: xchg: hazard against entire exchange variable  (Mark Rutland)
commit fee960bed5e857eb126c4e56dd9ff85938356579 upstream.

The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a u8 pointer, and thus the hazard only applies to the first byte of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, as demonstrated with the following test case:

  union u {
	unsigned long l;
	unsigned int i[2];
  };

  unsigned long update_char_hazard(union u *u)
  {
	unsigned int a, b;

	a = u->i[1];
	asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
	b = u->i[1];

	return a ^ b;
  }

  unsigned long update_long_hazard(union u *u)
  {
	unsigned int a, b;

	a = u->i[1];
	asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
	b = u->i[1];

	return a ^ b;
  }

The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when using -O2 or above:

  0000000000000000 <update_char_hazard>:
     0:  d2800001  mov  x1, #0x0 // #0
     4:  f9000001  str  x1, [x0]
     8:  d2800000  mov  x0, #0x0 // #0
     c:  d65f03c0  ret

  0000000000000010 <update_long_hazard>:
    10:  b9400401  ldr  w1, [x0,#4]
    14:  d2800002  mov  x2, #0x0 // #0
    18:  f9000002  str  x2, [x0]
    1c:  b9400400  ldr  w0, [x0,#4]
    20:  4a000020  eor  w0, w1, w0
    24:  d65f03c0  ret

This patch fixes the issue by passing an unsigned long pointer into the +Q constraint, as we do for our cmpxchg code. This may hazard against more than is necessary, but this is better than missing a necessary hazard.

Fixes: 305d454aaa29 ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25  arm64: dts: hi6220: Reset the mmc hosts  (Daniel Lezcano)
commit 0fbdf9953b41c28845fe8d05007ff09634ee3000 upstream.

The MMC hosts could be left in an inconsistent or uninitialized state by the firmware. Instead of assuming the firmware did the right thing, let's reset the host controllers.

This change fixes a bug when the mmc2/sdio is initialized, leading to a hung task:

  [  242.704294] INFO: task kworker/7:1:675 blocked for more than 120 seconds.
  [  242.711129] Not tainted 4.9.0-rc8-00017-gcf0251f #3
  [  242.716571] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  242.724435] kworker/7:1 D 0 675 2 0x00000000
  [  242.729973] Workqueue: events_freezable mmc_rescan
  [  242.734796] Call trace:
  [  242.737269] [<ffff00000808611c>] __switch_to+0xa8/0xb4
  [  242.742437] [<ffff000008d07c04>] __schedule+0x1c0/0x67c
  [  242.747689] [<ffff000008d08254>] schedule+0x40/0xa0
  [  242.752594] [<ffff000008d0b284>] schedule_timeout+0x1c4/0x35c
  [  242.758366] [<ffff000008d08e38>] wait_for_common+0xd0/0x15c
  [  242.763964] [<ffff000008d09008>] wait_for_completion+0x28/0x34
  [  242.769825] [<ffff000008a1a9f4>] mmc_wait_for_req_done+0x40/0x124
  [  242.775949] [<ffff000008a1ab98>] mmc_wait_for_req+0xc0/0xf8
  [  242.781549] [<ffff000008a1ac3c>] mmc_wait_for_cmd+0x6c/0x84
  [  242.787149] [<ffff000008a26610>] mmc_io_rw_direct_host+0x9c/0x114
  [  242.793270] [<ffff000008a26aa0>] sdio_reset+0x34/0x7c
  [  242.798347] [<ffff000008a1d46c>] mmc_rescan+0x2fc/0x360
  [ ... ]

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25  arm64: KVM: Do not use stack-protector to compile EL2 code  (Marc Zyngier)
commit cde13b5dad60471886a3bccb4f4134c647c4a9dc upstream.

We like living dangerously. Nothing explicitly forbids stack-protector to be used in the EL2 code, while distributions routinely compile their kernel with it. We're just lucky that no code actually triggers the instrumentation. Let's not try our luck for much longer, and disable stack-protector for code living at EL2.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-20  arm64: KVM: Fix decoding of Rt/Rt2 when trapping AArch32 CP accesses  (Marc Zyngier)
commit c667186f1c01ca8970c785888868b7ffd74e51ee upstream. Our 32bit CP14/15 handling inherited some of the ARMv7 code for handling the trapped system registers, completely missing the fact that the fields for Rt and Rt2 are now 5 bit wide, and not 4... Let's fix it, and provide an accessor for the most common Rt case. Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-14  bpf, arm64: fix jit branch offset related to ldimm64  (Daniel Borkmann)
[ Upstream commit ddc665a4bb4b728b4e6ecec8db1b64efa9184b9c ]

When the instruction right before the branch destination is a 64 bit load immediate, we currently calculate the wrong jump offset in the ctx->offset[] array as we only account one instruction slot for the 64 bit load immediate although it uses two BPF instructions. Fix it up by setting the offset into the right slot after we incremented the index.

Before (ldimm64 test 1):

  [...]
  00000020:  52800007  mov   w7, #0x0 // #0
  00000024:  d2800060  mov   x0, #0x3 // #3
  00000028:  d2800041  mov   x1, #0x2 // #2
  0000002c:  eb01001f  cmp   x0, x1
  00000030:  54ffff82  b.cs  0x00000020
  00000034:  d29fffe7  mov   x7, #0xffff // #65535
  00000038:  f2bfffe7  movk  x7, #0xffff, lsl #16
  0000003c:  f2dfffe7  movk  x7, #0xffff, lsl #32
  00000040:  f2ffffe7  movk  x7, #0xffff, lsl #48
  00000044:  d29dddc7  mov   x7, #0xeeee // #61166
  00000048:  f2bdddc7  movk  x7, #0xeeee, lsl #16
  0000004c:  f2ddddc7  movk  x7, #0xeeee, lsl #32
  00000050:  f2fdddc7  movk  x7, #0xeeee, lsl #48
  [...]

After (ldimm64 test 1):

  [...]
  00000020:  52800007  mov   w7, #0x0 // #0
  00000024:  d2800060  mov   x0, #0x3 // #3
  00000028:  d2800041  mov   x1, #0x2 // #2
  0000002c:  eb01001f  cmp   x0, x1
  00000030:  540000a2  b.cs  0x00000044
  00000034:  d29fffe7  mov   x7, #0xffff // #65535
  00000038:  f2bfffe7  movk  x7, #0xffff, lsl #16
  0000003c:  f2dfffe7  movk  x7, #0xffff, lsl #32
  00000040:  f2ffffe7  movk  x7, #0xffff, lsl #48
  00000044:  d29dddc7  mov   x7, #0xeeee // #61166
  00000048:  f2bdddc7  movk  x7, #0xeeee, lsl #16
  0000004c:  f2ddddc7  movk  x7, #0xeeee, lsl #32
  00000050:  f2fdddc7  movk  x7, #0xeeee, lsl #48
  [...]

Also, add a couple of test cases to make sure JITs pass this test. Tested on Cavium ThunderX ARMv8. The added test cases all pass after the fix.

Fixes: 8eee539ddea0 ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Xi Wang <xi.wang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-14  arm64: Improve detection of user/non-user mappings in set_pte(_at)  (Catalin Marinas)
commit ec663d967b2276448a416406ca59ff247c0c80c5 upstream. Commit cab15ce604e5 ("arm64: Introduce execute-only page access permissions") allowed a valid user PTE to have the PTE_USER bit clear. As a consequence, the pte_valid_not_user() macro in set_pte() was replaced with pte_valid_global() under the assumption that only user pages have the nG bit set. EFI mappings, however, also have the nG bit set and set_pte() wrongly ignores issuing the DSB+ISB. This patch reinstates the pte_valid_not_user() macro and adds the PTE_UXN bit check since all kernel mappings have this bit set. For clarity, pte_exec() is renamed to pte_user_exec() as it only checks for the absence of PTE_UXN. Consequently, the user executable check in set_pte_at() drops the pte_ng() test since pte_user_exec() is sufficient. Fixes: cab15ce604e5 ("arm64: Introduce execute-only page access permissions") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
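A hedged sketch of the reinstated macro (mirroring the check described above, with bit names per arm64's pgtable-prot definitions): a mapping needs the DSB+ISB in set_pte() only when it is valid and not a user mapping, and kernel mappings are distinguishable because they always carry PTE_UXN.

  /* sketch: valid non-user mappings (all kernel mappings have UXN set;
   * execute-only user mappings are valid with both USER and UXN clear)
   */
  #define pte_valid_not_user(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \
	 (PTE_VALID | PTE_UXN))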
2017-05-14  arm64: dts: r8a7795: Mark EthernetAVB device node disabled  (Geert Uytterhoeven)
commit 0d1390ff283f6c38595288e7f74da6829896b8b7 upstream. Device nodes representing I/O devices should be marked disabled in the SoC-specific DTS, and overridden by board-specific DTSes where needed. Fixes: a92843c8a6f8c039 ("arm64: dts: r8a7795: add EthernetAVB device node") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  firmware: qcom: scm: Fix interrupted SCM calls  (Andy Gross)
[ Upstream commit 82bcd087029f6056506ea929f11af02622230901 ] This patch adds a Qualcomm specific quirk to the arm_smccc_smc call. On Qualcomm ARM64 platforms, the SMC call can return before it has completed. If this occurs, the call can be restarted, but it requires using the returned session ID value from the interrupted SMC call. The quirk stores off the session ID from the interrupted call in the quirk structure so that it can be used by the caller. This patch folds in a fix given by Sricharan R: https://lkml.org/lkml/2016/9/28/272 Signed-off-by: Andy Gross <andy.gross@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  arm: kernel: Add SMC structure parameter  (Andy Gross)
[ Upstream commit 680a0873e193bae666439f4b5e32c758e68f114c ] This patch adds a quirk parameter to the arm_smccc_(smc/hvc) calls. The quirk structure allows for specialized SMC operations due to SoC specific requirements. The current arm_smccc_(smc/hvc) is renamed and macros are used instead to specify the standard arm_smccc_(smc/hvc) or the arm_smccc_(smc/hvc)_quirk function. This patch and partial implementation was suggested by Will Deacon. Signed-off-by: Andy Gross <andy.gross@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  arm64: PCI: Add local struct device pointers  (Bjorn Helgaas)
[ Upstream commit dfd1972c2b464c10fb585c4c60b594e09d181a01 ] Use a local "struct device *dev" for brevity. No functional change intended. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  arm64: PCI: Manage controller-specific data on per-controller basis  (Tomasz Nowicki)
[ Upstream commit 093d24a204425f71f4f106b7e62c8df4b456e1cc ] Currently we use one shared global acpi_pci_root_ops structure to keep controller-specific ops. We pass its pointer to acpi_pci_root_create() and associate it with a host bridge instance for good. Such a design implies serious drawback. Any potential manipulation on the single system-wide acpi_pci_root_ops leads to kernel crash. The structure content is not really changing even across multiple host bridges creation; thus it was not an issue so far. In preparation for adding ECAM quirks mechanism (where controller-specific PCI ops may be different for each host bridge) allocate new acpi_pci_root_ops and fill in with data for each bridge. Now it is safe to have different controller-specific info. As a consequence free acpi_pci_root_ops when host bridge is released. No functional changes in this patch. Signed-off-by: Tomasz Nowicki <tn@semihalf.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  arm64: dts: hisi: fix hip06 sas am-max-trans quirk  (John Garry)
[ Upstream commit f65e786604b34d0b599b8c01ecca28be2d746290 ] The string for the am max transmissions quirk property is not correct -> fix it. Signed-off-by: John Garry <john.garry@huawei.com> Reviewed-by: Xiang Chen <chenxiang66@hisilicon.com> Signed-off-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-12  arm64: mm: unaligned access by user-land should be received as SIGBUS  (Victor Kamensky)
commit 09a6adf53d42ca3088fa3fb41f40b768efc711ed upstream.

After commit 52d7523 ("arm64: mm: allow the kernel to handle alignment faults on user accesses"), user-land accesses that produce unaligned exceptions, as in the case of aarch32 ldm/stm/ldrd/strd instructions operating on unaligned memory, were received by user-land as SIGSEGV. That is wrong: they should be reported as SIGBUS, as they were before commit 52d7523.

Change the do_bad_area function to take the signal and code parameters out of the esr value using the fault_info table, so that in the do_alignment_fault case user-land will receive SIGBUS. Wrap access to the fault_info table in an esr_to_fault_info function.

Fixes: 52d7523 (arm64: mm: allow the kernel to handle alignment faults on user accesses)
Signed-off-by: Victor Kamensky <kamensky@cisco.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
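A hedged sketch of the helper introduced here (the low six bits of ESR_EL1 hold the fault status code used to index the table; treat the exact mask as an assumption):

  /* sketch: map an ESR value to its fault_info entry, so do_bad_area()
   * can pick the right signal (SIGBUS for alignment faults) and si_code
   */
  static const struct fault_info *esr_to_fault_info(unsigned int esr)
  {
	return fault_info + (esr & 0x3f);
  }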
2017-03-30  arm64: kaslr: Fix up the kernel image alignment  (Neeraj Upadhyay)
commit afd0e5a876703accb95894f23317a13e2c49b523 upstream.

If the kernel image extends across an alignment boundary, the existing code increases the KASLR offset by the size of the kernel image. The offset is masked after resizing. There are cases where, after masking, we may still have a kernel image extending across the boundary. This eventually results in only a 2MB block getting mapped while creating the page tables, which causes data aborts while accessing unmapped regions during the second relocation (with kaslr offset) in __primary_switch.

To fix this problem, round up the kernel image size by the swapper block size before adding it for correction.

For example, consider the case below, where the kernel image still crosses the 1GB alignment boundary after masking the offset, and which is fixed by rounding up the kernel image size:

  SWAPPER_TABLE_SHIFT = 30
  Swapper using section maps with section size 2MB.
  CONFIG_PGTABLE_LEVELS = 3
  VA_BITS = 39

  _text  : 0xffffff8008080000
  _end   : 0xffffff800aa1b000
  offset : 0x1f35600000
  mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)

  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset after existing correction (before mask) = 0x1f37f9b000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset (after mask) = 0x1f37e00000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  new offset w/ rounding up = 0x1f38000000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
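A hedged sketch of the corrected adjustment (names as in the example above; SWAPPER_BLOCK_SIZE is 2MB when swapper uses section maps):

  /* sketch: if _text and _end fall into different swapper table entries,
   * advance the offset by the image size rounded up to a swapper block,
   * so masking cannot leave the image straddling the boundary again
   */
  if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
      (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
	u64 kimg_sz = (u64)_end - (u64)_text;

	offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE)) & mask;
  }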
2017-03-22  irqchip/gicv3-its: Add workaround for QDF2400 ITS erratum 0065  (Shanker Donthineni)
commit 90922a2d03d84de36bf8a9979d62580102f31a92 upstream.

On Qualcomm Datacenter Technologies QDF2400 SoCs, the ITS hardware implementation uses 16 bytes for the Interrupt Translation Entry (ITE), but reports an incorrect value of 8 bytes in GITS_TYPER.ITTE_size. This might cause kernel memory corruption depending on the number of MSI(x) that are configured and the amount of memory that has been allocated for ITEs in its_create_device(). This patch fixes the potential memory corruption by setting the correct ITE size to 16 bytes.

Cc: stable@vger.kernel.org
Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-03-22  arm64: KVM: VHE: Clear HCR_TGE when invalidating guest TLBs  (Marc Zyngier)
commit 68925176296a8b995e503349200e256674bfe5ac upstream. When invalidating guest TLBs, special care must be taken to actually shoot the guest TLBs and not the host ones if we're running on a VHE system. This is controlled by the HCR_EL2.TGE bit, which we forget to clear before invalidating TLBs. Address the issue by introducing two wrappers (__tlb_switch_to_guest and __tlb_switch_to_host) that take care of both the VTTBR_EL2 and HCR_EL2.TGE switching. Reported-by: Tomasz Nowicki <tnowicki@caviumnetworks.com> Tested-by: Tomasz Nowicki <tnowicki@caviumnetworks.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-03-12  arm64: fix erroneous __raw_read_system_reg() cases  (Mark Rutland)
commit 7d0928f18bf890d2853281f59aba0dd5a46b34f9 upstream. Since it was introduced in commit da8d02d19ffdd201 ("arm64/capabilities: Make use of system wide safe value"), __raw_read_system_reg() has erroneously mapped some sysreg IDs to other registers. For the fields in ID_ISAR5_EL1, our local feature detection will be erroneous. We may spuriously detect that a feature is uniformly supported, or may fail to detect when it actually is, meaning some compat hwcaps may be erroneous (or not enforced upon hotplug). This patch corrects the erroneous entries. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: da8d02d19ffdd201 ("arm64/capabilities: Make use of system wide safe value") Reported-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-03-12  arm64: dma-mapping: Fix dma_mapping_error() when bypassing SWIOTLB  (Robin Murphy)
commit adbe7e26f4257f72817495b9bce114284060b0d7 upstream. When bypassing SWIOTLB on small-memory systems, we need to avoid calling into swiotlb_dma_mapping_error() in exactly the same way as we avoid swiotlb_dma_supported(), because the former also relies on SWIOTLB state being initialised. Under the assumptions for which we skip SWIOTLB, dma_map_{single,page}() will only ever return the DMA-offset-adjusted physical address of the page passed in, thus we can report success unconditionally. Fixes: b67a8b29df7e ("arm64: mm: only initialize swiotlb when necessary") CC: Jisheng Zhang <jszhang@marvell.com> Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
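A hedged sketch of the fix's shape, mirroring how the arch code already guards swiotlb_dma_supported() behind the same initialisation flag:

  /* sketch: when SWIOTLB was never initialised, a mapping is just
   * phys + DMA offset and can never fail, so report success
   */
  static int __swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t addr)
  {
	if (swiotlb)
		return swiotlb_dma_mapping_error(hwdev, addr);
	return 0;
  }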
2017-03-12  arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2  (Marc Zyngier)
commit 8f36ebaf21fdae99c091c67e8b6fab33969f2667 upstream.

When we fault in a page, we flush it to the PoC (Point of Coherency) if the faulting vcpu has its own caches off, so that it can observe the page we just brought in. But if the vcpu has its caches on, we skip that step. Bad things happen when *another* vcpu tries to access that page with its own caches disabled. At that point, there is no guarantee that the data has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted in, no matter what the state of the vcpu is.

Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-02-09  crypto: arm64/aes-blk - honour iv_out requirement in CBC and CTR modes  (Ard Biesheuvel)
commit 11e3b725cfc282efe9d4a354153e99d86a16af08 upstream. Update the ARMv8 Crypto Extensions and the plain NEON AES implementations in CBC and CTR modes to return the next IV back to the skcipher API client. This is necessary for chaining to work correctly. Note that for CTR, this is only done if the request is a round multiple of the block size, since otherwise, chaining is impossible anyway. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  swiotlb: Convert swiotlb_force from int to enum  (Geert Uytterhoeven)
commit ae7871be189cb41184f1e05742b4a99e2c59774d upstream. Convert the flag swiotlb_force from an int to an enum, to prepare for the advent of more possible values. Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64: Fix swiotlb fallback allocation  (Alexander Graf)
commit 524dabe1c68e0bca25ce7b108099e5d89472a101 upstream.

Commit b67a8b29df introduced logic to skip swiotlb allocation when all memory is DMA accessible anyway. While this is a great idea, __dma_alloc still calls swiotlb code unconditionally to allocate memory when there is no CMA memory available. The swiotlb code is called to ensure that we at least try get_free_pages().

Without initialization, swiotlb allocation code tries to access io_tlb_list, which is NULL. That results in a stack trace like this:

  Unable to handle kernel NULL pointer dereference at virtual address 00000000
  [...]
  [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
  [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
  [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
  [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
  [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
  [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
  [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
  [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
  [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
  [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
  [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
  [...]

Thankfully the swiotlb code just learned how to not do allocations with the FORCE_NO option. This patch configures the swiotlb code to use that if we decide not to initialize the swiotlb framework.

Fixes: b67a8b29df ("arm64: mm: only initialize swiotlb when necessary")
Signed-off-by: Alexander Graf <agraf@suse.de>
CC: Jisheng Zhang <jszhang@marvell.com>
CC: Geert Uytterhoeven <geert+renesas@glider.be>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64: mm: avoid name clash in __page_to_voff()  (Oleksandr Andrushchenko)
commit 1c8a946bf3754a59cba1fc373949a8114bfe5aaa upstream. The arm64 __page_to_voff() macro takes a parameter called 'page', and also refers to 'struct page'. Thus, if the value passed in is not called 'page', we'll refer to the wrong struct name (which might not exist). Fixes: 3fa72fe9c614 ("arm64: mm: fix __page_to_voff definition") Acked-by: Mark Rutland <mark.rutland@arm.com> Suggested-by: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> Signed-off-by: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
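The fix amounts to renaming the macro parameter so it can no longer capture the 'page' token inside 'sizeof(struct page)'. A hedged sketch of the fixed definition (the body follows the 4.9-era memory.h; treat it as illustrative):

  /* sketch: with the parameter named 'page', 'sizeof(struct page)' would
   * expand to 'sizeof(struct <argument>)'; a neutral name avoids the clash
   */
  #define __page_to_voff(kaddr) \
	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))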
2017-01-26  arm64/ptrace: Reject attempts to set incomplete hardware breakpoint fields  (Dave Martin)
commit ad9e202aa1ce571b1d7fed969d06f66067f8a086 upstream. We cannot preserve partial fields for hardware breakpoints, because the values written by userspace to the hardware breakpoint registers can't subsequently be recovered intact from the hardware. So, just reject attempts to write incomplete fields with -EINVAL. Fixes: 478fcb2cdb23 ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Avoid uninitialised struct padding in fpr_set()  (Dave Martin)
commit aeb1f39d814b2e21e5e5706a48834bfd553d0059 upstream. This patch adds an explicit __reserved[] field to user_fpsimd_state to replace what was previously unnamed padding. This ensures that data in this region are propagated across assignment rather than being left possibly uninitialised at the destination. Fixes: 60ffc30d5652 ("arm64: Exception handling") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write - 3  (Dave Martin)
commit a672401c00f82e4e19704aff361d9bad18003714 upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 5d220ff9420f ("arm64: Better native ptrace support for compat tasks") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write - 2  (Dave Martin)
commit 9dd73f72f218320c6c90da5f834996e7360dc227 upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 766a85d7bc5d ("arm64: ptrace: add NT_ARM_SYSTEM_CALL regset") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write  (Dave Martin)
commit 9a17b876b573441bfb3387ad55d98bf7184daf9d upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 478fcb2cdb23 ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
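All three "preserve previous registers" fixes in this series share one shape; a hedged sketch (hypothetical handler name, real user_regset_copyin() API): seed the staging copy from the thread's current registers so that any bytes userspace did not supply keep their old values.

  /* sketch: a regset ->set() handler that preserves unwritten registers */
  static int gpr_set(struct task_struct *target,
		     const struct user_regset *regset,
		     unsigned int pos, unsigned int count,
		     const void *kbuf, const void __user *ubuf)
  {
	/* start from the thread's current register values */
	struct user_pt_regs newregs = task_pt_regs(target)->user_regs;
	int ret;

	/* copyin overwrites only the bytes userspace actually provided */
	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &newregs, 0, -1);
	if (ret)
		return ret;

	task_pt_regs(target)->user_regs = newregs;
	return 0;
  }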
2017-01-26  arm64: avoid returning from bad_mode  (Mark Rutland)
commit 7d9e8f71b989230bc613d121ca38507d34ada849 upstream. Generally, taking an unexpected exception should be a fatal event, and bad_mode is intended to cater for this. However, it should be possible to contain unexpected synchronous exceptions from EL0 without bringing the kernel down, by sending a SIGILL to the task. We tried to apply this approach in commit 9955ac47f4ba1c95 ("arm64: don't kill the kernel on a bad esr from el0"), by sending a signal for any bad_mode call resulting from an EL0 exception. However, this also applies to other unexpected exceptions, such as SError and FIQ. The entry paths for these exceptions branch to bad_mode without configuring the link register, and have no kernel_exit. Thus, if we take one of these exceptions from EL0, bad_mode will eventually return to the original user link register value. This patch fixes this by introducing a new bad_el0_sync handler to cater for the recoverable case, and restoring bad_mode to its original state, whereby it calls panic() and never returns. The recoverable case branches to bad_el0_sync with a bl, and returns to userspace via the usual ret_to_user mechanism. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: 9955ac47f4ba1c95 ("arm64: don't kill the kernel on a bad esr from el0") Reported-by: Mark Salter <msalter@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-19  arm64: hugetlb: fix the wrong return value for huge_ptep_set_access_flags  (Huang Shijie)
commit 69d012345a1a32d3f03957f14d972efccc106a98 upstream.

In the current code, @changed always returns the status of the last PTE for a huge page with the contiguous bit set. This is really not what we want: even if only one of the PTEs is changed, we should tell the caller. This patch fixes this issue.

Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
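A hedged sketch of the accumulation the fix introduces (the loop bounds and helper name are simplified stand-ins for the 4.9 contiguous-hugepage code):

  /* sketch: report a change if ANY of the contiguous PTEs changed,
   * not only the last one
   */
  static int contig_set_access_flags(struct vm_area_struct *vma,
				     unsigned long addr, pte_t *ptep,
				     pte_t pte, int dirty, int ncontig)
  {
	int i, changed = 0;

	for (i = 0; i < ncontig; i++, ptep++, addr += PAGE_SIZE)
		changed |= ptep_set_access_flags(vma, addr, ptep, pte, dirty);

	return changed;
  }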
2017-01-19  arm64: hugetlb: remove the wrong pmd check in find_num_contig()  (Huang Shijie)
commit 20156ce2365d61beaa6f5a78a7a789044e0e7acc upstream.

find_num_contig() will return 1 when the pmd is not present. It will cause a kernel dead loop in the following scenario:

  1.) The pmd entry is not present.
  2.) The page fault occurs:
      ... hugetlb_fault() --> hugetlb_no_page() --> set_huge_pte_at()
  3.) set_huge_pte_at() will only set the first PMD entry, since find_num_contig() just returns 1 in this case. So the PMD entries are all empty except the first one.
  4.) When the kernel accesses the address mapped by the second PMD entry, a new page fault occurs:
      ... hugetlb_fault() --> huge_ptep_set_access_flags()
      The second PMD entry is still empty now.
  5.) When the kernel returns, the access causes a page fault again, and the kernel runs as in 4.) above. We have a dead loop from here on.

The dead loop is caught with the 32M hugetlb page (2M PMD + Contiguous bit).

This patch removes the wrong pmd check and fixes this dead loop. It also removes the redundant checks for PGD/PUD in find_num_contig().

Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-19  arm64: hugetlb: fix the wrong address for several functions  (Huang Shijie)
commit 0c2f0afe3582c58efeef93bc57bc07d502132618 upstream. The libhugetlbfs meets several failures since the following functions do not use the correct address: huge_ptep_get_and_clear() huge_ptep_set_access_flags() huge_ptep_set_wrprotect() huge_ptep_clear_flush() This patch fixes the wrong address for them. Signed-off-by: Huang Shijie <shijie.huang@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-15  ARM64: dts: bcm2835: Fix bcm2837 compatible string  (Andreas Färber)
commit 4f24450c6e580ac8591942c8bf65355a06b44635 upstream. bcm2837-rpi-3-b.dts, its only in-tree user, was overriding it as "brcm,bcm2837" already. Fixes: 9d56c22a7861 ("ARM: bcm2835: Add devicetree for the Raspberry Pi 3.") Cc: Stephen Warren <swarren@wwwdotorg.org> Signed-off-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-15  ARM64: dts: bcm2837-rpi-3-b: remove incorrect pwr LED  (Andrea Merello)
commit a44e87b47148c6ee6b78509f47e6a15c0fae890a upstream. We are incorrectly defining the pwr LED, attaching it to a gpio line that is wired to the Wi-Fi SDIO module (which fails due to this). The actual power LED is connected to the GPIO expander, which we don't expose currently. Fixes: 9d56c22a7861 ("ARM: bcm2835: Add devicetree for the Raspberry Pi 3.") Thanks-to: Eric Anholt <eric@anholt.net> [for clarifying we can't control the LED] Signed-off-by: Andrea Merello <andrea.merello@gmail.com> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-15  arm64: dts: mt8173: Fix auxadc node  (Matthias Brugger)
commit a3207d644fb89e5d7d5e01f00c04dcfc6d2d44d5 upstream. The devicetree node for mt8173-auxadc lacks the clock and io-channel-cells property. This leads to a non-working driver. mt6577-auxadc 11001000.auxadc: failed to get auxadc clock mt6577-auxadc: probe of 11001000.auxadc failed with error -2 Fix these fields to get the device up and running. Fixes: 748c7d4de46a ("ARM64: dts: mt8173: Add thermal/auxadc device nodes") Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-ce - fix for big endian  (Ard Biesheuvel)
commit 1803b9a52c4e5a5dbb8a27126f6bc06939359753 upstream.

The core AES cipher implementation that uses ARMv8 Crypto Extensions instructions erroneously loads the round keys as 64-bit quantities, which causes the algorithm to fail when built for big endian. In addition, the key schedule generation routine fails to take endianness into account as well, when combining the input key with the round constants. So fix both issues.

Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-xts-ce: fix for big endian  (Ard Biesheuvel)
commit caf4b9e2b326cc2a5005a5c557274306536ace61 upstream. Emit the XTS tweak literal constants in the appropriate order for a single 128-bit scalar literal load. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/sha1-ce - fix for big endian  (Ard Biesheuvel)
commit ee71e5f1e7d25543ee63a80451871f8985b8d431 upstream. The SHA1 digest is an array of 5 32-bit quantities, so we should refer to them as such in order for this code to work correctly when built for big endian. So replace 16 byte scalar loads and stores with 4x4 vector ones where appropriate. Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-neon - fix for big endian  (Ard Biesheuvel)
commit a2c435cc99862fd3d165e1b66bf48ac72c839c62 upstream.

The AES implementation using pure NEON instructions relies on the generic AES key schedule generation routines, which store the round keys as arrays of 32-bit quantities stored in memory using native endianness. This means we should refer to these round keys using 4x4 loads rather than 16x1 loads. In addition, the ShiftRows tables are loaded using a single scalar load, which is also affected by endianness, so emit these tables in the correct order depending on whether we are building for big endian or not.

Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-ccm-ce: fix for big endian  (Ard Biesheuvel)
commit 56e4e76c68fcb51547b5299e5b66a135935ff414 upstream. The AES-CCM implementation that uses ARMv8 Crypto Extensions instructions refers to the AES round keys as pairs of 64-bit quantities, which causes failures when building the code for big endian. In addition, it byte swaps the input counter unconditionally, while this is only required for little endian builds. So fix both issues. Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>