path: root/arch/powerpc
Age  Commit message  Author
2015-08-12  powerpc/slb: Add documentation on runtime patching of SLB encoding  (Anshuman Khandual)
This patch adds some documentation to patch_slb_encoding() explaining how it works. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Update change log and mention the signedness of the immediate] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-12  powerpc/slb: Rename all the 'slot' occurrences to 'entry'  (Anshuman Khandual)
The SLB code uses 'slot' and 'entry' interchangeably, change it to always use 'entry'. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-12  powerpc/slb: Remove a duplicate extern variable  (Anshuman Khandual)
This patch just removes a redundant declaration of the extern variable 'slb_compare_rr_to_size'. It does not change any functionality. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc/ftrace: add powerpc timebase as a trace clock source  (Naveen N. Rao)
Add a new powerpc-specific trace clock using the timebase register, similar to x86-tsc. This gives us:
 - a fast, monotonic, hardware clock source for trace entries, and
 - a clock that can be used to correlate events across cpus as well as across hypervisor and guests.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
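A minimal sketch of how such a trace clock can be wired up, modeled on the existing x86-tsc clock; the exact file layout in the real patch may differ, and the names below simply follow the x86 pattern:

    /* asm/trace_clock.h (sketch) */
    extern u64 notrace trace_clock_ppc_tb(void);

    #define ARCH_TRACE_CLOCKS \
        { trace_clock_ppc_tb, "ppc-tb", .in_ns = 0 },

    /* kernel/trace_clock.c (sketch) */
    #include <asm/time.h>

    u64 notrace trace_clock_ppc_tb(void)
    {
        return get_tb();    /* raw timebase ticks, not nanoseconds */
    }

Once registered, the clock can be selected at runtime with "echo ppc-tb > /sys/kernel/debug/tracing/trace_clock".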
2015-08-06  powerpc/4xx: Fix return value check in hsta_msi_probe()  (Wei Yongjun)
In case of error, the functions platform_get_resource() and kmalloc() return NULL, not ERR_PTR(). The IS_ERR() tests in the return value checks should be replaced with NULL tests. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
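An illustrative before/after of the kind of check being fixed (variable names are made up):

    struct resource *mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);

    /* Wrong: platform_get_resource() never returns ERR_PTR() */
    if (IS_ERR(mem))
        return PTR_ERR(mem);

    /* Right: it returns NULL on failure */
    if (!mem)
        return -ENODEV;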
2015-08-06  powerpc: Remove redundant breaks  (Joe Perches)
break; break; isn't useful. Remove one. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc: pci: use %pR for printing struct resource  (Kevin Hao)
Use %pR to simplify the debug code. This also makes the debug info more readable. Signed-off-by: Kevin Hao <haokexin@gmail.com> [mpe: Unsplit multi-line printk strings] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc/powernv: Invoke opal_cec_reboot2() on unrecoverable HMI.  (Mahesh Salgaonkar)
Invoke new opal_cec_reboot2() call with reboot type OPAL_REBOOT_PLATFORM_ERROR (for unrecoverable HMI interrupts) to inform BMC/OCC about this error, so that BMC can collect relevant data for error analysis and decide what component to de-configure before rebooting. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc/powernv: Invoke opal_cec_reboot2() on unrecoverable machine check errors.  (Mahesh Salgaonkar)
On non-recoverable MCE errors in kernel space, the Linux kernel panics and the system reboots. On BMC based systems opal-prd runs as a daemon in the host, so a kernel crash may prevent opal-prd from detecting and analyzing this MCE error. This may land us in a situation where the faulty memory never gets de-configured and Linux keeps hitting the same MCE error again and again. If this happens in an early stage of kernel initialization, then Linux will keep crashing and rebooting in a loop. This patch fixes the issue by invoking the new opal_cec_reboot2() call with reboot type OPAL_REBOOT_PLATFORM_ERROR to inform the BMC/OCC about this error, so that the BMC can collect relevant data for error analysis and decide what component to de-configure before rebooting. This patch depends on the OPAL patchset posted on the skiboot mailing list at https://lists.ozlabs.org/pipermail/skiboot/2015-July/001771.html that introduces the opal_cec_reboot2() opal call. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc/powernv: Pull all HMI events before panic.  (Mahesh Salgaonkar)
In the event of an unrecovered HMI, the existing code panics as soon as it receives the first unrecovered HMI event, which makes the host report only partial information about HMIs before the panic. There may be more errors which caused the HMI, and hence more HMI events waiting to be pulled by the host. This patch implements logic to pull and display all the HMI events before going down the panic path. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-08-06  powerpc/powernv: display reason for Malfunction Alert HMI.  (Mahesh Salgaonkar)
The V2 version of the HMI event now carries additional information for Malfunction Alerts: it contains error information about CORE and NX checkstops. This patch checks and displays the checkstop reason before panicking. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Stewart Smith <stewart@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-30  powerpc/kernel: Enable seccomp filter  (Michael Ellerman)
This commit enables seccomp filter on powerpc, now that we have all the necessary pieces in place. To support seccomp's desire to modify the syscall return value under some circumstances, we use a different ABI to the ptrace ABI. That is, we use r3 as the syscall return value, and orig_gpr3 is the first syscall parameter. This means the seccomp code, or a ptracer via SECCOMP_RET_TRACE, will see -ENOSYS preloaded in r3. This is identical to the behaviour on x86, and allows seccomp or the ptracer to either leave the -ENOSYS or change it to something else, as well as to reject the syscall or not by modifying r0. If seccomp does not reject the syscall, we restore the register state to match what ptrace and audit expect, ie. r3 is the first syscall parameter again. We do this restore using orig_gpr3, which may have been modified by seccomp, which allows seccomp to modify the first syscall parameter and allow the syscall to proceed. We need to #ifdef the additional handling of r3 for seccomp, so move it all out of line. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
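A C-level sketch of the register juggling described above; the helper name is illustrative, and the real code is split between the syscall entry assembler and do_syscall_trace_enter():

    static int do_seccomp_sketch(struct pt_regs *regs)
    {
        if (!test_thread_flag(TIF_SECCOMP))
            return 0;

        /*
         * ABI presented to seccomp/ptracer: r3 holds the syscall return
         * value (preloaded with -ENOSYS), orig_gpr3 the first argument.
         */
        regs->gpr[3] = -ENOSYS;

        if (__secure_computing())
            return -1;  /* rejected; r3 may already hold a new return value */

        /* Allowed: make r3 the (possibly modified) first argument again. */
        regs->gpr[3] = regs->orig_gpr3;
        return 0;
    }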
2015-07-29  powerpc/kernel: Add SIG_SYS support for compat tasks  (Michael Ellerman)
SIG_SYS was added in commit a0727e8ce513 "signal, x86: add SIGSYS info and make it synchronous." Because we use the asm-generic struct siginfo, we got support for SIG_SYS for free as part of that commit. However there was no compat handling added for powerpc. That means we've been advertising the existence of siginfo._sifields._sigsys to compat tasks, but not actually filling in the fields correctly. Luckily it looks like no one has noticed, presumably because the only user of SIGSYS in the kernel is seccomp filter, which we don't support yet. So before we enable seccomp filter, add compat handling for SIGSYS. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
2015-07-29  powerpc: Change syscall_get_nr() to return int  (Michael Ellerman)
The documentation for syscall_get_nr() in asm-generic says: Note this returns int even on 64-bit machines. Only 32 bits of system call number can be meaningful. If the actual arch value is 64 bits, this truncates to 32 bits so 0xffffffff means -1. However our implementation was never updated to reflect this. Generally it's not important, but there is one case where it matters. For seccomp filter with SECCOMP_RET_TRACE, the tracer will set regs->gpr[0] to -1 to reject the syscall. When the task is a compat task, this means we end up with 0xffffffff in r0 because ptrace will zero extend the 32-bit value. If syscall_get_nr() returns an unsigned long, then a 64-bit kernel will see a positive value in r0 and will incorrectly allow the syscall through seccomp. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
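Roughly, the change amounts to the following (sketch of asm/syscall.h):

    static inline int syscall_get_nr(struct task_struct *task,
                                     struct pt_regs *regs)
    {
        /*
         * Returning int means a compat task's 0xffffffff in r0 is seen
         * as -1, so seccomp still treats the syscall as rejected.
         */
        return regs->gpr[0];
    }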
2015-07-29  powerpc: Use orig_gpr3 in syscall_get_arguments()  (Michael Ellerman)
Currently syscall_get_arguments() is used by syscall tracepoints, and collect_syscall() which is used in some debugging as well as /proc/pid/syscall. The current implementation just copies regs->gpr[3 .. 5] out, which is fine for all the current use cases. When we enable seccomp filter, that will also start using syscall_get_arguments(). However for seccomp filter we want to use r3 as the return value of the syscall, and orig_gpr3 as the first parameter. This will allow seccomp to modify the return value in r3. To support this we need to modify syscall_get_arguments() to return orig_gpr3 instead of r3. This is safe for all uses because orig_gpr3 always contains the r3 value that was passed to the syscall. We store it in the syscall entry path and never modify it. Update syscall_set_arguments() while we're here, even though it's never used. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
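A sketch of the resulting helper; the i/n bookkeeping follows the asm-generic prototype of that era, and details may differ from the actual patch:

    static inline void syscall_get_arguments(struct task_struct *task,
                                             struct pt_regs *regs,
                                             unsigned int i, unsigned int n,
                                             unsigned long *args)
    {
        unsigned long val, mask = -1UL;

    #ifdef CONFIG_COMPAT
        if (test_tsk_thread_flag(task, TIF_32BIT))
            mask = 0xffffffff;          /* compat arguments are 32-bit */
    #endif
        while (n--) {
            /* argument 0 comes from orig_gpr3, not r3 */
            if (n == 0 && i == 0)
                val = regs->orig_gpr3;
            else
                val = regs->gpr[3 + i + n];

            args[n] = val & mask;
        }
    }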
2015-07-29  powerpc: Rework syscall_get_arguments() so there is only one loop  (Michael Ellerman)
Currently syscall_get_arguments() has two loops, one for compat and one for regular tasks. In preparation for the next patch, which changes which registers we use, switch it to have only one loop, so we only have one place to update. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
2015-07-29  powerpc: Don't negate error in syscall_set_return_value()  (Michael Ellerman)
Currently the only caller of syscall_set_return_value() is seccomp filter, which is not enabled on powerpc. This means we have not noticed that our implementation of syscall_set_return_value() negates error, even though the value passed in is already negative. So remove the negation in syscall_set_return_value(), and expect the caller to do it like all other implementations do. Also add a comment about the ccr handling. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
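Roughly what the updated helper looks like (a sketch; the caller now passes an already-negative error such as -EPERM):

    static inline void syscall_set_return_value(struct task_struct *task,
                                                struct pt_regs *regs,
                                                int error, long val)
    {
        /*
         * The syscall exit path also sets CR0.SO, but some code (eg. the
         * signal code) looks at ccr to decide whether r3 is an error, so
         * keep it consistent here too.
         */
        if (error) {
            regs->ccr |= 0x10000000L;
            regs->gpr[3] = error;       /* no extra negation */
        } else {
            regs->ccr &= ~0x10000000L;
            regs->gpr[3] = val;
        }
    }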
2015-07-29  powerpc: Drop unused syscall_get_error()  (Michael Ellerman)
syscall_get_error() is unused, and never has been. It's also probably wrong, as it negates r3 before returning it, but that depends on what the caller is expecting. It also doesn't deal with compat, and doesn't deal with TIF_NOERROR. Although we could fix those, until it has a caller and it's clear what semantics the caller wants it's just untested code. So drop it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
2015-07-29  powerpc/kernel: Change the do_syscall_trace_enter() API  (Michael Ellerman)
The API for calling do_syscall_trace_enter() is currently sensible enough: it just returns the (modified) syscall number. However once we enable seccomp filter it will get more complicated. When seccomp filter runs, the seccomp kernel code (via SECCOMP_RET_ERRNO), or a ptracer (via SECCOMP_RET_TRACE), may reject the syscall and *may* or may *not* set a return value in r3. That means the assembler that calls do_syscall_trace_enter() cannot blindly return ENOSYS; it needs to return ENOSYS only if a return value has not already been set. There is no way to implement that logic with the current API. So change the do_syscall_trace_enter() API to make it deal with the return code juggling, and the assembler can then just return whatever return code it is given. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
2015-07-29  powerpc/kernel: Switch to using MAX_ERRNO  (Michael Ellerman)
Currently on powerpc we have our own #define for the highest (negative) errno value, called _LAST_ERRNO. This is defined to be 516, for reasons which are not clear. The generic code, and x86, use MAX_ERRNO, which is defined to be 4095. In particular seccomp uses MAX_ERRNO to restrict the value that a seccomp filter can return. Currently with the mismatch between _LAST_ERRNO and MAX_ERRNO, a seccomp tracer wanting to return 600, expecting it to be seen as an error, would instead find on powerpc that userspace sees a successful syscall with a return value of 600. To avoid this inconsistency, switch powerpc to use MAX_ERRNO. We are somewhat confident that generic syscalls that can return a non-error value above negative MAX_ERRNO have already been updated to use force_successful_syscall_return(). I have also checked all the powerpc specific syscalls, and believe that none of them expect to return a non-error value between -MAX_ERRNO and -516. So this change should be safe ... Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Kees Cook <keescook@chromium.org>
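In C terms the boundary check becomes the generic one (a sketch; the real check lives in the syscall exit assembler):

    #include <linux/err.h>      /* MAX_ERRNO, IS_ERR_VALUE() */

    /* Only values in [-MAX_ERRNO, -1], ie. the top 4095, are errors. */
    static inline bool syscall_return_is_error(unsigned long r3)
    {
        return IS_ERR_VALUE(r3);
    }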
2015-07-27  powerpc/perf: Change type of the bhrb_users variable  (Anshuman Khandual)
This patch just changes the data type of the bhrb_users variable from int to unsigned int, because it never contains a negative value. Reported-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-25  powerpc/perf/hv-24x7: Simplify extracting counter from result buffer  (Sukadev Bhattiprolu)
Simplify code that extracts a 24x7 counter from the HCALL's result buffer. Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-25  powerpc/perf/hv-24x7: Whitespace - fix parameter alignment  (Sukadev Bhattiprolu)
Fix parameter alignment to be consistent with coding style. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-23  powerpc: Use hardware RNG for arch_get_random_seed_* not arch_get_random_*  (Paul Mackerras)
The hardware RNG on POWER8 and POWER7+ can be relatively slow, since it can only supply one 64-bit value per microsecond. Currently we read it in arch_get_random_long(), but that slows down reading from /dev/urandom since the code in random.c calls arch_get_random_long() for every longword read from /dev/urandom. Since the hardware RNG supplies high-quality entropy on every read, it matches the semantics of arch_get_random_seed_long() better than those of arch_get_random_long(). Therefore this commit makes the code use the POWER8/7+ hardware RNG only for arch_get_random_seed_{long,int} and not for arch_get_random_{long,int}. This won't affect any other PowerPC-based platforms because none of them currently support a hardware RNG. To make it clear that the ppc_md function pointer is used for arch_get_random_seed_*, we rename it from get_random_long to get_random_seed. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
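A sketch of the resulting split in asm/archrandom.h, assuming the renamed ppc_md hook:

    static inline int arch_get_random_long(unsigned long *v)
    {
        return 0;       /* the slow HW RNG is no longer used here */
    }

    static inline int arch_get_random_seed_long(unsigned long *v)
    {
        if (ppc_md.get_random_seed)
            return ppc_md.get_random_seed(v);

        return 0;
    }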
2015-07-23  powerpc/rtas: Introduce rtas_get_sensor_fast() for IRQ handlers  (Thomas Huth)
The EPOW interrupt handler uses rtas_get_sensor(), which in turn uses rtas_busy_delay() to wait for RTAS to become ready if necessary. But rtas_busy_delay() is annotated with might_sleep() and thus may not be used by interrupt handlers like the EPOW handler! This leads to the following BUG when CONFIG_DEBUG_ATOMIC_SLEEP is enabled:

 BUG: sleeping function called from invalid context at arch/powerpc/kernel/rtas.c:496
 in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/1
 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.2.0-rc2-thuth #6
 Call Trace:
 [c00000007ffe7b90] [c000000000807670] dump_stack+0xa0/0xdc (unreliable)
 [c00000007ffe7bc0] [c0000000000e1f14] ___might_sleep+0x134/0x180
 [c00000007ffe7c20] [c00000000002aec0] rtas_busy_delay+0x30/0xd0
 [c00000007ffe7c50] [c00000000002bde4] rtas_get_sensor+0x74/0xe0
 [c00000007ffe7ce0] [c000000000083264] ras_epow_interrupt+0x44/0x450
 [c00000007ffe7d90] [c000000000120260] handle_irq_event_percpu+0xa0/0x300
 [c00000007ffe7e70] [c000000000120524] handle_irq_event+0x64/0xc0
 [c00000007ffe7eb0] [c000000000124dbc] handle_fasteoi_irq+0xec/0x260
 [c00000007ffe7ef0] [c00000000011f4f0] generic_handle_irq+0x50/0x80
 [c00000007ffe7f20] [c000000000010f3c] __do_irq+0x8c/0x200
 [c00000007ffe7f90] [c0000000000236cc] call_do_irq+0x14/0x24
 [c00000007e6f39e0] [c000000000011144] do_IRQ+0x94/0x110
 [c00000007e6f3a30] [c000000000002594] hardware_interrupt_common+0x114/0x180

Fix this issue by introducing a new rtas_get_sensor_fast() function that does not use rtas_busy_delay() - and thus can only be used for sensors that do not cause a BUSY condition - known as "fast" sensors. The EPOW sensor is defined to be "fast" in sPAPR - mpe. Fixes: 587f83e8dd50 ("powerpc/pseries: Use rtas_get_sensor in RAS code") Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
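A sketch of what such a "fast" variant looks like; unlike rtas_get_sensor() it never calls rtas_busy_delay(), so it is safe in interrupt context:

    int rtas_get_sensor_fast(int sensor, int index, int *state)
    {
        int token = rtas_token("get-sensor-state");
        int rc;

        if (token == RTAS_UNKNOWN_SERVICE)
            return -ENOENT;

        rc = rtas_call(token, 2, 2, state, sensor, index);

        /* Fast sensors must never report a busy/extended-delay status. */
        WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
                                    rc <= RTAS_EXTENDED_DELAY_MAX));

        return rc;
    }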
2015-07-23  powerpc/rtas: Replace magic values with defines  (Thomas Huth)
rtas.h already has some nice #defines for RTAS return status codes - let's use them instead of hard-coded "magic" values! Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
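For example (illustrative; the exact call sites differ):

    /* before: magic numbers */
    bool busy = (rc == -2 || (rc >= 9900 && rc <= 9905));

    /* after: self-describing names from rtas.h */
    bool busy = (rc == RTAS_BUSY ||
                 (rc >= RTAS_EXTENDED_DELAY_MIN &&
                  rc <= RTAS_EXTENDED_DELAY_MAX));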
2015-07-21  powerpc/eeh: Dump PHB diag-data for non-existing PE  (Gavin Shan)
When an EEH error is detected on a non-existing PE, including the reserved one, the PE is simply unfrozen without dumping the PHB diag-data, which is useful for locating the root cause of the EEH error. The patch dumps the PHB diag-data when a non-existing PE reports an error. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-21  powerpc/eeh: Fix wrong printed PE number  (Gavin Shan)
On an LE kernel, the non-existing PE number in BE format derived from the skiboot firmware isn't converted to LE format properly, as the following kernel log indicates:

 EEH: Clear non-existing PHB#4-PE#200000000000000

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-21  powerpc/signal: Add helper function to fetch quad word aligned pointer  (Anshuman Khandual)
This patch adds a helper function, 'sigcontext_vmx_regs', which computes the quad word aligned pointer to the 'vmx_reserve' array element in the sigcontext structure, making the code more readable. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Reword comment and fix build for CONFIG_ALTIVEC=n] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
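The helper is essentially the following (sketch, CONFIG_ALTIVEC case):

    static elf_vrreg_t __user *sigcontext_vmx_regs(struct sigcontext __user *sc)
    {
        /* vmx_reserve is only guaranteed 8-byte alignment; VMX regs need 16 */
        return (elf_vrreg_t __user *)
            (((unsigned long)sc->vmx_reserve + 15) & ~0xful);
    }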
2015-07-16  powerpc/signal: Fix confusing header documentation in sigcontext.h  (Anshuman Khandual)
Commit ce48b2100785 "powerpc: Add VSX context save/restore, ptrace and signal support" expanded the 'vmx_reserve' array element to contain 101 double words, but the comment block above it was not updated. Also reorder the constants in the array size declaration to reflect the logic mentioned in the comment block above. This helps explain how the HW registers are represented in the array, but is no functional change. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Reworded change log and added whitespace around +'s] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-16  powerpc/tm: Drop tm_orig_msr from thread_struct  (Anshuman Khandual)
Currently tm_orig_msr is only used during process context switch. There is also ckpt_regs, which saves the checkpointed userspace context. The MSR slot contained in the ckpt_regs structure can be used during process context switch instead of tm_orig_msr, allowing us to drop it from the thread_struct structure. This patch does that change. Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-16  powerpc/powernv: Add poweroff (EPOW, DPO) events support for PowerNV platform  (Vipin K Parashar)
This patch adds support for OPAL EPOW (Environmental and Power Warnings) and DPO (Delayed Power Off) events for the PowerNV platform. These events are generated on FSP (Flexible Service Processor) based systems. EPOW events are generated due to various critical system conditions that require a system shutdown; a few examples of such conditions are high ambient temperature or the system running on UPS power with a low UPS battery. A DPO event is generated in response to an admin-initiated system shutdown request. Upon receipt of EPOW and DPO events, the host kernel invokes orderly_poweroff() to perform a graceful system shutdown. Signed-off-by: Vipin K Parashar <vipin@linux.vnet.ibm.com> Acked-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Unfreeze VF PE on releasing it  (Gavin Shan)
When releasing the PE for an SRIOV VF, the PE is wrongly forced to be frozen, so when the same PE is picked for another VF it won't work. The patch fixes the issue by unfreezing, not freezing, the VF PE when releasing it. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Include VF PE in PELTV of PF PE  (Gavin Shan)
The PELTV of the PF PE should include the VF PE, so that the VF PE is frozen automatically when the PF PE is frozen, but the current code misses this. The patch fixes the PELTV of the PF PE to include the VF PE. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Pick M64 PEs based on BARs  (Gavin Shan)
On PHB3, a PE might be reserved in advance to reflect the M64 segments consumed by the PE, according to the M64 BARs (excluding VF BARs) of the PCI devices included in the PE. The PE should therefore be picked based on those M64 BARs instead of the bridge's M64 windows, which might include VF BARs; otherwise the wrong PE could be picked. The patch calculates the used M64 segments and PE numbers according to the M64 BARs, excluding VF BARs, of the PCI devices in one particular PE, instead of the bridge's M64 windows. Then the right PE number is picked. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Boolean argument for pnv_ioda_setup_bus_PE()  (Gavin Shan)
The patch changes the type of last argument of pnv_ioda_setup_bus_PE() and phb::pick_m64_pe() to boolean. No functional change. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Reserve M64 PEs based on BARs  (Gavin Shan)
On PHB3, some PEs might be reserved in advance to reflect the M64 segments consumed by those PEs. We're currently reserving PEs based on the M64 window of the root port, which might contain VF BARs. The PEs for VFs are allocated dynamically, not reserved based on the consumed M64 segments, so the M64 window of the root port isn't reliable for the task. Instead, we go through the M64 BARs (VF BARs excluded) of the PCI devices under the specified root bus and reserve PEs accordingly, as this patch does. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/powernv: Allow to reserve one PE for multiple times  (Gavin Shan)
PE numbers are currently reserved according to the root port's M64 window, which is aligned to the M64 segment size, so one PE shouldn't be reserved multiple times. Subsequent patches will reserve PE numbers according to the M64 BARs of PCI devices, which aren't aligned to the M64 segment size, meaning one particular PE could be reserved multiple times. The patch allows one PE to be reserved multiple times and prints the warning message at debug level. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc: Remove mtmsrd(), use existing mtmsr()  (Anton Blanchard)
mtmsr() does the right thing on 32bit and 64bit, so use it everywhere. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc: Add macros for the ibm_architecture_vec[] lengths  (Michael Ellerman)
The encoding of the lengths in the ibm_architecture_vec array is "interesting" to say the least. It's non-obvious how the number of bytes we provide relates to the length value. In fact we already got it wrong once, see 11e9ed43ca8a "Fix up ibm_architecture_vec definition". So add some macros to make it (hopefully) clearer. These at least have the property that the integer present in the code is equal to the number of bytes that follows it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
2015-07-13  powerpc/iommu: Support "hybrid" iommu/direct DMA ops for coherent_mask < dma_mask  (Benjamin Herrenschmidt)
This patch adds the ability for the DMA direct ops to fall back to the IOMMU ops for coherent alloc/free if the coherent mask of the device isn't suitable for accessing the direct DMA space and the device also happens to have an active IOMMU table. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
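A rough sketch of the fallback decision for the coherent allocator; the helper names follow the existing powerpc DMA code but are assumptions here, and the real patch also handles freeing and the direct DMA offset:

    static void *dma_direct_alloc_coherent(struct device *dev, size_t size,
                                           dma_addr_t *dma_handle, gfp_t flag,
                                           struct dma_attrs *attrs)
    {
        struct iommu_table *iommu;

        /* Can the coherent mask reach the direct DMA window? */
        if (dma_direct_dma_supported(dev, dev->coherent_dma_mask))
            return __dma_direct_alloc_coherent(dev, size, dma_handle,
                                               flag, attrs);

        /* No: fall back to the IOMMU, if the device has a table. */
        iommu = get_iommu_table_base(dev);
        if (!iommu)
            return NULL;

        return iommu_alloc_coherent(dev, iommu, size, dma_handle,
                                    dev->coherent_dma_mask, flag,
                                    dev_to_node(dev));
    }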
2015-07-13  powerpc/iommu: Cleanup setting of DMA base/offset  (Benjamin Herrenschmidt)
Now that the table and the offset can co-exist, we no longer need to flip/flop, we can just establish both once at boot time. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-13  powerpc/iommu: Remove dma_data union  (Benjamin Herrenschmidt)
To support "hybrid" DMA ops in a subsequent patch, we will need both a direct DMA offset and an iommu pointer. Those are currently exclusive (a union), so change them to be separate fields. While there, also type iommu_table_base properly and make it exist only on CONFIG_PPC64, since it's not referenced on 32-bit at all. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-08  powerpc/perf/24x7: Fix lockdep warning  (Sukadev Bhattiprolu)
The sysfs attributes for the 24x7 counters are dynamically allocated. Initialize the attributes using sysfs_attr_init() to fix the following warning, which occurs when CONFIG_DEBUG_LOCK_VMALLOC=y:

 [    0.346249] audit: initializing netlink subsys (disabled)
 [    0.346284] audit: type=2000 audit(1436295254.340:1): initialized
 [    0.346489] BUG: key c0000000efe90198 not in .data!
 [    0.346491] DEBUG_LOCKS_WARN_ON(1)
 [    0.346502] ------------[ cut here ]------------
 [    0.346504] WARNING: at ../kernel/locking/lockdep.c:3002
 [    0.346506] Modules linked in:

Reported-by: Gustavo Luiz Duarte <gustavold@linux.vnet.ibm.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Tested-by: Gustavo Luiz Duarte <gustavold@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
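Illustrative only (the 24x7 code builds its attribute groups differently); the point is where sysfs_attr_init() goes for a dynamically allocated attribute:

    static struct attribute *alloc_event_attr(const char *name)
    {
        struct perf_pmu_events_attr *attr;

        attr = kzalloc(sizeof(*attr), GFP_KERNEL);
        if (!attr)
            return NULL;

        sysfs_attr_init(&attr->attr.attr);  /* give the dynamic attr a lockdep key */
        attr->attr.attr.name = name;
        attr->attr.attr.mode = 0444;

        return &attr->attr.attr;
    }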
2015-07-07  powerpc/powernv: Fix race in updating core_idle_state  (Shreyas B. Prabhu)
core_idle_state is maintained for each core. It uses bits 0-7 to track whether a thread in the core has entered fastsleep or winkle. The 8th bit is used as a lock bit. The lock bit is set in these 2 scenarios:
 - The thread is the first in the subcore to wake up from sleep/winkle.
 - The thread is the last in the core about to enter sleep/winkle.
While the lock bit is set, if any other thread in the core wakes up, it loops until the lock bit is cleared before proceeding in the wakeup path. This helps prevent race conditions w.r.t. the fastsleep workaround and prevents threads from switching to process context before core/subcore resources are restored. But in the path to sleep/winkle entry, we currently don't check for the lock bit. This exposes us to the following race when running with subcore on:

 First thread in the subcore          Another thread in the same
 waking up                            core entering sleep/winkle

 lwarx   r15,0,r14
 ori     r15,r15,PNV_CORE_IDLE_LOCK_BIT
 stwcx.  r15,0,r14
 [Code to restore subcore state]

                                      lwarx   r15,0,r14
                                      [clear thread bit]
                                      stwcx.  r15,0,r14

 andi.   r15,r15,PNV_CORE_IDLE_THREAD_BITS
 stw     r15,0(r14)

Here, after the thread entering sleep clears its thread bit in core_idle_state, the value is overwritten by the thread waking up. In such cases, when the core enters fastsleep, the code mistakes an idle thread as running. Because of this, the first thread waking up from fastsleep, which is supposed to resync the timebase, skips it, so we can end up having a core with a stale timebase value. This patch fixes the above race by looping on the lock bit even while entering the idle states. Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com> Fixes: 7b54e9f213f76 'powernv/powerpc: Add winkle support for offline cpus' Cc: stable@vger.kernel.org # 3.19+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
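A C-level sketch of the fixed entry protocol (the real code is hand-written assembler in idle_power7.S; the local names are illustrative):

    /* Clear our thread bit, but never while a waking thread holds the lock. */
    static void clear_thread_bit(unsigned long *core_idle_state,
                                 unsigned long thread_mask)
    {
        unsigned long old, new;

        for (;;) {
            old = READ_ONCE(*core_idle_state);
            if (old & PNV_CORE_IDLE_LOCK_BIT)
                continue;                   /* a waker owns the word; retry */

            new = old & ~thread_mask;
            if (cmpxchg(core_idle_state, old, new) == old)  /* lwarx/stwcx. */
                break;
        }
    }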
2015-07-06  powerpc/powernv: Fix opal-elog interrupt handler  (Alistair Popple)
The conversion of opal events to a proper irqchip means that handlers are called until the relevant opal event has been cleared by processing it. Events that queue work should therefore use a threaded handler to mask the event until processing is complete. Signed-off-by: Alistair Popple <alistair@popple.id.au> Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-06  powerpc/ppc4xx_hsta_msi: Include ppc-pci.h to fix reference to hose_list  (Daniel Axtens)
An earlier commit referenced 'hose_list' in sysdev/ppc4xx_hsta_msi.c. hose_list is defined in ppc-pci.h, which was not included in that file. Include it, fixing the build for the akebono defconfig used by the kbuild test robot. Fixes: f2c800aaceb6 ("powerpc/ppc4xx_hsta_msi: Move MSI-related ops to pci_controller_ops") Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-07-06  powerpc: Add plain English description for alignment exception oopses  (Anton Blanchard)
If we take an alignment exception which we cannot fix, the oops currently prints:

 Unable to handle kernel paging request for unknown fault

Let's print something more useful:

 Unable to handle kernel paging request for unaligned access at address 0xc0000000f77bba8f

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
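A sketch of the kind of case this adds to the existing trap-type switch in bad_page_fault(), which already prints the "unknown fault" default:

    /* new case in the existing switch on the trap type */
    case 0x600:
        printk(KERN_ALERT "Unable to handle kernel paging request for "
               "unaligned access at address 0x%08lx\n", regs->dar);
        break;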
2015-07-06  powerpc: Set the correct kernel taint on machine check errors.  (Daniel Axtens)
This means the 'M' flag will work properly when the kernel prints a backtrace. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
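The mechanism is a single call in the machine check path (sketch):

    /* Mark the kernel tainted so later backtraces carry the 'M' flag. */
    add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);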
2015-07-06  powerpc/powernv: Fix vma page prot flags in opal-prd driver  (Vaidyanathan Srinivasan)
The opal-prd driver will mmap() the firmware code/data area as a private mapping to the prd user space daemon. Writes to these pages trigger COW faults. The new COW pages are normal kernel RAM pages accounted by the kernel and are not special. The vma->vm_page_prot value will be used at page fault time for the new COW pages, while the pgprot_t value passed to remap_pfn_range() is used for the initial page table entry. Hence:
 * Do not add _PAGE_SPECIAL in the vma, but only for remap_pfn_range()
 * Also, remap_pfn_range() will add the _PAGE_SPECIAL flag using a pte_mkspecial() call, hence there is no need to specify it in the driver
This fix resolves the page accounting warning shown below:

 BUG: Bad rss-counter state mm:c0000007d34ac600 idx:1 val:19

The above warning is triggered since _PAGE_SPECIAL was incorrectly being set for the normal kernel COW pages. Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
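A sketch of the resulting mmap handler shape (the real driver also validates the requested range against the firmware region):

    static int opal_prd_mmap_sketch(struct file *file, struct vm_area_struct *vma)
    {
        size_t size = vma->vm_end - vma->vm_start;
        pgprot_t prot;

        /*
         * Compute a pgprot for the initial firmware mapping only; leave
         * vma->vm_page_prot untouched so later COW pages stay ordinary RAM,
         * and let remap_pfn_range() mark the PFN-mapped PTEs special itself.
         */
        prot = phys_mem_access_prot(file, vma->vm_pgoff, size,
                                    vma->vm_page_prot);

        return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, size, prot);
    }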