Pull slave-dmaengine changes from Vinod Koul:
"This brings for slave dmaengine:
- Change dma notification flag to DMA_COMPLETE from DMA_SUCCESS as
dmaengine can only transfer and not verify the validity of dma
transfers
- Bunch of fixes across drivers:
- cppi41 driver fixes from Daniel
- 8 channel freescale dma engine support and updated bindings from
Hongbo
- mxs-dma fixes and cleanup by Markus
- DMAengine updates from Dan:
- Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
- In the course of testing the unmap rework, a collection of
enhancements to dmatest fell out. Notably basic performance statistics, and
fixed / enhanced test control through new module parameters
'run', 'wait', 'noverify', and 'verbose'. Thanks to Andriy and
Linus [Walleij] for their review.
- Testing the raid-related corner cases of the unmap rework triggered
bugs in the recently added 16-source operation support in the ioatdma
driver.
- Some minor fixes / cleanups to mv_xor and ioatdma"
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (99 commits)
dma: mv_xor: Fix mis-usage of mmio 'base' and 'high_base' registers
dma: mv_xor: Remove unneeded NULL address check
ioat: fix ioat3_irq_reinit
ioat: kill msix_single_vector support
raid6test: add new corner case for ioatdma driver
ioatdma: clean up sed pool kmem_cache
ioatdma: fix selection of 16 vs 8 source path
ioatdma: fix sed pool selection
ioatdma: Fix bug in selftest after removal of DMA_MEMSET.
dmatest: verbose mode
dmatest: convert to dmaengine_unmap_data
dmatest: add a 'wait' parameter
dmatest: add basic performance metrics
dmatest: add support for skipping verification and random data setup
dmatest: use pseudo random numbers
dmatest: support xor-only, or pq-only channels in tests
dmatest: restore ability to start test at module load and init
dmatest: cleanup redundant "dmatest: " prefixes
dmatest: replace stored results mechanism, with uniform messages
Revert "dmatest: append verify result to results"
...
Pull ARM updates from Russell King:
"Included in this series are:
1. BE8 (modern big endian) changes for ARM from Ben Dooks
2. big.LITTLE support from Nicolas Pitre and Dave Martin
3. support for LPAE systems with all system memory above 4GB
4. Perf updates from Will Deacon
5. Additional prefetching and other performance improvements from Will
6. Neon-optimised AES implementation from Ard
7. A number of smaller fixes scattered around the place
There is a rather horrid merge conflict in tools/perf - I was never
notified of the conflict because it originally occurred between Will's
tree and other stuff. Consequently I have a resolution which Will
forwarded me, which I'll forward on immediately after sending this
mail.
The other notable thing is I'm expecting some build breakage in the
crypto stuff on ARM only with Ard's AES patches. These were merged
into a stable git branch which others had already pulled, so there's
little I can do about this. The problem is caused because these
patches have a dependency on some code in the crypto git tree - I
tried requesting a branch I can pull to resolve these, and all I got
each time from the crypto people was "we'll revert our patches then"
which would only make things worse since I still don't have the
dependent patches. I've no idea what's going on there or how to
resolve that, and since I can't split these patches from the rest of
this pull request, I'm rather stuck with pushing this as-is or
reverting Ard's patches.
Since it should "come out in the wash" I've left them in - the only
build problems they seem to cause at the moment are with randconfigs,
and it's a new feature anyway. However, if by -rc1 the
dependencies aren't in, I think it'd be best to revert Ard's patches"
I resolved the perf conflict roughly as per the patch sent by Russell,
but there may be some differences. Any errors are likely mine. Let's
see how the crypto issues work out.
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
ARM: 7871/1: amba: Extend number of IRQS
ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
ARM: 7878/1: nommu: Implement dummy early_paging_init()
ARM: 7876/1: clear Thumb-2 IT state on exception handling
ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
ARM: footbridge: fix build warnings for netwinder
ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
ARM: fix misplaced arch_virt_to_idmap()
ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
ARM: 7862/1: pcpu: replace __get_cpu_var_uses
ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
...
Conflicts:
arch/arm/include/asm/atomic.h
arch/arm/include/asm/hardirq.h
arch/arm/kernel/smp.c
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC cleanups from Olof Johansson:
"This branch contains code cleanups, moves and removals for 3.13.
Qualcomm msm targets had a bunch of code removal for legacy non-DT
platforms. Nomadik saw more device tree conversions and cleanup of
old code. Tegra has some code refactoring, etc.
One longish patch series from Sebastian Hesselbarth changes the
init_time hooks and tries to use a generic implementation for most
platforms, since they were all doing more or less the same things.
Finally the "shark" platform is removed in this release. It's been
abandoned for a while and nobody seems to care enough to keep it
around. If someone comes along and wants to resurrect it, the removal
can easily be reverted and code brought back.
Beyond this, mostly a bunch of removals of stale content across the
board, etc"
* tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (79 commits)
ARM: gemini: convert to GENERIC_CLOCKEVENTS
ARM: EXYNOS: remove CONFIG_MACH_EXYNOS[4, 5]_DT config options
ARM: OMAP3: control: add API for setting IVA bootmode
ARM: OMAP3: CM/control: move CM scratchpad save to CM driver
ARM: OMAP3: McBSP: do not access CM register directly
ARM: OMAP3: clock: add API to enable/disable autoidle for a single clock
ARM: OMAP2: CM/PM: remove direct register accesses outside CM code
MAINTAINERS: Add patterns for DTS files for AT91
ARM: at91: remove init_machine() as default is suitable
ARM: at91/dt: split sama5d3 peripheral definitions
ARM: at91/dt: split sam9x5 peripheral definitions
ARM: Remove temporary sched_clock.h header
ARM: clps711x: Use linux/sched_clock.h
MAINTAINERS: Add DTS files to patterns for Samsung platform
ARM: EXYNOS: remove unnecessary header inclusions from exynos4/5 dt machine file
ARM: tegra: fix ARCH_TEGRA_114_SOC select sort order
clk: nomadik: fix missing __init on nomadik_src_init
ARM: drop explicit selection of HAVE_CLK and CLKDEV_LOOKUP
ARM: S3C64XX: Kill CONFIG_PLAT_S3C64XX
ASoC: samsung: Use CONFIG_ARCH_S3C64XX to check for S3C64XX support
...
Commit 6dedcca610c6 ("hotplug, powerpc, x86: Remove
cpu_hotplug_driver_lock()") removes the definition of the
cpu_hotplug_driver_{lock,unlock} APIs, thereby causing a build error.
Replace these calls with {lock,unlock}_device_hotplug().
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
git://git.baserock.org/delta/linux into devel-stable
Conflicts:
arch/arm/kernel/head.S
This series has been well tested and it would be great to get this
merged now.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The edma header defines DMA_COMPLETE; this causes issues now that commit
adfedd9a32e4 has renamed DMA_SUCCESS to DMA_COMPLETE. edma should properly
namespace its defines; a future fix is needed.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
CPU hotplug and kexec rely on smp_ops.cpu_kill(), which is supposed
to wait for the CPU to park or power down, and perform the last
rites (such as disabling clocks etc., where the platform doesn't do
this automatically).
kexec in particular is unsafe without performing this
synchronisation to park secondaries. Without it, the secondaries
might not be parked when kexec trashes the kernel.
There is no generic way to do this synchronisation, so a new mcpm
platform_ops method power_down_finish() is added by this patch.
The new method is mandatory. A platform which provides no way to
detect when CPUs are parked is likely broken.
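For illustration only (this is not code from the patch): a backend might
implement the new callback along the lines below, assuming it keeps the
(cpu, cluster) signature used by the other mcpm callbacks; my_pmu_cpu_is_off()
stands in for whatever power-controller query the platform actually provides.
	#include <linux/types.h>
	#include <linux/errno.h>
	#include <linux/jiffies.h>
	#include <linux/delay.h>

	static bool my_pmu_cpu_is_off(unsigned int cpu, unsigned int cluster);

	/* Plugged into the platform's mcpm_platform_ops as .power_down_finish */
	static int my_power_down_finish(unsigned int cpu, unsigned int cluster)
	{
		unsigned long deadline = jiffies + msecs_to_jiffies(1000);

		do {
			if (my_pmu_cpu_is_off(cpu, cluster))
				return 0;	/* CPU has parked / powered down */
			udelay(10);
		} while (time_before(jiffies, deadline));

		return -ETIMEDOUT;
	}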
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch factors the logical-to-physical CPU translation out of
mcpm_boot_secondary(), so that it can be reused elsewhere.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch proposes to remove the use of the IRQF_DISABLED flag.
It has been a no-op since 2.6.35 and will be removed one day.
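The resulting driver changes are mechanical; an illustrative (made-up)
request_irq() call before and after:
	/* before */
	err = request_irq(irq, my_irq_handler, IRQF_DISABLED, "mydev", dev);

	/* after: the flag is a no-op, so simply drop it */
	err = request_irq(irq, my_irq_handler, 0, "mydev", dev);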
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/cleanup
From Paul Walmsley <paul@pwsan.com> via Tony Lindgren:
Move some of the OMAP2+ CM and System Control Module direct
register accesses into CM- and System Control
Module-specific "drivers" underneath arch/arm/mach-omap2/. This
is a prerequisite for moving this code out of arch/arm/mach-omap2/ into
drivers/.
Basic test logs are available here:
http://www.pwsan.com/omap/testlogs/cm_scm_cleanup_a_v3.13/20131019101809/
* tag 'omap-for-v3.13/cm-scm-cleanup-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP3: control: add API for setting IVA bootmode
ARM: OMAP3: CM/control: move CM scratchpad save to CM driver
ARM: OMAP3: McBSP: do not access CM register directly
ARM: OMAP3: clock: add API to enable/disable autoidle for a single clock
ARM: OMAP2: CM/PM: remove direct register accesses outside CM code
+ Linux 3.12-rc4
Signed-off-by: Olof Johansson <olof@lixom.net>
In big-endian mode, mcpm_entry_point is the first function called on
secondary CPUs, so it must first switch the CPU into big-endian
operation.
[ben.dooks@codethink.co.uk: merge fix patch from Victor into this]
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Pull ARM fixes from Russell King:
"Some more ARM fixes, nothing particularly major here. The biggest
change is to fix the SMP_ON_UP code so that it works with TI's Aegis
cores"
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7851/1: check for number of arguments in syscall_get/set_arguments()
ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices
ARM: 7845/1: sharpsl_param.c: fix invalid memory access for pxa devices
ARM: 7843/1: drop asm/types.h from generic-y
ARM: 7842/1: MCPM: don't explode if invoked without being initialized first
This fixes a regression for kernels after v3.2.
Since commit 72662e01088394577be4a3f14da94cf87bea2591 ("ARM: head.S: only
include __turn_mmu_on in the initial identity mapping"), Zaurus PXA
devices call sharpsl_save_param() during fixup and hang on boot because
memcpy refers to physical addresses that are no longer valid once the
MMU is set up.
Zaurus collie (SA1100) is unaffected (the function is called in
init_machine).
The code was making assumptions; for PXA the virtual address should have
been used all along.
Signed-off-by: Marko Katic <dromede@gmail.com>
Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently mcpm_cpu_power_down() and mcpm_cpu_suspend() trigger BUG()
if mcpm_platform_register() is not called beforehand. This may occur
for many reasons such as some incomplete device tree passed to the kernel
or the like.
Let's be nicer to users and avoid killing the kernel if that happens by
logging a warning and returning to the caller. The mcpm_cpu_suspend()
user is already set to deal with this situation, and so is cpu_die()
invoking mcpm_cpu_die().
The problematic case would have been the B.L switcher's usage of
mcpm_cpu_power_down(), however it has to call mcpm_cpu_power_up() first
which is already set to catch an error resulting from a missing
mcpm_platform_register() call.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
HWMOD removal for MMC is breaking edma_start, as events are being manually
triggered because the unused channel list is not being cleared.
Fix this by reading the "dmas" property from the DT node, if it exists, and
clearing the corresponding bits in the unused channel list whenever the dma
controller used by a device is EDMA. The of_* helpers are used to parse the
arguments in the dmas phandle list.
Also included is a minor clean up of a checkpatch error in old code.
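A sketch of the parsing idea using the standard of_* phandle helpers; the
bitmap name, the "ti,edma3" compatible string and the assumption that the
first dma-cell is the channel number are illustrative here, not quoted from
the patch.
	#include <linux/of.h>
	#include <linux/bitops.h>

	static void edma_clear_unused_from_dt(struct device_node *np,
					      unsigned long *edma_unused)
	{
		struct of_phandle_args dma_spec;
		int i = 0;

		/* Walk every entry in the node's "dmas" phandle list */
		while (!of_parse_phandle_with_args(np, "dmas", "#dma-cells",
						   i++, &dma_spec)) {
			/* Only channels routed to the EDMA controller matter */
			if (of_device_is_compatible(dma_spec.np, "ti,edma3"))
				clear_bit(dma_spec.args[0], edma_unused);
			of_node_put(dma_spec.np);
		}
	}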
Reviewed-by: Sekhar Nori <nsekhar@ti.com>
Reported-by: Balaji T K <balajitk@ti.com>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Nishanth Menon <nm@ti.com>
Cc: Pantel Antoniou <panto@antoniou-consulting.com>
Cc: Jason Kridner <jkridner@beagleboard.org>
Cc: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
When the switcher is active, there is no straightforward way to
figure out which logical CPU a given physical CPU maps to.
This patch provides a function
bL_switcher_get_logical_index(mpidr), which is analogous to
get_logical_index().
This function returns the logical CPU on which the specified
physical CPU is grouped (or -EINVAL if unknown).
If the switcher is inactive or not present, -EUNATCH is returned instead.
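A minimal usage sketch, assuming the int-returning prototype implied by the
error codes above and the existing get_logical_index() helper from
asm/smp_plat.h:
	#include <linux/errno.h>
	#include <asm/bL_switcher.h>
	#include <asm/smp_plat.h>

	static int lookup_logical_cpu(u32 mpidr)
	{
		int cpu = bL_switcher_get_logical_index(mpidr);

		if (cpu == -EUNATCH)			/* switcher inactive or absent */
			cpu = get_logical_index(mpidr);	/* use the generic lookup */

		return cpu;	/* logical CPU, or -EINVAL if the MPIDR is unknown */
	}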
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
This patch exports a bL_switcher_trace_trigger() function to
provide a means for drivers using the trace events to get the
current status when starting a trace session.
Calling this function is equivalent to pinging the trace_trigger
file in sysfs.
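For example, a tracer could (hypothetically) call it when a session starts so
the current mapping appears at the head of the trace; the int return type is
assumed here:
	#include <asm/bL_switcher.h>

	static int my_tracer_session_start(void)
	{
		/* Emits one power:cpu_migrate_current event per online CPU,
		 * just like a write to the trace_trigger sysfs file. */
		return bL_switcher_trace_trigger();
	}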
Signed-off-by: Dave Martin <dave.martin@linaro.org>
When tracing switching, an external tracer needs a way to bootstrap
its knowledge of the logical<->physical CPU mapping.
This patch adds a sysfs attribute trace_trigger. A write to this
attribute will generate a power:cpu_migrate_current event for each
online CPU, indicating the current physical CPU for each logical
CPU.
Activating or deactivating the switcher also generates these
events, so that the tracer knows about the resulting remapping of
affected CPUs.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
This patch adds simple trace events to the b.L switcher code
to allow tracing of CPU migration events.
To make use of the trace events, you will need:
CONFIG_FTRACE=y
CONFIG_ENABLE_DEFAULT_TRACERS=y
The following events are added:
* power:cpu_migrate_begin
* power:cpu_migrate_finish
each with the following data:
u64 timestamp;
u32 cpu_hwid;
power:cpu_migrate_begin occurs immediately before the
switcher-specific migration operations start.
power:cpu_migrate_finish occurs immediately when migration is
completed.
The cpu_hwid field contains the ID fields of the MPIDR.
* For power:cpu_migrate_begin, cpu_hwid is the ID of the outbound
physical CPU (equivalent to (from_phys_cpu,from_phys_cluster)).
* For power:cpu_migrate_finish, cpu_hwid is the ID of the inbound
physical CPU (equivalent to (to_phys_cpu,to_phys_cluster)).
By design, the cpu_hwid field is masked in the same way as the
device tree cpu node reg property, allowing direct correlation to
the DT description of the hardware.
The timestamp is added in order to minimise timing noise. An
accurate system-wide clock should be used for generating this
(hopefully getnstimeofday is appropriate, but it could be changed).
It could be any monotonic shared clock, since the aim is to allow
accurate deltas to be computed. We don't necessarily care about
accurate synchronisation with wall clock time.
In practice, each switch takes place on a single logical CPU,
and the trace infrastructure should guarantee that events are
well-ordered with respect to a single logical CPU.
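For reference, a simplified sketch of what one of these event definitions
looks like with the two fields above; the real definitions carry the usual
include/trace/events/ boilerplate:
	TRACE_EVENT(cpu_migrate_begin,

		TP_PROTO(u64 timestamp, u32 cpu_hwid),

		TP_ARGS(timestamp, cpu_hwid),

		TP_STRUCT__entry(
			__field(u64, timestamp)
			__field(u32, cpu_hwid)
		),

		TP_fast_assign(
			__entry->timestamp = timestamp;
			__entry->cpu_hwid  = cpu_hwid;
		),

		TP_printk("timestamp=%llu cpu_hwid=0x%08x",
			  (unsigned long long)__entry->timestamp,
			  __entry->cpu_hwid)
	);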
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
In some cases, a significant delay may be observed between the moment
a request for a CPU to come up is made and the moment it is ready to
start executing kernel code. This is especially true when a whole
cluster has to be powered up, which may take on the order of milliseconds.
It is therefore a good idea to let the outbound CPU continue to execute
code in the mean time, and be notified when the inbound is ready before
performing the actual switch.
This is achieved by registering a completion block with the appropriate
IPI callback, and programming the sending of an IPI by the early assembly
code prior to entering the main kernel code. Once the IPI is delivered
to the outbound CPU, the completion block is "completed" and the switcher
thread is resumed.
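In outline this is the standard completion pattern (simplified, with
illustrative names, not the exact code):
	#include <linux/completion.h>

	static DECLARE_COMPLETION(inbound_alive);

	/* IPI callback: runs on the outbound CPU once the inbound CPU is ready */
	static void inbound_ready_ipi(void)
	{
		complete(&inbound_alive);
	}

	static void outbound_wait_for_inbound(void)
	{
		/* the completion must be re-initialised before each new switch */
		wait_for_completion(&inbound_alive);
	}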
Signed-off-by: Nicolas Pitre <nico@linaro.org>
This allows poking a predetermined value into a specific address
upon entering the early boot code in bL_head.S.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Let's wait for the inbound CPU to come up and snoop some of the outbound
CPU cache before bringing the outbound CPU down. That should be more
efficient than going down right away.
Possible improvements might involve some monitoring of the CCI event
counters.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
There is no explicit way to know when a switch started via
bL_switch_request() is complete. This can lead to unpredictable
behaviour when the switcher is controlled by a subsystem which
makes dynamic decisions (such as cpufreq).
The CPU PM notifier is not really suitable for signalling
completion, because the CPU could get suspended and resumed for
other, independent reasons while a switch request is in flight.
Adding a whole new notifier for this seems excessive, and may tempt
people to put heavyweight code on this path.
This patch implements a new bL_switch_request_cb() function that
allows for a per-request lightweight callback, private between the
switcher and the caller of bL_switch_request_cb().
Overlapping switches on a single CPU are considered incorrect if
they are requested via bL_switch_request_cb() with a callback (they
will lead to an unpredictable final state without explicit external
synchronisation to force the requests into a particular order).
Queuing requests robustly would be overkill because only one
subsystem should be attempting to control the switcher at any time.
Overlapping requests of this kind will be failed with -EBUSY to
indicate that the second request won't take effect and the
completer will never be called for it.
bL_switch_request() is retained as a wrapper round the new function,
with the old, fire-and-forget semantics. In this case the last request
will always win. The request may still be denied if a previous request
with a completer is still pending.
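Assuming a prototype along the lines of bL_switch_request_cb(cpu,
new_cluster_id, completer, cookie) with a void (*completer)(void *cookie)
callback (inferred from the description above, not quoted from the patch), a
caller could turn it into a synchronous operation like this:
	#include <linux/completion.h>
	#include <asm/bL_switcher.h>

	static void my_switch_done(void *cookie)
	{
		complete(cookie);
	}

	static int my_switch_and_wait(unsigned int cpu, unsigned int cluster)
	{
		DECLARE_COMPLETION_ONSTACK(done);
		int ret;

		ret = bL_switch_request_cb(cpu, cluster, my_switch_done, &done);
		if (ret)
			return ret;	/* e.g. -EBUSY: a completer request is in flight */

		wait_for_completion(&done);
		return 0;
	}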
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Some subsystems will need to respond synchronously to runtime
enabling and disabling of the switcher.
This patch adds a dedicated notifier interface to support such
subsystems. Pre- and post- enable/disable notifications are sent
to registered callbacks, allowing safe transition of non-b.L-
transparent subsystems across these control transitions.
Notifier callbacks may veto switcher (de)activation on pre notifications
only. Post notifications won't revert the action.
If enabling or disabling of the switcher fails after the pre-change
notification has been sent, subsystems which have registered
notifiers can be left in an inappropriate state.
This patch sends a suitable post-change notification on failure,
indicating that the old state has been reestablished.
For example, a failed initialisation will result in the following
sequence:
BL_NOTIFY_PRE_ENABLE
/* switcher initialisation fails */
BL_NOTIFY_POST_DISABLE
It is the responsibility of notified subsystems to respond in an
appropriate way.
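A subsystem might hook in roughly as below; the handler body and the
my_subsys_*() helpers are hypothetical, while the BL_NOTIFY_* actions and the
registration call are the ones described above:
	#include <linux/notifier.h>
	#include <asm/bL_switcher.h>

	static bool my_subsys_quiesce(void);
	static void my_subsys_resume(void);

	static int my_bl_notify(struct notifier_block *nb,
				unsigned long action, void *data)
	{
		switch (action) {
		case BL_NOTIFY_PRE_ENABLE:
		case BL_NOTIFY_PRE_DISABLE:
			/* only pre notifications may veto the transition */
			return my_subsys_quiesce() ? NOTIFY_OK : NOTIFY_BAD;
		case BL_NOTIFY_POST_ENABLE:
		case BL_NOTIFY_POST_DISABLE:
			my_subsys_resume();	/* transition done, or rolled back */
			return NOTIFY_OK;
		default:
			return NOTIFY_DONE;
		}
	}

	static struct notifier_block my_bl_nb = {
		.notifier_call = my_bl_notify,
	};

	/* at init time: bL_switcher_register_notifier(&my_bl_nb); */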
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Some subsystems will need to know for sure whether the switcher is
enabled or disabled during certain critical regions.
This patch provides a simple mutex-based mechanism to discover
whether the switcher is enabled and temporarily lock out further
enable/disable:
* bL_switcher_get_enabled() returns true iff the switcher is
enabled and temporarily inhibits enable/disable.
* bL_switcher_put_enabled() permits enable/disable of the switcher
again after a previous call to bL_switcher_get_enabled().
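Usage sketch: since the description implies a mutex-based mechanism, the put
should follow every get regardless of the value returned.
	#include <asm/bL_switcher.h>

	static void my_critical_region(void)
	{
		bool active = bL_switcher_get_enabled();  /* blocks enable/disable */

		if (active) {
			/* ... work that relies on the switcher staying enabled ... */
		} else {
			/* ... work that relies on it staying disabled ... */
		}

		bL_switcher_put_enabled();	/* allow enable/disable again */
	}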
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
devel-stable
Nicolas Pitre writes:
This is the first part of the patch series adding IKS (In-Kernel
Switcher) support for big.LITTLE system architectures. This consists of
the core patches only. Extra patches to come later will introduce
various optimizations and tracing support.
Those patches were posted on the list a while ago here:
http://news.gmane.org/group/gmane.linux.ports.arm.kernel/thread=253942
The Shark machine sub-architecture (also known as DNARD, the
DIGITAL Network Appliance Reference Design) lacks a maintainer
able to apply and test patches to modernize the architecture.
It is suspected that the current kernel, while it compiles,
does not even boot on this machine. The listed maintainer has
expressed that he will not be able to spend any time on the
maintenance for the coming year.
So let's delete it from the kernel for now. It can always be
resurrected with git revert if maintenance is resumed.
As the VIA82c505 PCI adapter was only used by this
architecture, that gets deleted too.
Cc: arm@kernel.org
Cc: Alexander Schulz <alex@shark-linux.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer code update from Thomas Gleixner:
- armada SoC clocksource overhaul with a trivial merge conflict
- Minor improvements to various SoC clocksource drivers
* 'timers/core' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
clocksource: armada-370-xp: Add detailed clock requirements in devicetree binding
clocksource: armada-370-xp: Get reference fixed-clock by name
clocksource: armada-370-xp: Replace WARN_ON with BUG_ON
clocksource: armada-370-xp: Fix device-tree binding
clocksource: armada-370-xp: Introduce new compatibles
clocksource: armada-370-xp: Use CLOCKSOURCE_OF_DECLARE
clocksource: armada-370-xp: Simplify TIMER_CTRL register access
clocksource: armada-370-xp: Use BIT()
ARM: timer-sp: Set dynamic irq affinity
ARM: nomadik: add dynamic irq flag to the timer
clocksource: sh_cmt: 32-bit control register support
clocksource: em_sti: Convert to devm_* managed helpers
Pull slave-dmaengine updates from Vinod Koul:
"This pull brings:
- Andy's DW driver updates
- Guennadi's sh driver updates
- Pl08x driver fixes from Tomasz & Alban
- Improvements to mmp_pdma by Daniel
- TI EDMA fixes by Joel
- New drivers:
- Hisilicon k3dma driver
- Renesas rcar dma driver
- New API for publishing slave driver capabilities
- Various fixes across the subsystem by Andy, Jingoo, Sachin etc..."
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (94 commits)
dma: edma: Remove limits on number of slots
dma: edma: Leave linked to Null slot instead of DUMMY slot
dma: edma: Find missed events and issue them
ARM: edma: Add function to manually trigger an EDMA channel
dma: edma: Write out and handle MAX_NR_SG at a given time
dma: edma: Setup parameters to DMA MAX_NR_SG at a time
dmaengine: pl330: use dma_set_max_seg_size to set the sg limit
dmaengine: dma_slave_caps: remove sg entries
dma: replace devm_request_and_ioremap by devm_ioremap_resource
dma: ste_dma40: Fix potential null pointer dereference
dma: ste_dma40: Remove duplicate const
dma: imx-dma: Remove redundant NULL check
dma: dmagengine: fix function names in comments
dma: add driver for R-Car HPB-DMAC
dma: k3dma: use devm_ioremap_resource() instead of devm_request_and_ioremap()
dma: imx-sdma: Staticize sdma_driver_data structures
pch_dma: Add MODULE_DEVICE_TABLE
dmaengine: PL08x: Add cyclic transfer support
dmaengine: PL08x: Fix reading the byte count in cctl
dmaengine: PL08x: Add support for different maximum transfer size
...
Events can be missed as a result of splitting a scatter-gather list
and DMA'ing it in batches. Add a helper function to manually trigger a
channel in case any such events are missed.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a cpu goes to a deep idle state where its local timer is shut down,
it notifies the timer framework to use the broadcast timer instead.
Unfortunately, the broadcast device could wake up any CPU, including an
idle one which is not concerned by the wake up at all. This means that,
in the worst case, an idle CPU will wake up just to send an IPI to
another idle cpu.
This patch fixes this for ARM platforms using timer-sp, by setting the
CLOCK_EVT_FEAT_DYNIRQ feature.
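The gist of the change is to advertise the capability on the timer's
clock_event_device so that the broadcast code is free to route the next timer
interrupt to a CPU that actually needs the wakeup (illustrative, not the
exact hunk):
	evt->features = CLOCK_EVT_FEAT_PERIODIC |
			CLOCK_EVT_FEAT_ONESHOT |
			CLOCK_EVT_FEAT_DYNIRQ;	/* new: per-event irq affinity */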
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
In a similar manner to our spinlock implementation, mcpm uses sev to
wake up cores waiting on a lock when the lock is unlocked. In order to
ensure that the final write unlocking the lock is visible, a dsb
instruction is executed immediately prior to the sev.
This patch changes these dsbs to use the -st option, since we only
require that the store unlocking the lock is made visible.
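Expressed as C inline assembly rather than the .S code the patch actually
touches, the weakening amounts to:
	/* before: full barrier ordering all memory accesses */
	asm volatile("dsb" : : : "memory");
	asm volatile("sev");

	/* after: only wait for stores to complete before waking the waiters */
	asm volatile("dsb st" : : : "memory");
	asm volatile("sev");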
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This provides only the basic call, to aid debugging.
*** NOT FOR PRODUCTION ***
Usage:
echo <cpuid>,<clusterid> > /dev/b.L_switcher
where <cpuid> is the logical CPU number, and <clusterid> is 0 for the
first cluster and 1 for the second cluster.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Trying to support both the switcher and CPU hotplug at the same time
is tricky due to ambiguous semantics. So let's at least prevent users
from messing around with those logical CPUs the switcher has removed
and those which were not active when the switcher was activated.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Up to now, the logical CPU was somehow tied to the physical CPU number
within a cluster. This causes problems when forcing the boot CPU to be
different from the first enumerated CPU in the device tree, creating a
discrepancy between logical and physical CPU numbers.
Let's make the pairing completely independent from physical CPU numbers.
Let's keep only those logical CPUs with the same initial CPU cluster to
create a uniform scheduler profile without having to modify any of the
probed topology and compute capacity data. This has the potential to
create a non-contiguous CPU numbering space when the switcher is active,
with a potential impact on buggy user space tools. It is however better
to fix those tools rather than make the switcher code more intrusive.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
When no_bL_switcher is added to the kernel cmdline string, the switcher
won't be activated automatically at boot time. It is still possible
to activate it later with:
echo 1 > /sys/kernel/bL_switcher/active
Signed-off-by: Nicolas Pitre <nico@linaro.org>
The /sys/kernel/bL_switcher/enable file allows enabling or disabling
the switcher by writing 1 or 0 to it, respectively. It is still enabled
by default at boot.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Currently, GIC IDs are hardcoded, making the code dependent on the 4+4 b.L
configuration. Let's allow GIC IDs to be discovered upon switcher
initialization to support other b.L configurations such as the 1+1 one,
or 2+3 as on the VExpress TC2.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
In a regular kernel configuration, all the CPUs are initially available.
But the switcher execution model uses half of them at any time. Instead
of hacking the DTB to remove half of the CPUs, let's remove them at
run time and make sure we still have a working switcher configuration.
This way, the same DTB can be used whether or not the switcher is used.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
We now have a dedicated thread for each logical CPU. That's plenty
of stack space for our needs.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
The workqueues are problematic as they may be contended.
They can't be scheduled with top priority either. Also the optimization
in bL_switch_request() to skip the workqueue entirely when the target CPU
and the calling CPU were the same didn't allow for bL_switch_request() to
be called from atomic context, as might be the case for some cpufreq
drivers.
Let's move to dedicated kthreads instead.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Per-CPU timers that are shut down when a CPU is switched over must be
disabled upon switching and reprogrammed on the inbound CPU by relying on
the clock events management API. The save/restore sequence is executed
with irqs disabled, as mandated by the clock events API.
The next_event is an absolute time, hence, when the inbound CPU resumes,
if the timer has expired the min delta is forced into the tick device to
fire after a few cycles.
This patch adds switching support for clock events that are per-CPU and
have to be migrated when a switch takes place; the cpumask of the clock
event device is checked against the cpumask of the current cpu, and if
they match, the clockevent device mode is saved and it is put in
shutdown mode. Resume code reprograms the tick device accordingly.
Tested on A15/A7 fast models and architected timers.
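A rough sketch of the save/restore idea using the clockevents API of this
era (clockevents_set_mode() and clockevents_program_event()); this is a
simplification with illustrative names, not the patch itself.
	#include <linux/clockchips.h>
	#include <linux/ktime.h>
	#include <linux/smp.h>

	static enum clock_event_mode saved_mode;
	static ktime_t saved_next;
	static bool saved;

	static void switcher_clockevent_save(struct clock_event_device *tdev)
	{
		/* only a per-CPU device bound to this CPU needs migrating */
		if (!cpumask_equal(tdev->cpumask, cpumask_of(smp_processor_id())))
			return;
		saved_mode = tdev->mode;
		saved_next = tdev->next_event;
		saved = true;
		clockevents_set_mode(tdev, CLOCK_EVT_MODE_SHUTDOWN);
	}

	static void switcher_clockevent_restore(struct clock_event_device *tdev)
	{
		if (!saved)
			return;
		saved = false;
		clockevents_set_mode(tdev, saved_mode);
		if (saved_mode != CLOCK_EVT_MODE_ONESHOT)
			return;
		/* if the event already expired, force the minimum delta */
		if (ktime_compare(saved_next, ktime_get()) <= 0)
			saved_next = ktime_add_ns(ktime_get(), tdev->min_delta_ns);
		clockevents_program_event(tdev, saved_next, true);
	}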
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
This is the core code implementing big.LITTLE switcher functionality.
Rationale for this code is available here:
http://lwn.net/Articles/481055/
The main entry point for a switch request is:
void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
If the calling CPU is not the wanted one, this wrapper takes care of
sending the request to the appropriate CPU with schedule_work_on().
At the moment the core switch operation is handled by bL_switch_to()
which must be called on the CPU for which a switch is requested.
What this code does:
* Return early if the current cluster is the wanted one.
* Close the gate in the kernel entry vector for both the inbound
and outbound CPUs.
* Wake up the inbound CPU so it can perform its reset sequence in
parallel up to the kernel entry vector gate.
* Migrate all interrupts in the GIC targeting the outbound CPU
interface to the inbound CPU interface, including SGIs. This is
performed by gic_migrate_target() in drivers/irqchip/irq-gic.c.
* Call cpu_pm_enter() which takes care of flushing the VFP state to
RAM and saving the CPU interface config from the GIC to RAM.
* Modify the cpu_logical_map to refer to the inbound physical CPU.
* Call cpu_suspend() which saves the CPU state (general purpose
registers, page table address) onto the stack and stores the
resulting stack pointer in an array indexed by the updated
cpu_logical_map, then calls the provided shutdown function.
This happens in arch/arm/kernel/sleep.S.
At this point, the provided shutdown function executed by the outbound
CPU ungates the inbound CPU. Therefore the inbound CPU:
* Picks up the saved stack pointer in the array indexed by its MPIDR
in arch/arm/kernel/sleep.S.
* The MMU and caches are re-enabled using the saved state on the
provided stack, just as if this were a resume operation from a
suspended state.
* Then cpu_suspend() returns, although this is on the inbound CPU
rather than the outbound CPU which called it initially.
* The function cpu_pm_exit() is called, whose effect is to restore the
CPU interface state in the GIC using the state previously saved by
the outbound CPU.
* bL_switch_to() exits and normal kernel execution resumes on the
new CPU.
However, the outbound CPU is potentially still running in parallel while
the inbound CPU is resuming normal kernel execution, hence we need
per CPU stack isolation to execute bL_do_switch(). After the outbound
CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down() to:
* Clean its L1 cache.
* If it is the last CPU still alive in its cluster (last man standing),
it also cleans its L2 cache and disables cache snooping from the other
cluster.
* Power down the CPU (or whole cluster).
Code called from bL_do_switch() might end up referencing 'current' for
some reason. However, 'current' is derived from the stack pointer.
With any arbitrary stack, the returned value for 'current' and any
dereferenced values through it are just random garbage which may lead to
segmentation faults.
The active page table during the execution of bL_do_switch() is also a
problem. There is no guarantee that the inbound CPU won't destroy the
corresponding task which would free the attached page table while the
outbound CPU is still running and relying on it.
To solve both issues, we borrow some of the task space belonging to
the init/idle task which, by its nature, is lightly used and therefore
is unlikely to clash with our usage. The init task is also never going
away.
Right now the logical CPU number is assumed to be equivalent to the
physical CPU number within each cluster. The kernel should also be
booted with only one cluster active. These limitations will be lifted
eventually.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into fixes
From Sekhar Nori:
DaVinci fixes for v3.11-rc2
The pull request includes fixes for sparse warnings, defconfig changes to
enable DMA usage on peripherals and removal of a duplicated include file.
* tag 'davinci-fixes-for-v3.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
ARM: davinci: defconfig: enable EDMA driver
ARM: davinci: make file local variables static
ARM: edma: remove duplicated include from edma.c
Signed-off-by: Olof Johansson <olof@lixom.net>
Remove duplicated include.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. There were also two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text"); these are removed here as well.
[1] https://lkml.org/lkml/2013/5/20/589
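The conversion itself is mechanical; an illustrative (made-up) before/after:
	/* before */
	static int __cpuinit my_cpu_notify(struct notifier_block *nb,
					   unsigned long action, void *hcpu)
	{
		return NOTIFY_OK;
	}

	/* after: the annotation is dropped and the code stays in normal .text */
	static int my_cpu_notify(struct notifier_block *nb,
				 unsigned long action, void *hcpu)
	{
		return NOTIFY_OK;
	}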
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer core updates from Thomas Gleixner:
"The timer changes contain:
- posix timer code consolidation and fixes for odd corner cases
- sched_clock implementation moved from ARM to core code to avoid
duplication by other architectures
- alarm timer updates
- clocksource and clockevents unregistration facilities
- clocksource/events support for new hardware
- precise nanoseconds RTC readout (Xen feature)
- generic support for Xen suspend/resume oddities
- the usual lot of fixes and cleanups all over the place
The parts which touch other areas (ARM/XEN) have been coordinated with
the relevant maintainers. This results in a handful of trivial-to-solve
merge conflicts, which we preferred over nasty cross-tree merge
dependencies.
The patches which have been committed in the last few days are bug
fixes plus the posix timer lot. The latter was in akpm's queue and
in -next for quite some time; they just got forgotten and Frederic
collected them at the last minute.
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (59 commits)
hrtimer: Remove unused variable
hrtimers: Move SMP function call to thread context
clocksource: Reselect clocksource when watchdog validated high-res capability
posix-cpu-timers: don't account cpu timer after stopped thread runtime accounting
posix_timers: fix racy timer delta caching on task exit
posix-timers: correctly get dying task time sample in posix_cpu_timer_schedule()
selftests: add basic posix timers selftests
posix_cpu_timers: consolidate expired timers check
posix_cpu_timers: consolidate timer list cleanups
posix_cpu_timer: consolidate expiry time type
tick: Sanitize broadcast control logic
tick: Prevent uncontrolled switch to oneshot mode
tick: Make oneshot broadcast robust vs. CPU offlining
x86: xen: Sync the CMOS RTC as well as the Xen wallclock
x86: xen: Sync the wallclock when the system time is set
timekeeping: Indicate that clock was set in the pvclock gtod notifier
timekeeping: Pass flags instead of multiple bools to timekeeping_update()
xen: Remove clock_was_set() call in the resume path
hrtimers: Support resuming with two or more CPUs online (but stopped)
timer: Fix jiffies wrap behavior of round_jiffies_common()
...