Age | Commit message | Author |
|
* pm-assorted:
PM / QoS: Add pm_qos and dev_pm_qos to events-power.txt
PM / QoS: Add dev_pm_qos_request tracepoints
PM / QoS: Add pm_qos_request tracepoints
PM / QoS: Add pm_qos_update_target/flags tracepoints
PM / QoS: Update Documentation/power/pm_qos_interface.txt
PM / Sleep: Print last wakeup source on failed wakeup_count write
PM / QoS: correct the valid range of pm_qos_class
PM / wakeup: Adjust messaging for wake events during suspend
PM / Runtime: Update .runtime_idle() callback documentation
PM / Runtime: Rework the "runtime idle" helper routine
PM / Hibernate: print physical addresses consistently with other parts of kernel
|
|
Add tracepoints to dev_pm_qos_add_request, dev_pm_qos_update_request,
and dev_pm_qos_remove_request. They are useful for checking the device
name, the dev_pm_qos_request_type, and the value.
Signed-off-by: Sahara <keun-o.park@windriver.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
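For illustration, assuming the new events land in the usual power group under
tracefs (typically mounted at /sys/kernel/debug/tracing), a minimal user-space
helper to enable them might look like this; the paths are assumptions, not
taken from the patch:
    /* Hypothetical helper: enable the new dev_pm_qos_* trace events.
     * The tracefs mount point and event paths are assumptions. */
    #include <stdio.h>

    static int enable_event(const char *name)
    {
            char path[256];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/debug/tracing/events/power/%s/enable", name);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fputs("1", f);
            fclose(f);
            return 0;
    }

    int main(void)
    {
            enable_event("dev_pm_qos_add_request");
            enable_event("dev_pm_qos_update_request");
            enable_event("dev_pm_qos_remove_request");
            return 0;       /* then read .../trace_pipe for name, type and value */
    }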
|
|
Commit a938da06 introduced a useful little log message to tell
users/debuggers which wakeup source aborted a suspend. However,
this message is only printed if the abort happens during the
in-kernel suspend path (after writing /sys/power/state).
The full specification of the /sys/power/wakeup_count facility
allows user-space power managers to double-check whether wakeups
have already happened before they actually try to suspend (e.g.
while they were running user-space pre-suspend hooks), by writing
the last known wakeup_count value to /sys/power/wakeup_count.
This patch changes
the sysfs handler for that node to also print said log message if
that write fails, so that we can figure out the offending wakeup
source for both kinds of suspend aborts.
Signed-off-by: Julius Werner <jwerner@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
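As a rough user-space sketch of the documented wakeup_count handshake (error
handling simplified, not taken from any particular power manager); if the
final write fails, the kernel will now also log the offending wakeup source:
    /* Sketch of the /sys/power/wakeup_count protocol. */
    #include <stdio.h>

    static int try_suspend_prepare(void)
    {
            unsigned long count;
            FILE *f = fopen("/sys/power/wakeup_count", "r+");

            if (!f)
                    return -1;
            if (fscanf(f, "%lu", &count) != 1) {
                    fclose(f);
                    return -1;
            }

            /* ... run user-space pre-suspend hooks here ... */

            /* Writing back a stale count fails if a wakeup happened meanwhile;
             * with this patch the kernel also prints the wakeup source. */
            if (fprintf(f, "%lu", count) < 0 || fflush(f) != 0) {
                    fclose(f);
                    return -1;      /* abort this suspend attempt */
            }
            fclose(f);
            return 0;               /* safe to write mem to /sys/power/state */
    }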
|
|
This adds a new message to the wakeup code indicating in the log
that suspend was cancelled due to a wake event occurring during the
suspend sequence. It also adjusts the message printed in suspend.c
to reflect the potential that a suspend was aborted, as opposed to
a device failing to suspend.
Without these message adjustments one can end up with a kernel log
that says that a device failed to suspend with no actual device
suspend failures, which can be confusing to the log examiner.
Signed-off-by: Bernie Thompson <bhthompson@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The "index" field of struct cpufreq_frequency_table was never an
index and isn't used at all by the cpufreq core. It only is useful
for cpufreq drivers for their internal purposes.
Many people nowadays blindly set it in ascending order with the
assumption that the core will use it, which is a mistake.
Rename it to "driver_data" as that's what its purpose is. All of its
users are updated accordingly.
[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
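A minimal sketch of what a driver table looks like after the rename
(frequencies and values are made up for illustration):
    #include <linux/cpufreq.h>

    /* .driver_data is driver-private (e.g. a register value or table index);
     * the cpufreq core never interprets it. */
    static struct cpufreq_frequency_table my_freq_table[] = {
            { .driver_data = 0, .frequency = 300000 },      /* kHz */
            { .driver_data = 1, .frequency = 600000 },
            { .driver_data = 2, .frequency = 1000000 },
            { .frequency = CPUFREQ_TABLE_END },
    };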
|
|
The "runtime idle" helper routine, rpm_idle(), currently ignores
return values from .runtime_idle() callbacks executed by it.
However, it turns out that many subsystems use
pm_generic_runtime_idle(), which checks the return value of the
driver's callback and executes pm_runtime_suspend() for the device
if that value is 0. If that logic is moved to rpm_idle()
instead, pm_generic_runtime_idle() can be dropped and its users
will not need any .runtime_idle() callbacks any more.
Moreover, the PCI, SCSI, and SATA subsystems' .runtime_idle()
routines, pci_pm_runtime_idle(), scsi_runtime_idle(), and
ata_port_runtime_idle(), respectively, as well as a few drivers'
ones may be simplified if rpm_idle() calls rpm_suspend() after 0 has
been returned by the .runtime_idle() callback executed by it.
To reduce overall code bloat, make the changes described above.
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
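As a rough driver-side sketch under the new convention (driver and field
names are hypothetical):
    /* Returning 0 now lets rpm_idle() proceed to rpm_suspend();
     * returning an error such as -EBUSY keeps the device active. */
    static int foo_runtime_idle(struct device *dev)
    {
            struct foo_priv *priv = dev_get_drvdata(dev);   /* hypothetical */

            if (priv->dma_active)
                    return -EBUSY;  /* not idle yet, do not suspend */

            return 0;               /* the core will suspend the device for us */
    }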
|
|
Fix dev_pm_put_subsys_data() so that it doesn't call kfree() under
a spinlock, and make it return 1 whenever it leaves power.subsys_data
set to NULL (regardless of the reason).
Signed-off-by: Shuah Khan <shuah.kh@samsung.com>
Reviewed-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
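The fix follows the usual pattern of detaching the object under the lock and
freeing it only afterwards; an illustrative sketch (hypothetical function
name, not the exact function body):
    static int foo_put_subsys_data(struct device *dev)      /* hypothetical */
    {
            struct pm_subsys_data *psd;
            int ret;

            spin_lock_irq(&dev->power.lock);
            psd = dev->power.subsys_data;
            if (psd && --psd->refcount == 0)
                    dev->power.subsys_data = NULL;  /* detach under the lock */
            else
                    psd = NULL;                     /* still referenced, keep it */
            ret = !dev->power.subsys_data;          /* 1 whenever it is left NULL */
            spin_unlock_irq(&dev->power.lock);

            kfree(psd);     /* free only after dropping the spinlock */
            return ret;
    }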
|
|
* pm-assorted:
PM / OPP: add documentation to RCU head in struct opp
PM / sleep: invalidate TEST_CPUS and TEST_CORE support for freeze state
PM / sleep: add TEST_PLATFORM support for freeze state
|
|
When genpd prepares for a system suspend it will fetch a runtime
reference for the device. When returning it we now use the
asynchronous runtime PM API. Thus we don't have to wait for the
device to become idle|suspended before we move on and handle the
next device in queue.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
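In other words, a simplified before/after sketch of the change (not the
literal genpd code):
    /* Before: wait synchronously for the device to become idle/suspended. */
    pm_runtime_put_sync(dev);

    /* After: drop the reference asynchronously and move on to the next
     * device in the queue; the idle/suspend happens in the background. */
    pm_runtime_put(dev);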
|
|
For irq safe devices return the runtime reference for the parent
by using the asynchronous runtime PM API. Thus we don't have to
wait for it to become idle|suspended. Instead we can move on and
handle the next device in queue.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Use the asynchronous runtime PM API when returning the runtime
reference for the device after the system resume is completed.
By using the asynchronous runtime PM API we don't have to wait
for each and every device to become idle|suspended. Instead we
can move on and handle the next device in queue.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
commit dde8437 (PM / OPP: RCU reclaim) introduced rcu_head for
struct opp. This aids freeing using kfree_rcu. However, we missed
adding documentation for the same. This generates a kernel-doc warning:
Warning(drivers/base/power/opp.c:70): No description found for
parameter 'head'
Add documentation as appropriate.
[rjw: Changelog]
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
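The fix boils down to a kernel-doc line for the new member, roughly (other
members elided):
    /**
     * struct opp - Generic OPP description structure
     * ...
     * @head:       RCU callback head used for deferred freeing of the OPP
     */
    struct opp {
            /* ... other members elided ... */
            struct rcu_head head;
    };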
|
|
Commit b81ea1b (PM / QoS: Fix concurrency issues and memory leaks in
device PM QoS) put calls to pm_qos_sysfs_add_latency(),
pm_qos_sysfs_add_flags(), pm_qos_sysfs_remove_latency(), and
pm_qos_sysfs_remove_flags() under dev_pm_qos_mtx, which was a
mistake, because it may lead to deadlocks in some situations.
For example, if pm_qos_remote_wakeup_store() is run in parallel
with dev_pm_qos_constraints_destroy(), they may deadlock in the
following way:
======================================================
[ INFO: possible circular locking dependency detected ]
3.9.0-rc4-next-20130328-sasha-00014-g91a3267 #319 Tainted: G W
-------------------------------------------------------
trinity-child6/12371 is trying to acquire lock:
(s_active#54){++++.+}, at: [<ffffffff81301631>] sysfs_addrm_finish+0x31/0x60
but task is already holding lock:
(dev_pm_qos_mtx){+.+.+.}, at: [<ffffffff81f07cc3>] dev_pm_qos_constraints_destroy+0x23/0x250
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (dev_pm_qos_mtx){+.+.+.}:
[<ffffffff811811da>] lock_acquire+0x1aa/0x240
[<ffffffff83dab809>] __mutex_lock_common+0x59/0x5e0
[<ffffffff83dabebf>] mutex_lock_nested+0x3f/0x50
[<ffffffff81f07f2f>] dev_pm_qos_update_flags+0x3f/0xc0
[<ffffffff81f05f4f>] pm_qos_remote_wakeup_store+0x3f/0x70
[<ffffffff81efbb43>] dev_attr_store+0x13/0x20
[<ffffffff812ffdaa>] sysfs_write_file+0xfa/0x150
[<ffffffff8127f2c1>] __kernel_write+0x81/0x150
[<ffffffff812afc2d>] write_pipe_buf+0x4d/0x80
[<ffffffff812af57c>] splice_from_pipe_feed+0x7c/0x120
[<ffffffff812afa25>] __splice_from_pipe+0x45/0x80
[<ffffffff812b14fc>] splice_from_pipe+0x4c/0x70
[<ffffffff812b1538>] default_file_splice_write+0x18/0x30
[<ffffffff812afae3>] do_splice_from+0x83/0xb0
[<ffffffff812afb2e>] direct_splice_actor+0x1e/0x20
[<ffffffff812b0277>] splice_direct_to_actor+0xe7/0x200
[<ffffffff812b15bc>] do_splice_direct+0x4c/0x70
[<ffffffff8127eda9>] do_sendfile+0x169/0x300
[<ffffffff8127ff94>] SyS_sendfile64+0x64/0xb0
[<ffffffff83db7d18>] tracesys+0xe1/0xe6
-> #0 (s_active#54){++++.+}:
[<ffffffff811800cf>] __lock_acquire+0x15bf/0x1e50
[<ffffffff811811da>] lock_acquire+0x1aa/0x240
[<ffffffff81300aa2>] sysfs_deactivate+0x122/0x1a0
[<ffffffff81301631>] sysfs_addrm_finish+0x31/0x60
[<ffffffff812ff77f>] sysfs_hash_and_remove+0x7f/0xb0
[<ffffffff813035a1>] sysfs_unmerge_group+0x51/0x70
[<ffffffff81f068f4>] pm_qos_sysfs_remove_flags+0x14/0x20
[<ffffffff81f07490>] __dev_pm_qos_hide_flags+0x30/0x70
[<ffffffff81f07cd5>] dev_pm_qos_constraints_destroy+0x35/0x250
[<ffffffff81f06931>] dpm_sysfs_remove+0x11/0x50
[<ffffffff81efcf6f>] device_del+0x3f/0x1b0
[<ffffffff81efd128>] device_unregister+0x48/0x60
[<ffffffff82d4083c>] usb_hub_remove_port_device+0x1c/0x20
[<ffffffff82d2a9cd>] hub_disconnect+0xdd/0x160
[<ffffffff82d36ab7>] usb_unbind_interface+0x67/0x170
[<ffffffff81f001a7>] __device_release_driver+0x87/0xe0
[<ffffffff81f00559>] device_release_driver+0x29/0x40
[<ffffffff81effc58>] bus_remove_device+0x148/0x160
[<ffffffff81efd07f>] device_del+0x14f/0x1b0
[<ffffffff82d344f9>] usb_disable_device+0xf9/0x280
[<ffffffff82d34ff8>] usb_set_configuration+0x268/0x840
[<ffffffff82d3a7fc>] usb_remove_store+0x4c/0x80
[<ffffffff81efbb43>] dev_attr_store+0x13/0x20
[<ffffffff812ffdaa>] sysfs_write_file+0xfa/0x150
[<ffffffff8127f71d>] do_loop_readv_writev+0x4d/0x90
[<ffffffff8127f999>] do_readv_writev+0xf9/0x1e0
[<ffffffff8127faba>] vfs_writev+0x3a/0x60
[<ffffffff8127fc60>] SyS_writev+0x50/0xd0
[<ffffffff83db7d18>] tracesys+0xe1/0xe6
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(dev_pm_qos_mtx);
lock(s_active#54);
lock(dev_pm_qos_mtx);
lock(s_active#54);
*** DEADLOCK ***
To avoid that, remove the calls to functions mentioned above from
under dev_pm_qos_mtx and introduce a separate lock to prevent races
between functions that add or remove device PM QoS sysfs attributes
from happening.
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
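Schematically, the pattern of the fix is a dedicated mutex for the sysfs
attribute manipulation, taken outside dev_pm_qos_mtx; a simplified sketch,
not the literal diff:
    static DEFINE_MUTEX(dev_pm_qos_sysfs_mtx);      /* protects sysfs add/remove */

    mutex_lock(&dev_pm_qos_sysfs_mtx);
    pm_qos_sysfs_remove_flags(dev);         /* no longer under dev_pm_qos_mtx */
    mutex_lock(&dev_pm_qos_mtx);
    /* ... tear down the constraints themselves ... */
    mutex_unlock(&dev_pm_qos_mtx);
    mutex_unlock(&dev_pm_qos_sysfs_mtx);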
|
|
Device PM QoS sysfs attributes, if present during device removal,
are removed from within device_pm_remove(), which is too late,
since dpm_sysfs_remove() has already removed the whole attribute
group they belonged to. However, moving the removal of those
attributes to dpm_sysfs_remove() alone is not sufficient, because
in theory they still can be re-added right after being removed by it
(the device's driver is still bound to it at that point).
For this reason, move the entire destruction of device PM QoS
constraints to dpm_sysfs_remove() and make it prevent any new
constraints from being added after it has run. Also, move the
initialization of the power.qos field in struct device to
device_pm_init_common() and drop the no longer needed
dev_pm_qos_constraints_init().
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The current device PM QoS code assumes that certain functions will
never be called in parallel with each other (for example, it is
assumed that dev_pm_qos_expose_flags() won't be called in parallel
with dev_pm_qos_hide_flags() for the same device and analogously
for the latency limit), which may be overly optimistic. Moreover,
dev_pm_qos_expose_flags() and dev_pm_qos_expose_latency_limit()
leak memory in error code paths (req needs to be freed on errors)
and __dev_pm_qos_drop_user_request() forgets to free the request.
To fix the above issues put more things under the device PM QoS
mutex to make them mutually exclusive and add the missing freeing
of memory.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Apply the introduced memalloc_noio_save() and memalloc_noio_restore() to
force memory allocation with no I/O during the runtime_resume/runtime_suspend
callbacks on devices with the 'memalloc_noio' flag set.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Decotigny <david.decotigny@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Oliver Neukum <oneukum@suse.de>
Cc: Jiri Kosina <jiri.kosina@suse.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
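Roughly, the pattern applied around the callback looks like this (a sketch
under a hypothetical wrapper name, not the exact diff):
    /* Sketch: run a runtime PM callback with I/O-less allocations forced. */
    static int rpm_callback_noio(int (*cb)(struct device *), struct device *dev)
    {
            int retval;

            if (dev->power.memalloc_noio) {
                    unsigned int noio_flag;

                    noio_flag = memalloc_noio_save();
                    retval = cb(dev);
                    memalloc_noio_restore(noio_flag);
            } else {
                    retval = cb(dev);
            }
            return retval;
    }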
|
|
Introduce the flag memalloc_noio in 'struct dev_pm_info' to help the PM
core teach mm not to allocate memory with the GFP_KERNEL flag, in order
to avoid a probable deadlock.
As explained in the comment, any GFP_KERNEL allocation inside
runtime_resume() or runtime_suspend() on any one of the devices in the
path from a block or network device to the root device in the device
tree may cause a deadlock. The introduced pm_runtime_set_memalloc_noio()
sets or clears the flag on the devices in that path recursively.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Oliver Neukum <oneukum@suse.de>
Cc: Jiri Kosina <jiri.kosina@suse.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Greg KH <greg@kroah.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Decotigny <david.decotigny@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
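For illustration only (the helper is intended for block/network device
registration paths; the function names below are hypothetical), marking a
device looks like this:
    static int foo_add_disk(struct device *dev)
    {
            /* Propagate the memalloc_noio flag up the ancestor chain so
             * runtime PM callbacks along the path avoid GFP_KERNEL. */
            pm_runtime_set_memalloc_noio(dev, true);
            return 0;
    }

    static void foo_del_disk(struct device *dev)
    {
            pm_runtime_set_memalloc_noio(dev, false);
    }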
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB patches from Greg Kroah-Hartman:
"Here's the big USB merge for 3.9-rc1
Nothing major, lots of gadget fixes, and of course, xhci stuff.
All of this has been in linux-next for a while, with the exception of
the last 3 patches, which were reverts of patches in the tree that
caused problems, they went in yesterday."
* tag 'usb-3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (190 commits)
Revert "USB: EHCI: make ehci-vt8500 a separate driver"
Revert "USB: EHCI: make ehci-orion a separate driver"
Revert "USB: update host controller Kconfig entries"
USB: update host controller Kconfig entries
USB: EHCI: make ehci-orion a separate driver
USB: EHCI: make ehci-vt8500 a separate driver
USB: usb-storage: unusual_devs update for Super TOP SATA bridge
USB: ehci-omap: Fix autoloading of module
USB: ehci-omap: Don't free gpios that we didn't request
USB: option: add Huawei "ACM" devices using protocol = vendor
USB: serial: fix null-pointer dereferences on disconnect
USB: option: add Yota / Megafon M100-1 4g modem
drivers/usb: add missing GENERIC_HARDIRQS dependencies
USB: storage: properly handle the endian issues of idProduct
testusb: remove all mentions of 'usbfs'
usb: gadget: imx_udc: make it depend on BROKEN
usb: omap_control_usb: fix compile warning
ARM: OMAP: USB: Add phy binding information
ARM: OMAP2: MUSB: Specify omap4 has mailbox
ARM: OMAP: devices: create device for usb part of control module
...
|
|
* pm-cpufreq: (55 commits)
cpufreq / intel_pstate: Fix 32 bit build
cpufreq: conservative: Fix typos in comments
cpufreq: ondemand: Fix typos in comments
cpufreq: exynos: simplify .init() for setting policy->cpus
cpufreq: kirkwood: Add a cpufreq driver for Marvell Kirkwood SoCs
cpufreq/x86: Add P-state driver for sandy bridge.
cpufreq_stats: do not remove sysfs files if frequency table is not present
cpufreq: Do not track governor name for scaling drivers with internal governors.
cpufreq: Only call cpufreq_out_of_sync() for driver that implement cpufreq_driver.target()
cpufreq: Retrieve current frequency from scaling drivers with internal governors
cpufreq: Fix locking issues
cpufreq: Create a macro for unlock_policy_rwsem{read,write}
cpufreq: Remove unused HOTPLUG_CPU code
cpufreq: governors: Fix WARN_ON() for multi-policy platforms
cpufreq: ondemand: Replace down_differential tuner with adj_up_threshold
cpufreq / stats: Get rid of CPUFREQ_STATDEVICE_ATTR
cpufreq: Don't check cpu_online(policy->cpu)
cpufreq: add imx6q-cpufreq driver
cpufreq: Don't remove sysfs link for policy->cpu
cpufreq: Remove unnecessary use of policy->shared_type
...
|
|
The PM_SUSPEND_FREEZE state is a general state that
does not need any platform-specific support; it equals
frozen processes + suspended devices + idle processors.
Compared with PM_SUSPEND_MEMORY,
PM_SUSPEND_FREEZE saves less power
because the system is still in a running state.
PM_SUSPEND_FREEZE has less resume latency because it does not
touch the BIOS, and the processors are in an idle state.
Compared with RTPM/idle,
PM_SUSPEND_FREEZE saves more power because
1. the processors have longer sleep time because processes are frozen.
The deeper the C-state the processor supports, the more power saving we get.
2. PM_SUSPEND_FREEZE uses the system suspend code path, so we can get
more power saving from devices that do not have good RTPM support.
This state is useful for
1) platforms that do not have STR, or have a broken STR.
2) platforms that have an extremely low power idle state,
which can be used to replace STR.
The following describes how the PM_SUSPEND_FREEZE state works.
1. echo freeze > /sys/power/state
2. the processes are frozen.
3. all the devices are suspended.
4. all the processors are blocked on a wait queue.
5. all the processors idle and enter a (deep) C-state.
6. an interrupt fires.
7. a processor is woken up and handles the irq.
8. if it is a general event,
a) the irq handler runs and quits.
b) goto step 4.
9. if it is a real wake event, say, a power button press, keyboard touch, or mouse movement,
a) the irq handler runs and activates the wakeup source.
b) wakeup_source_activate() notifies the wait queue.
c) the system starts resuming from PM_SUSPEND_FREEZE.
10. all the devices are resumed.
11. all the processes are unfrozen.
12. the system is back to a working state.
Known Issue:
The wakeup of this new PM_SUSPEND_FREEZE state may behave differently
from the previous suspend states.
Take an ACPI platform for example: there are some GPEs that are only
enabled when the system is in a sleep state, to wake the system back
from S3/S4. But we do not touch these GPEs during the transition to
PM_SUSPEND_FREEZE. This means we may lose some wake events.
On the other hand, as we do not disable all the interrupts during
PM_SUSPEND_FREEZE, we may get some extra "wakeup" interrupts that are
not available for S3/S4.
The patches have been tested on an old Sony laptop, and here are the results:
Average Power:
1. RTPM/idle for half an hour:
14.8W, 12.6W, 14.1W, 12.5W, 14.4W, 13.2W, 12.9W
2. Freeze for half an hour:
11W, 10.4W, 9.4W, 11.3W 10.5W
3. RTPM/idle for three hours:
11.6W
4. Freeze for three hours:
10W
5. Suspend to Memory:
0.5~0.9W
Average Resume Latency:
1. RTPM/idle with a black screen: (From pressing keyboard to screen back)
Less than 0.2s
2. Freeze: (From pressing power button to screen back)
2.50s
3. Suspend to Memory: (From pressing power button to screen back)
4.33s
From the results, we can see that all platforms should benefit from
this patch, even if they do not have Low Power S0.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Export cpufreq helpers in OPP to make the cpufreq-cpu0 and highbank-cpufreq
drivers loadable as modules.
Signed-off-by: Mark Langsdorf <mark.langsdorf@calxeda.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
We are a GPLv2 library, so be clear in the symbols exported as well.
Signed-off-by: Nishanth Menon <nm@ti.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
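In practice this just means switching the export macro, e.g. (symbol chosen
for illustration):
    /* Before */
    EXPORT_SYMBOL(opp_get_voltage);
    /* After */
    EXPORT_SYMBOL_GPL(opp_get_voltage);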
|
|
There's no need to test whether a (delayed) work item is pending
before queueing, flushing or cancelling it, so remove work_pending()
tests used in those cases.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
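The change amounts to dropping redundant guards of this shape (names are
generic, not the exact call sites):
    /* Before: the pending check is unnecessary ... */
    if (!work_pending(&req->work))
            queue_work(wq, &req->work);

    /* ... after: queue_work() is already a no-op when the work is pending. */
    queue_work(wq, &req->work);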
|
|
dev_pm_qos_flags() will be used in the USB core, which can be
compiled as a module. This patch exports it.
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
* pm-sleep:
PM: Move disabling/enabling runtime PM to late suspend/early resume
|
|
Currently, the PM core disables runtime PM for all devices right
after executing subsystem/driver .suspend() callbacks for them
and re-enables it right before executing subsystem/driver .resume()
callbacks for them. This may lead to problems when there are
two devices such that the .suspend() callback executed for one of
them depends on runtime PM working for the other. In that case,
if runtime PM has already been disabled for the second device,
the first one's .suspend() won't work correctly (and analogously
for resume).
To make those issues go away, make the PM core disable runtime PM
for devices right before executing subsystem/driver .suspend_late()
callbacks for them and enable runtime PM for them right after
executing subsystem/driver .resume_early() callbacks for them. This
way the potential conflicts between .suspend_late()/.resume_early()
and their runtime PM counterparts are still prevented from happening,
but the subtle ordering issues related to disabling/enabling runtime
PM for devices during system suspend/resume are much easier to avoid.
Reported-and-tested-by: Jan-Matthias Braun <jan_braun@gmx.net>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@deeprootsystems.com>
Cc: 3.4+ <stable@vger.kernel.org>
|
|
Local variable 'error' in dev_pm_qos_add_ancestor_request() need
not contain error codes only, so rename it to 'ret'.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
* pm-runtime:
base: power - use clk_prepare_enable and clk_prepare_disable
|
|
* pm-opp:
PM / OPP: using kfree_rcu() to simplify the code
PM / OPP: predictable fail results for opp_find* functions, v2
PM / OPP: Export symbols for module usage.
PM / OPP: RCU reclaim
|
|
* pm-qos:
PM / QoS: Handle device PM QoS flags while removing constraints
PM / QoS: Resume device before exposing/hiding PM QoS flags
PM / QoS: Document request manipulation requirement for flags
PM / QoS: Fix a free error in the dev_pm_qos_constraints_destroy()
PM / QoS: Fix the return value of dev_pm_qos_update_request()
PM / ACPI: Take device PM QoS flags into account
PM / Domains: Check device PM QoS flags in pm_genpd_poweroff()
PM / QoS: Make it possible to expose PM QoS device flags to user space
PM / QoS: Introduce PM QoS device flags support
PM / QoS: Prepare struct dev_pm_qos_request for more request types
PM / QoS: Introduce request and constraint data types for PM QoS flags
PM / QoS: Prepare device structure for adding more constraint types
|
|
PM QoS flags have to be handled by dev_pm_qos_constraints_destroy()
in the same way as PM QoS resume latency constraints. That is, if
they have been exposed to user space, they have to be hidden from it
and the list of flags requests has to be flushed before destroying
the device's PM QoS object. Make that happen.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
dev_pm_qos_add_request() can return 0, 1, or a negative error code,
therefore the correct error test is "if (error < 0)." Checking just for
non-zero return code leads to erroneous setting of the req->dev pointer
to NULL, which then leads to a repeated call to
dev_pm_qos_add_ancestor_request() in st1232_ts_irq_handler(). This in turn
leads to an Oops, when the I2C host adapter is unloaded and reloaded again
because of the inconsistent state of its QoS request list.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
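The corrected caller-side check, sketched with the driver's context
simplified (the request value shown is illustrative):
    error = dev_pm_qos_add_ancestor_request(&client->dev,
                                            &ts->low_latency_req, 100);
    if (error < 0)          /* 0 and 1 are both non-error results */
            dev_warn(&client->dev, "PM QoS request failed: %d\n", error);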
|
|
When runtime PM is enabled in DaVinci and the machine migrates to
the common clk framework, clk_enable() gets called without
clk_prepare(). This patch fixes the issue so that runtime PM can
interwork with the common clk framework.
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
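The conversion follows the standard common-clk pattern, roughly (the clock
variable name is generic):
    /* Before: legacy API, no prepare step */
    clk_enable(clk);
    /* ... */
    clk_disable(clk);

    /* After: prepare + enable in one call, as the common clk framework requires */
    clk_prepare_enable(clk);
    /* ... */
    clk_disable_unprepare(clk);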
|
|
The callback function of call_rcu() just calls a kfree(), so we
can use kfree_rcu() instead of call_rcu() + callback function.
The dpatch engine was used to auto-generate this patch.
(https://github.com/weiyj/dpatch)
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
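The simplification looks roughly like this (the callback name is
illustrative):
    /* Before: call_rcu() with a callback that only does kfree() */
    static void opp_free_rcu(struct rcu_head *head)
    {
            struct opp *opp = container_of(head, struct opp, head);

            kfree(opp);
    }
    /* ... */
    call_rcu(&opp->head, opp_free_rcu);

    /* After: kfree_rcu() does the same without the helper */
    kfree_rcu(opp, head);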
|
|
Currently the opp_find* functions return -ENODEV when:
a) they can't find a device (e.g. a request for an OPP search on a
device which was not registered)
b) they can't find a match for the search strategy used
This makes life a little inefficient for users such as devfreq
when making a reasonable judgement before switching search strategies.
So, standardize the return results as follows:
-EINVAL for bad pointer parameters
-ENODEV when the device cannot be found
-ERANGE when the search fails
This has the following benefit for the devfreq implementation:
the search fails when an unregistered device pointer is provided.
This is a trigger to change the search direction and search for
a better fit; however, if we cannot differentiate between a valid
search range failure and an unregistered device, the second search goes
through the same failure return condition. This can be avoided by
appropriate handling of the error return code.
With this change, we also fix devfreq for the improved search
strategy with updated error code.
Signed-off-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
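A devfreq-style caller can now tell the cases apart; a simplified sketch
(hypothetical helper name, locking reduced to the RCU read side the opp_find*
helpers expect):
    static long foo_pick_freq(struct device *dev, unsigned long *freq)
    {
            struct opp *opp;
            long ret;

            rcu_read_lock();        /* callers are expected to hold the RCU read lock */
            opp = opp_find_freq_ceil(dev, freq);
            if (IS_ERR(opp) && PTR_ERR(opp) == -ERANGE)
                    /* valid device, no match: flip the search direction */
                    opp = opp_find_freq_floor(dev, freq);
            ret = IS_ERR(opp) ? PTR_ERR(opp) : (long)opp_get_freq(opp);
            rcu_read_unlock();

            return ret;     /* frequency, or -EINVAL / -ENODEV / -ERANGE */
    }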
|
|
Export the OPP functions for use by driver modules.
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Kevin Hilman <khilman@ti.com>
Cc: linux-pm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
[nm@ti.com: expansion of functions exported]
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Liam Girdwood <lrg@ti.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
synchronize_rcu() blocks the caller of opp_enable/disable
for a complete grace period. This blocking duration prevents
any intensive use of the functions. Replace synchronize_rcu()
with call_rcu(), which will call our function for freeing the old
opp element.
The duration of opp_enable() and opp_disable() will no longer
depend on the grace period.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Since dev_pm_qos_add_request(), dev_pm_qos_update_request() and
dev_pm_qos_remove_request() for PM QoS flags should not be invoked
when the device is in RPM_SUSPENDED, add pm_runtime_get_sync() and pm_runtime_put()
around these functions in dev_pm_qos_expose_flags() and
dev_pm_qos_hide_flags().
[rjw: Modified the subject and changelog to better reflect the code
changes made.]
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
In fact, the callers of dev_pm_qos_add_request(),
dev_pm_qos_update_request() and dev_pm_qos_remove_request() for
requests of type DEV_PM_QOS_FLAGS need to ensure that the target
device is not RPM_SUSPENDED before using any of these functions (or
be prepared for the new PM QoS flags to take effect after the device
has been resumed). Document this in their kerneldoc comments.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
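A caller honouring that requirement would look roughly like this
(illustrative only):
    /* Keep the device out of RPM_SUSPENDED while changing its QoS flags. */
    pm_runtime_get_sync(dev);
    ret = dev_pm_qos_update_request(&req, PM_QOS_FLAG_NO_POWER_OFF);
    pm_runtime_put(dev);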
|
|
A wrong pointer, to struct dev_pm_qos->latency, was freed where the
pointer to struct dev_pm_qos itself should have been. This patch fixes
the issue.
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Commit e39473d (PM / QoS: Make it possible to expose PM QoS device
flags to user space) introduced __dev_pm_qos_update_request() to be
called internally by dev_pm_qos_update_request(), but forgot to make
the latter actually use the return value of the former. Fix this
mistake.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Make the generic PM domains pm_genpd_poweroff() function take
device PM QoS flags into account when deciding whether or not to
remove power from the domain.
After this change the routine will return -EBUSY without executing
the domain's .power_off() callback if there is at least one PM QoS
flags request for at least one device in the domain and at least of
those request has at least one of the NO_POWER_OFF and REMOTE_WAKEUP
flags set.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: mark gross <markgross@thegnar.org>
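Conceptually, the new check inside the power-off path amounts to something
like this (a simplified sketch, not the literal genpd code):
    /* Keep the domain powered if any device in it objects via PM QoS flags. */
    list_for_each_entry(pdd, &genpd->dev_list, list_node) {
            if (dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_NO_POWER_OFF
                                            | PM_QOS_FLAG_REMOTE_WAKEUP)
                            != PM_QOS_FLAGS_NONE)
                    return -EBUSY;
    }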
|
|
Define two device PM QoS flags, PM_QOS_FLAG_NO_POWER_OFF
and PM_QOS_FLAG_REMOTE_WAKEUP, and introduce routines
dev_pm_qos_expose_flags() and dev_pm_qos_hide_flags() allowing the
caller to expose those two flags to user space or to hide them
from it, respectively.
After the flags have been exposed, user space will see two
additional sysfs attributes, pm_qos_no_power_off and
pm_qos_remote_wakeup, under the device's /sys/devices/.../power/
directory. Then, writing 1 to one of them will update the
PM QoS flags request owned by user space so that the corresponding
flag is requested to be set. In turn, writing 0 to one of them
will cause the corresponding flag in the user space's request to
be cleared (however, the owners of the other PM QoS flags requests
for the same device may still request the flag to be set and it
may be effectively set even if user space doesn't request that).
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Acked-by: mark gross <markgross@thegnar.org>
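A driver (or bus code) wanting these knobs visible would do something like
this (illustrative only):
    /* Creates power/pm_qos_no_power_off and power/pm_qos_remote_wakeup. */
    ret = dev_pm_qos_expose_flags(dev, PM_QOS_FLAG_NO_POWER_OFF
                                       | PM_QOS_FLAG_REMOTE_WAKEUP);
    if (ret)
            dev_warn(dev, "cannot expose PM QoS flags: %d\n", ret);

    /* ... later, e.g. on unbind ... */
    dev_pm_qos_hide_flags(dev);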
|
|
Modify the device PM QoS core code to support PM QoS flags requests.
First, add a new field of type struct pm_qos_flags called "flags"
to struct dev_pm_qos for representing the list of PM QoS flags
requests for the given device. Accordingly, add a new "type" field
to struct dev_pm_qos_request (along with an enum for representing
request types) and a new member called "flr" to its data union for
representing flags requests.
Second, modify dev_pm_qos_add_request(), dev_pm_qos_update_request(),
the internal routine apply_constraint() used by them and their
existing callers to cover flags requests as well as latency
requests. In particular, dev_pm_qos_add_request() gets a new
argument called "type" for specifying the type of a request to be
added.
Finally, introduce two routines, __dev_pm_qos_flags() and
dev_pm_qos_flags(), allowing their callers to check which PM QoS
flags have been requested for the given device (the caller is
supposed to pass the mask of flags to check as the routine's
second argument and examine its return value for the result).
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: mark gross <markgross@thegnar.org>
|
|
The subsequent patches will use struct dev_pm_qos_request for
representing both latency requests and flags requests. To make that
easier, put the node member of struct dev_pm_qos_request (under the
name "pnode") into a union called "data" that will represent the
request's value and list node depending on its type.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: mark gross <markgross@thegnar.org>
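After this change the request structure looks roughly like this (abridged;
the type field and the flags member arrive in the follow-up patches):
    struct dev_pm_qos_request {
            union {
                    struct plist_node pnode;        /* latency requests */
                    /* a flags request member is added by a later patch */
            } data;
            struct device *dev;
    };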
|
|
Currently struct dev_pm_info contains only one PM QoS constraints
pointer reserved for latency requirements. Since one more device
constraints type (i.e. flags) will be necessary, introduce a new
structure, struct dev_pm_qos, that eventually will contain all of
the available device PM QoS constraints and replace the "constraints"
pointer in struct dev_pm_info with a pointer to the new structure
called "qos".
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
|
|
If pm_genpd_attach_cpuidle() fails, we leak the memory stored in 'cpu_data'.
Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael J Wysocki:
- Improved system suspend/resume and runtime PM handling for the SH
TMU, CMT and MTU2 clock event devices (also used by ARM/shmobile).
- Generic PM domains framework extensions related to cpuidle support
and domain objects lookup using names.
- ARM/shmobile power management updates including improved support for
the SH7372's A4S power domain containing the CPU core.
- cpufreq changes related to AMD CPUs support from Matthew Garrett,
Andre Przywara and Borislav Petkov.
- cpu0 cpufreq driver from Shawn Guo.
- cpufreq governor fixes related to the relaxing of limit from Michal
Pecio.
- OMAP cpufreq updates from Axel Lin and Richard Zhao.
- cpuidle ladder governor fixes related to the disabling of states from
Carsten Emde and me.
- Runtime PM core updates related to the interactions with the system
suspend core from Alan Stern and Kevin Hilman.
- Wakeup sources modification allowing more helper functions to be
called from interrupt context from John Stultz and additional
diagnostic code from Todd Poynor.
- System suspend error code path fix from Feng Hong.
Fixed up conflicts in cpufreq/powernow-k8 that stemmed from the
workqueue fixes conflicting fairly badly with the removal of support for
hardware P-state chips. The changes were independent but somewhat
intertwined.
* tag 'pm-for-3.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
Revert "PM QoS: Use spinlock in the per-device PM QoS constraints code"
PM / Runtime: let rpm_resume() succeed if RPM_ACTIVE, even when disabled, v2
cpuidle: rename function name "__cpuidle_register_driver", v2
cpufreq: OMAP: Check IS_ERR() instead of NULL for omap_device_get_by_hwmod_name
cpuidle: remove some empty lines
PM: Prevent runtime suspend during system resume
PM QoS: Use spinlock in the per-device PM QoS constraints code
PM / Sleep: use resume event when call dpm_resume_early
cpuidle / ACPI : move cpuidle_device field out of the acpi_processor_power structure
ACPI / processor: remove pointless variable initialization
ACPI / processor: remove unused function parameter
cpufreq: OMAP: remove loops_per_jiffy recalculate for smp
sections: fix section conflicts in drivers/cpufreq
cpufreq: conservative: update frequency when limits are relaxed
cpufreq / ondemand: update frequency when limits are relaxed
properly __init-annotate pm_sysrq_init()
cpufreq: Add a generic cpufreq-cpu0 driver
PM / OPP: Initialize OPP table from device tree
ARM: add cpufreq transiton notifier to adjust loops_per_jiffy for smp
cpufreq: Remove support for hardware P-state chips from powernow-k8
...
|
|
* pm-qos:
Revert "PM QoS: Use spinlock in the per-device PM QoS constraints code"
|
|
This reverts commit fc2fb3a075c206927d3fbad251dae82ba82ccf2d.
The problem with the above commit is that it makes the device PM QoS
core code hold a spinlock around blocking_notifier_call_chain()
invocations.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|