author     Kan Liang <kan.liang@intel.com>             2015-10-02 09:04:34 (GMT)
committer  Arnaldo Carvalho de Melo <acme@redhat.com>  2015-10-02 20:07:55 (GMT)
commit     19afd10410957b1c808c2c49a88e6dd8b23aa894 (patch)
tree       7c00b3eb48b51e28c3d80b56bdd9c2e6066a4b1b /tools
parent     9f065194e2a505bb6fd23946b410a0036e9de2ca (diff)
download   linux-19afd10410957b1c808c2c49a88e6dd8b23aa894.tar.xz
perf stat: Reduce min --interval-print to 10ms
The --interval-print parameter was limited to 100ms. However, a 10ms interval is needed, for example, to do sophisticated bandwidth analysis using uncore events. Testing shows that the overhead of system-wide uncore monitoring with a 10ms interval is only ~2%, so this patch reduces the minimum allowed interval-print to 10ms.

But 10ms may not work well in all cases. For example, when the number of cpus/threads is very large, the overhead of system-wide core event monitoring could be high. To handle this issue, a warning is displayed when interval-print is set between 10ms and 100ms, so users can make a decision according to their specific case.

  # perf stat -e uncore_imc_1/cas_count_read/ -a --interval-print 10 -- sleep 1
  print interval < 100ms. The overhead percentage could be high in some cases. Please proceed with caution.
  #           time             counts unit events
       0.010200451               0.10 MiB  uncore_imc_1/cas_count_read/
       0.020475117               0.02 MiB  uncore_imc_1/cas_count_read/
       0.030692800               0.01 MiB  uncore_imc_1/cas_count_read/
       0.040948161               0.02 MiB  uncore_imc_1/cas_count_read/
       0.051159564               0.00 MiB  uncore_imc_1/cas_count_read/

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1443776674-42511-1-git-send-email-kan.liang@intel.com
[ Added warning about overhead when using sub 100ms intervals to the man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
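The new bounds check is small enough to restate on its own. Below is a minimal standalone sketch of the same logic in plain C: the perf-internal pr_err()/pr_warning() helpers are replaced with fprintf() so it compiles outside the tree, and check_interval()/main() are hypothetical names added for illustration, not part of the patch.

  #include <stdbool.h>
  #include <stdio.h>

  /* Mirror of the new check in cmd_stat(): reject intervals below 10ms,
   * warn (but proceed) for intervals in the 10ms..99ms range. */
  static bool check_interval(unsigned int interval_ms)
  {
          if (interval_ms && interval_ms < 100) {
                  if (interval_ms < 10) {
                          fprintf(stderr, "print interval must be >= 10ms\n");
                          return false; /* cmd_stat() does 'goto out' here */
                  }
                  fprintf(stderr, "print interval < 100ms. "
                          "The overhead percentage could be high in some cases. "
                          "Please proceed with caution.\n");
          }
          return true;
  }

  int main(void)
  {
          /* 0 means interval printing is disabled and is always accepted. */
          printf("%d %d %d %d\n", check_interval(0), check_interval(5),
                 check_interval(50), check_interval(1000));
          return 0;
  }

Run against the patch's boundaries this prints '1 0 1 1': only the 5ms case is rejected, and the 50ms case additionally emits the new warning on stderr.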
Diffstat (limited to 'tools')
-rw-r--r--  tools/perf/Documentation/perf-stat.txt |  5 +++--
-rw-r--r--  tools/perf/builtin-stat.c              | 13 +++++++++----
2 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 47469ab..4e074a6 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -128,8 +128,9 @@ perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- m
-I msecs::
--interval-print msecs::
-	Print count deltas every N milliseconds (minimum: 100ms)
-	example: perf stat -I 1000 -e cycles -a sleep 5
+	Print count deltas every N milliseconds (minimum: 10ms)
+	The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals. Use with caution.
+	example: 'perf stat -I 1000 -e cycles -a sleep 5'
--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a96fb5c..5ef88f7 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1179,7 +1179,7 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
OPT_STRING(0, "post", &post_cmd, "command",
"command to run after to the measured command"),
OPT_UINTEGER('I', "interval-print", &stat_config.interval,
- "print counts at regular interval in ms (>= 100)"),
+ "print counts at regular interval in ms (>= 10)"),
OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
"aggregate counts per processor socket", AGGR_SOCKET),
OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
@@ -1332,9 +1332,14 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 		thread_map__read_comms(evsel_list->threads);
 	if (interval && interval < 100) {
-		pr_err("print interval must be >= 100ms\n");
-		parse_options_usage(stat_usage, options, "I", 1);
-		goto out;
+		if (interval < 10) {
+			pr_err("print interval must be >= 10ms\n");
+			parse_options_usage(stat_usage, options, "I", 1);
+			goto out;
+		} else
+			pr_warning("print interval < 100ms. "
+				   "The overhead percentage could be high in some cases. "
+				   "Please proceed with caution.\n");
 	}
 	if (perf_evlist__alloc_stats(evsel_list, interval))