From: Minchan Kim <minchan@kernel.org>
To: Ganesh Mahendran <opensource.ganesh@gmail.com>
Cc: Linux-MM <linux-mm@kvack.org>,
linux-kernel <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nitin Gupta <ngupta@vflare.org>,
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
rostedt@goodmis.org, mingo@redhat.com
Subject: Re: [PATCH] mm/zsmalloc: add trace events for zs_compact
Date: Mon, 13 Jun 2016 13:42:37 +0900 [thread overview]
Message-ID: <20160613044237.GC23754@bbox> (raw)
In-Reply-To: <CADAEsF_q0qzk2D_cKMCcvHxF7_eY1cQVKrBp0eM_v05jjOjSOA@mail.gmail.com>
On Wed, Jun 08, 2016 at 02:39:19PM +0800, Ganesh Mahendran wrote:
<snip>
> zsmalloc is not only used by zram, but also zswap. Maybe
> others in the future.
>
> I tried to use function_graph. It seems there are too much log
> printed:
> ------
> root@leo-test:/sys/kernel/debug/tracing# cat trace
> # tracer: function_graph
> #
> # CPU DURATION FUNCTION CALLS
> # | | | | | | |
> 2) | zs_compact [zsmalloc]() {
> 2) | /* zsmalloc_compact_start: pool zram0 */
> 2) 0.889 us | _raw_spin_lock();
> 2) 0.896 us | isolate_zspage [zsmalloc]();
> 2) 0.938 us | _raw_spin_lock();
> 2) 0.875 us | isolate_zspage [zsmalloc]();
> 2) 0.942 us | _raw_spin_lock();
> 2) 0.962 us | isolate_zspage [zsmalloc]();
> ...
> 2) 0.879 us | insert_zspage [zsmalloc]();
> 2) 4.520 us | }
> 2) 0.975 us | _raw_spin_lock();
> 2) 0.890 us | isolate_zspage [zsmalloc]();
> 2) 0.882 us | _raw_spin_lock();
> 2) 0.894 us | isolate_zspage [zsmalloc]();
> 2) | /* zsmalloc_compact_end: pool zram0: 0 pages compacted(total 0) */
> 2) # 1351.241 us | }
> ------
> => 1351.241 us used
>
> And it seems the overhead of function_graph is bigger than that of trace events.
>
> bash-3682 [002] .... 1439.180646: zsmalloc_compact_start: pool zram0
> bash-3682 [002] .... 1439.180659: zsmalloc_compact_end: pool zram0:
> 0 pages compacted(total 0)
> => 13 us < 1351.241 us
You could use set_ftrace_filter to cut that down.
Introducing a new trace event just to report an elapsed time is
pointless, I think.
It should carry more information, like the pool name you mentioned.
As I said in the other thread, it would be better to show
[pool name, compacted size_class,
the number of objects moved, the number of pages freed], IMO.
Thanks.