From: Michal Hocko <mhocko@suse.com>
To: Dmitry Rokosov <ddrokosov@salutedevices.com>
Cc: rostedt@goodmis.org, mhiramat@kernel.org, hannes@cmpxchg.org,
roman.gushchin@linux.dev, shakeelb@google.com,
muchun.song@linux.dev, akpm@linux-foundation.org,
kernel@sberdevices.ru, rockosov@gmail.com,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH v3 2/2] mm: memcg: introduce new event to trace shrink_memcg
Date: Mon, 27 Nov 2023 13:50:22 +0100
Message-ID: <ZWSQji7UDSYa1m5M@tiehlicka>
In-Reply-To: <20231127113644.btg2xrcpjhq4cdgu@CAB-WSD-L081021>
On Mon 27-11-23 14:36:44, Dmitry Rokosov wrote:
> On Mon, Nov 27, 2023 at 10:33:49AM +0100, Michal Hocko wrote:
> > On Thu 23-11-23 22:39:37, Dmitry Rokosov wrote:
> > > The shrink_memcg flow plays a crucial role in memcg reclamation.
> > > Currently, it is not possible to trace this point from non-direct
> > > reclaim paths. However, direct reclaim has its own tracepoint, so there
> > > is no issue there. In certain cases, when debugging memcg pressure,
> > > developers may need to identify all potential requests for memcg
> > > reclamation, including kswapd. This patchset introduces the tracepoints
> > > mm_vmscan_memcg_shrink_{begin|end}() to address this problem.
> > >
> > > Example of output in the kswapd context (non-direct reclaim):
> > > kswapd0-39 [001] ..... 240.356378: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.356396: mm_vmscan_memcg_shrink_end: nr_reclaimed=0 memcg=16
> > > kswapd0-39 [001] ..... 240.356420: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.356454: mm_vmscan_memcg_shrink_end: nr_reclaimed=1 memcg=16
> > > kswapd0-39 [001] ..... 240.356479: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.356506: mm_vmscan_memcg_shrink_end: nr_reclaimed=4 memcg=16
> > > kswapd0-39 [001] ..... 240.356525: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.356593: mm_vmscan_memcg_shrink_end: nr_reclaimed=11 memcg=16
> > > kswapd0-39 [001] ..... 240.356614: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.356738: mm_vmscan_memcg_shrink_end: nr_reclaimed=25 memcg=16
> > > kswapd0-39 [001] ..... 240.356790: mm_vmscan_memcg_shrink_begin: order=0 gfp_flags=GFP_KERNEL memcg=16
> > > kswapd0-39 [001] ..... 240.357125: mm_vmscan_memcg_shrink_end: nr_reclaimed=53 memcg=16
> >
> > In the previous version I asked why we need this specific
> > tracepoint when we already have trace_mm_vmscan_lru_shrink_{in}active,
> > which already gives you very good insight. That includes the number of
> > reclaimed pages and more. I do see that we do not include the memcg id
> > of the reclaimed LRU, but that shouldn't be a big problem to add, no?
>
> From my point of view, memcg reclaim includes two operations: LRU shrink
> and slab shrink, as seen in vmscan.c:
>
> static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> ...
> 	reclaimed = sc->nr_reclaimed;
> 	scanned = sc->nr_scanned;
>
> 	shrink_lruvec(lruvec, sc);
>
> 	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
> 		    sc->priority);
> ...
>
> So, both of these operations are important for understanding whether
> memcg reclaiming was successful or not, as well as its effectiveness. I
> believe it would be beneficial to summarize them, which is why I have
> created new tracepoints.
This sounds like a nice-to-have rather than a must. To put it differently:
if you make the existing reclaim tracepoints memcg aware (print the memcg
id), what prevents you from doing the analysis you need?
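Either way, once the memcg id is carried by the end events, the per-memcg
analysis being discussed reduces to summing nr_reclaimed over those events.
A minimal sketch in Python, assuming the line format from the example trace
output quoted above (the helper name is made up for illustration):

```python
# Sketch: aggregate nr_reclaimed per memcg id from ftrace output lines in
# the format shown in the patch description. Not part of the patch itself.
import re
from collections import defaultdict

END_RE = re.compile(r"mm_vmscan_memcg_shrink_end: nr_reclaimed=(\d+) memcg=(\d+)")

def reclaimed_per_memcg(trace_lines):
    """Sum nr_reclaimed per memcg id across all *_shrink_end events."""
    totals = defaultdict(int)
    for line in trace_lines:
        m = END_RE.search(line)
        if m:
            totals[int(m.group(2))] += int(m.group(1))
    return dict(totals)

sample = [
    "kswapd0-39 [001] ..... 240.356396: mm_vmscan_memcg_shrink_end: nr_reclaimed=0 memcg=16",
    "kswapd0-39 [001] ..... 240.356454: mm_vmscan_memcg_shrink_end: nr_reclaimed=1 memcg=16",
    "kswapd0-39 [001] ..... 240.357125: mm_vmscan_memcg_shrink_end: nr_reclaimed=53 memcg=16",
]
print(reclaimed_per_memcg(sample))  # -> {16: 54}
```

The same aggregation would work against the existing lru_shrink events if
they printed the memcg id, which is the point above.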
--
Michal Hocko
SUSE Labs