From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: roman.gushchin@linux.dev, shakeelb@google.com,
songmuchun@bytedance.com, mhocko@kernel.org,
akpm@linux-foundation.org, corbet@lwn.net, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
Michal Hocko <mhocko@suse.com>
Subject: Re: [PATCH v3] mm: memcontrol: add {pgscan,pgsteal}_{kswapd,direct} items in memory.stat of cgroup v2
Date: Tue, 7 Jun 2022 10:18:33 +0800
Message-ID: <64d93544-294b-6149-524d-f7ce227d7e33@bytedance.com>
In-Reply-To: <Yp46w4op9JeX9+g9@cmpxchg.org>
On 2022/6/7 1:34 AM, Johannes Weiner wrote:
> On Mon, Jun 06, 2022 at 11:40:28PM +0800, Qi Zheng wrote:
>> The memcg events {pgscan,pgsteal}_kswapd and
>> {pgscan,pgsteal}_direct are already counted, but currently only
>> their sum is displayed in memory.stat of cgroup v2.
>>
>> In order to obtain more accurate information during monitoring
>> and debugging, and to align with the display in /proc/vmstat,
>> it is better to display {pgscan,pgsteal}_kswapd and
>> {pgscan,pgsteal}_direct separately.
>>
>> Also, for backward compatibility, we still display the pgscan and
>> pgsteal items so that existing applications won't break.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>> Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
>> Acked-by: Muchun Song <songmuchun@bytedance.com>
>> Acked-by: Shakeel Butt <shakeelb@google.com>
>> Acked-by: Michal Hocko <mhocko@suse.com>
>
> No objection to keeping pgscan and pgsteal, but can you please fix the
> doc to present the items in the same order as memory.stat has them?
Sure, will fix.
Thanks,
Qi
>
>> @@ -1445,9 +1445,21 @@ PAGE_SIZE multiple when read back.
>> pgscan (npn)
>> Amount of scanned pages (in an inactive LRU list)
>>
>> + pgscan_kswapd (npn)
>> + Amount of scanned pages by kswapd (in an inactive LRU list)
>> +
>> + pgscan_direct (npn)
>> + Amount of scanned pages directly (in an inactive LRU list)
>> +
>> pgsteal (npn)
>> Amount of reclaimed pages
>>
>> + pgsteal_kswapd (npn)
>> + Amount of reclaimed pages by kswapd
>> +
>> + pgsteal_direct (npn)
>> + Amount of reclaimed pages directly
>> +
>> pgactivate (npn)
>> Amount of pages moved to the active LRU list
>
> vs:
>
>> @@ -1495,41 +1518,17 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
>> }
>>
>> /* Accumulated memory events */
>> -
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGFAULT),
>> - memcg_events(memcg, PGFAULT));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGMAJFAULT),
>> - memcg_events(memcg, PGMAJFAULT));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGREFILL),
>> - memcg_events(memcg, PGREFILL));
>> seq_buf_printf(&s, "pgscan %lu\n",
>> memcg_events(memcg, PGSCAN_KSWAPD) +
>> memcg_events(memcg, PGSCAN_DIRECT));
>> seq_buf_printf(&s, "pgsteal %lu\n",
>> memcg_events(memcg, PGSTEAL_KSWAPD) +
>> memcg_events(memcg, PGSTEAL_DIRECT));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGACTIVATE),
>> - memcg_events(memcg, PGACTIVATE));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGDEACTIVATE),
>> - memcg_events(memcg, PGDEACTIVATE));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGLAZYFREE),
>> - memcg_events(memcg, PGLAZYFREE));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGLAZYFREED),
>> - memcg_events(memcg, PGLAZYFREED));
>> -
>> -#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(ZSWPIN),
>> - memcg_events(memcg, ZSWPIN));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(ZSWPOUT),
>> - memcg_events(memcg, ZSWPOUT));
>> -#endif
>>
>> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(THP_FAULT_ALLOC),
>> - memcg_events(memcg, THP_FAULT_ALLOC));
>> - seq_buf_printf(&s, "%s %lu\n", vm_event_name(THP_COLLAPSE_ALLOC),
>> - memcg_events(memcg, THP_COLLAPSE_ALLOC));
>> -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>> + for (i = 0; i < ARRAY_SIZE(memcg_vm_event_stat); i++)
>> + seq_buf_printf(&s, "%s %lu\n",
>> + vm_event_name(memcg_vm_event_stat[i]),
>> + memcg_events(memcg, memcg_vm_event_stat[i]));
>
> Thanks
--
Thanks,
Qi