From: <xu.xin16@zte.com.cn>
To: <shakeel.butt@linux.dev>
Cc: <akpm@linux-foundation.org>, <david@redhat.com>,
<linux-kernel@vger.kernel.org>, <wang.yaxin@zte.com.cn>,
<linux-mm@kvack.org>, <linux-fsdevel@vger.kernel.org>,
<yang.yang29@zte.com.cn>
Subject: Re: [PATCH v2 0/9] support ksm_stat showing at cgroup level
Date: Tue, 6 May 2025 13:09:25 +0800 (CST)
Message-ID: <20250506130925568unpXQ7vLOEaRX4iDWSow2@zte.com.cn>
In-Reply-To: <ir2s6sqi6hrbz7ghmfngbif6fbgmswhqdljlntesurfl2xvmmv@yp3w2lqyipb5>
> > Users can obtain the KSM information of a cgroup just by:
> >
> > # cat /sys/fs/cgroup/memory.ksm_stat
> > ksm_rmap_items 76800
> > ksm_zero_pages 0
> > ksm_merging_pages 76800
> > ksm_process_profit 309657600
> >
> > Current implementation supports both cgroup v2 and cgroup v1.
> >
>
> Before adding these stats to memcg, add global stats for them in
> enum node_stat_item and then you can expose them in memcg through
> memory.stat instead of a new interface.
Dear Shakeel,

If we add these KSM-related items to enum node_stat_item and embed extra
counter-updating code such as __lruvec_stat_add_folio() into the KSM
procedures, it adds CPU overhead every time a normal KSM procedure runs.
Alternatively, we can simply traverse all processes of the memcg and sum
their per-mm KSM counters, as the current patch set implements.
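
To sketch the traversal approach (illustrative only: struct
memcg_ksm_stat and ksm_stat_collect() are made-up names here, and the
callback is assumed to be driven by a mem_cgroup_scan_tasks()-style
iterator as in this series):

#include <linux/ksm.h>
#include <linux/mm_types.h>
#include <linux/sched/mm.h>

/*
 * Illustrative sketch, not the patch set's exact code: sum the
 * per-mm KSM counters of every task in a memcg.
 */
struct memcg_ksm_stat {                 /* hypothetical accumulator */
        unsigned long rmap_items;
        unsigned long zero_pages;
        unsigned long merging_pages;
        long profit;
};

static int ksm_stat_collect(struct task_struct *task, void *arg)
{
        struct memcg_ksm_stat *stat = arg;
        struct mm_struct *mm = get_task_mm(task);

        if (!mm)
                return 0;               /* kernel thread or exiting task */

        stat->rmap_items    += mm->ksm_rmap_items;
        stat->zero_pages    += atomic_long_read(&mm->ksm_zero_pages);
        stat->merging_pages += mm->ksm_merging_pages;
        stat->profit        += ksm_process_profit(mm);

        mmput(mm);
        return 0;                       /* 0 means keep iterating */
}

A real implementation must also avoid double-counting threads that
share one mm_struct, but the point is that the cost is paid once at
read time rather than on every KSM merge.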
Including a single "KSM merged pages" entry in memory.stat seems
reasonable to me, as it reflects the memcg's count of KSM-merged pages.
However, adding the other three KSM-related metrics there is less
advisable, since they are strongly coupled with KSM internals and would
primarily interest users monitoring KSM-specific behavior.
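
By contrast, routing even that single count through memory.stat would
mean adding an item to enum node_stat_item and updating it on the KSM
merge/unmerge paths, roughly as below (NR_KSM_MERGED and
ksm_account_merged() are hypothetical names, not existing kernel
symbols):

#include <linux/vmstat.h>

/*
 * Hypothetical: assumes NR_KSM_MERGED were added to enum
 * node_stat_item.  Each merge/unmerge path in mm/ksm.c would then
 * need per-folio bookkeeping like this, which is the extra cost on
 * the normal KSM procedures mentioned above.
 */
static void ksm_account_merged(struct folio *folio, bool merged)
{
        if (merged)
                __lruvec_stat_add_folio(folio, NR_KSM_MERGED);
        else
                __lruvec_stat_sub_folio(folio, NR_KSM_MERGED);
}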
Last but not least, the rationale for adding a ksm_stat entry to memcg also lies in maintaining
structural consistency with the existing /proc/<pid>/ksm_stat interface.
Thread overview: 15+ messages
2025-05-01 4:08 xu.xin16
2025-05-01 4:11 ` [PATCH v2 1/9] memcontrol: rename mem_cgroup_scan_tasks() xu.xin.sc
2025-05-01 4:13 ` [PATCH v2 2/9] memcontrol: introduce the new mem_cgroup_scan_tasks() xu.xin.sc
2025-05-01 4:14 ` [PATCH v2 3/9] memcontrol: introduce ksm_stat at memcg-v2 xu.xin.sc
2025-05-01 4:14 ` [PATCH v2 4/9] memcontrol: add ksm_zero_pages in cgroup/memory.ksm_stat xu.xin.sc
2025-05-01 4:15 ` [PATCH v2 5/9] memcontrol: add ksm_merging_pages " xu.xin.sc
2025-05-01 4:15 ` [PATCH v2 6/9] memcontrol: add ksm_profit " xu.xin.sc
2025-05-01 4:16 ` [PATCH v2 7/9] memcontrol-v1: add ksm_stat at memcg-v1 xu.xin.sc
2025-05-01 4:17 ` [PATCH v2 8/9] Documentation: add ksm_stat description in cgroup-v1/memory.rst xu.xin.sc
2025-05-01 4:17 ` [PATCH v2 9/9] Documentation: add ksm_stat description in cgroup-v2.rst xu.xin.sc
2025-05-01 20:27 ` [PATCH v2 0/9] support ksm_stat showing at cgroup level Andrew Morton
2025-05-05 21:30 ` Shakeel Butt
2025-05-06 5:09 ` xu.xin16 [this message]
2025-05-08 18:56 ` Shakeel Butt
2025-06-02 14:14 ` xu.xin16