From: Roman Gushchin <roman.gushchin@linux.dev>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Muchun Song <muchun.song@linux.dev>,
Harry Yoo <harry.yoo@oracle.com>, Qi Zheng <qi.zheng@linux.dev>,
Vlastimil Babka <vbabka@suse.cz>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org,
Meta kernel team <kernel-team@meta.com>
Subject: Re: [PATCH 0/4] memcg: cleanup the memcg stats interfaces
Date: Tue, 11 Nov 2025 11:01:47 -0800
Message-ID: <87pl9oqtpg.fsf@linux.dev>
In-Reply-To: <20251110232008.1352063-1-shakeel.butt@linux.dev> (Shakeel Butt's message of "Mon, 10 Nov 2025 15:20:04 -0800")
Shakeel Butt <shakeel.butt@linux.dev> writes:
> The memcg stats are safe against irq (and nmi) context and thus do not
> require disabling irqs. However, some stats which are also maintained
> at the node level still go through an irq-unsafe interface, which forces
> the users to either disable irqs themselves or use wrappers that
> explicitly disable irqs. Let's move the memcg code to the irq-safe
> node-level stats function, which is already optimized for architectures
> with HAVE_CMPXCHG_LOCAL (all major ones), so there will not be any
> performance penalty from its usage.
Do you have any production data for this, or is it theory-based?

In general I feel we need a benchmark focused on memcg stats:
there have been a number of performance improvements and regressions in
this code over the last few years, so a dedicated benchmark would help
measure them.
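
To be concrete about what such a benchmark might look like, even a trivial
userspace loop like the one below (purely illustrative; the mapping size,
iteration count, and the assumption that it runs inside a dedicated cgroup
are all made up for this sketch) would hammer the per-memcg / per-node stat
update paths on every fault and unmap:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define ITERS 100000UL
#define SZ    (64UL << 10)	/* 64 KiB mapped per iteration */

int main(void)
{
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (unsigned long i = 0; i < ITERS; i++) {
		/* Fault in and tear down an anonymous mapping; each page
		 * charged and uncharged updates memcg and node stats. */
		char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(p, 1, SZ);
		munmap(p, SZ);
	}
	clock_gettime(CLOCK_MONOTONIC, &b);

	printf("%lu map/fault/unmap iterations in %.3f s\n", ITERS,
	       (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
	return 0;
}
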
Nice cleanup btw, thanks!
Thread overview: 31+ messages
2025-11-10 23:20 Shakeel Butt
2025-11-10 23:20 ` [PATCH 1/4] memcg: use mod_node_page_state to update stats Shakeel Butt
2025-11-11 1:39 ` Harry Yoo
2025-11-11 18:58 ` Roman Gushchin
2025-11-10 23:20 ` [PATCH 2/4] memcg: remove __mod_lruvec_kmem_state Shakeel Butt
2025-11-11 1:46 ` Harry Yoo
2025-11-11 8:23 ` Qi Zheng
2025-11-11 18:58 ` Roman Gushchin
2025-11-10 23:20 ` [PATCH 3/4] memcg: remove __mod_lruvec_state Shakeel Butt
2025-11-11 5:21 ` Harry Yoo
2025-11-11 18:58 ` Roman Gushchin
2025-11-10 23:20 ` [PATCH 4/4] memcg: remove __lruvec_stat_mod_folio Shakeel Butt
2025-11-11 5:41 ` Harry Yoo
2025-11-11 18:59 ` Roman Gushchin
2025-11-11 0:59 ` [PATCH 0/4] memcg: cleanup the memcg stats interfaces Harry Yoo
2025-11-11 2:23 ` Qi Zheng
2025-11-11 2:39 ` Shakeel Butt
2025-11-11 2:48 ` Qi Zheng
2025-11-11 3:00 ` Shakeel Butt
2025-11-11 3:07 ` Qi Zheng
2025-11-11 3:18 ` Harry Yoo
2025-11-11 3:29 ` Qi Zheng
2025-11-11 3:05 ` Harry Yoo
2025-11-11 8:01 ` Sebastian Andrzej Siewior
2025-11-11 8:36 ` Qi Zheng
2025-11-11 16:45 ` Shakeel Butt
2025-11-12 2:11 ` Qi Zheng
2025-11-11 9:54 ` Vlastimil Babka
2025-11-11 19:01 ` Roman Gushchin [this message]
2025-11-11 19:34 ` Shakeel Butt
2025-11-15 19:27 ` Shakeel Butt