linux-mm.kvack.org archive mirror
From: Joshua Hahn <joshua.hahnjy@gmail.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	Lorenzo Stoakes <ljs@kernel.org>,
	"Liam R . Howlett" <liam.howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH 0/8 RFC] mm/memcontrol, page_counter: move stock from mem_cgroup to page_counter
Date: Mon, 13 Apr 2026 07:29:58 -0700	[thread overview]
Message-ID: <20260413142958.2037913-1-joshua.hahnjy@gmail.com> (raw)
In-Reply-To: <adyZ-t4fiKFv_X5p@tiehlicka>

On Mon, 13 Apr 2026 09:23:38 +0200 Michal Hocko <mhocko@suse.com> wrote:

Hello Michal,

Thank you for your review as always!

> On Fri 10-04-26 14:06:54, Joshua Hahn wrote:
> > Memcg currently keeps a "stock" of 64 pages per-cpu to cache pre-charged
> > allocations, allowing small allocations and frees to avoid walking the
> > expensive mem_cgroup hierarchy traversal on each charge. This design
> > introduces a fastpath to charge/uncharge, but has several limitations:
> > 
> > 1. Each CPU can track up to 7 (NR_MEMCG_STOCK) mem_cgroups. When more
> >    than 7 mem_cgroups are actively charging on a single CPU, a random
> >    victim is evicted, and its associated stock is drained, which
> >    triggers unnecessary hierarchy walks.
> > 
> >    Note that previously there used to be a 1-1 mapping between CPU and
> >    memcg stock; it was bumped up to 7 in f735eebe55f8f ("multi-memcg
> >    percpu charge cache") because it was observed that stock would
> >    frequently get flushed and refilled.
> 
> All true but it is quite important to note that this all is bounded to
> nr_online_cpus*NR_MEMCG_STOCK*MEMCG_CHARGE_BATCH. You are proposing to
> increase this to s@NR_MEMCG_STOCK@nr_leaf_cgroups@. In environments with
> many cpus and directly charged cgroups this can be considerable
> hidden overcharge. Have you considered that and evaluated potential
> impact?

This is a great point. I would like to note, though, that for systems running
fewer than 7 leaf cgroups (I'm not sure what systems typically look like outside
of Meta, so I cannot say how likely this is!) this change would be an
optimization, since we allocate stock only for the leaf cgroups we need ;-)

But let's do the math for the worst-case scenario:
Because we initialize the stock to 0 and only refill on a charge /
uncharge, the worst case involves a workload that charges on every CPU
just once, so that it never benefits from the caching. On a very large
system, say 300 CPUs with 4K pages, that's
300 * 64 * 4KB = 75MB of overcharging per leaf cgroup.

This is definitely a serious amount of overcharging. With that said, I
would like to note that this seems like quite a rare scenario; what
would cause a workload to jump across 300 CPUs? For this to be a regression,
there would also have to be 8+ workloads all jumping across the CPUs and
leaving never-to-be-used cache on all of them; anything below that would
still be an optimization over the current setup.

Also, let's talk about what happens when we do reach the worst-case scenario.
Once we reach the degenerate state where the stock is charged and the workload
has no intention of running on the CPUs with idle cache, we would eventually
reach the failure branch of try_charge_memcg, which drains all stock!

So IMO, the issue of overcharging isn't too bad. It's very difficult
to reach the scenario where all CPUs are caching idle stock, and the existing
recovery mechanism in try_charge_memcg puts us right back into the optimal
state where no CPU holds stock and we refill only those the workload
actually runs on. I'll be sure to add this in the next spin of the series,
since I think it's important to note. (The other overhead is the memory
we have to allocate percpu for each of the stock structs, which is only
2 words/cpu/memcg, including parents. But still worth noting explicitly!)

Above is the perspective from the system, in terms of memory pressure and
overcharging. From a user interpretability POV, there is a gap when a
workload litters unused charge everywhere but there is not enough memory
pressure to trigger a drain_all_stock, so a user might be confused about
why their workload is using so much memory.

I think this could be a problem, especially if there is a userspace
load balancer that schedules work based on how much memory the workload is
using. At Meta we use Senpai in userspace to create benevolent memory pressure
that should be enough to reap cold memory (and also idle stock), but I'm
wondering what this will mean for systems that don't have such cold-memory
purging mechanisms. I'll think about this a little bit more.

> > 2. Stock management is tightly coupled to struct mem_cgroup, which
> >    makes it difficult to add a new page_counter to struct mem_cgroup
> >    and do its own stock management, since each operation has to be
> >    duplicated.
> 
> Could you expand why this is a problem we need to address?

Yes, of course. To give some context, I realized that stock was a bit
uncomfortable to work with at a memcg granularity when I tried to introduce
a new page counter for toptier memory tracking (in order to enforce strict
limits). I didn't explicitly note this in the cover letter because I thought
there was a lot of good motivation aside from the specific use case I had
in mind, so I decided to leave it out. What do you think? :-)

I'm not a memcg v1 user, so I cannot tell from experience whether this is a
pain point or not. But I did find it awkward that one stock gates the
charges for two page_counters, memsw and memory: a single stock failure
makes the slowpath incur double the hierarchy walks, whereas keeping the
stocks separate makes it less likely that both hierarchy walks happen on
a single charge attempt.

> > 3. Each stock slot requires a css reference, as well as a traversal
> >    overhead on every stock operation to check which cpu-memcg we are
> >    trying to consume stock for.
> 
> Why is this a problem?

I don't think this is really that big of a problem, it is just something
I wanted to note as a benefit of these changes. I remember being a bit
confused by the memcg slot scanning & traversal when reading the stock
code; personally, I think being able to directly attribute stock to the
page_counter it comes from, as well as not randomly evicting stock,
could be helpful.

> Please also be more explicit what kind of workloads are going to benefit
> from this change. The existing caching scheme is simple and ineffective
> but is it worth improving (likely your points 2 and 3 could clarify that)?

I think the biggest strength of this series is actually not performance
gains but rather more interpretable semantics for stock management and
transparent charging in try_charge_memcg.

But to break it down: any system using fewer than 7 leaf cgroups will see
reduced memory overhead (from the percpu structs) and comparable performance.
Any system using more than 7 leaf cgroups will benefit because stock is
no longer randomly evicted and then refilled.

From my limited benchmark tests, these gains weren't very visible from a
wall time perspective. But I can trace how often we refill the stock
in the next version, and I hope that will show more tangible results.

> All that being said, I like the resulting code which is much easier to
> follow. The caching is nicely transparent in the charging path which is
> a plus. My main worry is that caching has caused some confusion in the
> past and this change will amplify that by the scaling the amount of
> cached charge. This needs to be really carefully evaluated.

Thank you for the words of encouragement Michal!!!

On the point of cached charge, I hope I've addressed it above; I'll
think some more about that scenario as well.

One last thing to note that is orthogonal to our conversation here. Above,
I assumed 4K pages, but on systems with bigger base page sizes like 64K,
maybe it makes sense to lower the amount of stock that is cached:
64 * 64KB = 4MB per CPU, maybe this is a bit overkill? ;-)

Thanks a lot for your thoughtful review, it is always appreciated.
I hope you have a great day!
Joshua


  reply	other threads:[~2026-04-13 14:30 UTC|newest]

Thread overview: 12+ messages
2026-04-10 21:06 Joshua Hahn
2026-04-10 21:06 ` [PATCH 1/8 RFC] mm/page_counter: introduce per-page_counter stock Joshua Hahn
2026-04-10 21:06 ` [PATCH 2/8 RFC] mm/page_counter: use page_counter_stock in page_counter_try_charge Joshua Hahn
2026-04-10 21:06 ` [PATCH 3/8 RFC] mm/page_counter: use page_counter_stock in page_counter_uncharge Joshua Hahn
2026-04-10 21:06 ` [PATCH 4/8 RFC] mm/page_counter: introduce stock drain APIs Joshua Hahn
2026-04-10 21:06 ` [PATCH 5/8 RFC] mm/memcontrol: convert memcg to use page_counter_stock Joshua Hahn
2026-04-10 21:07 ` [PATCH 6/8 RFC] mm/memcontrol: optimize memsw stock for cgroup v1 Joshua Hahn
2026-04-10 21:07 ` [PATCH 7/8 RFC] mm/memcontrol: optimize stock usage for cgroup v2 Joshua Hahn
2026-04-10 21:07 ` [PATCH 8/8 RFC] mm/memcontrol: remove unused memcg_stock code Joshua Hahn
2026-04-13  7:23 ` [PATCH 0/8 RFC] mm/memcontrol, page_counter: move stock from mem_cgroup to page_counter Michal Hocko
2026-04-13 14:29   ` Joshua Hahn [this message]
2026-04-13 15:28     ` Michal Hocko
