linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	Lorenzo Stoakes <ljs@kernel.org>,
	"Liam R . Howlett" <liam.howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH 0/8 RFC] mm/memcontrol, page_counter: move stock from mem_cgroup to page_counter
Date: Mon, 13 Apr 2026 17:28:35 +0200	[thread overview]
Message-ID: <ad0LoxXjP49PZBwR@tiehlicka> (raw)
In-Reply-To: <20260413142958.2037913-1-joshua.hahnjy@gmail.com>

On Mon 13-04-26 07:29:58, Joshua Hahn wrote:
> On Mon, 13 Apr 2026 09:23:38 +0200 Michal Hocko <mhocko@suse.com> wrote:
> 
> Hello Michal,
> 
> Thank you for your review as always!
> 
> > On Fri 10-04-26 14:06:54, Joshua Hahn wrote:
> > > Memcg currently keeps a "stock" of 64 pages per-cpu to cache pre-charged
> > > allocations, allowing small allocations and frees to avoid the
> > > expensive mem_cgroup hierarchy traversal on each charge. This design
> > > introduces a fastpath for charge/uncharge, but has several limitations:
> > > 
> > > 1. Each CPU can track up to 7 (NR_MEMCG_STOCK) mem_cgroups. When more
> > >    than 7 mem_cgroups are actively charging on a single CPU, a random
> > >    victim is evicted, and its associated stock is drained, which
> > >    triggers unnecessary hierarchy walks.
> > > 
> > >    Note that there used to be a 1-1 mapping between CPU and
> > >    memcg stock; it was bumped up to 7 in f735eebe55f8f ("multi-memcg
> > >    percpu charge cache") because it was observed that stock would
> > >    frequently get flushed and refilled.
> > 
> > All true but it is quite important to note that this all is bounded by
> > nr_online_cpus*NR_MEMCG_STOCK*MEMCG_CHARGE_BATCH. You are proposing to
> > increase this to s@NR_MEMCG_STOCK@nr_leaf_cgroups@. In environments with
> > many cpus and directly charged cgroups this can be a considerable
> > hidden overcharge. Have you considered that and evaluated the potential
> > impact?
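The scheme being discussed in point 1 and the bound above can be sketched in a small userspace model. This is a simplified illustration, not the actual kernel code: NR_MEMCG_STOCK and MEMCG_CHARGE_BATCH mirror the kernel constants, but the slot-scan logic is condensed and the helper names are made up for the sketch.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified userspace model of the per-cpu memcg stock: each CPU caches
 * pre-charged pages for up to NR_MEMCG_STOCK distinct memcgs. */
#define NR_MEMCG_STOCK     7
#define MEMCG_CHARGE_BATCH 64

struct stock_slot {
	void *memcg;        /* which memcg this slot caches for (css ref held) */
	unsigned nr_pages;  /* pre-charged pages cached in this slot */
};

struct memcg_stock_pcp {
	struct stock_slot slot[NR_MEMCG_STOCK];
};

/* Try to consume nr_pages from the cached stock of @memcg. Returns 1 on a
 * fastpath hit; 0 means the caller must take the slow charge path. */
static int consume_stock(struct memcg_stock_pcp *stock, void *memcg,
			 unsigned nr_pages)
{
	for (int i = 0; i < NR_MEMCG_STOCK; i++) {
		struct stock_slot *s = &stock->slot[i];
		if (s->memcg == memcg && s->nr_pages >= nr_pages) {
			s->nr_pages -= nr_pages;
			return 1;
		}
	}
	/* Miss: with more than 7 active memcgs on this CPU, a random
	 * victim slot gets evicted and drained (not modeled here). */
	return 0;
}

/* Upper bound on pages hidden in stocks across the whole machine. */
static unsigned long stock_bound_pages(unsigned nr_online_cpus)
{
	return (unsigned long)nr_online_cpus * NR_MEMCG_STOCK * MEMCG_CHARGE_BATCH;
}
```

The proposed change effectively replaces the NR_MEMCG_STOCK factor in `stock_bound_pages()` with the number of leaf cgroups, which is where the hidden-overcharge concern comes from.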
> 
> This is a great point. I would like to note, though, that for systems running
> fewer than 7 leaf cgroups (I'm not sure what systems typically look like outside
> of Meta, so I cannot say whether this is likely or not!) this change would
> be an optimization, since we allocate only for the leaf cgroups we need ;-)
> 
> But let's do the math for the worst-case scenario:
> Because we initialize the stock to be 0 and only refill on a charge /
> uncharge, the worst-case scenario involves a workload that charges
> on every CPU just once, so that it never benefits from the
> caching. On a very large system, say 300 CPUs, with 4 KiB pages, that's
> 300 * 64 * 4 KiB = 75 MiB of overcharging per leaf cgroup.
>
> This is definitely a serious amount of overcharging. With that said, I
> would like to note that this seems like quite a rare scenario; what
> would cause a workload to jump across 300 CPUs? 
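The worst-case arithmetic above checks out; a one-function sketch (illustrative helper name) makes the figure easy to verify:

```c
#include <assert.h>

/* Worked check of the worst-case figure above: every CPU holds one full,
 * never-reused batch for the cgroup, so the potential overcharge per leaf
 * cgroup is cpus * batch_pages * page_size bytes. */
static unsigned long worst_case_bytes(unsigned cpus, unsigned batch_pages,
				      unsigned long page_size)
{
	return (unsigned long)cpus * batch_pages * page_size;
}
```

With 300 CPUs, a 64-page batch, and 4096-byte pages this yields 78,643,200 bytes, i.e. exactly 75 MiB.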

A typical situation where I would expect this to be more visible is a large
machine hosting a lot of smaller containers. Not an atypical situation.

Without external pressure those caches could accumulate a lot. On the
other hand, on a large machine the overall overcharging shouldn't cause
memory depletion even if we are talking about 1000s of memcgs. The
behavior will change though, and this is something you should explain
in your changelog. There will certainly be cons that we need to weigh
against the pros. There are many good points below that you can use.
[...]

> > > 2. Stock management is tightly coupled to struct mem_cgroup, which
> > >    makes it difficult to add a new page_counter to struct mem_cgroup
> > >    and do its own stock management, since each operation has to be
> > >    duplicated.
> > 
> > Could you expand why this is a problem we need to address?
> 
> Yes of course. So to give some context, I realized that stock was a bit
> uncomfortable to work with at a memcg granularity when I tried to introduce
> a new page counter for toptier memory tracking (in order to enforce strict
> limits). I didn't explicitly note this in the cover letter because I thought
> that there was a lot of good motivation aside from the specific use case
> I was thinking of, so I decided to leave it out. What do you think? :-)

Yes, if there are future plans that might benefit from this then it is
worth mentioning. Because just based on point 1 I cannot really tell whether
going this way is better than tuning NR_MEMCG_STOCK. As I've said, I like
the resulting code better, but there are some practical cons as well.

> I'm not a memcg v1 user so I cannot tell from experience whether this is a
> pain point or not, but I also found it awkward that one stock gated the
> charges for two page_counters, memsw and memory, which made the slowpath
> incur double the hierarchy walks on a single stock miss. Keeping them
> separate makes it less likely that both hierarchy walks happen on a
> single charge attempt.
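The v1 situation being described can be modeled roughly as follows. The helper names are illustrative, not the kernel's actual functions; the point is only that a single stock miss forces a full ancestor walk for both counters:

```c
#include <assert.h>
#include <stddef.h>

/* Rough model: in v1, one stock gates two page_counters (memory and
 * memsw), so a single stock miss walks *both* hierarchies. */
struct page_counter {
	unsigned long usage;
	struct page_counter *parent;
};

static int hierarchy_walks; /* counts walks to show the doubling */

static void counter_charge(struct page_counter *c, unsigned long n)
{
	hierarchy_walks++;
	for (; c; c = c->parent)	/* walk every ancestor */
		c->usage += n;
}

/* On a stock miss, v1 must walk both hierarchies before refilling stock. */
static void slowpath_charge_v1(struct page_counter *memory,
			       struct page_counter *memsw, unsigned long n)
{
	counter_charge(memory, n);
	counter_charge(memsw, n);
}
```

With per-page_counter stocks, a miss on one counter would trigger only that counter's walk, halving the worst case in this model.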

v1 is legacy and we decided long ago not to invest in new
optimizations/features.

> 
> > > 3. Each stock slot requires a css reference, as well as a traversal
> > >    overhead on every stock operation to check which cpu-memcg we are
> > >    trying to consume stock for.
> > 
> > Why is this a problem?
> 
> I don't think this is really that big of a problem, but just something that
> I wanted to note as a benefit of these changes. I remember being a bit
> confused by the memcg slot scanning & traversal when reading the stock
> code. Personally, I think being able to directly attribute stock
> to the page_counter it comes from, as well as not randomly evicting stock,
> could be helpful.

OK so this boils down to code clarity.

> > Please also be more explicit what kind of workloads are going to benefit
> > from this change. The existing caching scheme is simple and ineffective
> > but is it worth improving (likely your points 2 and 3 could clarify that)?
> 
> I think that the biggest strength of this series is actually not
> performance gains but rather more interpretable semantics for stock
> management and transparent charging in try_charge_memcg.
> 
> But to break it down, any system using fewer than 7 leaf cgroups will get
> reduced memory overhead (from the percpu structs) and comparable performance.
> Any system using more than 7 leaf cgroups will benefit because stock is
> no longer randomly evicted and then refilled.
> 
> From my limited benchmark tests, these didn't seem too visible from a
> wall time perspective. But I can trace how often we refill the stock
> in the next version, and I hope that it can show more tangible results.

More points for the changelog.
-- 
Michal Hocko
SUSE Labs



Thread overview: 12+ messages
2026-04-10 21:06 Joshua Hahn
2026-04-10 21:06 ` [PATCH 1/8 RFC] mm/page_counter: introduce per-page_counter stock Joshua Hahn
2026-04-10 21:06 ` [PATCH 2/8 RFC] mm/page_counter: use page_counter_stock in page_counter_try_charge Joshua Hahn
2026-04-10 21:06 ` [PATCH 3/8 RFC] mm/page_counter: use page_counter_stock in page_counter_uncharge Joshua Hahn
2026-04-10 21:06 ` [PATCH 4/8 RFC] mm/page_counter: introduce stock drain APIs Joshua Hahn
2026-04-10 21:06 ` [PATCH 5/8 RFC] mm/memcontrol: convert memcg to use page_counter_stock Joshua Hahn
2026-04-10 21:07 ` [PATCH 6/8 RFC] mm/memcontrol: optimize memsw stock for cgroup v1 Joshua Hahn
2026-04-10 21:07 ` [PATCH 7/8 RFC] mm/memcontrol: optimize stock usage for cgroup v2 Joshua Hahn
2026-04-10 21:07 ` [PATCH 8/8 RFC] mm/memcontrol: remove unused memcg_stock code Joshua Hahn
2026-04-13  7:23 ` [PATCH 0/8 RFC] mm/memcontrol, page_counter: move stock from mem_cgroup to page_counter Michal Hocko
2026-04-13 14:29   ` Joshua Hahn
2026-04-13 15:28     ` Michal Hocko [this message]
