From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>, Shakeel Butt <shakeel.butt@linux.dev>
Cc: linux-mm@kvack.org, lsf-pc@lists.linux-foundation.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	Yosry Ahmed <yosry.ahmed@linux.dev>,
	Meta kernel team <kernel-team@meta.com>
Subject: Re: [LSF/MM/BPF Topic] Performance improvement for Memory Cgroups
Date: Mon, 31 Mar 2025 20:02:57 +0200
Message-ID: <34504489-c260-45fa-8375-ac0f27b6677e@suse.cz>
In-Reply-To: <Z9u0NGdfrHqn_G8j@harry>

On 3/20/25 07:22, Harry Yoo wrote:
> On Tue, Mar 18, 2025 at 11:19:42PM -0700, Shakeel Butt wrote:
> 
> For slab memory, I have an idea:
> 
> Deferring the uncharging of slab objects on free until the CPU slab and
> per-CPU partial slabs are moved to the per-node partial slab list
> might be beneficial.
> 
> Something like:
> 
>     0. The SLUB allocator defers uncharging freed objects if the slab
>        they belong to is the CPU slab or is on the percpu partial slab
>        list.
> 
>     1. memcg_slab_post_alloc_hook() does:
>        1.1 Skips charging if the object is already charged to the same
>            memcg and has not been uncharged yet.
>        1.2 Uncharges the object if it is charged to a different memcg
>            and then charges it to the current memcg.
>        1.3 Charges the object if it is not currently charged to any memcg.
> 
>     2. deactivate_slab() and __put_partials() uncharge any free objects
>        that have not been uncharged yet before moving the slabs to the
>        per-node partial slab list.
> 
> Unless 1) we have tasks belonging to many different memcgs on each CPU
> (I'm not an expert on the scheduler's interaction with cgroups, though),
> or 2) load balancing migrates tasks between CPUs too frequently,
> 
> many allocations should hit case 1.1 (Oh, it's already charged to the same
> memcg so skip charging) in the hot path, right?
> 
> Some experiments are needed to determine whether this idea is actually
> beneficial.
> 
> Or has a similar approach been tried before?

I don't think so; it would have to be tried and measured. As I hinted in
my sheaves slot, I doubt step 0 above happens often enough.
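
To make sure we are talking about the same thing, I read step 0 as
roughly the following (pseudo-C sketch; the slab_on_percpu_partial()
helper is made up, this is not actual SLUB code):

/*
 * Step 0 sketch: on free, keep the object charged as long as its slab
 * is still "hot", i.e. it is the current CPU slab or sits on the
 * percpu partial list.
 */
static inline bool can_defer_uncharge(struct kmem_cache *s, struct slab *slab)
{
        struct kmem_cache_cpu *c = raw_cpu_ptr(s->cpu_slab);

        /*
         * slab_on_percpu_partial() is hypothetical; deciding this
         * cheaply and without races is part of what would have to be
         * figured out.
         */
        return slab == c->slab || slab_on_percpu_partial(c, slab);
}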

In step 2, you would have to walk the slabs' freelists to check whether
anything is still charged, right?
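
I.e. something along these lines (again just a rough sketch; the helper
names approximate what exists in mm/slab.h and mm/slub.c today, but the
details are certainly not right):

/*
 * Step 2 sketch: before a slab is moved to the per-node partial list,
 * walk its freelist and uncharge any free object whose objcg pointer
 * was left set by the deferred free path.
 */
static void uncharge_deferred_objects(struct kmem_cache *s, struct slab *slab)
{
        struct slabobj_ext *obj_exts = slab_obj_exts(slab);
        void *p;

        if (!obj_exts)
                return;

        for (p = slab->freelist; p; p = get_freepointer(s, p)) {
                unsigned int i = obj_to_index(s, slab, p);

                if (obj_exts[i].objcg) {
                        obj_cgroup_uncharge(obj_exts[i].objcg,
                                            obj_full_size(s));
                        obj_exts[i].objcg = NULL;
                }
        }
}

That walk over the free objects of every deactivated slab is the extra
cost I was wondering about.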

