linux-mm.kvack.org archive mirror
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Balbir Singh <balbirs@nvidia.com>
Cc: linux-mm@kvack.org, lsf-pc@lists.linux-foundation.org,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	 Vlastimil Babka <vbabka@suse.cz>,
	Yosry Ahmed <yosry.ahmed@linux.dev>,
	 Meta kernel team <kernel-team@meta.com>
Subject: Re: [LSF/MM/BPF Topic] Performance improvement for Memory Cgroups
Date: Fri, 21 Mar 2025 10:57:36 -0700	[thread overview]
Message-ID: <j2xi3drok34kplh364q7cajmlofothkyeeppbocptkoar26565@jrzmek2zp3ej> (raw)
In-Reply-To: <4bb9ddf9-6f26-4adc-8f91-ad7b00074e0f@nvidia.com>

On Thu, Mar 20, 2025 at 04:02:27PM +1100, Balbir Singh wrote:
> On 3/19/25 17:19, Shakeel Butt wrote:
> > A bit late but let me still propose a session on topics related to memory
> > cgroups. Last year at LSFMM 2024, we discussed [1] the potential
> > deprecation of memcg v1. Since then we have made very good progress in that
> > regard. We have moved the v1-only code into a separate file and made it not
> > compile by default, added warnings to many v1-only interfaces, and removed
> > a lot of v1-only code. This year, I want to focus on the performance of
> > memory cgroups, particularly improving the cost of charging and stats.
> 
> I'd be very interested in the discussion; FYI, I will not be there in person.
> 
> > 
> > At a high level, we can partition memory charging into three cases: first
> > is user memory (anon & file), second is kernel memory (mostly slub), and
> > third is network memory. For network memory, [1] has described some of the
> > challenges. Similarly, for kernel memory, we had to revert patches where
> > memcg charging was too expensive [3,4].
> > 
> > I want to discuss and brainstorm different ways to further optimize the
> > memcg charging for all these types of memory. I am at the moment prototyping
> > multi-memcg support for per-cpu memcg stocks and would like to see what else
> > we can do.
> > 
> 
> What do you mean by multi-memcg support? Does it mean creating those buckets
> per CPU?
> 

Multiple cached memcgs in struct memcg_stock_pcp. In [1] I prototyped a
network-specific per-cpu multi-memcg stock. However, I think we need
generic support instead of something just for networking.
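For readers unfamiliar with the stock mechanism: today each CPU caches a
batch of pre-charged pages for a single memcg, so consecutive charges by
tasks in different cgroups keep missing the cache and hitting the slower
page_counter path. The multi-memcg idea can be sketched as a small
user-space model below. All names, slot counts, and the eviction policy
are illustrative assumptions, not the actual kernel implementation.

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of a multi-memcg per-cpu charge stock, modeled in plain C.
 * In the kernel, memcg_stock_pcp caches pre-charged pages for one memcg
 * per CPU; here several memcgs can be cached so that charges from tasks
 * in different cgroups still take the fast path.
 */

#define NR_STOCK_SLOTS 4   /* assumed number of cached memcgs per CPU */
#define STOCK_BATCH   64   /* pages pre-charged per refill (illustrative) */

struct stock_slot {
	int memcg_id;           /* 0 means the slot is empty */
	unsigned int nr_pages;  /* pre-charged pages still available */
};

struct memcg_stock_pcp {
	struct stock_slot slots[NR_STOCK_SLOTS];
};

/* Stand-in for charging the memcg's page_counter (the slow path). */
static unsigned long slow_path_charges;

static void charge_page_counter(int memcg_id, unsigned int nr_pages)
{
	(void)memcg_id;
	slow_path_charges += nr_pages;
}

/*
 * Try to satisfy a charge from a cached slot; on a miss, evict slot 0
 * (a real implementation would pick a victim more carefully, and return
 * the evicted memcg's remaining pre-charge) and refill it with a batch
 * charge against the page counter.
 */
static void consume_stock(struct memcg_stock_pcp *stock,
			  int memcg_id, unsigned int nr_pages)
{
	for (int i = 0; i < NR_STOCK_SLOTS; i++) {
		struct stock_slot *s = &stock->slots[i];

		if (s->memcg_id == memcg_id && s->nr_pages >= nr_pages) {
			s->nr_pages -= nr_pages;  /* fast path: cached charge */
			return;
		}
	}

	/* Miss: re-point slot 0 at this memcg and pre-charge a batch. */
	charge_page_counter(memcg_id, STOCK_BATCH);
	stock->slots[0].memcg_id = memcg_id;
	stock->slots[0].nr_pages = STOCK_BATCH - nr_pages;
}
```

With a single-slot stock, alternating charges from two memcgs would take
the slow path every time; with multiple slots, both stay cached.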




Thread overview: 6+ messages
2025-03-19  6:19 Shakeel Butt
2025-03-19  8:49 ` [Lsf-pc] " Christian Brauner
2025-03-20  5:02 ` Balbir Singh
2025-03-21 17:57   ` Shakeel Butt [this message]
2025-03-20  6:22 ` Harry Yoo
2025-03-31 18:02   ` Vlastimil Babka
