From: Shakeel Butt <shakeel.butt@linux.dev>
To: Qi Zheng <qi.zheng@linux.dev>
Cc: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
roman.gushchin@linux.dev, muchun.song@linux.dev,
david@kernel.org, lorenzo.stoakes@oracle.com, ziy@nvidia.com,
harry.yoo@oracle.com, yosry.ahmed@linux.dev,
imran.f.khan@oracle.com, kamalesh.babulal@oracle.com,
axelrasmussen@google.com, yuanchu@google.com,
weixugc@google.com, chenridong@huaweicloud.com, mkoutny@suse.com,
akpm@linux-foundation.org, hamzamahfooz@linux.microsoft.com,
apais@linux.microsoft.com, lance.yang@linux.dev, bhe@redhat.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>,
Qi Zheng <zhengqi.arch@bytedance.com>
Subject: Re: [PATCH v4 30/31] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios
Date: Sat, 7 Feb 2026 14:25:44 -0800
Message-ID: <aYe1R2MMcXbPVYUW@linux.dev>
In-Reply-To: <9e332cc8436b6092dd6ef9c2d5f69072bb38eaf6.1770279888.git.zhengqi.arch@bytedance.com>
On Thu, Feb 05, 2026 at 05:01:49PM +0800, Qi Zheng wrote:
> From: Muchun Song <songmuchun@bytedance.com>
>
> Now that everything is set up, switch folio->memcg_data pointers to
> objcgs, update the accessors, and execute reparenting on cgroup death.
>
> Finally, folio->memcg_data of LRU folios and kmem folios will always
> point to an object cgroup pointer. The folio->memcg_data of slab
> folios will point to a vector of object cgroups.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>
[...]
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e7d4e4ff411b6..0e0efaa511d3d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -247,11 +247,25 @@ static inline void reparent_state_local(struct mem_cgroup *memcg, struct mem_cgr
>
> static inline void reparent_locks(struct mem_cgroup *memcg, struct mem_cgroup *parent)
> {
> + int nid, nest = 0;
> +
> spin_lock_irq(&objcg_lock);
> + for_each_node(nid) {
> + spin_lock_nested(&mem_cgroup_lruvec(memcg,
> + NODE_DATA(nid))->lru_lock, nest++);
> + spin_lock_nested(&mem_cgroup_lruvec(parent,
> + NODE_DATA(nid))->lru_lock, nest++);
Is there a reason to acquire the locks for all the nodes together? Why
not do the for_each_node(nid) loop in memcg_reparent_objcgs() and
reparent the LRUs one node at a time, taking and releasing the locks
individually? The lock for the offlining memcg might not be contended,
but the parent's lock might be if a lot of memory is being reparented.
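
Something like the below is what I have in mind (just a sketch;
memcg_reparent_lrus() and lru_reparent_memcg_node() are hypothetical
names for a per-node split of lru_reparent_memcg(), and the MGLRU path
would need the same treatment):

	static void memcg_reparent_lrus(struct mem_cgroup *memcg,
					struct mem_cgroup *parent)
	{
		int nid;

		for_each_node(nid) {
			struct lruvec *src = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
			struct lruvec *dst = mem_cgroup_lruvec(parent, NODE_DATA(nid));

			/* lock only this node's child and parent lruvecs */
			spin_lock_irq(&src->lru_lock);
			spin_lock_nested(&dst->lru_lock, SINGLE_DEPTH_NESTING);

			lru_reparent_memcg_node(memcg, parent, nid);

			spin_unlock(&dst->lru_lock);
			spin_unlock_irq(&src->lru_lock);
		}
	}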
> + }
> }
>
> static inline void reparent_unlocks(struct mem_cgroup *memcg, struct mem_cgroup *parent)
> {
> + int nid;
> +
> + for_each_node(nid) {
> + spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(nid))->lru_lock);
> + spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(nid))->lru_lock);
> + }
> spin_unlock_irq(&objcg_lock);
> }
>
> @@ -260,12 +274,28 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
> struct obj_cgroup *objcg;
> struct mem_cgroup *parent = parent_mem_cgroup(memcg);
>
> +retry:
> + if (lru_gen_enabled())
> + max_lru_gen_memcg(parent);
> +
> reparent_locks(memcg, parent);
> + if (lru_gen_enabled()) {
> + if (!recheck_lru_gen_max_memcg(parent)) {
> + reparent_unlocks(memcg, parent);
> + cond_resched();
> + goto retry;
> + }
> + lru_gen_reparent_memcg(memcg, parent);
> + } else {
> + lru_reparent_memcg(memcg, parent);
> + }
>
> objcg = __memcg_reparent_objcgs(memcg, parent);
The above does not need the lru locks, and with the per-node
refactoring it would naturally end up outside of them.
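
E.g. (sketch only; memcg_reparent_lrus() is the hypothetical per-node
helper from above, and I am ignoring the MGLRU max-gen rechecking):

	/* takes and drops the per-node lru_locks internally */
	memcg_reparent_lrus(memcg, parent);

	spin_lock_irq(&objcg_lock);
	objcg = __memcg_reparent_objcgs(memcg, parent);
	spin_unlock_irq(&objcg_lock);

	reparent_state_local(memcg, parent);

	percpu_ref_kill(&objcg->refcnt);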
>
> reparent_unlocks(memcg, parent);
>
> + reparent_state_local(memcg, parent);
> +
> percpu_ref_kill(&objcg->refcnt);
> }
>
>
[...]
> static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
> gfp_t gfp)
> {
> - int ret;
> -
> - ret = try_charge(memcg, gfp, folio_nr_pages(folio));
> - if (ret)
> - goto out;
> + int ret = 0;
> + struct obj_cgroup *objcg;
>
> - css_get(&memcg->css);
> - commit_charge(folio, memcg);
> + objcg = get_obj_cgroup_from_memcg(memcg);
> + /* Do not account at the root objcg level. */
> + if (!obj_cgroup_is_root(objcg))
> + ret = try_charge(memcg, gfp, folio_nr_pages(folio));
Use try_charge_memcg() directly here; that would remove the last user
of try_charge(), so try_charge() can then be removed completely.
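
IOW something like this (sketch, keeping the objcg handling above as
is):

	if (!obj_cgroup_is_root(objcg))
		ret = try_charge_memcg(memcg, gfp, folio_nr_pages(folio));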
> + if (ret) {
> + obj_cgroup_put(objcg);
> + return ret;
> + }
> + commit_charge(folio, objcg);
> memcg1_commit_charge(folio, memcg);
> -out:
> +
> return ret;
> }
>