From: Hao Ge <hao.ge@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>,
Alexei Starovoitov <ast@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Shakeel Butt <shakeel.butt@linux.dev>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
Suren Baghdasaryan <surenb@google.com>
Cc: Harry Yoo <harry.yoo@oracle.com>,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, Hao Ge <gehao@kylinos.cn>
Subject: [PATCH v3] slab: Add check for memcg_data != OBJEXTS_ALLOC_FAIL in folio_memcg_kmem
Date: Tue, 14 Oct 2025 23:27:51 +0800
Message-ID: <20251014152751.499376-1-hao.ge@linux.dev>
From: Hao Ge <gehao@kylinos.cn>

Since OBJEXTS_ALLOC_FAIL and MEMCG_DATA_OBJEXTS currently share the
same bit position, checking folio->memcg_data & MEMCG_DATA_OBJEXTS
alone cannot tell whether memcg_data still points to a slabobj_ext
vector or holds the allocation-failure sentinel.
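To illustrate the overlap, a simplified sketch of the flag values
(abridged, not a verbatim copy; see the definitions in
include/linux/memcontrol.h for the exact layout in a given tree):

	MEMCG_DATA_OBJEXTS = (1UL << 0), /* tags a slabobj_ext vector ptr */
	OBJEXTS_ALLOC_FAIL = (1UL << 0), /* same bit, no pointer bits set */

	/*
	 * "memcg_data & MEMCG_DATA_OBJEXTS" is therefore true for both a
	 * valid vector pointer and the bare failure sentinel; only
	 * "memcg_data == OBJEXTS_ALLOC_FAIL" identifies the failure case.
	 */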

If the obj_exts allocation fails, slab->obj_exts is set to
OBJEXTS_ALLOC_FAIL. When the associated folio is later released, the
VM_BUG_ON_FOLIO() check in folio_memcg_kmem() fires, because it
mistakes the sentinel for a valid folio->memcg_data that was not
cleared before the folio was freed.
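Roughly, the failing sequence looks like this (a simplified sketch,
not the verbatim kernel code; the sentinel store happens in the slab
allocator's obj_exts failure handling):

	/* slabobj_ext vector allocation fails for a slab: */
	slab->obj_exts = OBJEXTS_ALLOC_FAIL;	/* bit 0 set, no pointer */

	/* later, when the slab's backing folio is freed: */
	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
	/* fires: the sentinel also has the MEMCG_DATA_OBJEXTS bit set */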

So let's check for memcg_data != OBJEXTS_ALLOC_FAIL in
folio_memcg_kmem() before testing the MEMCG_DATA_OBJEXTS bit; the
sentinel is stored with no other bits set, so comparing the whole word
against OBJEXTS_ALLOC_FAIL is sufficient to exclude it.
Fixes: 7612833192d5 ("slab: Reuse first bit for OBJEXTS_ALLOC_FAIL")
Suggested-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Hao Ge <gehao@kylinos.cn>
---
v3: - Simplify the solution, per Harry's suggestion in the v1 comments
    - Add Suggested-by: Harry Yoo <harry.yoo@oracle.com>
---
 include/linux/memcontrol.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 873e510d6f8d..7ed15f858dc4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -534,7 +534,9 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
-	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
+	VM_BUG_ON_FOLIO((folio->memcg_data != OBJEXTS_ALLOC_FAIL) &&
+			(folio->memcg_data & MEMCG_DATA_OBJEXTS),
+			folio);
 	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
--
2.25.1