From: Vlastimil Babka <vbabka@suse.cz>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
linux-mm@kvack.org, Harry Yoo <harry.yoo@oracle.com>
Subject: Re: [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
Date: Mon, 9 Jun 2025 15:12:34 +0200 [thread overview]
Message-ID: <2a40e6cc-c75a-4f0a-943c-5b81456186f7@suse.cz> (raw)
In-Reply-To: <20250606222214.1395799-3-willy@infradead.org>
On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> Slab has its own reasons for using flag bits; they aren't just
> the page bits. Maybe this won't be the ultimate solution, but
> we should be clear that these bits are in use.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> mm/slab.h | 16 ++++++++++++++--
> mm/slub.c | 6 +++---
> 2 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 05a21dc796e0..a25f12244b6c 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -50,7 +50,7 @@ typedef union {
>
> /* Reuses the bits in struct page */
> struct slab {
> - unsigned long __page_flags;
> + unsigned long flags;
>
> struct kmem_cache *slab_cache;
> union {
> @@ -99,7 +99,7 @@ struct slab {
>
> #define SLAB_MATCH(pg, sl) \
> static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> -SLAB_MATCH(flags, __page_flags);
> +SLAB_MATCH(flags, flags);
> SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */
> SLAB_MATCH(_refcount, __page_refcount);
> #ifdef CONFIG_MEMCG
> @@ -113,6 +113,18 @@ static_assert(sizeof(struct slab) <= sizeof(struct page));
> static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
> #endif
>
> +/**
> + * enum slab_flags - How the slab flags bits are used.
> + * @SL_locked: Is locked with slab_lock()
> + *
> + * The slab flags share space with the page flags but some bits have
> + * different interpretations. The high bits are used for information
> + * like zone/node/section.
> + */
> +enum slab_flags {
> + SL_locked,
Given how in patch 3 you do SL_partial = PG_workingset, we could just use
PG_locked here too, as a flag that's known to be safe to reuse. I've read
your discussion with Harry, but I'd simply do that for now?
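E.g. something like this (just a sketch to illustrate the idea, not tested):

enum slab_flags {
	SL_locked = PG_locked,	/* bit_spin_lock() bit used by slab_lock(), known safe to reuse */
};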
Also, I think the whole enum could be moved to mm/slub.c? Patch 4 adds
SL_pfmemalloc but also moves all its uses to mm/slub.c.
> +};
> +
> /**
> * folio_slab - Converts from folio to slab.
> * @folio: The folio.
> diff --git a/mm/slub.c b/mm/slub.c
> index 31e11ef256f9..e9cbacee406d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -639,12 +639,12 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
> */
> static __always_inline void slab_lock(struct slab *slab)
> {
> - bit_spin_lock(PG_locked, &slab->__page_flags);
> + bit_spin_lock(SL_locked, &slab->flags);
> }
>
> static __always_inline void slab_unlock(struct slab *slab)
> {
> - bit_spin_unlock(PG_locked, &slab->__page_flags);
> + bit_spin_unlock(SL_locked, &slab->flags);
> }
>
> static inline bool
> @@ -1010,7 +1010,7 @@ static void print_slab_info(const struct slab *slab)
> {
> pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
> slab, slab->objects, slab->inuse, slab->freelist,
> - &slab->__page_flags);
> + &slab->flags);
> }
>
> void skip_orig_size_check(struct kmem_cache *s, const void *object)