From: Haifeng Xu <haifeng.xu@shopee.com>
To: Matthew Wilcox <willy@infradead.org>, Vlastimil Babka <vbabka@suse.cz>
Cc: "Christoph Lameter (Ampere)" <cl@linux.com>,
penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, roman.gushchin@linux.dev,
42.hyeyoo@gmail.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] slub: Clear __GFP_COMP flag when allocating 0 order page
Date: Fri, 12 Apr 2024 22:14:39 +0800
Message-ID: <2031ea8c-29a0-4514-b042-7c0eabb4f443@shopee.com>
In-Reply-To: <ZhkmcNwOKktO3pxT@casper.infradead.org>
On 2024/4/12 20:17, Matthew Wilcox wrote:
> On Fri, Apr 12, 2024 at 10:01:29AM +0200, Vlastimil Babka wrote:
>> On 4/11/24 6:51 PM, Christoph Lameter (Ampere) wrote:
>>> On Thu, 11 Apr 2024, Haifeng Xu wrote:
>>>
>>>> @@ -1875,6 +1875,13 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>>>>  	struct slab *slab;
>>>>  	unsigned int order = oo_order(oo);
>>>>
>>>> +	/*
>>>> +	 * If fallback to the minimum order allocation and the order is 0,
>>>> +	 * clear the __GFP_COMP flag.
>>>> +	 */
>>>> +	if (order == 0)
>>>> +		flags = flags & ~__GFP_COMP;
>>>
>>>
>>> This would be better placed in allocate_slab() when the need for a
>>> fallback to a lower order is detected after the first call to alloc_slab_page().
>>
>> Yeah. Although I don't really see the harm of __GFP_COMP with order-0 in the
>> first place, if the only issue is that the error output might be confusing.
>> I'd also hope we should eventually get rid of those odd non-__GFP_COMP
>> high-order allocations and then can remove the flag.
>
> The patch seems pointless to me. I wouldn't clear the flag. If
> somebody finds it confusing, that's really just their expectations being
> wrong. folio_alloc() sets __GFP_COMP on all allocations, whether or not
> they're order 0.
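
That matches my reading of mm/page_alloc.c: for an order-0 request __GFP_COMP is
simply ignored, since compound-page setup only happens for order > 0. Roughly
(paraphrasing the prep_new_page() path, not the exact upstream code):

	/* compound-page setup is skipped at order 0, so the flag is a no-op there */
	if (order && (gfp_flags & __GFP_COMP))
		prep_compound_page(page, order);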
If we don't care about the warnings at all, then allocations of any order (including
order 0) can set __GFP_COMP when creating a new slab, just like folio_alloc() does.
In that case there is no need to check the order in calculate_sizes(), and we can set
__GFP_COMP in the kmem_cache's allocflags by default, as in the diff below.

diff --git a/mm/slub.c b/mm/slub.c
index e7bf1a1a31a8..49a3ebefab86 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4461,9 +4461,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	if ((int)order < 0)
 		return 0;
 
-	s->allocflags = 0;
-	if (order)
-		s->allocflags |= __GFP_COMP;
+	s->allocflags = __GFP_COMP;
 
 	if (s->flags & SLAB_CACHE_DMA)
 		s->allocflags |= GFP_DMA;
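
For comparison, Christoph's suggestion above would instead keep the flag conditional
and only drop it on the fallback path in allocate_slab(); a rough sketch of that
variant (the exact surrounding code in allocate_slab() may differ):

	/*
	 * Fallback to the minimum order. If that order is 0, a compound
	 * page makes no sense, so drop __GFP_COMP before retrying.
	 */
	oo = s->min;
	alloc_gfp = flags;
	if (oo_order(oo) == 0)
		alloc_gfp &= ~__GFP_COMP;
	slab = alloc_slab_page(alloc_gfp, node, oo);

Functionally both should end up equivalent, since the page allocator ignores
__GFP_COMP at order 0; the difference is only which place carries the special case.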