From: Marco Elver
Date: Wed, 17 Nov 2021 08:00:00 +0100
References: <20211116001628.24216-1-vbabka@suse.cz> <20211116001628.24216-31-vbabka@suse.cz>
In-Reply-To: <20211116001628.24216-31-vbabka@suse.cz>
Subject: Re: [RFC PATCH 30/32] mm/sl*b: Differentiate struct slab fields by sl*b implementations
To: Vlastimil Babka
Cc: Matthew Wilcox, linux-mm@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, Alexander Potapenko, Dmitry Vyukov, kasan-dev@googlegroups.com

On Tue, 16 Nov 2021 at 01:16, Vlastimil Babka wrote:
> With a struct slab definition separate from struct page, we can go further and
> define only the fields that the chosen sl*b implementation uses. This means
> everything between the __page_flags and __page_refcount placeholders now
> depends on the chosen CONFIG_SL*B. Some fields exist in all implementations
> (slab_list), but can be part of a union in some of them, so it's simpler to
> repeat them than to complicate the definition with even more ifdefs.
>
> The patch doesn't change the physical offsets of the fields, although that
> could be done later - for example, it's now clear that tighter packing in
> SLOB would be possible.
>
> This should also prevent accidental use of fields that don't exist in a given
> implementation. Before this patch, virt_to_cache() and cache_from_obj() were
> visible to SLOB (albeit unused), although they rely on the slab_cache field
> that SLOB never sets. With this patch that is now a compile error, so these
> functions are hidden behind #ifndef CONFIG_SLOB.
>
> Signed-off-by: Vlastimil Babka
> Cc: Alexander Potapenko (maintainer:KFENCE)
> Cc: Marco Elver (maintainer:KFENCE)
> Cc: Dmitry Vyukov (reviewer:KFENCE)
> Cc:

Ran kfence_test with both slab and slub, and all tests pass:

Tested-by: Marco Elver

> ---
>  mm/kfence/core.c |  9 +++++----
>  mm/slab.h        | 46 ++++++++++++++++++++++++++++++++++++----------
>  2 files changed, 41 insertions(+), 14 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 4eb60cf5ff8b..46103a7628a6 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -427,10 +427,11 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>         /* Set required slab fields. */
>         slab = virt_to_slab((void *)meta->addr);
>         slab->slab_cache = cache;
> -       if (IS_ENABLED(CONFIG_SLUB))
> -               slab->objects = 1;
> -       if (IS_ENABLED(CONFIG_SLAB))
> -               slab->s_mem = addr;
> +#if defined(CONFIG_SLUB)
> +       slab->objects = 1;
> +#elif defined (CONFIG_SLAB)
> +       slab->s_mem = addr;
> +#endif
>
>         /* Memory initialization. */
>         for_each_canary(meta, set_canary_byte);
> diff --git a/mm/slab.h b/mm/slab.h
> index 58b65e5e5d49..10a9ee195249 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -8,9 +8,24 @@
>  /* Reuses the bits in struct page */
>  struct slab {
>         unsigned long __page_flags;
> +
> +#if defined(CONFIG_SLAB)
> +
> +       union {
> +               struct list_head slab_list;
> +               struct rcu_head rcu_head;
> +       };
> +       struct kmem_cache *slab_cache;
> +       void *freelist;         /* array of free object indexes */
> +       void * s_mem;           /* first object */
> +       unsigned int active;
> +
> +#elif defined(CONFIG_SLUB)
> +
>         union {
>                 struct list_head slab_list;
> -               struct {        /* Partial pages */
> +               struct rcu_head rcu_head;
> +               struct {
>                         struct slab *next;
>  #ifdef CONFIG_64BIT
>                         int slabs;      /* Nr of slabs left */
> @@ -18,25 +33,32 @@ struct slab {
>                         short int slabs;
>  #endif
>                 };
> -               struct rcu_head rcu_head;
>         };
> -       struct kmem_cache *slab_cache; /* not slob */
> +       struct kmem_cache *slab_cache;
>         /* Double-word boundary */
>         void *freelist;         /* first free object */
>         union {
> -               void *s_mem;    /* slab: first object */
> -               unsigned long counters;         /* SLUB */
> -               struct {                        /* SLUB */
> +               unsigned long counters;
> +               struct {
>                         unsigned inuse:16;
>                         unsigned objects:15;
>                         unsigned frozen:1;
>                 };
>         };
> +       unsigned int __unused;
> +
> +#elif defined(CONFIG_SLOB)
> +
> +       struct list_head slab_list;
> +       void * __unused_1;
> +       void *freelist;         /* first free block */
> +       void * __unused_2;
> +       int units;
> +
> +#else
> +#error "Unexpected slab allocator configured"
> +#endif
>
> -       union {
> -               unsigned int active;            /* SLAB */
> -               int units;                      /* SLOB */
> -       };
>         atomic_t __page_refcount;
>  #ifdef CONFIG_MEMCG
>         unsigned long memcg_data;
> @@ -47,7 +69,9 @@ struct slab {
>         static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
>  SLAB_MATCH(flags, __page_flags);
>  SLAB_MATCH(compound_head, slab_list);  /* Ensure bit 0 is clear */
> +#ifndef CONFIG_SLOB
>  SLAB_MATCH(rcu_head, rcu_head);
> +#endif
>  SLAB_MATCH(_refcount, __page_refcount);
>  #ifdef CONFIG_MEMCG
>  SLAB_MATCH(memcg_data, memcg_data);
> @@ -623,6 +647,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
>
> +#ifndef CONFIG_SLOB
>  static inline struct kmem_cache *virt_to_cache(const void *obj)
>  {
>         struct slab *slab;
> @@ -669,6 +694,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>         print_tracking(cachep, x);
>         return cachep;
>  }
> +#endif /* CONFIG_SLOB */
>
>  static inline size_t slab_ksize(const struct kmem_cache *s)
>  {
> --
> 2.33.1
>
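
One detail worth spelling out for the kfence hunk above: IS_ENABLED() only turns
the untaken branch into dead code, so the compiler still has to parse and
type-check slab->objects and slab->s_mem in every configuration. Once those
fields exist only under their own CONFIG_SL*B, the checks have to become
preprocessor conditionals. A minimal userspace sketch of that constraint - the
CONFIG_*_DEMO macros and struct demo_slab are made up for illustration, not the
kernel's definitions:

/* demo.c: standalone sketch, not kernel code */
#include <stdio.h>

#define CONFIG_SLUB_DEMO 1      /* pretend CONFIG_SLUB=y, CONFIG_SLAB=n */

struct demo_slab {
#if defined(CONFIG_SLUB_DEMO)
        unsigned int objects;   /* exists only in the "SLUB" layout */
#elif defined(CONFIG_SLAB_DEMO)
        void *s_mem;            /* exists only in the "SLAB" layout */
#endif
};

static void set_fields(struct demo_slab *slab, void *addr)
{
        /*
         * With an if (IS_ENABLED(...)) style check, both assignments would
         * still be parsed, so slab->s_mem would have to exist even in the
         * "SLUB" build. The preprocessor form removes the reference to the
         * nonexistent field entirely.
         */
#if defined(CONFIG_SLUB_DEMO)
        slab->objects = 1;
        (void)addr;
#elif defined(CONFIG_SLAB_DEMO)
        slab->s_mem = addr;
#endif
}

int main(void)
{
        struct demo_slab s = { 0 };

        set_fields(&s, NULL);
#if defined(CONFIG_SLUB_DEMO)
        printf("objects = %u\n", s.objects);
#endif
        return 0;
}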
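
Similarly, the SLAB_MATCH() lines the patch wraps in #ifndef CONFIG_SLOB are
compile-time layout checks: struct slab overlays struct page, so every field
that must alias a struct page field is pinned to the same offset with a
static_assert. A reduced sketch of that pattern, using stand-in struct names
rather than the real struct page / struct slab:

/* match_demo.c: standalone sketch of the offsetof/static_assert pattern */
#include <assert.h>
#include <stddef.h>

struct demo_page {                      /* stand-in for struct page */
        unsigned long flags;
        void *compound_head;
        int _refcount;
};

struct demo_slab {                      /* stand-in for struct slab */
        unsigned long __page_flags;
        void *slab_list;
        int __page_refcount;
};

/* Fails the build if the overlaid fields ever drift to different offsets. */
#define SLAB_MATCH(pg, sl)                                              \
        static_assert(offsetof(struct demo_page, pg) ==                 \
                      offsetof(struct demo_slab, sl),                   \
                      #pg " and " #sl " must share an offset")

SLAB_MATCH(flags, __page_flags);
SLAB_MATCH(compound_head, slab_list);
SLAB_MATCH(_refcount, __page_refcount);

int main(void)
{
        return 0;
}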