Date: Thu, 7 Dec 2023 09:43:04 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver,
	Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, cgroups@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 13/21] mm/slab: move pre/post-alloc hooks from slab.h to slub.c
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
 <20231120-slab-remove-slab-v2-13-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-13-9c9c70177183@suse.cz>

On Mon, Nov 20, 2023 at 07:34:24PM +0100, Vlastimil Babka wrote:
> We don't share the hooks between two slab implementations anymore so
> they can be moved away from the header. As part of the move, also move
> should_failslab() from slab_common.c as the pre_alloc hook uses it.
> This means slab.h can stop including fault-inject.h and kmemleak.h.
> Fix up some files that were depending on the includes transitively.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  mm/kasan/report.c |  1 +
>  mm/memcontrol.c   |  1 +
>  mm/slab.h         | 72 -------------------------------------------
>  mm/slab_common.c  |  8 +-----
>  mm/slub.c         | 81 ++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 84 insertions(+), 79 deletions(-)
> 
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index e77facb62900..011f727bfaff 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -23,6 +23,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 947fb50eba31..8a0603517065 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -64,6 +64,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "internal.h"
>  #include
>  #include
> diff --git a/mm/slab.h b/mm/slab.h
> index 1ac3a2f8d4c0..65ebf86b3fe9 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -9,8 +9,6 @@
>  #include
>  #include
>  #include
> -#include <linux/fault-inject.h>
> -#include <linux/kmemleak.h>
>  #include
>  #include
>  
> @@ -796,76 +794,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>  	return s->size;
>  }
>  
> -static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> -						     struct list_lru *lru,
> -						     struct obj_cgroup **objcgp,
> -						     size_t size, gfp_t flags)
> -{
> -	flags &= gfp_allowed_mask;
> -
> -	might_alloc(flags);
> -
> -	if (should_failslab(s, flags))
> -		return NULL;
> -
> -	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
> -		return NULL;
> -
> -	return s;
> -}
> -
> -static inline void slab_post_alloc_hook(struct kmem_cache *s,
> -					struct obj_cgroup *objcg, gfp_t flags,
> -					size_t size, void **p, bool init,
> -					unsigned int orig_size)
> -{
> -	unsigned int zero_size = s->object_size;
> -	bool kasan_init = init;
> -	size_t i;
> -
> -	flags &= gfp_allowed_mask;
> -
> -	/*
> -	 * For kmalloc object, the allocated memory size(object_size) is likely
> -	 * larger than the requested size(orig_size). If redzone check is
> -	 * enabled for the extra space, don't zero it, as it will be redzoned
> -	 * soon. The redzone operation for this extra space could be seen as a
> -	 * replacement of current poisoning under certain debug option, and
> -	 * won't break other sanity checks.
> -	 */
> -	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
> -	    (s->flags & SLAB_KMALLOC))
> -		zero_size = orig_size;
> -
> -	/*
> -	 * When slub_debug is enabled, avoid memory initialization integrated
> -	 * into KASAN and instead zero out the memory via the memset below with
> -	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
> -	 * cause false-positive reports. This does not lead to a performance
> -	 * penalty on production builds, as slub_debug is not intended to be
> -	 * enabled there.
> -	 */
> -	if (__slub_debug_enabled())
> -		kasan_init = false;
> -
> -	/*
> -	 * As memory initialization might be integrated into KASAN,
> -	 * kasan_slab_alloc and initialization memset must be
> -	 * kept together to avoid discrepancies in behavior.
> -	 *
> -	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> -	 */
> -	for (i = 0; i < size; i++) {
> -		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
> -		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
> -			memset(p[i], 0, zero_size);
> -		kmemleak_alloc_recursive(p[i], s->object_size, 1,
> -					 s->flags, flags);
> -		kmsan_slab_alloc(s, p[i], flags);
> -	}
> -
> -	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> -}
> 
>  /*
>   * The slab lists for all objects.
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 63b8411db7ce..bbc2e3f061f1 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -21,6 +21,7 @@
>  #include
>  #include
>  #include
> +#include <linux/kmemleak.h>
>  #include
>  #include
>  #include
> @@ -1470,10 +1471,3 @@ EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
>  EXPORT_TRACEPOINT_SYMBOL(kfree);
>  EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
>  
> -int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> -{
> -	if (__should_failslab(s, gfpflags))
> -		return -ENOMEM;
> -	return 0;
> -}
> -ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> diff --git a/mm/slub.c b/mm/slub.c
> index 979932d046fd..9eb6508152c2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -34,6 +34,7 @@
>  #include
>  #include
>  #include
> +#include <linux/kmemleak.h>
>  #include
>  #include
>  #include
> @@ -3494,6 +3495,86 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  			0, sizeof(void *));
>  }
>  
> +noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> +{
> +	if (__should_failslab(s, gfpflags))
> +		return -ENOMEM;
> +	return 0;
> +}
> +ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> +
> +static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> +						     struct list_lru *lru,
> +						     struct obj_cgroup **objcgp,
> +						     size_t size, gfp_t flags)
> +{
> +	flags &= gfp_allowed_mask;
> +
> +	might_alloc(flags);
> +
> +	if (should_failslab(s, flags))
> +		return NULL;
> +
> +	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
> +		return NULL;
> +
> +	return s;
> +}
> +
> +static inline void slab_post_alloc_hook(struct kmem_cache *s,
> +					struct obj_cgroup *objcg, gfp_t flags,
> +					size_t size, void **p, bool init,
> +					unsigned int orig_size)
> +{
> +	unsigned int zero_size = s->object_size;
> +	bool kasan_init = init;
> +	size_t i;
> +
> +	flags &= gfp_allowed_mask;
> +
> +	/*
> +	 * For kmalloc object, the allocated memory size(object_size) is likely
> +	 * larger than the requested size(orig_size). If redzone check is
> +	 * enabled for the extra space, don't zero it, as it will be redzoned
> +	 * soon. The redzone operation for this extra space could be seen as a
> +	 * replacement of current poisoning under certain debug option, and
> +	 * won't break other sanity checks.
> +	 */
> +	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
> +	    (s->flags & SLAB_KMALLOC))
> +		zero_size = orig_size;
> +
> +	/*
> +	 * When slub_debug is enabled, avoid memory initialization integrated
> +	 * into KASAN and instead zero out the memory via the memset below with
> +	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
> +	 * cause false-positive reports. This does not lead to a performance
> +	 * penalty on production builds, as slub_debug is not intended to be
> +	 * enabled there.
> +	 */
> +	if (__slub_debug_enabled())
> +		kasan_init = false;
> +
> +	/*
> +	 * As memory initialization might be integrated into KASAN,
> +	 * kasan_slab_alloc and initialization memset must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 *
> +	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> +	 */
> +	for (i = 0; i < size; i++) {
> +		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
> +		if (p[i] && init && (!kasan_init ||
> +				     !kasan_has_integrated_init()))
> +			memset(p[i], 0, zero_size);
> +		kmemleak_alloc_recursive(p[i], s->object_size, 1,
> +					 s->flags, flags);
> +		kmsan_slab_alloc(s, p[i], flags);
> +	}
> +
> +	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> +}
> +
>  /*
>   * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
>   * have the fastpath folded into their functions. So no function call
> 
> -- 

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 2.42.1
> 
> 
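As a side note for anyone following along: after this patch both hooks are
private to slub.c, but they still bracket the allocation fastpath exactly as
before. A simplified sketch of the caller, from my reading of
slab_alloc_node() in slub.c (not part of this patch; the kfence path and
freepointer wiping are elided):

	static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s,
			struct list_lru *lru, gfp_t gfpflags, int node,
			unsigned long addr, size_t orig_size)
	{
		void *object;
		struct obj_cgroup *objcg = NULL;
		bool init = false;

		/* can fail the allocation: failslab and memcg charging */
		s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags);
		if (unlikely(!s))
			return NULL;

		object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
		init = slab_want_init_on_alloc(gfpflags, s);

		/* zeroing, KASAN/KMSAN/kmemleak hooks, memcg accounting */
		slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init,
				     orig_size);
		return object;
	}

So the move doesn't change any call path; it only stops exposing the hooks
(and should_failslab(), which remains error-injectable via
ALLOW_ERROR_INJECTION) to code outside slub.c.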