Date: Thu, 7 Dec 2023 10:30:57 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver,
	Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, cgroups@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 18/21] mm/slab: move kmalloc() functions from slab_common.c to slub.c
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
	<20231120-slab-remove-slab-v2-18-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-18-9c9c70177183@suse.cz>

On Mon, Nov 20, 2023 at 07:34:29PM +0100, Vlastimil Babka wrote:
> This will eliminate a call between compilation units through
> __kmem_cache_alloc_node() and allow better inlining of the allocation
> fast path.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  mm/slab.h        |   3 --
>  mm/slab_common.c | 119 ----------------------------------------------------
>  mm/slub.c        | 126 +++++++++++++++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 118 insertions(+), 130 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 7d7cc7af614e..54deeb0428c6 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -416,9 +416,6 @@ kmalloc_slab(size_t size, gfp_t flags, unsigned long caller)
>  	return kmalloc_caches[kmalloc_type(flags, caller)][index];
>  }
>  
> -void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
> -			      int node, size_t orig_size,
> -			      unsigned long caller);
>  gfp_t kmalloc_fix_flags(gfp_t flags);
>  
>  /* Functions provided by the slab allocators */
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 31ade17a7ad9..238293b1dbe1 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -936,50 +936,6 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>  	slab_state = UP;
>  }
>  
> -static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
> -static __always_inline
> -void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
> -{
> -	struct kmem_cache *s;
> -	void *ret;
> -
> -	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
> -		ret = __kmalloc_large_node(size, flags, node);
> -		trace_kmalloc(caller, ret, size,
> -			      PAGE_SIZE << get_order(size), flags, node);
> -		return ret;
> -	}
> -
> -	if (unlikely(!size))
> -		return ZERO_SIZE_PTR;
> -
> -	s = kmalloc_slab(size, flags, caller);
> -
> -	ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
> -	ret = kasan_kmalloc(s, ret, size, flags);
> -	trace_kmalloc(caller, ret, size, s->size, flags, node);
> -	return ret;
> -}
> -
> -void *__kmalloc_node(size_t size, gfp_t flags, int node)
> -{
> -	return __do_kmalloc_node(size, flags, node, _RET_IP_);
> -}
> -EXPORT_SYMBOL(__kmalloc_node);
> -
> -void *__kmalloc(size_t size, gfp_t flags)
> -{
> -	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
> -}
> -EXPORT_SYMBOL(__kmalloc);
> -
> -void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
> -				  int node, unsigned long caller)
> -{
> -	return __do_kmalloc_node(size, flags, node, caller);
> -}
> -EXPORT_SYMBOL(__kmalloc_node_track_caller);
> -
>  /**
>   * __ksize -- Report full size of underlying allocation
>   * @object: pointer to the object
> @@ -1016,30 +972,6 @@ size_t __ksize(const void *object)
>  	return slab_ksize(folio_slab(folio)->slab_cache);
>  }
>  
> -void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
> -{
> -	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
> -					    size, _RET_IP_);
> -
> -	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
> -
> -	ret = kasan_kmalloc(s, ret, size, gfpflags);
> -	return ret;
> -}
> -EXPORT_SYMBOL(kmalloc_trace);
> -
> -void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
> -			 int node, size_t size)
> -{
> -	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
> -
> -	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
> -
> -	ret = kasan_kmalloc(s, ret, size, gfpflags);
> -	return ret;
> -}
> -EXPORT_SYMBOL(kmalloc_node_trace);
> -
>  gfp_t kmalloc_fix_flags(gfp_t flags)
>  {
>  	gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
> @@ -1052,57 +984,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
>  	return flags;
>  }
>  
> -/*
> - * To avoid unnecessary overhead, we pass through large allocation requests
> - * directly to the page allocator. We use __GFP_COMP, because we will need to
> - * know the allocation order to free the pages properly in kfree.
> - */
> -
> -static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
> -{
> -	struct page *page;
> -	void *ptr = NULL;
> -	unsigned int order = get_order(size);
> -
> -	if (unlikely(flags & GFP_SLAB_BUG_MASK))
> -		flags = kmalloc_fix_flags(flags);
> -
> -	flags |= __GFP_COMP;
> -	page = alloc_pages_node(node, flags, order);
> -	if (page) {
> -		ptr = page_address(page);
> -		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> -				      PAGE_SIZE << order);
> -	}
> -
> -	ptr = kasan_kmalloc_large(ptr, size, flags);
> -	/* As ptr might get tagged, call kmemleak hook after KASAN. */
> -	kmemleak_alloc(ptr, size, 1, flags);
> -	kmsan_kmalloc_large(ptr, size, flags);
> -
> -	return ptr;
> -}
> -
> -void *kmalloc_large(size_t size, gfp_t flags)
> -{
> -	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
> -
> -	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
> -		      flags, NUMA_NO_NODE);
> -	return ret;
> -}
> -EXPORT_SYMBOL(kmalloc_large);
> -
> -void *kmalloc_large_node(size_t size, gfp_t flags, int node)
> -{
> -	void *ret = __kmalloc_large_node(size, flags, node);
> -
> -	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
> -		      flags, node);
> -	return ret;
> -}
> -EXPORT_SYMBOL(kmalloc_large_node);
> -
>  #ifdef CONFIG_SLAB_FREELIST_RANDOM
>  /* Randomize a generic freelist */
>  static void freelist_randomize(unsigned int *list,
> diff --git a/mm/slub.c b/mm/slub.c
> index 2baa9e94d9df..d6bc15929d22 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3851,14 +3851,6 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_lru);
>  
> -void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
> -			      int node, size_t orig_size,
> -			      unsigned long caller)
> -{
> -	return slab_alloc_node(s, NULL, gfpflags, node,
> -			       caller, orig_size);
> -}
> -
>  /**
>   * kmem_cache_alloc_node - Allocate an object on the specified node
>   * @s: The cache to allocate from.
> @@ -3882,6 +3874,124 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node);
>  
> +/*
> + * To avoid unnecessary overhead, we pass through large allocation requests
> + * directly to the page allocator. We use __GFP_COMP, because we will need to
> + * know the allocation order to free the pages properly in kfree.
> + */
> +static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
> +{
> +	struct page *page;
> +	void *ptr = NULL;
> +	unsigned int order = get_order(size);
> +
> +	if (unlikely(flags & GFP_SLAB_BUG_MASK))
> +		flags = kmalloc_fix_flags(flags);
> +
> +	flags |= __GFP_COMP;
> +	page = alloc_pages_node(node, flags, order);
> +	if (page) {
> +		ptr = page_address(page);
> +		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> +				      PAGE_SIZE << order);
> +	}
> +
> +	ptr = kasan_kmalloc_large(ptr, size, flags);
> +	/* As ptr might get tagged, call kmemleak hook after KASAN. */
> +	kmemleak_alloc(ptr, size, 1, flags);
> +	kmsan_kmalloc_large(ptr, size, flags);
> +
> +	return ptr;
> +}
> +
> +void *kmalloc_large(size_t size, gfp_t flags)
> +{
> +	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
> +
> +	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
> +		      flags, NUMA_NO_NODE);
> +	return ret;
> +}
> +EXPORT_SYMBOL(kmalloc_large);
> +
> +void *kmalloc_large_node(size_t size, gfp_t flags, int node)
> +{
> +	void *ret = __kmalloc_large_node(size, flags, node);
> +
> +	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
> +		      flags, node);
> +	return ret;
> +}
> +EXPORT_SYMBOL(kmalloc_large_node);
> +
> +static __always_inline
> +void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
> +			unsigned long caller)
> +{
> +	struct kmem_cache *s;
> +	void *ret;
> +
> +	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
> +		ret = __kmalloc_large_node(size, flags, node);
> +		trace_kmalloc(caller, ret, size,
> +			      PAGE_SIZE << get_order(size), flags, node);
> +		return ret;
> +	}
> +
> +	if (unlikely(!size))
> +		return ZERO_SIZE_PTR;
> +
> +	s = kmalloc_slab(size, flags, caller);
> +
> +	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
> +	ret = kasan_kmalloc(s, ret, size, flags);
> +	trace_kmalloc(caller, ret, size, s->size, flags, node);
> +	return ret;
> +}
> +
> +void *__kmalloc_node(size_t size, gfp_t flags, int node)
> +{
> +	return __do_kmalloc_node(size, flags, node, _RET_IP_);
> +}
> +EXPORT_SYMBOL(__kmalloc_node);
> +
> +void *__kmalloc(size_t size, gfp_t flags)
> +{
> +	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
> +}
> +EXPORT_SYMBOL(__kmalloc);
> +
> +void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
> +				  int node, unsigned long caller)
> +{
> +	return __do_kmalloc_node(size, flags, node, caller);
> +}
> +EXPORT_SYMBOL(__kmalloc_node_track_caller);
> +
> +void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
> +{
> +	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE,
> +				    _RET_IP_, size);
> +
> +	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
> +
> +	ret = kasan_kmalloc(s, ret, size, gfpflags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(kmalloc_trace);
> +
> +void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
> +			 int node, size_t size)
> +{
> +	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
> +
> +	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
> +
> +	ret = kasan_kmalloc(s, ret, size, gfpflags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(kmalloc_node_trace);
> +
>  static noinline void free_to_partial_list(
>  				struct kmem_cache *s, struct slab *slab,
>  				void *head, void *tail, int bulk_cnt,
> 
> --

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 2.42.1
> 
> 
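
P.S. for anyone skimming this in the archive: the gain Vlastimil describes
above is purely about translation units. Once __do_kmalloc_node() and the
SLUB fast path live in the same file, the compiler can inline the fast path
into every kmalloc entry point instead of emitting an out-of-line call to
__kmem_cache_alloc_node(). A toy userspace sketch of that pattern follows;
the names mirror the patch, but the bodies and signatures are simplified
stand-ins (malloc() instead of the per-CPU freelist, -1 instead of
NUMA_NO_NODE, plain "static inline" instead of __always_inline), not the
kernel implementation.

	/* Toy illustration only -- NOT kernel code. */
	#include <stddef.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* stand-in for the SLUB fast path; being static in this file is
	 * what makes it visible to the compiler for inlining */
	static inline void *slab_alloc_node(size_t size, int node)
	{
		(void)node;		/* toy version ignores the NUMA node */
		return malloc(size);
	}

	/* stand-in for __do_kmalloc_node(): same translation unit as
	 * slab_alloc_node(), so the call below can be folded in directly --
	 * previously this had to go through an extern helper in another file */
	static inline void *__do_kmalloc_node(size_t size, int node)
	{
		if (size == 0)
			return NULL;	/* the kernel returns ZERO_SIZE_PTR here */
		return slab_alloc_node(size, node);
	}

	/* public entry point, analogous to __kmalloc() */
	void *toy_kmalloc(size_t size)
	{
		return __do_kmalloc_node(size, -1);
	}

	int main(void)
	{
		void *p = toy_kmalloc(32);

		printf("allocated %p\n", p);
		free(p);
		return 0;
	}

Built with something like gcc -O2, the compiler is free to fold
slab_alloc_node() straight into toy_kmalloc(); if the helper sat in a
separate .c file it would stay an out-of-line call unless LTO were used,
which mirrors the cross-file call the patch removes.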