Date: Thu, 11 Aug 2022 06:53:18 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
 Pekka Enberg, Roman Gushchin, linux-mm@kvack.org, Feng Tang
Subject: Re: [PATCH] mm, slub: restrict sysfs validation to debug caches and make it safe
In-Reply-To: <20220809140043.9903-1-vbabka@suse.cz>
References: <20220809140043.9903-1-vbabka@suse.cz>

On Tue, Aug 09, 2022 at 04:00:43PM +0200, Vlastimil Babka wrote:
> Rongwei Wang reports [1] that cache validation triggered by writing to
> /sys/kernel/slab/<cache>/validate is racy against normal cache
> operations (e.g. freeing) in a way that can cause false positive
> inconsistency reports for caches with debugging enabled. The problem is
> that debugging actions that mark object free or active and actual
> freelist operations are not atomic, and the validation can see an
> inconsistent state.
> 
> For caches that do or don't have debugging enabled, additional races
> involving n->nr_slabs are possible that result in false reports of wrong
> slab counts.
> 
> This patch attempts to solve these issues while not adding overhead to
> normal (especially fastpath) operations for caches that do not have
> debugging enabled. Such overhead would not be justified to make possible
> userspace-triggered validation safe.
> Instead, disable the validation for
> caches that don't have debugging enabled and make their sysfs validate
> handler return -EINVAL.
> 
> For caches that do have debugging enabled, we can instead extend the
> existing approach of not using percpu freelists to force all alloc/free
> operations to the slow paths where debugging flags are checked and acted
> upon. There we can adjust the debug-specific paths to increase lock
> coverage against concurrent validation as necessary.
> 
> The processing on free in free_debug_processing() already happens under
> n->list_lock and slab_lock() so we can extend it to actually do the
> freeing as well and thus make it atomic against concurrent validation.
> 
> The processing on alloc in alloc_debug_processing() currently doesn't
> take any locks, but we have to first allocate the object from a slab on
> the partial list (as debugging caches have no percpu slabs) and thus
> take the n->list_lock anyway. Add a function alloc_single_from_partial()
> that additionally takes slab_lock() for the debug processing and then
> grabs just the allocated object instead of the whole freelist. This
> again makes it atomic against validation and it is also ultimately more
> efficient than the current grabbing of freelist immediately followed by
> slab deactivation.
> 
> To prevent races on n->nr_slabs updates, make sure that for caches with
> debugging enabled, inc_slabs_node() or dec_slabs_node() is called under
> n->list_lock. When allocating a new slab for a debug cache, handle the
> allocation by a new function alloc_single_from_new_slab() instead of the
> current forced deactivation path.
> 
> Neither of these changes affects the fast paths at all. The changes in
> slow paths are negligible for non-debug caches.
> 
> The function free_debug_processing() is moved so that it is placed
> later than the definitions of add_partial(), remove_partial() and
> discard_slab(), to avoid a need for forward declarations.
> 
> [1] https://lore.kernel.org/all/20220529081535.69275-1-rongwei.wang@linux.alibaba.com/
> 

I started to wonder... Do we need slab_lock() at all for debugging
caches after this patch?

If SLUB never takes a slab off the partial list (except to move it to
another list or to free it), and you must acquire n->list_lock before
accessing the per-node partial list (to alloc/free objects), why do we
need it?

Of course, it is still needed for architectures that do not support
cmpxchg_double(). But as the code for debugging caches is separated
after this patch, maybe we can simply drop slab_lock() in:

- alloc_debug_processing()
- free_debug_processing()
- alloc_single_from_{new_slab,partial}()
- validate_slab()

And also the relevant comments.

> Reported-by: Rongwei Wang
> Signed-off-by: Vlastimil Babka
> ---
> Changes from RFC:
> - addressed feedback from Hyeonggon Yoo
> - rebased on current mainline
> 
> The plan is to add to slab tree/linux-next after rc1. Please test and
> review.
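For anyone who wants to poke at the sysfs-triggered validation this
changes, a minimal userspace stress sketch along these lines can be
used. It is only illustrative and not part of the patch: the cache path
and the file-churn workload are assumptions, and the target cache needs
debugging enabled (e.g. via the slub_debug boot parameter) for
validation to do anything interesting.

/*
 * Illustrative stress sketch (not from this patch): hammer a cache's
 * sysfs "validate" file while another thread generates alloc/free
 * traffic. The cache path and the open/unlink workload are assumptions.
 * Needs root; check dmesg afterwards for SLUB inconsistency reports.
 *
 * Build: gcc -O2 -pthread -o validate-stress validate-stress.c
 * Usage: ./validate-stress /sys/kernel/slab/dentry/validate
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static const char *validate_path;

/* Repeatedly trigger validation via sysfs. */
static void *validate_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < 10000; i++) {
		int fd = open(validate_path, O_WRONLY);

		if (fd < 0) {
			perror("open validate");
			break;
		}
		if (write(fd, "1", 1) < 0)
			perror("write validate");
		close(fd);
	}
	return NULL;
}

/* Generate slab alloc/free traffic indirectly (dentry/inode churn). */
static void *churn_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		char name[64];
		int fd;

		snprintf(name, sizeof(name), "/tmp/slub-validate-%d", i % 128);
		fd = open(name, O_CREAT | O_WRONLY, 0600);
		if (fd >= 0)
			close(fd);
		unlink(name);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t1, t2;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /sys/kernel/slab/<cache>/validate\n",
			argv[0]);
		return 1;
	}
	validate_path = argv[1];

	pthread_create(&t1, NULL, validate_loop, NULL);
	pthread_create(&t2, NULL, churn_loop, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}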
> 
>  mm/slub.c | 334 ++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 238 insertions(+), 96 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 862dbd9af4f5..6de667bcfe91 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1324,17 +1324,14 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
>  }
>  
>  static noinline int alloc_debug_processing(struct kmem_cache *s,
> -					struct slab *slab,
> -					void *object, unsigned long addr)
> +					struct slab *slab, void *object)
>  {
>  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>  		if (!alloc_consistency_checks(s, slab, object))
>  			goto bad;
>  	}
>  
> -	/* Success perform special debug activities for allocs */
> -	if (s->flags & SLAB_STORE_USER)
> -		set_track(s, object, TRACK_ALLOC, addr);
> +	/* Success. Perform special debug activities for allocs */
>  	trace(s, slab, object, 1);
>  	init_object(s, object, SLUB_RED_ACTIVE);
>  	return 1;
> @@ -1385,63 +1382,6 @@ static inline int free_consistency_checks(struct kmem_cache *s,
>  	return 1;
>  }
>  
> -/* Supports checking bulk free of a constructed freelist */
> -static noinline int free_debug_processing(
> -	struct kmem_cache *s, struct slab *slab,
> -	void *head, void *tail, int bulk_cnt,
> -	unsigned long addr)
> -{
> -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> -	void *object = head;
> -	int cnt = 0;
> -	unsigned long flags, flags2;
> -	int ret = 0;
> -	depot_stack_handle_t handle = 0;
> -
> -	if (s->flags & SLAB_STORE_USER)
> -		handle = set_track_prepare();
> -
> -	spin_lock_irqsave(&n->list_lock, flags);
> -	slab_lock(slab, &flags2);
> -
> -	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> -		if (!check_slab(s, slab))
> -			goto out;
> -	}
> -
> -next_object:
> -	cnt++;
> -
> -	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> -		if (!free_consistency_checks(s, slab, object, addr))
> -			goto out;
> -	}
> -
> -	if (s->flags & SLAB_STORE_USER)
> -		set_track_update(s, object, TRACK_FREE, addr, handle);
> -	trace(s, slab, object, 0);
> -	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
> -	init_object(s, object, SLUB_RED_INACTIVE);
> -
> -	/* Reached end of constructed freelist yet? */
> -	if (object != tail) {
> -		object = get_freepointer(s, object);
> -		goto next_object;
> -	}
> -	ret = 1;
> -
> -out:
> -	if (cnt != bulk_cnt)
> -		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
> -			 bulk_cnt, cnt);
> -
> -	slab_unlock(slab, &flags2);
> -	spin_unlock_irqrestore(&n->list_lock, flags);
> -	if (!ret)
> -		slab_fix(s, "Object at 0x%p not freed", object);
> -	return ret;
> -}
> -
>  /*
>   * Parse a block of slub_debug options. Blocks are delimited by ';'
>   *
> @@ -1661,9 +1601,9 @@ static inline
>  void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {}
>  
>  static inline int alloc_debug_processing(struct kmem_cache *s,
> -	struct slab *slab, void *object, unsigned long addr) { return 0; }
> +	struct slab *slab, void *object) { return 0; }
>  
> -static inline int free_debug_processing(
> +static inline void free_debug_processing(
>  	struct kmem_cache *s, struct slab *slab,
>  	void *head, void *tail, int bulk_cnt,
>  	unsigned long addr) { return 0; }
> @@ -1671,6 +1611,8 @@ static inline int free_debug_processing(
>  static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {}
>  static inline int check_object(struct kmem_cache *s, struct slab *slab,
>  			void *object, u8 val) { return 1; }
> +static inline void set_track(struct kmem_cache *s, void *object,
> +			     enum track_item alloc, unsigned long addr) {}
>  static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
>  					struct slab *slab) {}
>  static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
> @@ -1976,11 +1918,13 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  		 */
>  		slab = alloc_slab_page(alloc_gfp, node, oo);
>  		if (unlikely(!slab))
> -			goto out;
> +			return NULL;
>  		stat(s, ORDER_FALLBACK);
>  	}
>  
>  	slab->objects = oo_objects(oo);
> +	slab->inuse = 0;
> +	slab->frozen = 0;
>  
>  	account_slab(slab, oo_order(oo), s, flags);
>  
> @@ -2007,15 +1951,6 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  			set_freepointer(s, p, NULL);
>  	}
>  
> -	slab->inuse = slab->objects;
> -	slab->frozen = 1;
> -
> -out:
> -	if (!slab)
> -		return NULL;
> -
> -	inc_slabs_node(s, slab_nid(slab), slab->objects);
> -
>  	return slab;
>  }
>  
> @@ -2102,6 +2037,86 @@ static inline void remove_partial(struct kmem_cache_node *n,
>  	n->nr_partial--;
>  }
>  
> +/*
> + * Called only for kmem_cache_debug() caches instead of acquire_slab(), with a
> + * slab from the n->partial list. Removes only a single object from the slab
> + * under slab_lock(), does the alloc_debug_processing() checks and leaves the
> + * slab on the list, or moves it to full list if it was the last object.
> + */
> +static void *alloc_single_from_partial(struct kmem_cache *s,
> +		struct kmem_cache_node *n, struct slab *slab)
> +{
> +	void *object;
> +	unsigned long flags;
> +
> +	lockdep_assert_held(&n->list_lock);
> +
> +	slab_lock(slab, &flags);
> +
> +	object = slab->freelist;
> +	slab->freelist = get_freepointer(s, object);
> +	slab->inuse++;
> +
> +	if (!alloc_debug_processing(s, slab, object)) {
> +		slab_unlock(slab, &flags);
> +		remove_partial(n, slab);
> +		return NULL;
> +	}
> +
> +	slab_unlock(slab, &flags);
> +
> +	if (slab->inuse == slab->objects) {
> +		remove_partial(n, slab);
> +		add_full(s, n, slab);
> +	}
> +
> +	return object;
> +}
> +
> +/*
> + * Called only for kmem_cache_debug() caches to allocate from a freshly
> + * allocated slab. Allocates a single object instead of whole freelist
> + * and puts the slab to the partial (or full) list.
> + */
> +static void *alloc_single_from_new_slab(struct kmem_cache *s,
> +					struct slab *slab)
> +{
> +	int nid = slab_nid(slab);
> +	struct kmem_cache_node *n = get_node(s, nid);
> +	unsigned long flags, flags2;
> +	void *object;
> +
> +	spin_lock_irqsave(&n->list_lock, flags);
> +	slab_lock(slab, &flags2);
> +
> +	object = slab->freelist;
> +	slab->freelist = get_freepointer(s, object);
> +	slab->inuse = 1;
> +
> +	if (!alloc_debug_processing(s, slab, object)) {
> +		/*
> +		 * It's not really expected that this would fail on a
> +		 * freshly allocated slab, but a concurrent memory
> +		 * corruption in theory could cause that.
> +		 */
> +		slab_unlock(slab, &flags2);
> +		spin_unlock_irqrestore(&n->list_lock, flags);
> +		return NULL;
> +	}
> +
> +	slab_unlock(slab, &flags2);
> +
> +	if (slab->inuse == slab->objects)
> +		add_full(s, n, slab);
> +	else
> +		add_partial(n, slab, DEACTIVATE_TO_HEAD);
> +
> +	inc_slabs_node(s, nid, slab->objects);
> +	spin_unlock_irqrestore(&n->list_lock, flags);
> +
> +	return object;
> +}
> +
>  /*
>   * Remove slab from the partial list, freeze it and
>   * return the pointer to the freelist.
> @@ -2182,6 +2197,13 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>  		if (!pfmemalloc_match(slab, gfpflags))
>  			continue;
>  
> +		if (kmem_cache_debug(s)) {
> +			object = alloc_single_from_partial(s, n, slab);
> +			if (object)
> +				break;
> +			continue;
> +		}
> +
>  		t = acquire_slab(s, n, slab, object == NULL);
>  		if (!t)
>  			break;
> @@ -2788,6 +2810,114 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
>  {
>  	return atomic_long_read(&n->total_objects);
>  }
> +
> +/* Supports checking bulk free of a constructed freelist */
> +static noinline void free_debug_processing(
> +	struct kmem_cache *s, struct slab *slab,
> +	void *head, void *tail, int bulk_cnt,
> +	unsigned long addr)
> +{
> +	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> +	struct slab *slab_free = NULL;
> +	void *object = head;
> +	int cnt = 0;
> +	unsigned long flags, flags2;
> +	bool checks_ok = false;
> +	depot_stack_handle_t handle = 0;
> +
> +	if (s->flags & SLAB_STORE_USER)
> +		handle = set_track_prepare();
> +
> +	spin_lock_irqsave(&n->list_lock, flags);
> +	slab_lock(slab, &flags2);
> +
> +	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> +		if (!check_slab(s, slab))
> +			goto out;
> +	}
> +
> +	if (slab->inuse < bulk_cnt) {
> +		slab_err(s, slab, "Slab has %d allocated objects but %d are to be freed\n",
> +			 slab->inuse, bulk_cnt);
> +		goto out;
> +	}
> +
> +next_object:
> +
> +	if (++cnt > bulk_cnt)
> +		goto out_cnt;
> +
> +	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> +		if (!free_consistency_checks(s, slab, object, addr))
> +			goto out;
> +	}
> +
> +	if (s->flags & SLAB_STORE_USER)
> +		set_track_update(s, object, TRACK_FREE, addr, handle);
> +	trace(s, slab, object, 0);
> +	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
> +	init_object(s, object, SLUB_RED_INACTIVE);
> +
> +	/* Reached end of constructed freelist yet? */
> +	if (object != tail) {
> +		object = get_freepointer(s, object);
> +		goto next_object;
> +	}
> +	checks_ok = true;
> +
> +out_cnt:
> +	if (cnt != bulk_cnt)
> +		slab_err(s, slab, "Bulk free expected %d objects but found %d\n",
> +			 bulk_cnt, cnt);
> +
> +out:
> +	if (checks_ok) {
> +		void *prior = slab->freelist;
> +
> +		/* Perform the actual freeing while we still hold the locks */
> +		slab->inuse -= cnt;
> +		set_freepointer(s, tail, prior);
> +		slab->freelist = head;
> +
> +		slab_unlock(slab, &flags2);
> +
> +		/* Do we need to remove the slab from full or partial list? */
> +		if (!prior) {
> +			remove_full(s, n, slab);
> +		} else if (slab->inuse == 0) {
> +			remove_partial(n, slab);
> +			stat(s, FREE_REMOVE_PARTIAL);
> +		}
> +
> +		/* Do we need to discard the slab or add to partial list? */
> +		if (slab->inuse == 0) {
> +			slab_free = slab;
> +		} else if (!prior) {
> +			add_partial(n, slab, DEACTIVATE_TO_TAIL);
> +			stat(s, FREE_ADD_PARTIAL);
> +		}
> +	} else {
> +		slab_unlock(slab, &flags2);
> +	}
> +
> +	if (slab_free) {
> +		/*
> +		 * Update the counters while still holding n->list_lock to
> +		 * prevent spurious validation warnings
> +		 */
> +		dec_slabs_node(s, slab_nid(slab_free), slab_free->objects);
> +	}
> +
> +	spin_unlock_irqrestore(&n->list_lock, flags);
> +
> +	if (!checks_ok)
> +		slab_fix(s, "Object at 0x%p not freed", object);
> +
> +	if (slab_free) {
> +		stat(s, FREE_SLAB);
> +		free_slab(s, slab_free);
> +	}
> +}
>  #endif /* CONFIG_SLUB_DEBUG */
>  
>  #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
> @@ -3036,36 +3166,52 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  		return NULL;
>  	}
>  
> +	stat(s, ALLOC_SLAB);
> +
> +	if (kmem_cache_debug(s)) {
> +		freelist = alloc_single_from_new_slab(s, slab);
> +
> +		if (unlikely(!freelist))
> +			goto new_objects;
> +
> +		if (s->flags & SLAB_STORE_USER)
> +			set_track(s, freelist, TRACK_ALLOC, addr);
> +
> +		return freelist;
> +	}
> +
>  	/*
>  	 * No other reference to the slab yet so we can
>  	 * muck around with it freely without cmpxchg
>  	 */
>  	freelist = slab->freelist;
>  	slab->freelist = NULL;
> +	slab->inuse = slab->objects;
> +	slab->frozen = 1;
>  
> -	stat(s, ALLOC_SLAB);
> +	inc_slabs_node(s, slab_nid(slab), slab->objects);
>  
>  check_new_slab:
>  
>  	if (kmem_cache_debug(s)) {
> -		if (!alloc_debug_processing(s, slab, freelist, addr)) {
> -			/* Slab failed checks. Next slab needed */
> -			goto new_slab;
> -		} else {
> -			/*
> -			 * For debug case, we don't load freelist so that all
> -			 * allocations go through alloc_debug_processing()
> -			 */
> -			goto return_single;
> -		}
> +		/*
> +		 * For debug caches here we had to go through
> +		 * alloc_single_from_partial() so just store the tracking info
> +		 * and return the object
> +		 */
> +		if (s->flags & SLAB_STORE_USER)
> +			set_track(s, freelist, TRACK_ALLOC, addr);
> +		return freelist;
>  	}
>  
> -	if (unlikely(!pfmemalloc_match(slab, gfpflags)))
> +	if (unlikely(!pfmemalloc_match(slab, gfpflags))) {
>  		/*
>  		 * For !pfmemalloc_match() case we don't load freelist so that
>  		 * we don't make further mismatched allocations easier.
>  		 */
> -		goto return_single;
> +		deactivate_slab(s, slab, get_freepointer(s, freelist));
> +		return freelist;
> +	}
>  
>  retry_load_slab:
>  
> @@ -3089,11 +3235,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	c->slab = slab;
>  
>  	goto load_freelist;
> -
> -return_single:
> -
> -	deactivate_slab(s, slab, get_freepointer(s, freelist));
> -	return freelist;
>  }
>  
>  /*
> @@ -3341,9 +3482,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	if (kfence_free(head))
>  		return;
>  
> -	if (kmem_cache_debug(s) &&
> -	    !free_debug_processing(s, slab, head, tail, cnt, addr))
> +	if (kmem_cache_debug(s)) {
> +		free_debug_processing(s, slab, head, tail, cnt, addr);
>  		return;
> +	}
>  
>  	do {
>  		if (unlikely(n)) {
> @@ -3936,6 +4078,7 @@ static void early_kmem_cache_node_alloc(int node)
>  	slab = new_slab(kmem_cache_node, GFP_NOWAIT, node);
>  
>  	BUG_ON(!slab);
> +	inc_slabs_node(kmem_cache_node, slab_nid(slab), slab->objects);
>  	if (slab_nid(slab) != node) {
>  		pr_err("SLUB: Unable to allocate memory from node %d\n", node);
>  		pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n");
> @@ -3950,7 +4093,6 @@ static void early_kmem_cache_node_alloc(int node)
>  	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
>  	slab->freelist = get_freepointer(kmem_cache_node, n);
>  	slab->inuse = 1;
> -	slab->frozen = 0;
>  	kmem_cache_node->node[node] = n;
>  	init_kmem_cache_node(n);
>  	inc_slabs_node(kmem_cache_node, node, slab->objects);
> @@ -5601,7 +5743,7 @@ static ssize_t validate_store(struct kmem_cache *s,
>  {
>  	int ret = -EINVAL;
>  
> -	if (buf[0] == '1') {
> +	if (buf[0] == '1' && kmem_cache_debug(s)) {
>  		ret = validate_slab_cache(s);
>  		if (ret >= 0)
>  			ret = length;
> -- 
> 2.37.1
> 
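A quick way to see the new validate_store() behaviour from userspace is
a sketch like the one below (again only illustrative; which caches count
as debug caches depends on the slub_debug configuration). With this
patch, writing '1' to the validate file of a cache without debugging
enabled is expected to fail with EINVAL instead of running an unsafe
validation.

/*
 * Illustrative check of the validate_store() change (not part of the
 * patch): write "1" to a cache's validate file and report whether the
 * kernel rejected it with EINVAL.
 *
 * Build: gcc -O2 -o validate-check validate-check.c
 * Usage: ./validate-check /sys/kernel/slab/<cache>/validate
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /sys/kernel/slab/<cache>/validate\n",
			argv[0]);
		return 1;
	}

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (write(fd, "1", 1) < 0) {
		if (errno == EINVAL)
			printf("validation rejected: not a debug cache\n");
		else
			perror("write");
	} else {
		printf("validation ran; check dmesg for any reports\n");
	}

	close(fd);
	return 0;
}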