From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 12 Jan 2026 16:17:10 +0100
Subject: [PATCH RFC v2 16/20] slab: refill sheaves from all nodes
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260112-sheaves-for-all-v2-16-98225cfb50cf@suse.cz>
References: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
In-Reply-To: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes,
 Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R. Howlett",
 Suren Baghdasaryan, Sebastian Andrzej Siewior, Alexei Starovoitov,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.14.3
__refill_objects() currently only attempts to get partial slabs from the
local node and then allocates new slab(s). Expand it to also try other
nodes while observing the remote node defrag ratio, similarly to
get_any_partial(). This prevents allocating new slabs on a node while
other nodes have many free slabs.

It does mean sheaves will contain non-local objects in that case.
Allocations that care about a specific node will still be served
appropriately, but might get a slowpath allocation.

Like get_any_partial(), we do observe cpuset_zone_allowed(), although we
might be refilling a sheaf that will then be used from a different
allocation context.

We can also use the resulting refill_objects() in
__kmem_cache_alloc_bulk() for non-debug caches. This means
kmem_cache_alloc_bulk() will get better performance when sheaves are
exhausted. kmem_cache_alloc_bulk() cannot indicate a preferred node, so
it's compatible with the sheaves refill preferring the local node. Its
users also have gfp flags that allow spinning, so document that as a
requirement.
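As an aside, the defrag-ratio gate that the remote-node refill reuses
from get_any_partial() can be sketched in user space. The helper name
and the use of rand() in place of get_cycles() are illustrative
assumptions; only the zero-ratio shortcut and the modulo-1024
comparison come from the actual code:

```c
#include <stdlib.h>

/*
 * User-space sketch of the remote-node defrag gate used by
 * __refill_objects_any() (and get_any_partial()). The kernel samples
 * get_cycles(); rand() stands in for it here. A ratio of 0 disables
 * remote refills entirely; larger ratios make them more likely.
 */
static int may_refill_remote(unsigned int ratio)
{
	if (!ratio || (unsigned int)(rand() % 1024) > ratio)
		return 0;	/* stay on the local node this time */
	return 1;
}
```

Since the sample is confined to [0, 1023], any ratio of 1024 or more
makes the gate always pass.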
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 137 ++++++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 106 insertions(+), 31 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 088b4f6f81fa..602674d56ae6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2506,8 +2506,8 @@ static void free_empty_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf)
 }
 
 static unsigned int
-__refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
-		 unsigned int max);
+refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
+	       unsigned int max);
 
 static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf,
 			gfp_t gfp)
@@ -2518,8 +2518,8 @@ static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf,
 	if (!to_fill)
 		return 0;
 
-	filled = __refill_objects(s, &sheaf->objects[sheaf->size], gfp,
-				  to_fill, to_fill);
+	filled = refill_objects(s, &sheaf->objects[sheaf->size], gfp, to_fill,
+				to_fill);
 
 	sheaf->size += filled;
 
@@ -6515,29 +6515,22 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 EXPORT_SYMBOL(kmem_cache_free_bulk);
 
 static unsigned int
-__refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
-		 unsigned int max)
+__refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
+		      unsigned int max, struct kmem_cache_node *n)
 {
 	struct slab *slab, *slab2;
 	struct partial_context pc;
 	unsigned int refilled = 0;
 	unsigned long flags;
 	void *object;
-	int node;
 
 	pc.flags = gfp;
 	pc.min_objects = min;
 	pc.max_objects = max;
 
-	node = numa_mem_id();
-
-	if (WARN_ON_ONCE(!gfpflags_allow_spinning(gfp)))
+	if (!get_partial_node_bulk(s, n, &pc))
 		return 0;
 
-	/* TODO: consider also other nodes? */
-	if (!get_partial_node_bulk(s, get_node(s, node), &pc))
-		goto new_slab;
-
 	list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
 		list_del(&slab->slab_list);
 
@@ -6575,8 +6568,6 @@ __refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
 	}
 
 	if (unlikely(!list_empty(&pc.slabs))) {
-		struct kmem_cache_node *n = get_node(s, node);
-
 		spin_lock_irqsave(&n->list_lock, flags);
 
 		list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
@@ -6598,13 +6589,92 @@ __refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
 		}
 	}
 
+	return refilled;
+}
 
-	if (likely(refilled >= min))
-		goto out;
+#ifdef CONFIG_NUMA
+static unsigned int
+__refill_objects_any(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
+		     unsigned int max, int local_node)
+{
+	struct zonelist *zonelist;
+	struct zoneref *z;
+	struct zone *zone;
+	enum zone_type highest_zoneidx = gfp_zone(gfp);
+	unsigned int cpuset_mems_cookie;
+	unsigned int refilled = 0;
+
+	/* see get_any_partial() for the defrag ratio description */
+	if (!s->remote_node_defrag_ratio ||
+	    get_cycles() % 1024 > s->remote_node_defrag_ratio)
+		return 0;
+
+	do {
+		cpuset_mems_cookie = read_mems_allowed_begin();
+		zonelist = node_zonelist(mempolicy_slab_node(), gfp);
+		for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) {
+			struct kmem_cache_node *n;
+			unsigned int r;
+
+			n = get_node(s, zone_to_nid(zone));
+
+			if (!n || !cpuset_zone_allowed(zone, gfp) ||
+			    n->nr_partial <= s->min_partial)
+				continue;
+
+			r = __refill_objects_node(s, p, gfp, min, max, n);
+			refilled += r;
+
+			if (r >= min) {
+				/*
+				 * Don't check read_mems_allowed_retry() here -
+				 * if mems_allowed was updated in parallel, that
+				 * was a harmless race between allocation and
+				 * the cpuset update
+				 */
+				return refilled;
+			}
+			p += r;
+			min -= r;
+			max -= r;
+		}
+	} while (read_mems_allowed_retry(cpuset_mems_cookie));
+
+	return refilled;
+}
+#else
+static inline unsigned int
+__refill_objects_any(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
+		     unsigned int max, int local_node)
+{
+	return 0;
+}
+#endif
+
+static unsigned int
+refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
+	       unsigned int max)
+{
+	int local_node = numa_mem_id();
+	unsigned int refilled;
+	struct slab *slab;
+
+	if (WARN_ON_ONCE(!gfpflags_allow_spinning(gfp)))
+		return 0;
+
+	refilled = __refill_objects_node(s, p, gfp, min, max,
+					 get_node(s, local_node));
+	if (refilled >= min)
+		return refilled;
+
+	refilled += __refill_objects_any(s, p + refilled, gfp, min - refilled,
+					 max - refilled, local_node);
+	if (refilled >= min)
+		return refilled;
 
 new_slab:
-	slab = new_slab(s, pc.flags, node);
+	slab = new_slab(s, gfp, local_node);
 	if (!slab)
 		goto out;
 
@@ -6620,8 +6690,8 @@ __refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
 	if (refilled < min)
 		goto new_slab;
 
-out:
+out:
 	return refilled;
 }
 
@@ -6631,18 +6701,20 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 {
 	int i;
 
-	/*
-	 * TODO: this might be more efficient (if necessary) by reusing
-	 * __refill_objects()
-	 */
-	for (i = 0; i < size; i++) {
+	if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
+		for (i = 0; i < size; i++) {
 
-		p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, _RET_IP_,
-				     s->object_size);
-		if (unlikely(!p[i]))
-			goto error;
+			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, _RET_IP_,
+					     s->object_size);
+			if (unlikely(!p[i]))
+				goto error;
 
-		maybe_wipe_obj_freeptr(s, p[i]);
+			maybe_wipe_obj_freeptr(s, p[i]);
+		}
+	} else {
+		i = refill_objects(s, p, flags, size, size);
+		if (i < size)
+			goto error;
 	}
 
 	return i;
@@ -6653,7 +6725,10 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 
-/* Note that interrupts must be enabled when calling this function. */
+/*
+ * Note that interrupts must be enabled when calling this function and gfp
+ * flags must allow spinning.
+ */
 int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
 				 void **p)
 {

-- 
2.52.0
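The fallback order introduced by refill_objects() above (local node
partial slabs, then remote nodes, then freshly allocated slabs until at
least `min` objects are gathered) can be modeled with a small user-space
toy. Everything below (array-backed nodes, 4-object slabs, the helper
names) is a hypothetical illustration of the control flow, not kernel
code, and it assumes `min <= max` as the kernel callers do:

```c
#define NR_NODES 4

/* toy model: each "node" holds some count of spare objects */
static unsigned int node_spare[NR_NODES];
static unsigned int slabs_allocated;

/* grab up to max objects from one node, like __refill_objects_node() */
static unsigned int take_from_node(int node, unsigned int max)
{
	unsigned int r = node_spare[node] < max ? node_spare[node] : max;

	node_spare[node] -= r;
	return r;
}

/* model of refill_objects(): local node, then others, then new slabs */
static unsigned int toy_refill(int local_node, unsigned int min,
			       unsigned int max)
{
	unsigned int refilled = take_from_node(local_node, max);

	/* remote nodes are only consulted while we are still short */
	for (int n = 0; n < NR_NODES && refilled < min; n++) {
		if (n == local_node)
			continue;
		refilled += take_from_node(n, max - refilled);
	}

	/* fall back to allocating new slabs (4 objects each, say) */
	while (refilled < min) {
		unsigned int r = max - refilled < 4 ? max - refilled : 4;

		slabs_allocated++;
		refilled += r;
	}
	return refilled;
}
```

Note how remote nodes and new slabs are skipped entirely once `min` is
satisfied, matching the early returns in refill_objects().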