From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Vlastimil Babka
Cc: "Matthew Wilcox (Oracle)", Christoph Lameter, David Rientjes,
	linux-mm@kvack.org
Subject: [PATCH 03/10] slab: Add SL_partial flag
Date: Fri, 6 Jun 2025 23:22:05 +0100
Message-ID: <20250606222214.1395799-4-willy@infradead.org>
In-Reply-To: <20250606222214.1395799-1-willy@infradead.org>
References: <20250606222214.1395799-1-willy@infradead.org>
MIME-Version: 1.0
Give slab its own name for this flag.  Keep the PG_workingset alias
information in one place.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slab.h |  2 ++
 mm/slub.c | 20 ++++++++------------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index a25f12244b6c..fca818011f7d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -116,6 +116,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
 /**
  * enum slab_flags - How the slab flags bits are used.
  * @SL_locked: Is locked with slab_lock()
+ * @SL_partial: On the per-node partial list
  *
  * The slab flags share space with the page flags but some bits have
  * different interpretations. The high bits are used for information
@@ -123,6 +124,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
  */
 enum slab_flags {
 	SL_locked,
+	SL_partial = PG_workingset,	/* Historical reasons for this bit */
 };
 
 /**
diff --git a/mm/slub.c b/mm/slub.c
index e9cbacee406d..804b39d06fa0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -91,14 +91,14 @@
  * The partially empty slabs cached on the CPU partial list are used
  * for performance reasons, which speeds up the allocation process.
  * These slabs are not frozen, but are also exempt from list management,
- * by clearing the PG_workingset flag when moving out of the node
+ * by clearing the SL_partial flag when moving out of the node
  * partial list. Please see __slab_free() for more details.
  *
  * To sum up, the current scheme is:
- * - node partial slab: PG_Workingset && !frozen
- * - cpu partial slab: !PG_Workingset && !frozen
- * - cpu slab: !PG_Workingset && frozen
- * - full slab: !PG_Workingset && !frozen
+ * - node partial slab: SL_partial && !frozen
+ * - cpu partial slab: !SL_partial && !frozen
+ * - cpu slab: !SL_partial && frozen
+ * - full slab: !SL_partial && !frozen
  *
  * list_lock
  *
@@ -2717,23 +2717,19 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
 	free_slab(s, slab);
 }
 
-/*
- * SLUB reuses PG_workingset bit to keep track of whether it's on
- * the per-node partial list.
- */
 static inline bool slab_test_node_partial(const struct slab *slab)
 {
-	return folio_test_workingset(slab_folio(slab));
+	return test_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_set_node_partial(struct slab *slab)
 {
-	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	set_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_clear_node_partial(struct slab *slab)
 {
-	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	clear_bit(SL_partial, &slab->flags);
 }
 
 /*
-- 
2.47.2