From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Vlastimil Babka
Cc: "Matthew Wilcox (Oracle)", Christoph Lameter, David Rientjes, linux-mm@kvack.org
Subject: [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
Date: Fri, 6 Jun 2025 23:22:04 +0100
Message-ID: <20250606222214.1395799-3-willy@infradead.org>
In-Reply-To: <20250606222214.1395799-1-willy@infradead.org>
References: <20250606222214.1395799-1-willy@infradead.org>

Slab has its own reasons for using flag bits; they aren't just the page
bits. Maybe this won't be the ultimate solution, but we should be clear
that these bits are in use.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h | 16 ++++++++++++++--
 mm/slub.c |  6 +++---
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 05a21dc796e0..a25f12244b6c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -50,7 +50,7 @@ typedef union {
 
 /* Reuses the bits in struct page */
 struct slab {
-	unsigned long __page_flags;
+	unsigned long flags;
 
 	struct kmem_cache *slab_cache;
 	union {
@@ -99,7 +99,7 @@ struct slab {
 
 #define SLAB_MATCH(pg, sl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
-SLAB_MATCH(flags, __page_flags);
+SLAB_MATCH(flags, flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
@@ -113,6 +113,18 @@ static_assert(sizeof(struct slab) <= sizeof(struct page));
 static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
 #endif
 
+/**
+ * enum slab_flags - How the slab flags bits are used.
+ * @SL_locked: Is locked with slab_lock()
+ *
+ * The slab flags share space with the page flags but some bits have
+ * different interpretations. The high bits are used for information
+ * like zone/node/section.
+ */
+enum slab_flags {
+	SL_locked,
+};
+
 /**
  * folio_slab - Converts from folio to slab.
  * @folio: The folio.
diff --git a/mm/slub.c b/mm/slub.c
index 31e11ef256f9..e9cbacee406d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -639,12 +639,12 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
  */
 static __always_inline void slab_lock(struct slab *slab)
 {
-	bit_spin_lock(PG_locked, &slab->__page_flags);
+	bit_spin_lock(SL_locked, &slab->flags);
 }
 
 static __always_inline void slab_unlock(struct slab *slab)
 {
-	bit_spin_unlock(PG_locked, &slab->__page_flags);
+	bit_spin_unlock(SL_locked, &slab->flags);
 }
 
 static inline bool
@@ -1010,7 +1010,7 @@ static void print_slab_info(const struct slab *slab)
 {
 	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
 	       slab, slab->objects, slab->inuse, slab->freelist,
-	       &slab->__page_flags);
+	       &slab->flags);
 }
 
 void skip_orig_size_check(struct kmem_cache *s, const void *object)
-- 
2.47.2
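
Not part of the patch: a minimal, self-contained userspace sketch of the idea
the patch leans on, namely that one low bit of the slab's flags word
(SL_locked) can double as a spinlock in the spirit of
bit_spin_lock()/bit_spin_unlock(). The struct slab, slab_lock() and
slab_unlock() below are simplified stand-ins for the kernel versions, and the
GCC __atomic builtins stand in for the kernel's bit spinlock helpers.

#include <stdio.h>

/* Hypothetical, simplified stand-ins for the kernel types and helpers. */
enum slab_flags {
	SL_locked,		/* bit 0 of the flags word acts as the lock */
};

struct slab {
	unsigned long flags;	/* shares storage with the page flags word */
};

/* Spin until we are the caller that set the SL_locked bit. */
static void slab_lock(struct slab *slab)
{
	unsigned long mask = 1UL << SL_locked;

	while (__atomic_fetch_or(&slab->flags, mask, __ATOMIC_ACQUIRE) & mask)
		;	/* bit was already set: another owner holds the lock */
}

/* Clear the SL_locked bit, releasing the lock. */
static void slab_unlock(struct slab *slab)
{
	unsigned long mask = 1UL << SL_locked;

	__atomic_fetch_and(&slab->flags, ~mask, __ATOMIC_RELEASE);
}

int main(void)
{
	struct slab s = { .flags = 0 };

	slab_lock(&s);
	printf("flags while locked: %#lx\n", s.flags);	/* 0x1: SL_locked set */
	slab_unlock(&s);
	printf("flags after unlock: %#lx\n", s.flags);	/* back to 0x0 */
	return 0;
}

As the new kernel-doc comment notes, only the low bits are free for slab's own
use while the high bits still carry zone/node/section information, which is
presumably why the enum starts at bit 0, the same position PG_locked occupied
before the rename.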