Date: Fri, 15 Sep 2023 10:59:22 +0000
In-Reply-To: <20230915105933.495735-1-matteorizzo@google.com>
References: <20230915105933.495735-1-matteorizzo@google.com>
Message-ID: <20230915105933.495735-4-matteorizzo@google.com>
Subject: [RFC PATCH 03/14] mm/slub: move kmem_cache_order_objects to slab.h
From: Matteo Rizzo <matteorizzo@google.com>
To: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz,
    roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, keescook@chromium.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-hardening@vger.kernel.org, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    x86@kernel.org, hpa@zytor.com, corbet@lwn.net, luto@kernel.org,
    peterz@infradead.org
Cc: jannh@google.com, matteorizzo@google.com, evn@google.com, poprdi@google.com,
    jordyzomer@google.com
Content-Type: text/plain; charset="UTF-8"

From: Jann Horn <jannh@google.com>

This is refactoring for SLAB_VIRTUAL. The implementation needs to know
the order of the virtual memory region allocated to each slab to know
how much physical memory to allocate when the slab is reused. We reuse
kmem_cache_order_objects for this, so we have to move it before struct
slab.

Signed-off-by: Jann Horn <jannh@google.com>
Co-developed-by: Matteo Rizzo <matteorizzo@google.com>
Signed-off-by: Matteo Rizzo <matteorizzo@google.com>
---
 include/linux/slub_def.h |  9 ---------
 mm/slab.h                | 22 ++++++++++++++++++++++
 mm/slub.c                | 12 ------------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index deb90cf4bffb..0adf5ba8241b 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -83,15 +83,6 @@ struct kmem_cache_cpu {
 #define slub_percpu_partial_read_once(c)	NULL
 #endif // CONFIG_SLUB_CPU_PARTIAL
 
-/*
- * Word size structure that can be atomically updated or read and that
- * contains both the order and the number of objects that a slab of the
- * given order would contain.
- */
-struct kmem_cache_order_objects {
-	unsigned int x;
-};
-
 /*
  * Slab cache management.
  */
diff --git a/mm/slab.h b/mm/slab.h
index 25e41dd6087e..3fe0d1e26e26 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -38,6 +38,15 @@ typedef union {
 	freelist_full_t full;
 } freelist_aba_t;
 
+/*
+ * Word size structure that can be atomically updated or read and that
+ * contains both the order and the number of objects that a slab of the
+ * given order would contain.
+ */
+struct kmem_cache_order_objects {
+	unsigned int x;
+};
+
 /* Reuses the bits in struct page */
 struct slab {
 	unsigned long __page_flags;
@@ -227,6 +236,19 @@ static inline struct slab *virt_to_slab(const void *addr)
 	return folio_slab(folio);
 }
 
+#define OO_SHIFT	16
+#define OO_MASK		((1 << OO_SHIFT) - 1)
+
+static inline unsigned int oo_order(struct kmem_cache_order_objects x)
+{
+	return x.x >> OO_SHIFT;
+}
+
+static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
+{
+	return x.x & OO_MASK;
+}
+
 static inline int slab_order(const struct slab *slab)
 {
 	return folio_order((struct folio *)slab_folio(slab));
diff --git a/mm/slub.c b/mm/slub.c
index b69916ab7aa8..df2529c03bd3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -284,8 +284,6 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
 
-#define OO_SHIFT	16
-#define OO_MASK		((1 << OO_SHIFT) - 1)
 #define MAX_OBJS_PER_PAGE	32767 /* since slab.objects is u15 */
 
 /* Internal SLUB flags */
@@ -473,16 +471,6 @@ static inline struct kmem_cache_order_objects oo_make(unsigned int order,
 	return x;
 }
 
-static inline unsigned int oo_order(struct kmem_cache_order_objects x)
-{
-	return x.x >> OO_SHIFT;
-}
-
-static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
-{
-	return x.x & OO_MASK;
-}
-
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 {
-- 
2.42.0.459.ge4e396fd5e-goog