From: Mina Almasry
Date: Fri, 23 May 2025 10:55:54 -0700
Subject: Re: [PATCH 18/18] mm, netmem: remove the page pool members in struct page
To: Byungchul Park
Cc: willy@infradead.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, kernel_team@skhynix.com, kuba@kernel.org,
	ilias.apalodimas@linaro.org, harry.yoo@oracle.com, hawk@kernel.org,
	akpm@linux-foundation.org, davem@davemloft.net, john.fastabend@gmail.com,
	andrew+netdev@lunn.ch, asml.silence@gmail.com, toke@redhat.com,
	tariqt@nvidia.com, edumazet@google.com, pabeni@redhat.com,
	saeedm@nvidia.com, leon@kernel.org, ast@kernel.org, daniel@iogearbox.net,
	david@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	horms@kernel.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
	vishal.moola@gmail.com
In-Reply-To: <20250523032609.16334-19-byungchul@sk.com>
References: <20250523032609.16334-1-byungchul@sk.com> <20250523032609.16334-19-byungchul@sk.com>
On Thu, May 22, 2025 at 8:26 PM Byungchul Park wrote:
>
> Now that all the users of the page pool members in struct page have been
> gone, the members can be removed from struct page.
>
> However, since struct netmem_desc might still use the space in struct
> page, the size of struct netmem_desc should be checked, until struct
> netmem_desc has its own instance from slab, to avoid conflicting with
> other members within struct page.
>
> Remove the page pool members in struct page and add a static checker for
> the size.
>
> Signed-off-by: Byungchul Park
> ---
>  include/linux/mm_types.h | 11 -----------
>  include/net/netmem.h     | 28 +++++-----------------------
>  2 files changed, 5 insertions(+), 34 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 873e820e1521..5a7864eb9d76 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -119,17 +119,6 @@ struct page {
>                          */
>                         unsigned long private;
>                 };
> -               struct {        /* page_pool used by netstack */
> -                       unsigned long _pp_mapping_pad;
> -                       /**
> -                        * @pp_magic: magic value to avoid recycling non
> -                        * page_pool allocated pages.
> -                        */
> -                       unsigned long pp_magic;
> -                       struct page_pool *pp;
> -                       unsigned long dma_addr;
> -                       atomic_long_t pp_ref_count;
> -               };
>                 struct {        /* Tail pages of compound page */
>                         unsigned long compound_head;    /* Bit zero is set */
>                 };
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index c63a7e20f5f3..257c22398d7a 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -77,30 +77,12 @@ struct net_iov_area {
>         unsigned long base_virtual;
>  };
>
> -/* These fields in struct page are used by the page_pool and net stack:
> - *
> - *        struct {
> - *                unsigned long _pp_mapping_pad;
> - *                unsigned long pp_magic;
> - *                struct page_pool *pp;
> - *                unsigned long dma_addr;
> - *                atomic_long_t pp_ref_count;
> - *        };
> - *
> - * We mirror the page_pool fields here so the page_pool can access these fields
> - * without worrying whether the underlying fields belong to a page or net_iov.
> - *
> - * The non-net stack fields of struct page are private to the mm stack and must
> - * never be mirrored to net_iov.
> +/* XXX: The page pool fields in struct page have been removed but they
> + * might still use the space in struct page. Thus, the size of struct
> + * netmem_desc should be under control until struct netmem_desc has its
> + * own instance from slab.
>   */
> -#define NET_IOV_ASSERT_OFFSET(pg, iov)             \
> -       static_assert(offsetof(struct page, pg) == \
> -                     offsetof(struct net_iov, iov))
> -NET_IOV_ASSERT_OFFSET(pp_magic, pp_magic);
> -NET_IOV_ASSERT_OFFSET(pp, pp);
> -NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
> -NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
> -#undef NET_IOV_ASSERT_OFFSET
> +static_assert(sizeof(struct netmem_desc) <= offsetof(struct page, _refcount));
>

Removing these asserts is actually a bit dangerous. Functions like
netmem_or_pp_magic() rely on the fact that the offsets are the same
between struct page and struct net_iov to access these fields without
worrying about the type of the netmem.
What we do in these helpers is we clear the least significant bit of
the netmem, and then access the field. This works only because we
verified at build time that the offset is the same.

I think we have 3 options here:

1. Keep the asserts as-is, then in the follow-up patch where we remove
netmem_desc from struct page, we update the asserts to make sure struct
page and struct net_iov can grab the netmem_desc in a uniform way.

2. We remove the asserts, but all the helpers that rely on
__netmem_clear_lsb need to be modified to do custom handling of
net_iov vs page. Something like:

static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
{
	if (netmem_is_net_iov(netmem))
		netmem_to_net_iov(netmem)->pp_magic |= pp_magic;
	else
		netmem_to_page(netmem)->pp_magic |= pp_magic;
}

Option #2 requires extra checks, which may affect the performance
reported by page_pool_bench_simple that I pointed you to before.

3. We could swap out all the individual asserts for one assert, if both
page and net_iov have a netmem_desc subfield. This will also need to be
reworked when netmem_desc is eventually moved out of struct page and is
slab allocated:

NET_IOV_ASSERT_OFFSET(netmem_desc, netmem_desc);

--
Thanks,
Mina