From: Mina Almasry <almasrymina@google.com>
Date: Tue, 15 Jul 2025 12:09:34 -0700
Subject: Re: [PATCH net-next v10 02/12] netmem: use netmem_desc instead of page to access ->pp in __netmem_get_pp()
To: Byungchul Park
In-Reply-To: <20250715013626.GA49874@system.software.com>
References: <20250714120047.35901-1-byungchul@sk.com>
 <20250714120047.35901-3-byungchul@sk.com>
 <20250715013626.GA49874@system.software.com>
Cc: willy@infradead.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, kernel_team@skhynix.com, ilias.apalodimas@linaro.org,
 harry.yoo@oracle.com, akpm@linux-foundation.org, andrew+netdev@lunn.ch,
 asml.silence@gmail.com, toke@redhat.com, david@redhat.com,
 Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
 mhocko@suse.com, linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
 vishal.moola@gmail.com, hannes@cmpxchg.org, ziy@nvidia.com,
 jackmanb@google.com, wei.fang@nxp.com, shenwei.wang@nxp.com,
 xiaoning.wang@nxp.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, anthony.l.nguyen@intel.com,
 przemyslaw.kitszel@intel.com, sgoutham@marvell.com, gakula@marvell.com,
 sbhatta@marvell.com, hkelam@marvell.com, bbhushan2@marvell.com,
 tariqt@nvidia.com, ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org,
 john.fastabend@gmail.com, sdf@fomichev.me, saeedm@nvidia.com,
 leon@kernel.org, mbloch@nvidia.com, danishanwar@ti.com, rogerq@kernel.org,
 nbd@nbd.name, lorenzo@kernel.org, ryder.lee@mediatek.com,
 shayne.chen@mediatek.com, sean.wang@mediatek.com, matthias.bgg@gmail.com,
 angelogioacchino.delregno@collabora.com, aleksander.lobakin@intel.com,
 horms@kernel.org, m-malladi@ti.com, krzysztof.kozlowski@linaro.org,
 matthias.schiffer@ew.tq-group.com, robh@kernel.org, imx@lists.linux.dev,
 intel-wired-lan@lists.osuosl.org, linux-arm-kernel@lists.infradead.org,
 linux-wireless@vger.kernel.org, linux-mediatek@lists.infradead.org

On Mon, Jul 14, 2025 at 6:36 PM Byungchul Park wrote:
>
> On Mon, Jul 14, 2025 at 12:58:15PM -0700, Mina Almasry wrote:
> > On Mon, Jul 14, 2025 at 12:37 PM Mina Almasry wrote:
> > >
> > > On Mon, Jul 14, 2025 at 5:01 AM Byungchul Park wrote:
> > > >
> > > > To eliminate the use of the page pool fields in struct page, the page
> > > > pool code should use the netmem descriptor and APIs instead.
> > > >
> > > > However, __netmem_get_pp() still accesses ->pp via struct page.  So
> > > > change it to use struct netmem_desc instead, since ->pp will no longer
> > > > be available in struct page.
> > > >
> > > > While at it, add a helper, pp_page_to_nmdesc(), that can be used to
> > > > extract netmem_desc from a page, but only if it's a pp page.  For now,
> > > > while netmem_desc overlays struct page, this can be achieved by just
> > > > casting.
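
For anyone skimming the thread, the driver-side conversion this enables
amounts to the following (a made-up snippet for illustration, not taken
from any particular driver in the series):

        struct page_pool *pool;

        /* Before the series: assumes @page is a pp page and reaches
         * into struct page directly.
         */
        pool = page->pp;

        /* After the series: same assumption, but ->pp is reached via
         * netmem_desc, which for now overlays struct page.
         */
        pool = pp_page_to_nmdesc(page)->pp;
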
> > > >
> > > > Signed-off-by: Byungchul Park
> > > > ---
> > > >  include/net/netmem.h | 13 ++++++++++++-
> > > >  1 file changed, 12 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/include/net/netmem.h b/include/net/netmem.h
> > > > index 535cf17b9134..2b8a7b51ac99 100644
> > > > --- a/include/net/netmem.h
> > > > +++ b/include/net/netmem.h
> > > > @@ -267,6 +267,17 @@ static inline struct net_iov *__netmem_clear_lsb(netmem_ref netmem)
> > > >         return (struct net_iov *)((__force unsigned long)netmem & ~NET_IOV);
> > > >  }
> > > >
> > > > +static inline struct netmem_desc *pp_page_to_nmdesc(struct page *page)
> > > > +{
> > > > +       DEBUG_NET_WARN_ON_ONCE(!page_pool_page_is_pp(page));
> > > > +
> > > > +       /* XXX: How to extract netmem_desc from page must be changed,
> > > > +        * once netmem_desc no longer overlays on page and will be
> > > > +        * allocated through slab.
> > > > +        */
> > > > +       return (struct netmem_desc *)page;
> > > > +}
> > > > +
> > >
> > > Same thing. Do not create a generic-looking pp_page_to_nmdesc helper
> > > which does not check that the page is the correct type. The
> > > DEBUG_NET... check is not good enough.
> > >
> > > You don't need to add a generic helper here. There is only one call
> > > site. Open-code this in the call site. The one call site is marked as
> > > unsafe and only called by code that knows the netmem is specifically
> > > a pp page. Open-code this in the unsafe call site, instead of creating
> > > a generic-looking unsafe helper and not even documenting that it's unsafe.
> > >
> >
> > On second read through the series, I actually now think this is a
> > great idea :-) Adding this helper has simplified the series greatly. I
> > did not realize you were converting entire drivers to netmem just to
> > get rid of page->pp accesses. Adding a pp_page_to_nmdesc helper makes
> > the entire series simpler.
> >
> > You're also calling it only from code paths, like drivers, that already
> > assumed the page is a pp page and did a page->pp dereference without
> > a check, so this should be safe.
> >
> > The only thing I would change is to add a comment explaining that the
> > calling code needs to check that the page is a pp page, or otherwise
> > know that it is one (like a driver that supports pp).
> >
> > > >  /**
> > > >   * __netmem_get_pp - unsafely get pointer to the &page_pool backing @netmem
> > > >   * @netmem: netmem reference to get the pointer from
> > > > @@ -280,7 +291,7 @@ static inline struct net_iov *__netmem_clear_lsb(netmem_ref netmem)
> > > >   */
> > > >  static inline struct page_pool *__netmem_get_pp(netmem_ref netmem)
> > > >  {
> > > > -       return __netmem_to_page(netmem)->pp;
> > > > +       return pp_page_to_nmdesc(__netmem_to_page(netmem))->pp;
> > > >  }
> > >
> > > This makes me very sad. Casting from netmem -> page -> nmdesc...
> > >
> > > Instead, we should be able to go from netmem directly to nmdesc. I
> > > would suggest renaming __netmem_clear_lsb to netmem_to_nmdesc and
> > > having it return netmem_desc instead of net_iov. Then use it here.
> > >
> > > We could have an unsafe version of netmem_to_nmdesc which converts the
> > > netmem to netmem_desc without clearing the lsb and mark it unsafe.
> > >
> >
> > This, I think, we should address to keep some sanity in the code,
> > reduce the casts, and make it a bit more maintainable.
>
> I will reflect your suggestions.
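
Sounds good. For the pp_page_to_nmdesc() comment, something like the
below is roughly what I have in mind -- just a sketch, the exact wording
is up to you:

static inline struct netmem_desc *pp_page_to_nmdesc(struct page *page)
{
        /* Callers must either check page_pool_page_is_pp(page)
         * themselves, or be on a path that already knows this is a pp
         * page (e.g. a driver that supports page_pool).
         */
        DEBUG_NET_WARN_ON_ONCE(!page_pool_page_is_pp(page));

        /* XXX: How to extract netmem_desc from page must be changed,
         * once netmem_desc no longer overlays on page and will be
         * allocated through slab.
         */
        return (struct netmem_desc *)page;
}
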
>
> To summarize:
>
>    1) The current implementation of pp_page_to_nmdesc() is good enough
>       to keep, but add a comment on it like "Check if the page is a pp
>       page before calling this function, or know it's a pp page.".
>

Yes please.

>    2) Introduce the unsafe version, __netmem_to_nmdesc(), and use it in
>       __netmem_get_pp().
>

No need, following Pavel's feedback. We can just delete __netmem_get_pp.

If we do find a need in the future to extract the netmem_desc from a
netmem_ref, I would rather we do a straight cast from netmem_ref to
netmem_desc than go netmem_ref -> page/net_iov -> netmem_desc. But that
seems unnecessary for this series.

>    3) Rename __netmem_clear_lsb() to netmem_to_nmdesc(), have it return
>       netmem_desc, and use it in all users of __netmem_clear_lsb().
>

Following Pavel's comment, I think this is also not necessary for this
series. Cleaning up the return value of __netmem_clear_lsb is good work,
I think, but we're already on v10 of this series and it would be
unnecessary to ask for added cleanups. We can do the cleanup on top.

> Anything else?

Thank you very much :-)

-- 
Thanks,
Mina