From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>, Simon Horman <horms@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Mina Almasry <almasrymina@google.com>,
Yonglong Liu <liuyonglong@huawei.com>,
Yunsheng Lin <linyunsheng@huawei.com>,
Pavel Begunkov <asml.silence@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
netdev@vger.kernel.org, bpf@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH net-next v3 1/3] page_pool: Move pp_magic check into helper functions
Date: Wed, 26 Mar 2025 17:29:49 +0200 [thread overview]
Message-ID: <CAC_iWjJKPkNGEVOoOCAn1ghmP7UvynG4Fjrxbj8+RFwKMG977w@mail.gmail.com> (raw)
In-Reply-To: <20250326-page-pool-track-dma-v3-1-8e464016e0ac@redhat.com>
Thanks Toke,
On Wed, 26 Mar 2025 at 10:19, Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> Since we are about to stash some more information into the pp_magic
> field, let's move the magic signature checks into a pair of helper
> functions so it can be changed in one place.
>
> Reviewed-by: Mina Almasry <almasrymina@google.com>
> Tested-by: Yonglong Liu <liuyonglong@huawei.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 4 ++--
> include/net/page_pool/types.h | 18 ++++++++++++++++++
> mm/page_alloc.c | 9 +++------
> net/core/netmem_priv.h | 5 +++++
> net/core/skbuff.c | 16 ++--------------
> net/core/xdp.c | 4 ++--
> 6 files changed, 32 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> index 6f3094a479e1ec61854bb48a6a0c812167487173..70c6f0b2abb921778c98fbd428594ebd7986a302 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> @@ -706,8 +706,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
> xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
> page = xdpi.page.page;
>
> - /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
> - * as we know this is a page_pool page.
> + /* No need to check page_pool_page_is_pp() as we
> + * know this is a page_pool page.
> */
> page_pool_recycle_direct(page->pp, page);
> } while (++n < num);
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index 36eb57d73abc6cfc601e700ca08be20fb8281055..df0d3c1608929605224feb26173135ff37951ef8 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -54,6 +54,14 @@ struct pp_alloc_cache {
> netmem_ref cache[PP_ALLOC_CACHE_SIZE];
> };
>
> +/* Mask used for checking in page_pool_page_is_pp() below. page->pp_magic is
> + * OR'ed with PP_SIGNATURE after the allocation in order to preserve bit 0 for
> + * the head page of compound page and bit 1 for pfmemalloc page.
> + * page_is_pfmemalloc() is checked in __page_pool_put_page() to avoid recycling
> + * the pfmemalloc page.
> + */
> +#define PP_MAGIC_MASK ~0x3UL
> +
> /**
> * struct page_pool_params - page pool parameters
> * @fast: params accessed frequently on hotpath
> @@ -264,6 +272,11 @@ void page_pool_destroy(struct page_pool *pool);
> void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
> const struct xdp_mem_info *mem);
> void page_pool_put_netmem_bulk(netmem_ref *data, u32 count);
> +
> +static inline bool page_pool_page_is_pp(struct page *page)
> +{
> + return (page->pp_magic & PP_MAGIC_MASK) == PP_SIGNATURE;
> +}
> #else
> static inline void page_pool_destroy(struct page_pool *pool)
> {
> @@ -278,6 +291,11 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool,
> static inline void page_pool_put_netmem_bulk(netmem_ref *data, u32 count)
> {
> }
> +
> +static inline bool page_pool_page_is_pp(struct page *page)
> +{
> + return false;
> +}
> #endif
>
> void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 542d25f77be80304b731411ffd29b276ee13be0c..3535ee76afe946cbb042ecbce603bdbedc9233b9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -55,6 +55,7 @@
> #include <linux/delayacct.h>
> #include <linux/cacheinfo.h>
> #include <linux/pgalloc_tag.h>
> +#include <net/page_pool/types.h>
> #include <asm/div64.h>
> #include "internal.h"
> #include "shuffle.h"
> @@ -872,9 +873,7 @@ static inline bool page_expected_state(struct page *page,
> #ifdef CONFIG_MEMCG
> page->memcg_data |
> #endif
> -#ifdef CONFIG_PAGE_POOL
> - ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) |
> -#endif
> + page_pool_page_is_pp(page) |
> (page->flags & check_flags)))
> return false;
>
> @@ -901,10 +900,8 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
> if (unlikely(page->memcg_data))
> bad_reason = "page still charged to cgroup";
> #endif
> -#ifdef CONFIG_PAGE_POOL
> - if (unlikely((page->pp_magic & ~0x3UL) == PP_SIGNATURE))
> + if (unlikely(page_pool_page_is_pp(page)))
> bad_reason = "page_pool leak";
> -#endif
> return bad_reason;
> }
>
> diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
> index 7eadb8393e002fd1cc2cef8a313d2ea7df76f301..f33162fd281c23e109273ba09950c5d0a2829bc9 100644
> --- a/net/core/netmem_priv.h
> +++ b/net/core/netmem_priv.h
> @@ -18,6 +18,11 @@ static inline void netmem_clear_pp_magic(netmem_ref netmem)
> __netmem_clear_lsb(netmem)->pp_magic = 0;
> }
>
> +static inline bool netmem_is_pp(netmem_ref netmem)
> +{
> + return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
> +}
> +
> static inline void netmem_set_pp(netmem_ref netmem, struct page_pool *pool)
> {
> __netmem_clear_lsb(netmem)->pp = pool;
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index ab8acb737b93299f503e5c298b87e18edd59d555..a64d777488e403d5fdef83ae42ae9e4924c1a0dc 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -893,11 +893,6 @@ static void skb_clone_fraglist(struct sk_buff *skb)
> skb_get(list);
> }
>
> -static bool is_pp_netmem(netmem_ref netmem)
> -{
> - return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
> -}
> -
> int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
> unsigned int headroom)
> {
> @@ -995,14 +990,7 @@ bool napi_pp_put_page(netmem_ref netmem)
> {
> netmem = netmem_compound_head(netmem);
>
> - /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
> - * in order to preserve any existing bits, such as bit 0 for the
> - * head page of compound page and bit 1 for pfmemalloc page, so
> - * mask those bits for freeing side when doing below checking,
> - * and page_is_pfmemalloc() is checked in __page_pool_put_page()
> - * to avoid recycling the pfmemalloc page.
> - */
> - if (unlikely(!is_pp_netmem(netmem)))
> + if (unlikely(!netmem_is_pp(netmem)))
> return false;
>
> page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false);
> @@ -1042,7 +1030,7 @@ static int skb_pp_frag_ref(struct sk_buff *skb)
>
> for (i = 0; i < shinfo->nr_frags; i++) {
> head_netmem = netmem_compound_head(shinfo->frags[i].netmem);
> - if (likely(is_pp_netmem(head_netmem)))
> + if (likely(netmem_is_pp(head_netmem)))
> page_pool_ref_netmem(head_netmem);
> else
> page_ref_inc(netmem_to_page(head_netmem));
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index f86eedad586a77eb63a96a85aa6d068d3e94f077..0ba73943c6eed873b3d1c681b3b9a802b590f2d9 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -437,8 +437,8 @@ void __xdp_return(netmem_ref netmem, enum xdp_mem_type mem_type,
> netmem = netmem_compound_head(netmem);
> if (napi_direct && xdp_return_frame_no_direct())
> napi_direct = false;
> - /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
> - * as mem->type knows this a page_pool page
> + /* No need to check netmem_is_pp() as mem->type knows this a
> + * page_pool page
> */
> page_pool_put_full_netmem(netmem_get_pp(netmem), netmem,
> napi_direct);
>
> --
> 2.48.1
>
Thread overview (12+ messages):
2025-03-26 8:18 [PATCH net-next v3 0/3] Fix late DMA unmap crash for page pool Toke Høiland-Jørgensen
2025-03-26 8:18 ` [PATCH net-next v3 1/3] page_pool: Move pp_magic check into helper functions Toke Høiland-Jørgensen
2025-03-26 12:55 ` Jesper Dangaard Brouer
2025-03-26 15:29 ` Ilias Apalodimas [this message]
2025-03-26 8:18 ` [PATCH net-next v3 2/3] page_pool: Turn dma_sync into a full-width bool field Toke Høiland-Jørgensen
2025-03-26 12:56 ` Jesper Dangaard Brouer
2025-03-26 17:40 ` Alexander Lobakin
2025-03-26 8:18 ` [PATCH net-next v3 3/3] page_pool: Track DMA-mapped pages and unmap them when destroying the pool Toke Høiland-Jørgensen
2025-03-26 11:48 ` [PATCH net-next v3 0/3] Fix late DMA unmap crash for page pool Jakub Kicinski
2025-03-26 12:20 ` Toke Høiland-Jørgensen
2025-03-26 14:49 ` Jakub Kicinski
2025-03-27 10:41 ` Toke Høiland-Jørgensen