From: Saeed Mahameed <saeedm@nvidia.com>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Simon Horman <horms@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Mina Almasry <almasrymina@google.com>,
Yonglong Liu <liuyonglong@huawei.com>,
Yunsheng Lin <linyunsheng@huawei.com>,
Pavel Begunkov <asml.silence@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
netdev@vger.kernel.org, bpf@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH net-next v2 2/3] page_pool: Turn dma_sync and dma_sync_cpu fields into a bitmap
Date: Wed, 26 Mar 2025 11:00:04 -0700
Message-ID: <Z-RApGJCUUPP0-eO@x130>
In-Reply-To: <20250325-page-pool-track-dma-v2-2-113ebc1946f3@redhat.com>
On 25 Mar 16:45, Toke Høiland-Jørgensen wrote:
>Change the single-bit booleans for dma_sync into an unsigned long with
>BIT() definitions so that a subsequent patch can write them both with a
>single WRITE_ONCE() on teardown. Also move the check for the sync_cpu
>side into __page_pool_dma_sync_for_cpu() so it can be disabled for
>non-netmem providers as well.
>
>Reviewed-by: Mina Almasry <almasrymina@google.com>
>Tested-by: Yonglong Liu <liuyonglong@huawei.com>
>Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
>---
> include/net/page_pool/helpers.h | 6 +++---
> include/net/page_pool/types.h | 8 ++++++--
> net/core/devmem.c | 3 +--
> net/core/page_pool.c | 9 +++++----
> 4 files changed, 15 insertions(+), 11 deletions(-)
>
>diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
>index 582a3d00cbe2315edeb92850b6a42ab21e509e45..7ed32bde4b8944deb7fb22e291e95b8487be681a 100644
>--- a/include/net/page_pool/helpers.h
>+++ b/include/net/page_pool/helpers.h
>@@ -443,6 +443,9 @@ static inline void __page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> const dma_addr_t dma_addr,
> u32 offset, u32 dma_sync_size)
> {
>+ if (!(READ_ONCE(pool->dma_sync) & PP_DMA_SYNC_CPU))
>+ return;
>+
page_pool_dma_sync_for_cpu() is a wrapper around this function, and it
assumes the pages were created with the DMA flag, so this adds an
unnecessary check on that path.

Just change page_pool_dma_sync_for_cpu() to call
dma_sync_single_range_for_cpu(...) directly as part of this patch.
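
Something along these lines (untested sketch, signature abbreviated to
mirror the existing helper):

```c
static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
					      const struct page *page,
					      u32 offset, u32 dma_sync_size)
{
	/* Pages on this path are always DMA-mapped, so bypass the
	 * PP_DMA_SYNC_CPU check in __page_pool_dma_sync_for_cpu().
	 */
	dma_sync_single_range_for_cpu(pool->p.dev,
				      page_pool_get_dma_addr(page),
				      offset + pool->p.offset,
				      dma_sync_size,
				      page_pool_get_dma_dir(pool));
}
```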
> dma_sync_single_range_for_cpu(pool->p.dev, dma_addr,
> offset + pool->p.offset, dma_sync_size,
> page_pool_get_dma_dir(pool));
>@@ -473,9 +476,6 @@ page_pool_dma_sync_netmem_for_cpu(const struct page_pool *pool,
> const netmem_ref netmem, u32 offset,
> u32 dma_sync_size)
> {
>- if (!pool->dma_sync_for_cpu)
>- return;
>-
> __page_pool_dma_sync_for_cpu(pool,
> page_pool_get_dma_addr_netmem(netmem),
> offset, dma_sync_size);
>diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
>index df0d3c1608929605224feb26173135ff37951ef8..fbe34024b20061e8bcd1d4474f6ebfc70992f1eb 100644
>--- a/include/net/page_pool/types.h
>+++ b/include/net/page_pool/types.h
>@@ -33,6 +33,10 @@
> #define PP_FLAG_ALL (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | \
> PP_FLAG_SYSTEM_POOL | PP_FLAG_ALLOW_UNREADABLE_NETMEM)
>
>+/* bit values used in pp->dma_sync */
>+#define PP_DMA_SYNC_DEV BIT(0)
>+#define PP_DMA_SYNC_CPU BIT(1)
>+
> /*
> * Fast allocation side cache array/stack
> *
>@@ -175,12 +179,12 @@ struct page_pool {
>
> bool has_init_callback:1; /* slow::init_callback is set */
> bool dma_map:1; /* Perform DMA mapping */
>- bool dma_sync:1; /* Perform DMA sync for device */
>- bool dma_sync_for_cpu:1; /* Perform DMA sync for cpu */
> #ifdef CONFIG_PAGE_POOL_STATS
> bool system:1; /* This is a global percpu pool */
> #endif
>
>+ unsigned long dma_sync;
>+
> __cacheline_group_begin_aligned(frag, PAGE_POOL_FRAG_GROUP_ALIGN);
> long frag_users;
> netmem_ref frag_page;
>diff --git a/net/core/devmem.c b/net/core/devmem.c
>index 6802e82a4d03b6030f6df50ae3661f81e40bc101..955d392d707b12fe784747aa2040ce1a882a64db 100644
>--- a/net/core/devmem.c
>+++ b/net/core/devmem.c
>@@ -340,8 +340,7 @@ int mp_dmabuf_devmem_init(struct page_pool *pool)
> /* dma-buf dma addresses do not need and should not be used with
> * dma_sync_for_cpu/device. Force disable dma_sync.
> */
>- pool->dma_sync = false;
>- pool->dma_sync_for_cpu = false;
>+ pool->dma_sync = 0;
>
> if (pool->p.order != 0)
> return -E2BIG;
>diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>index acef1fcd8ddcfd1853a6f2055c1f1820ab248e8d..d51ca4389dd62d8bc266a9a2b792838257173535 100644
>--- a/net/core/page_pool.c
>+++ b/net/core/page_pool.c
>@@ -203,7 +203,7 @@ static int page_pool_init(struct page_pool *pool,
> memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
>
> pool->cpuid = cpuid;
>- pool->dma_sync_for_cpu = true;
>+ pool->dma_sync = PP_DMA_SYNC_CPU;
>
> /* Validate only known flags were used */
> if (pool->slow.flags & ~PP_FLAG_ALL)
>@@ -238,7 +238,7 @@ static int page_pool_init(struct page_pool *pool,
> if (!pool->p.max_len)
> return -EINVAL;
>
>- pool->dma_sync = true;
>+ pool->dma_sync |= PP_DMA_SYNC_DEV;
>
> /* pool->p.offset has to be set according to the address
> * offset used by the DMA engine to start copying rx data
>@@ -291,7 +291,7 @@ static int page_pool_init(struct page_pool *pool,
> }
>
> if (pool->mp_ops) {
>- if (!pool->dma_map || !pool->dma_sync)
>+ if (!pool->dma_map || !(pool->dma_sync & PP_DMA_SYNC_DEV))
> return -EOPNOTSUPP;
>
> if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
>@@ -466,7 +466,8 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
> netmem_ref netmem,
> u32 dma_sync_size)
> {
>- if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
>+ if ((READ_ONCE(pool->dma_sync) & PP_DMA_SYNC_DEV) &&
>+ dma_dev_need_sync(pool->p.dev))
> __page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
> }
>
>
>--
>2.48.1
>