From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Alexander Lobakin <aleksander.lobakin@intel.com>,
Alexander Duyck <alexanderduyck@fb.com>,
Yunsheng Lin <linyunsheng@huawei.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Christoph Lameter <cl@linux.com>,
Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
nex.sw.ncis.osdt.itp.upstreaming@intel.com,
netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 6/9] page_pool: add DMA-sync-for-CPU inline helper
Date: Thu, 4 Apr 2024 17:43:59 +0200 [thread overview]
Message-ID: <20240404154402.3581254-7-aleksander.lobakin@intel.com> (raw)
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
Each driver is responsible for syncing buffers written by HW for the CPU
before accessing them. Almost every PP-enabled driver uses the same
pattern, which can be shorthanded into a static inline to make driver
code a bit more compact.
Introduce a simple helper which performs the DMA synchronization for the
size passed from the driver. It can be used even when the pool doesn't
manage DMA-syncs-for-device; just make sure the page has a correct DMA
address set via page_pool_set_dma_addr().
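As an illustration (not part of the patch), a minimal sketch of how a
driver's Rx completion path might call the new helper; the queue
structure and field names below are hypothetical:

	/* Hypothetical PP-enabled driver, Rx completion path */
	static void example_rx_process(struct example_rx_queue *rxq,
				       struct page *page, u32 pkt_len)
	{
		/* Sync only the bytes HW actually wrote before the CPU
		 * reads them. The pool must have been created with
		 * PP_FLAG_DMA_MAP so the page carries a valid DMA address.
		 */
		page_pool_dma_sync_for_cpu(rxq->pool, page,
					   rxq->buf_offset, pkt_len);

		/* ... build an skb or XDP buffer from the synced data ... */
	}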
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
include/net/page_pool/helpers.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index c7bb06750e85..873631c79ab1 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -52,6 +52,8 @@
#ifndef _NET_PAGE_POOL_HELPERS_H
#define _NET_PAGE_POOL_HELPERS_H
+#include <linux/dma-mapping.h>
+
#include <net/page_pool/types.h>
#ifdef CONFIG_PAGE_POOL_STATS
@@ -395,6 +397,28 @@ static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
return false;
}
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using PP frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with ``PP_FLAG_DMA_MAP``.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+ const struct page *page,
+ u32 offset, u32 dma_sync_size)
+{
+ dma_sync_single_range_for_cpu(pool->p.dev,
+ page_pool_get_dma_addr(page),
+ offset + pool->p.offset, dma_sync_size,
+ page_pool_get_dma_dir(pool));
+}
+
static inline bool page_pool_put(struct page_pool *pool)
{
return refcount_dec_and_test(&pool->user_cnt);
--
2.44.0