* [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool
@ 2023-12-11 3:52 Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining Liang Chen
` (3 more replies)
0 siblings, 4 replies; 15+ messages in thread
From: Liang Chen @ 2023-12-11 3:52 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, hawk, ilias.apalodimas, linyunsheng
Cc: netdev, linux-mm, jasowang, almasrymina, liangchen.linux
The combination of the following conditions was excluded from skb coalescing:
from->pp_recycle = 1
from->cloned = 1
to->pp_recycle = 1
With page pool in use, this combination can be quite common (e.g.
NetworkManager may lead to an additional packet_type being registered,
and thus the cloning). In scenarios with a high volume of small packets,
it can significantly reduce the success rate of coalescing.
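For reference, this is the check in skb_try_coalesce() that rejects the
combination before this series; it is the exact condition relaxed by
patch 4:

        if (to->pp_recycle != from->pp_recycle ||
            (from->pp_recycle && skb_cloned(from)))
                return false;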
This patchset aims to optimize this scenario and enable coalescing of this
particular combination. That also involves supporting multiple users
referencing the same fragment of a pp page to accommodate the need to
increment the "from" SKB page's pp page reference count.
Changes from v7:
- move informative documentation for page_pool_fragment_page
Liang Chen (4):
page_pool: transition to reference count management after page
draining
page_pool: halve BIAS_MAX for multiple user references of a fragment
skbuff: Add a function to check if a page belongs to page_pool
skbuff: Optimization of SKB coalescing for page pool
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 4 +-
include/linux/mm_types.h | 2 +-
include/net/page_pool/helpers.h | 65 +++++++++++--------
include/net/page_pool/types.h | 6 +-
net/core/page_pool.c | 14 ++--
net/core/skbuff.c | 48 ++++++++++----
6 files changed, 87 insertions(+), 52 deletions(-)
--
2.31.1
* [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining
2023-12-11 3:52 [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool Liang Chen
@ 2023-12-11 3:52 ` Liang Chen
2023-12-11 7:43 ` Ilias Apalodimas
2023-12-11 3:52 ` [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment Liang Chen
` (2 subsequent siblings)
3 siblings, 1 reply; 15+ messages in thread
From: Liang Chen @ 2023-12-11 3:52 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, hawk, ilias.apalodimas, linyunsheng
Cc: netdev, linux-mm, jasowang, almasrymina, liangchen.linux
To support multiple users referencing the same fragment,
'pp_frag_count' is renamed to 'pp_ref_count', transitioning pp pages
from fragment management to reference count management after draining,
based on the suggestion in [1].
The idea is that the concept of fragmenting exists before the page is
drained, and all related functions retain their current names.
However, once the page is drained, its management shifts to being
governed by 'pp_ref_count'. Therefore, all functions associated with
that lifecycle stage of a pp page are renamed.
[1]
http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com
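For quick reference, the renames introduced by this patch are:

        page->pp_frag_count            -> page->pp_ref_count
        page_pool_defrag_page()        -> page_pool_unref_page()
        page_pool_put_defragged_page() -> page_pool_put_unrefed_page()
        page_pool_is_last_frag()       -> page_pool_is_last_ref()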
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
---
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 4 +-
include/linux/mm_types.h | 2 +-
include/net/page_pool/helpers.h | 60 +++++++++++--------
include/net/page_pool/types.h | 6 +-
net/core/page_pool.c | 12 ++--
5 files changed, 46 insertions(+), 38 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 8d9743a5e42c..98d33ac7ec64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
struct page *page = frag_page->page;
- if (page_pool_defrag_page(page, drain_count) == 0)
- page_pool_put_defragged_page(rq->page_pool, page, -1, true);
+ if (page_pool_unref_page(page, drain_count) == 0)
+ page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
}
static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..64e4572ef06d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,7 +125,7 @@ struct page {
struct page_pool *pp;
unsigned long _pp_mapping_pad;
unsigned long dma_addr;
- atomic_long_t pp_frag_count;
+ atomic_long_t pp_ref_count;
};
struct { /* Tail pages of compound page */
unsigned long compound_head; /* Bit zero is set */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4ebd544ae977..d0c5e7e6857a 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -29,7 +29,7 @@
* page allocated from page pool. Page splitting enables memory saving and thus
* avoids TLB/cache miss for data access, but there also is some cost to
* implement page splitting, mainly some cache line dirtying/bouncing for
- * 'struct page' and atomic operation for page->pp_frag_count.
+ * 'struct page' and atomic operation for page->pp_ref_count.
*
* The API keeps track of in-flight pages, in order to let API users know when
* it is safe to free a page_pool object, the API users must call
@@ -214,69 +214,77 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
return pool->p.dma_dir;
}
-/* pp_frag_count represents the number of writers who can update the page
- * either by updating skb->data or via DMA mappings for the device.
- * We can't rely on the page refcnt for that as we don't know who might be
- * holding page references and we can't reliably destroy or sync DMA mappings
- * of the fragments.
+/**
+ * page_pool_fragment_page() - split a fresh page into fragments
+ * @page: page to split
+ * @nr: references to set
+ *
+ * pp_ref_count represents the number of outstanding references to the page,
+ * which will be freed using page_pool APIs (rather than page allocator APIs
+ * like put_page()). Such references are usually held by page_pool-aware
+ * objects like skbs marked for page pool recycling.
*
- * When pp_frag_count reaches 0 we can either recycle the page if the page
- * refcnt is 1 or return it back to the memory allocator and destroy any
- * mappings we have.
+ * This helper allows the caller to take (set) multiple references to a
+ * freshly allocated page. The page must be freshly allocated (have a
+ * pp_ref_count of 1). This is commonly done by drivers and
+ * "fragment allocators" to save atomic operations - either when they know
+ * upfront how many references they will need; or to take MAX references and
+ * return the unused ones with a single atomic dec(), instead of performing
+ * multiple atomic inc() operations.
*/
static inline void page_pool_fragment_page(struct page *page, long nr)
{
- atomic_long_set(&page->pp_frag_count, nr);
+ atomic_long_set(&page->pp_ref_count, nr);
}
-static inline long page_pool_defrag_page(struct page *page, long nr)
+static inline long page_pool_unref_page(struct page *page, long nr)
{
long ret;
- /* If nr == pp_frag_count then we have cleared all remaining
+ /* If nr == pp_ref_count then we have cleared all remaining
* references to the page:
* 1. 'n == 1': no need to actually overwrite it.
* 2. 'n != 1': overwrite it with one, which is the rare case
- * for pp_frag_count draining.
+ * for pp_ref_count draining.
*
* The main advantage to doing this is that not only we avoid a atomic
* update, as an atomic_read is generally a much cheaper operation than
* an atomic update, especially when dealing with a page that may be
- * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
+ * referenced by only 2 or 3 users; but also unify the pp_ref_count
* handling by ensuring all pages have partitioned into only 1 piece
* initially, and only overwrite it when the page is partitioned into
* more than one piece.
*/
- if (atomic_long_read(&page->pp_frag_count) == nr) {
+ if (atomic_long_read(&page->pp_ref_count) == nr) {
/* As we have ensured nr is always one for constant case using
* the BUILD_BUG_ON(), only need to handle the non-constant case
- * here for pp_frag_count draining, which is a rare case.
+ * here for pp_ref_count draining, which is a rare case.
*/
BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
if (!__builtin_constant_p(nr))
- atomic_long_set(&page->pp_frag_count, 1);
+ atomic_long_set(&page->pp_ref_count, 1);
return 0;
}
- ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+ ret = atomic_long_sub_return(nr, &page->pp_ref_count);
WARN_ON(ret < 0);
- /* We are the last user here too, reset pp_frag_count back to 1 to
+ /* We are the last user here too, reset pp_ref_count back to 1 to
* ensure all pages have been partitioned into 1 piece initially,
* this should be the rare case when the last two fragment users call
- * page_pool_defrag_page() currently.
+ * page_pool_unref_page() currently.
*/
if (unlikely(!ret))
- atomic_long_set(&page->pp_frag_count, 1);
+ atomic_long_set(&page->pp_ref_count, 1);
return ret;
}
-static inline bool page_pool_is_last_frag(struct page *page)
+static inline bool page_pool_is_last_ref(struct page *page)
{
- /* If page_pool_defrag_page() returns 0, we were the last user */
- return page_pool_defrag_page(page, 1) == 0;
+ /* If page_pool_unref_page() returns 0, we were the last user */
+ return page_pool_unref_page(page, 1) == 0;
}
/**
@@ -301,10 +309,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
* allow registering MEM_TYPE_PAGE_POOL, but shield linker.
*/
#ifdef CONFIG_PAGE_POOL
- if (!page_pool_is_last_frag(page))
+ if (!page_pool_is_last_ref(page))
return;
- page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+ page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
#endif
}
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..6a5323619f6e 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -224,9 +224,9 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
}
#endif
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
- unsigned int dma_sync_size,
- bool allow_direct);
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+ unsigned int dma_sync_size,
+ bool allow_direct);
static inline bool is_page_pool_compiled_in(void)
{
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index df2a06d7da52..106220b1f89c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
return NULL;
}
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
- unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+ unsigned int dma_sync_size, bool allow_direct)
{
page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
if (page && !page_pool_recycle_in_ring(pool, page)) {
@@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
page_pool_return_page(pool, page);
}
}
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_unrefed_page);
/**
* page_pool_put_page_bulk() - release references on multiple pages
@@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
struct page *page = virt_to_head_page(data[i]);
/* It is not the last user for the page frag case */
- if (!page_pool_is_last_frag(page))
+ if (!page_pool_is_last_ref(page))
continue;
page = __page_pool_put_page(pool, page, -1, false);
@@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
long drain_count = BIAS_MAX - pool->frag_users;
/* Some user is still using the page frag */
- if (likely(page_pool_defrag_page(page, drain_count)))
+ if (likely(page_pool_unref_page(page, drain_count)))
return NULL;
if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
@@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
pool->frag_page = NULL;
- if (!page || page_pool_defrag_page(page, drain_count))
+ if (!page || page_pool_unref_page(page, drain_count))
return;
page_pool_return_page(pool, page);
--
2.31.1
* [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment
2023-12-11 3:52 [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining Liang Chen
@ 2023-12-11 3:52 ` Liang Chen
2023-12-11 10:12 ` Jesper Dangaard Brouer
2023-12-11 3:52 ` [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool Liang Chen
3 siblings, 1 reply; 15+ messages in thread
From: Liang Chen @ 2023-12-11 3:52 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, hawk, ilias.apalodimas, linyunsheng
Cc: netdev, linux-mm, jasowang, almasrymina, liangchen.linux
Referring to patch [1], in order to support multiple users referencing the
same fragment and to prevent pp_ref_count from overflowing as it grows, the
initial value of pp_ref_count is halved, leaving room for pp_ref_count to be
incremented before the page is drained.
[1]
https://lore.kernel.org/all/20211009093724.10539-3-linyunsheng@huawei.com/
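To make the headroom argument concrete, here is a rough sketch of the
arithmetic (values assume a 64-bit long; the comment below is
illustrative, not taken from the patch):

        /* A fragmented page starts with pp_ref_count == BIAS_MAX.
         *
         *   before: BIAS_MAX = LONG_MAX      = 0x7fffffffffffffff
         *   after:  BIAS_MAX = LONG_MAX >> 1 = 0x3fffffffffffffff
         *
         * With the old value, even a single page_pool_ref_page() taken
         * by skb coalescing (patch 4) could wrap pp_ref_count past
         * LONG_MAX. Halving BIAS_MAX leaves roughly 2^62 increments of
         * headroom before the page is drained.
         */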
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
---
net/core/page_pool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 106220b1f89c..436f7ffea7b4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,7 +26,7 @@
#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)
-#define BIAS_MAX LONG_MAX
+#define BIAS_MAX (LONG_MAX >> 1)
#ifdef CONFIG_PAGE_POOL_STATS
/* alloc_stat_inc is intended to be used in softirq context */
--
2.31.1
* [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool
2023-12-11 3:52 [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment Liang Chen
@ 2023-12-11 3:52 ` Liang Chen
2023-12-11 7:40 ` Ilias Apalodimas
2023-12-11 3:52 ` [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool Liang Chen
3 siblings, 1 reply; 15+ messages in thread
From: Liang Chen @ 2023-12-11 3:52 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, hawk, ilias.apalodimas, linyunsheng
Cc: netdev, linux-mm, jasowang, almasrymina, liangchen.linux
Wrap code for checking if a page is a page_pool page into a
function for better readability and ease of reuse.
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
---
net/core/skbuff.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b157efea5dea..7e26b56cda38 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -890,6 +890,11 @@ static void skb_clone_fraglist(struct sk_buff *skb)
skb_get(list);
}
+static bool is_pp_page(struct page *page)
+{
+ return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
+}
+
#if IS_ENABLED(CONFIG_PAGE_POOL)
bool napi_pp_put_page(struct page *page, bool napi_safe)
{
@@ -905,7 +910,7 @@ bool napi_pp_put_page(struct page *page, bool napi_safe)
* and page_is_pfmemalloc() is checked in __page_pool_put_page()
* to avoid recycling the pfmemalloc page.
*/
- if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
+ if (unlikely(!is_pp_page(page)))
return false;
pp = page->pp;
--
2.31.1
* [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-11 3:52 [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool Liang Chen
` (2 preceding siblings ...)
2023-12-11 3:52 ` [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool Liang Chen
@ 2023-12-11 3:52 ` Liang Chen
2023-12-11 7:46 ` Ilias Apalodimas
3 siblings, 1 reply; 15+ messages in thread
From: Liang Chen @ 2023-12-11 3:52 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, hawk, ilias.apalodimas, linyunsheng
Cc: netdev, linux-mm, jasowang, almasrymina, liangchen.linux
In order to address the issues encountered with commit 1effe8ca4e34
("skbuff: fix coalescing for page_pool fragment recycling"), the
combination of the following conditions was excluded from skb coalescing:
from->pp_recycle = 1
from->cloned = 1
to->pp_recycle = 1
However, in page pool environments, the aforementioned combination can
be quite common (e.g. NetworkManager may lead to an additional
packet_type being registered, and thus the cloning). In scenarios with a
high volume of small packets, it can significantly reduce the success
rate of coalescing. For example, for 256-byte packets, our comparison of
coalescing success rates is as follows:
Without page pool: 70%
With page pool: 13%
Consequently, this has an impact on performance:
Without page pool: 2.57 Gbits/sec
With page pool: 2.26 Gbits/sec
Therefore, it seems worthwhile to optimize this scenario and enable
coalescing of this particular combination. To achieve this, we need to
ensure the correct increment of the "from" SKB page's page pool
reference count (pp_ref_count).
Following this optimization, the success rate of coalescing measured in
our environment has improved as follows:
With page pool: 60%
This success rate is approaching the rate achieved without using page
pool, and the performance has also been improved:
With page pool: 2.52 Gbits/sec
Below is the performance comparison for small packets before and after
this optimization. We observe no impact on packets larger than 4K.
packet size    before         after         improved
(bytes)        (Gbits/sec)    (Gbits/sec)
 128            1.19           1.27          7.13%
 256            2.26           2.52         11.75%
 512            4.13           4.81         16.50%
1024            6.17           6.73          9.05%
2048           14.54          15.47          6.45%
4096           25.44          27.87          9.52%
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Jason Wang <jasowang@redhat.com>
---
include/net/page_pool/helpers.h | 5 ++++
net/core/skbuff.c | 41 +++++++++++++++++++++++----------
2 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index d0c5e7e6857a..0dc8fab43bef 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -281,6 +281,11 @@ static inline long page_pool_unref_page(struct page *page, long nr)
return ret;
}
+static inline void page_pool_ref_page(struct page *page)
+{
+ atomic_long_inc(&page->pp_ref_count);
+}
+
static inline bool page_pool_is_last_ref(struct page *page)
{
/* If page_pool_unref_page() returns 0, we were the last user */
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7e26b56cda38..3c2515a29376 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -947,6 +947,24 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe)
return napi_pp_put_page(virt_to_page(data), napi_safe);
}
+/**
+ * skb_pp_frag_ref() - Increase fragment reference count of a page
+ * @page: page of the fragment on which to increase a reference
+ *
+ * Increase fragment reference count (pp_ref_count) on a page, but if it is
+ * not a page pool page, fallback to increase a reference(_refcount) on a
+ * normal page.
+ */
+static void skb_pp_frag_ref(struct page *page)
+{
+ struct page *head_page = compound_head(page);
+
+ if (likely(is_pp_page(head_page)))
+ page_pool_ref_page(head_page);
+ else
+ page_ref_inc(head_page);
+}
+
static void skb_kfree_head(void *head, unsigned int end_offset)
{
if (end_offset == SKB_SMALL_HEAD_HEADROOM)
@@ -5769,17 +5787,12 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
return false;
/* In general, avoid mixing page_pool and non-page_pool allocated
- * pages within the same SKB. Additionally avoid dealing with clones
- * with page_pool pages, in case the SKB is using page_pool fragment
- * references (page_pool_alloc_frag()). Since we only take full page
- * references for cloned SKBs at the moment that would result in
- * inconsistent reference counts.
- * In theory we could take full references if @from is cloned and
- * !@to->pp_recycle but its tricky (due to potential race with
- * the clone disappearing) and rare, so not worth dealing with.
+ * pages within the same SKB. In theory we could take full
+ * references if @from is cloned and !@to->pp_recycle but its
+ * tricky (due to potential race with the clone disappearing) and
+ * rare, so not worth dealing with.
*/
- if (to->pp_recycle != from->pp_recycle ||
- (from->pp_recycle && skb_cloned(from)))
+ if (to->pp_recycle != from->pp_recycle)
return false;
if (len <= skb_tailroom(to)) {
@@ -5836,8 +5849,12 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
/* if the skb is not cloned this does nothing
* since we set nr_frags to 0.
*/
- for (i = 0; i < from_shinfo->nr_frags; i++)
- __skb_frag_ref(&from_shinfo->frags[i]);
+ if (from->pp_recycle)
+ for (i = 0; i < from_shinfo->nr_frags; i++)
+ skb_pp_frag_ref(skb_frag_page(&from_shinfo->frags[i]));
+ else
+ for (i = 0; i < from_shinfo->nr_frags; i++)
+ __skb_frag_ref(&from_shinfo->frags[i]);
to->truesize += delta;
to->len += len;
--
2.31.1
* Re: [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool
2023-12-11 3:52 ` [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool Liang Chen
@ 2023-12-11 7:40 ` Ilias Apalodimas
0 siblings, 0 replies; 15+ messages in thread
From: Ilias Apalodimas @ 2023-12-11 7:40 UTC (permalink / raw)
To: Liang Chen
Cc: davem, edumazet, kuba, pabeni, hawk, linyunsheng, netdev,
linux-mm, jasowang, almasrymina
On Mon, 11 Dec 2023 at 05:53, Liang Chen <liangchen.linux@gmail.com> wrote:
>
> Wrap code for checking if a page is a page_pool page into a
> function for better readability and ease of reuse.
>
> Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> net/core/skbuff.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index b157efea5dea..7e26b56cda38 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -890,6 +890,11 @@ static void skb_clone_fraglist(struct sk_buff *skb)
> skb_get(list);
> }
>
> +static bool is_pp_page(struct page *page)
> +{
> + return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
> +}
> +
> #if IS_ENABLED(CONFIG_PAGE_POOL)
> bool napi_pp_put_page(struct page *page, bool napi_safe)
> {
> @@ -905,7 +910,7 @@ bool napi_pp_put_page(struct page *page, bool napi_safe)
> * and page_is_pfmemalloc() is checked in __page_pool_put_page()
> * to avoid recycling the pfmemalloc page.
> */
> - if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
> + if (unlikely(!is_pp_page(page)))
> return false;
>
> pp = page->pp;
> --
> 2.31.1
>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
* Re: [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining
2023-12-11 3:52 ` [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining Liang Chen
@ 2023-12-11 7:43 ` Ilias Apalodimas
0 siblings, 0 replies; 15+ messages in thread
From: Ilias Apalodimas @ 2023-12-11 7:43 UTC (permalink / raw)
To: Liang Chen
Cc: davem, edumazet, kuba, pabeni, hawk, linyunsheng, netdev,
linux-mm, jasowang, almasrymina
On Mon, 11 Dec 2023 at 05:53, Liang Chen <liangchen.linux@gmail.com> wrote:
>
> To support multiple users referencing the same fragment,
> 'pp_frag_count' is renamed to 'pp_ref_count', transitioning pp pages
> from fragment management to reference count management after draining,
> based on the suggestion in [1].
>
> The idea is that the concept of fragmenting exists before the page is
> drained, and all related functions retain their current names.
> However, once the page is drained, its management shifts to being
> governed by 'pp_ref_count'. Therefore, all functions associated with
> that lifecycle stage of a pp page are renamed.
>
> [1]
> http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com
>
> Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> ---
> .../net/ethernet/mellanox/mlx5/core/en_rx.c | 4 +-
> include/linux/mm_types.h | 2 +-
> include/net/page_pool/helpers.h | 60 +++++++++++--------
> include/net/page_pool/types.h | 6 +-
> net/core/page_pool.c | 12 ++--
> 5 files changed, 46 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 8d9743a5e42c..98d33ac7ec64 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
> u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
> struct page *page = frag_page->page;
>
> - if (page_pool_defrag_page(page, drain_count) == 0)
> - page_pool_put_defragged_page(rq->page_pool, page, -1, true);
> + if (page_pool_unref_page(page, drain_count) == 0)
> + page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
> }
>
> static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 957ce38768b2..64e4572ef06d 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -125,7 +125,7 @@ struct page {
> struct page_pool *pp;
> unsigned long _pp_mapping_pad;
> unsigned long dma_addr;
> - atomic_long_t pp_frag_count;
> + atomic_long_t pp_ref_count;
> };
> struct { /* Tail pages of compound page */
> unsigned long compound_head; /* Bit zero is set */
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 4ebd544ae977..d0c5e7e6857a 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -29,7 +29,7 @@
> * page allocated from page pool. Page splitting enables memory saving and thus
> * avoids TLB/cache miss for data access, but there also is some cost to
> * implement page splitting, mainly some cache line dirtying/bouncing for
> - * 'struct page' and atomic operation for page->pp_frag_count.
> + * 'struct page' and atomic operation for page->pp_ref_count.
> *
> * The API keeps track of in-flight pages, in order to let API users know when
> * it is safe to free a page_pool object, the API users must call
> @@ -214,69 +214,77 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
> return pool->p.dma_dir;
> }
>
> -/* pp_frag_count represents the number of writers who can update the page
> - * either by updating skb->data or via DMA mappings for the device.
> - * We can't rely on the page refcnt for that as we don't know who might be
> - * holding page references and we can't reliably destroy or sync DMA mappings
> - * of the fragments.
> +/**
> + * page_pool_fragment_page() - split a fresh page into fragments
> + * @page: page to split
> + * @nr: references to set
> + *
> + * pp_ref_count represents the number of outstanding references to the page,
> + * which will be freed using page_pool APIs (rather than page allocator APIs
> + * like put_page()). Such references are usually held by page_pool-aware
> + * objects like skbs marked for page pool recycling.
> *
> - * When pp_frag_count reaches 0 we can either recycle the page if the page
> - * refcnt is 1 or return it back to the memory allocator and destroy any
> - * mappings we have.
> + * This helper allows the caller to take (set) multiple references to a
> + * freshly allocated page. The page must be freshly allocated (have a
> + * pp_ref_count of 1). This is commonly done by drivers and
> + * "fragment allocators" to save atomic operations - either when they know
> + * upfront how many references they will need; or to take MAX references and
> + * return the unused ones with a single atomic dec(), instead of performing
> + * multiple atomic inc() operations.
> */
> static inline void page_pool_fragment_page(struct page *page, long nr)
> {
> - atomic_long_set(&page->pp_frag_count, nr);
> + atomic_long_set(&page->pp_ref_count, nr);
> }
>
> -static inline long page_pool_defrag_page(struct page *page, long nr)
> +static inline long page_pool_unref_page(struct page *page, long nr)
> {
> long ret;
>
> - /* If nr == pp_frag_count then we have cleared all remaining
> + /* If nr == pp_ref_count then we have cleared all remaining
> * references to the page:
> * 1. 'n == 1': no need to actually overwrite it.
> * 2. 'n != 1': overwrite it with one, which is the rare case
> - * for pp_frag_count draining.
> + * for pp_ref_count draining.
> *
> * The main advantage to doing this is that not only we avoid a atomic
> * update, as an atomic_read is generally a much cheaper operation than
> * an atomic update, especially when dealing with a page that may be
> - * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
> + * referenced by only 2 or 3 users; but also unify the pp_ref_count
> * handling by ensuring all pages have partitioned into only 1 piece
> * initially, and only overwrite it when the page is partitioned into
> * more than one piece.
> */
> - if (atomic_long_read(&page->pp_frag_count) == nr) {
> + if (atomic_long_read(&page->pp_ref_count) == nr) {
> /* As we have ensured nr is always one for constant case using
> * the BUILD_BUG_ON(), only need to handle the non-constant case
> - * here for pp_frag_count draining, which is a rare case.
> + * here for pp_ref_count draining, which is a rare case.
> */
> BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
> if (!__builtin_constant_p(nr))
> - atomic_long_set(&page->pp_frag_count, 1);
> + atomic_long_set(&page->pp_ref_count, 1);
>
> return 0;
> }
>
> - ret = atomic_long_sub_return(nr, &page->pp_frag_count);
> + ret = atomic_long_sub_return(nr, &page->pp_ref_count);
> WARN_ON(ret < 0);
>
> - /* We are the last user here too, reset pp_frag_count back to 1 to
> + /* We are the last user here too, reset pp_ref_count back to 1 to
> * ensure all pages have been partitioned into 1 piece initially,
> * this should be the rare case when the last two fragment users call
> - * page_pool_defrag_page() currently.
> + * page_pool_unref_page() currently.
> */
> if (unlikely(!ret))
> - atomic_long_set(&page->pp_frag_count, 1);
> + atomic_long_set(&page->pp_ref_count, 1);
>
> return ret;
> }
>
> -static inline bool page_pool_is_last_frag(struct page *page)
> +static inline bool page_pool_is_last_ref(struct page *page)
> {
> - /* If page_pool_defrag_page() returns 0, we were the last user */
> - return page_pool_defrag_page(page, 1) == 0;
> + /* If page_pool_unref_page() returns 0, we were the last user */
> + return page_pool_unref_page(page, 1) == 0;
> }
>
> /**
> @@ -301,10 +309,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
> * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
> */
> #ifdef CONFIG_PAGE_POOL
> - if (!page_pool_is_last_frag(page))
> + if (!page_pool_is_last_ref(page))
> return;
>
> - page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
> + page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
> #endif
> }
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index e1bb92c192de..6a5323619f6e 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -224,9 +224,9 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> }
> #endif
>
> -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> - unsigned int dma_sync_size,
> - bool allow_direct);
> +void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
> + unsigned int dma_sync_size,
> + bool allow_direct);
>
> static inline bool is_page_pool_compiled_in(void)
> {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index df2a06d7da52..106220b1f89c 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> return NULL;
> }
>
> -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> - unsigned int dma_sync_size, bool allow_direct)
> +void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
> + unsigned int dma_sync_size, bool allow_direct)
> {
> page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
> if (page && !page_pool_recycle_in_ring(pool, page)) {
> @@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> page_pool_return_page(pool, page);
> }
> }
> -EXPORT_SYMBOL(page_pool_put_defragged_page);
> +EXPORT_SYMBOL(page_pool_put_unrefed_page);
>
> /**
> * page_pool_put_page_bulk() - release references on multiple pages
> @@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> struct page *page = virt_to_head_page(data[i]);
>
> /* It is not the last user for the page frag case */
> - if (!page_pool_is_last_frag(page))
> + if (!page_pool_is_last_ref(page))
> continue;
>
> page = __page_pool_put_page(pool, page, -1, false);
> @@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
> long drain_count = BIAS_MAX - pool->frag_users;
>
> /* Some user is still using the page frag */
> - if (likely(page_pool_defrag_page(page, drain_count)))
> + if (likely(page_pool_unref_page(page, drain_count)))
> return NULL;
>
> if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
> @@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
>
> pool->frag_page = NULL;
>
> - if (!page || page_pool_defrag_page(page, drain_count))
> + if (!page || page_pool_unref_page(page, drain_count))
> return;
>
> page_pool_return_page(pool, page);
> --
> 2.31.1
>
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-11 3:52 ` [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool Liang Chen
@ 2023-12-11 7:46 ` Ilias Apalodimas
2023-12-11 20:14 ` Jakub Kicinski
0 siblings, 1 reply; 15+ messages in thread
From: Ilias Apalodimas @ 2023-12-11 7:46 UTC (permalink / raw)
To: Liang Chen
Cc: davem, edumazet, kuba, pabeni, hawk, linyunsheng, netdev,
linux-mm, jasowang, almasrymina
Hi Liang,
On Mon, 11 Dec 2023 at 05:53, Liang Chen <liangchen.linux@gmail.com> wrote:
>
> In order to address the issues encountered with commit 1effe8ca4e34
> ("skbuff: fix coalescing for page_pool fragment recycling"), the
> combination of the following conditions was excluded from skb coalescing:
>
> from->pp_recycle = 1
> from->cloned = 1
> to->pp_recycle = 1
>
> However, in page pool environments, the aforementioned combination can
> be quite common (e.g. NetworkManager may lead to an additional
> packet_type being registered, and thus the cloning). In scenarios with a
> high volume of small packets, it can significantly reduce the success
> rate of coalescing. For example, for 256-byte packets, our comparison of
> coalescing success rates is as follows:
>
> Without page pool: 70%
> With page pool: 13%
>
> Consequently, this has an impact on performance:
>
> Without page pool: 2.57 Gbits/sec
> With page pool: 2.26 Gbits/sec
>
> Therefore, it seems worthwhile to optimize this scenario and enable
> coalescing of this particular combination. To achieve this, we need to
> ensure the correct increment of the "from" SKB page's page pool
> reference count (pp_ref_count).
>
> Following this optimization, the success rate of coalescing measured in
> our environment has improved as follows:
>
> With page pool: 60%
>
> This success rate is approaching the rate achieved without using page
> pool, and the performance has also been improved:
>
> With page pool: 2.52 Gbits/sec
>
> Below is the performance comparison for small packets before and after
> this optimization. We observe no impact on packets larger than 4K.
>
> packet size    before         after         improved
> (bytes)        (Gbits/sec)    (Gbits/sec)
>  128            1.19           1.27          7.13%
>  256            2.26           2.52         11.75%
>  512            4.13           4.81         16.50%
> 1024            6.17           6.73          9.05%
> 2048           14.54          15.47          6.45%
> 4096           25.44          27.87          9.52%
>
> Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
> Suggested-by: Jason Wang <jasowang@redhat.com>
As I said in the past, the patch looks correct. I don't like the fact
that more pp internals creep into the default network stack, but
perhaps this is fine given the broader adoption?
Jakub, any thoughts/objections?
Thanks
/Ilias
> ---
> include/net/page_pool/helpers.h | 5 ++++
> net/core/skbuff.c | 41 +++++++++++++++++++++++----------
> 2 files changed, 34 insertions(+), 12 deletions(-)
>
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index d0c5e7e6857a..0dc8fab43bef 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -281,6 +281,11 @@ static inline long page_pool_unref_page(struct page *page, long nr)
> return ret;
> }
>
> +static inline void page_pool_ref_page(struct page *page)
> +{
> + atomic_long_inc(&page->pp_ref_count);
> +}
> +
> static inline bool page_pool_is_last_ref(struct page *page)
> {
> /* If page_pool_unref_page() returns 0, we were the last user */
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 7e26b56cda38..3c2515a29376 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -947,6 +947,24 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe)
> return napi_pp_put_page(virt_to_page(data), napi_safe);
> }
>
> +/**
> + * skb_pp_frag_ref() - Increase fragment reference count of a page
> + * @page: page of the fragment on which to increase a reference
> + *
> + * Increase fragment reference count (pp_ref_count) on a page, but if it is
> + * not a page pool page, fallback to increase a reference(_refcount) on a
> + * normal page.
> + */
> +static void skb_pp_frag_ref(struct page *page)
> +{
> + struct page *head_page = compound_head(page);
> +
> + if (likely(is_pp_page(head_page)))
> + page_pool_ref_page(head_page);
> + else
> + page_ref_inc(head_page);
> +}
> +
> static void skb_kfree_head(void *head, unsigned int end_offset)
> {
> if (end_offset == SKB_SMALL_HEAD_HEADROOM)
> @@ -5769,17 +5787,12 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
> return false;
>
> /* In general, avoid mixing page_pool and non-page_pool allocated
> - * pages within the same SKB. Additionally avoid dealing with clones
> - * with page_pool pages, in case the SKB is using page_pool fragment
> - * references (page_pool_alloc_frag()). Since we only take full page
> - * references for cloned SKBs at the moment that would result in
> - * inconsistent reference counts.
> - * In theory we could take full references if @from is cloned and
> - * !@to->pp_recycle but its tricky (due to potential race with
> - * the clone disappearing) and rare, so not worth dealing with.
> + * pages within the same SKB. In theory we could take full
> + * references if @from is cloned and !@to->pp_recycle but its
> + * tricky (due to potential race with the clone disappearing) and
> + * rare, so not worth dealing with.
> */
> - if (to->pp_recycle != from->pp_recycle ||
> - (from->pp_recycle && skb_cloned(from)))
> + if (to->pp_recycle != from->pp_recycle)
> return false;
>
> if (len <= skb_tailroom(to)) {
> @@ -5836,8 +5849,12 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
> /* if the skb is not cloned this does nothing
> * since we set nr_frags to 0.
> */
> - for (i = 0; i < from_shinfo->nr_frags; i++)
> - __skb_frag_ref(&from_shinfo->frags[i]);
> + if (from->pp_recycle)
> + for (i = 0; i < from_shinfo->nr_frags; i++)
> + skb_pp_frag_ref(skb_frag_page(&from_shinfo->frags[i]));
> + else
> + for (i = 0; i < from_shinfo->nr_frags; i++)
> + __skb_frag_ref(&from_shinfo->frags[i]);
>
> to->truesize += delta;
> to->len += len;
> --
> 2.31.1
>
* Re: [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment
2023-12-11 3:52 ` [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment Liang Chen
@ 2023-12-11 10:12 ` Jesper Dangaard Brouer
0 siblings, 0 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-12-11 10:12 UTC (permalink / raw)
To: Alexander Duyck
Cc: netdev, linux-mm, kuba, ilias.apalodimas, jasowang, linyunsheng,
Liang Chen, edumazet, davem, almasrymina, pabeni
Hi Alex,
For page_pool BIAS stuff I would really appreciate your review please.
-Jesper
On 11/12/2023 04.52, Liang Chen wrote:
> Referring to patch [1], in order to support multiple users referencing the
> same fragment and to prevent pp_ref_count from overflowing as it grows, the
> initial value of pp_ref_count is halved, leaving room for pp_ref_count to
> be incremented before the page is drained.
>
> [1]
> https://lore.kernel.org/all/20211009093724.10539-3-linyunsheng@huawei.com/
>
> Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> net/core/page_pool.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 106220b1f89c..436f7ffea7b4 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -26,7 +26,7 @@
> #define DEFER_TIME (msecs_to_jiffies(1000))
> #define DEFER_WARN_INTERVAL (60 * HZ)
>
> -#define BIAS_MAX LONG_MAX
> +#define BIAS_MAX (LONG_MAX >> 1)
>
> #ifdef CONFIG_PAGE_POOL_STATS
> /* alloc_stat_inc is intended to be used in softirq context */
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-11 7:46 ` Ilias Apalodimas
@ 2023-12-11 20:14 ` Jakub Kicinski
2023-12-12 3:00 ` Liang Chen
2023-12-13 7:09 ` Ilias Apalodimas
0 siblings, 2 replies; 15+ messages in thread
From: Jakub Kicinski @ 2023-12-11 20:14 UTC (permalink / raw)
To: Ilias Apalodimas
Cc: Liang Chen, davem, edumazet, pabeni, hawk, linyunsheng, netdev,
linux-mm, jasowang, almasrymina
On Mon, 11 Dec 2023 09:46:55 +0200 Ilias Apalodimas wrote:
> As I said in the past, the patch looks correct. I don't like the fact
> that more pp internals creep into the default network stack, but
> perhaps this is fine given the broader adoption?
> Jakub, any thoughts/objections?
Now that you asked... the helper does seem to be in sort of
an in-between state of being skb specific.
What worries me is that this:
+/**
+ * skb_pp_frag_ref() - Increase fragment reference count of a page
+ * @page: page of the fragment on which to increase a reference
+ *
+ * Increase fragment reference count (pp_ref_count) on a page, but if it is
+ * not a page pool page, fallback to increase a reference(_refcount) on a
+ * normal page.
+ */
+static void skb_pp_frag_ref(struct page *page)
+{
+ struct page *head_page = compound_head(page);
+
+ if (likely(is_pp_page(head_page)))
+ page_pool_ref_page(head_page);
+ else
+ page_ref_inc(head_page);
+}
doesn't even document that the caller must make sure that the skb
which owns this page is marked for pp recycling. The caller added
by this patch does that, but we should indicate somewhere that doing
skb_pp_frag_ref() for a frag in a non-pp-recycling skb is not correct.
We can either lean in the direction of making it less skb specific,
put the code in page_pool.c / helpers.h and make it clear that the
caller has to be careful.
Or we make it more skb specific, take a skb pointer as arg, and also
look at its recycling marking..
or just improve the kdoc.
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-11 20:14 ` Jakub Kicinski
@ 2023-12-12 3:00 ` Liang Chen
2023-12-13 7:09 ` Ilias Apalodimas
1 sibling, 0 replies; 15+ messages in thread
From: Liang Chen @ 2023-12-12 3:00 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Ilias Apalodimas, davem, edumazet, pabeni, hawk, linyunsheng,
netdev, linux-mm, jasowang, almasrymina
On Tue, Dec 12, 2023 at 4:14 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon, 11 Dec 2023 09:46:55 +0200 Ilias Apalodimas wrote:
> > As I said in the past, the patch looks correct. I don't like the fact
> > that more pp internals creep into the default network stack, but
> > perhaps this is fine given the broader adoption?
> > Jakub, any thoughts/objections?
>
> Now that you asked... the helper does seem to be in sort of
> an in-between state of being skb specific.
>
> What worries me is that this:
>
> +/**
> + * skb_pp_frag_ref() - Increase fragment reference count of a page
> + * @page: page of the fragment on which to increase a reference
> + *
> + * Increase fragment reference count (pp_ref_count) on a page, but if it is
> + * not a page pool page, fallback to increase a reference(_refcount) on a
> + * normal page.
> + */
> +static void skb_pp_frag_ref(struct page *page)
> +{
> + struct page *head_page = compound_head(page);
> +
> + if (likely(is_pp_page(head_page)))
> + page_pool_ref_page(head_page);
> + else
> + page_ref_inc(head_page);
> +}
>
> doesn't even document that the caller must make sure that the skb
> which owns this page is marked for pp recycling. The caller added
> by this patch does that, but we should indicate somewhere that doing
> skb_pp_frag_ref() for a frag in a non-pp-recycling skb is not correct.
>
> We can either lean in the direction of making it less skb specific,
> put the code in page_pool.c / helpers.h and make it clear that the
> caller has to be careful.
> Or we make it more skb specific, take a skb pointer as arg, and also
> look at its recycling marking..
> or just improve the kdoc.
Thank you for the suggestion! I will proceed with improving the kdoc.
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-11 20:14 ` Jakub Kicinski
2023-12-12 3:00 ` Liang Chen
@ 2023-12-13 7:09 ` Ilias Apalodimas
2023-12-14 2:26 ` Liang Chen
1 sibling, 1 reply; 15+ messages in thread
From: Ilias Apalodimas @ 2023-12-13 7:09 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Liang Chen, davem, edumazet, pabeni, hawk, linyunsheng, netdev,
linux-mm, jasowang, almasrymina
Hi Jakub,
On Mon, 11 Dec 2023 at 22:14, Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon, 11 Dec 2023 09:46:55 +0200 Ilias Apalodimas wrote:
> > As I said in the past, the patch looks correct. I don't like the fact
> > that more pp internals creep into the default network stack, but
> > perhaps this is fine given the broader adoption?
> > Jakub, any thoughts/objections?
>
> Now that you asked... the helper does seem to be in sort of
> an in-between state of being skb specific.
>
> What worries me is that this:
>
> +/**
> + * skb_pp_frag_ref() - Increase fragment reference count of a page
> + * @page: page of the fragment on which to increase a reference
> + *
> + * Increase fragment reference count (pp_ref_count) on a page, but if it is
> + * not a page pool page, fallback to increase a reference(_refcount) on a
> + * normal page.
> + */
> +static void skb_pp_frag_ref(struct page *page)
> +{
> + struct page *head_page = compound_head(page);
> +
> + if (likely(is_pp_page(head_page)))
> + page_pool_ref_page(head_page);
> + else
> + page_ref_inc(head_page);
> +}
>
> doesn't even document that the caller must make sure that the skb
> which owns this page is marked for pp recycling. The caller added
> by this patch does that, but we should indicate somewhere that doing
> skb_pp_frag_ref() for a frag in a non-pp-recycling skb is not correct.
Correct
>
> We can either lean in the direction of making it less skb specific,
> put the code in page_pool.c / helpers.h and make it clear that the
> caller has to be careful.
> Or we make it more skb specific, take a skb pointer as arg, and also
> look at its recycling marking..
> or just improve the kdoc.
I've mentioned this in the past, but I generally try to prevent people
from shooting themselves in the foot when creating APIs. Unless
there's a proven performance hit, I'd move the pp_recycle check into
skb_pp_frag_ref().
Thanks
/Ilias
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-13 7:09 ` Ilias Apalodimas
@ 2023-12-14 2:26 ` Liang Chen
2023-12-14 2:34 ` Jakub Kicinski
0 siblings, 1 reply; 15+ messages in thread
From: Liang Chen @ 2023-12-14 2:26 UTC (permalink / raw)
To: Ilias Apalodimas
Cc: Jakub Kicinski, davem, edumazet, pabeni, hawk, linyunsheng,
netdev, linux-mm, jasowang, almasrymina
On Wed, Dec 13, 2023 at 3:10 PM Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> Hi Jakub,
>
> On Mon, 11 Dec 2023 at 22:14, Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Mon, 11 Dec 2023 09:46:55 +0200 Ilias Apalodimas wrote:
> > > As I said in the past, the patch looks correct. I don't like the fact
> > > that more pp internals creep into the default network stack, but
> > > perhaps this is fine given the broader adoption?
> > > Jakub, any thoughts/objections?
> >
> > Now that you asked... the helper does seem to be in sort of
> > an in-between state of being skb specific.
> >
> > What worries me is that this:
> >
> > +/**
> > + * skb_pp_frag_ref() - Increase fragment reference count of a page
> > + * @page: page of the fragment on which to increase a reference
> > + *
> > + * Increase fragment reference count (pp_ref_count) on a page, but if it is
> > + * not a page pool page, fallback to increase a reference(_refcount) on a
> > + * normal page.
> > + */
> > +static void skb_pp_frag_ref(struct page *page)
> > +{
> > + struct page *head_page = compound_head(page);
> > +
> > + if (likely(is_pp_page(head_page)))
> > + page_pool_ref_page(head_page);
> > + else
> > + page_ref_inc(head_page);
> > +}
> >
> > doesn't even document that the caller must make sure that the skb
> > which owns this page is marked for pp recycling. The caller added
> > by this patch does that, but we should indicate somewhere that doing
> > skb_pp_frag_ref() for a frag in a non-pp-recycling skb is not correct.
>
> Correct
>
> >
> > We can either lean in the direction of making it less skb specific,
> > put the code in page_pool.c / helpers.h and make it clear that the
> > caller has to be careful.
> > Or we make it more skb specific, take a skb pointer as arg, and also
> > look at its recycling marking..
> > or just improve the kdoc.
>
> I've mentioned this in the past, but I generally try to prevent people
> from shooting themselves in the foot when creating APIs. Unless
> there's a proven performance hit, I'd move the pp_recycle check into
> skb_pp_frag_ref().
>
/**
 * skb_pp_frag_ref() - Increase fragment references of a page pool aware skb
 * @skb: page pool aware skb
 *
 * Increase the fragment reference count (pp_ref_count) of an skb's frags.
 * This is intended to gain fragment references only for page pool aware
 * skbs, i.e. when skb->pp_recycle is true, and not for fragments in a
 * non-pp-recycling skb. It falls back to increasing references (_refcount)
 * on normal pages, as page pool aware skbs may also carry normal page
 * fragments.
 */
Sure. Below is a snippet of the implementation of skb_pp_frag_ref,
which takes an skb as its argument. The loop that iterates through the
frags has been moved inside the function to avoid checking
skb->pp_recycle each time a frag reference is taken (though the compiler
would likely optimize some of that anyway). If there is no objection, it
will be included in v10. Thanks!
static int skb_pp_frag_ref(struct sk_buff *skb)
{
        struct skb_shared_info *shinfo;
        struct page *head_page;
        int i;

        if (!skb->pp_recycle)
                return -EINVAL;

        shinfo = skb_shinfo(skb);

        for (i = 0; i < shinfo->nr_frags; i++) {
                head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
                if (likely(is_pp_page(head_page)))
                        page_pool_ref_page(head_page);
                else
                        page_ref_inc(head_page);
        }

        return 0;
}

        /* if the skb is not cloned this does nothing
         * since we set nr_frags to 0.
         */
        if (skb_pp_frag_ref(from)) {
                for (i = 0; i < from_shinfo->nr_frags; i++)
                        __skb_frag_ref(&from_shinfo->frags[i]);
        }
> Thanks
> /Ilias
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-14 2:26 ` Liang Chen
@ 2023-12-14 2:34 ` Jakub Kicinski
2023-12-14 2:46 ` Liang Chen
0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2023-12-14 2:34 UTC (permalink / raw)
To: Liang Chen
Cc: Ilias Apalodimas, davem, edumazet, pabeni, hawk, linyunsheng,
netdev, linux-mm, jasowang, almasrymina
On Thu, 14 Dec 2023 10:26:47 +0800 Liang Chen wrote:
> If there is no objection, it will be included in v10.
If I manage to reach you before you post - please hold off for another
30min with posting, I'm going to apply patch 1.
* Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool
2023-12-14 2:34 ` Jakub Kicinski
@ 2023-12-14 2:46 ` Liang Chen
0 siblings, 0 replies; 15+ messages in thread
From: Liang Chen @ 2023-12-14 2:46 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Ilias Apalodimas, davem, edumazet, pabeni, hawk, linyunsheng,
netdev, linux-mm, jasowang, almasrymina
On Thu, Dec 14, 2023 at 10:34 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 14 Dec 2023 10:26:47 +0800 Liang Chen wrote:
> > If there is no objection, it will be included in v10.
>
> If I manage to reach you before you post - please hold off for another
> 30min with posting, I'm going to apply patch 1.
Sure. Thank you!
Thread overview: 15+ messages
2023-12-11 3:52 [PATCH net-next v8 0/4] skbuff: Optimize SKB coalescing for page pool Liang Chen
2023-12-11 3:52 ` [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining Liang Chen
2023-12-11 7:43 ` Ilias Apalodimas
2023-12-11 3:52 ` [PATCH net-next v8 2/4] page_pool: halve BIAS_MAX for multiple user references of a fragment Liang Chen
2023-12-11 10:12 ` Jesper Dangaard Brouer
2023-12-11 3:52 ` [PATCH net-next v8 3/4] skbuff: Add a function to check if a page belongs to page_pool Liang Chen
2023-12-11 7:40 ` Ilias Apalodimas
2023-12-11 3:52 ` [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing for page pool Liang Chen
2023-12-11 7:46 ` Ilias Apalodimas
2023-12-11 20:14 ` Jakub Kicinski
2023-12-12 3:00 ` Liang Chen
2023-12-13 7:09 ` Ilias Apalodimas
2023-12-14 2:26 ` Liang Chen
2023-12-14 2:34 ` Jakub Kicinski
2023-12-14 2:46 ` Liang Chen