From: Matthew Wilcox <willy@infradead.org>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: brouer@redhat.com, Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
netdev@vger.kernel.org, linux-mm@kvack.org,
Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH v2 17/24] page_pool: Convert page_pool_return_skb_page() to use netmem
Date: Mon, 9 Jan 2023 18:36:54 +0000
Message-ID: <Y7xexniPnKSgCMVE@casper.infradead.org>
In-Reply-To: <c0f53cee-aaa7-2fe8-ff5b-0853085b6514@redhat.com>
[-- Attachment #1: Type: text/plain, Size: 4820 bytes --]
On Fri, Jan 06, 2023 at 09:16:25PM +0100, Jesper Dangaard Brouer wrote:
>
>
> On 06/01/2023 17.53, Matthew Wilcox wrote:
> > On Fri, Jan 06, 2023 at 04:49:12PM +0100, Jesper Dangaard Brouer wrote:
> > > On 05/01/2023 22.46, Matthew Wilcox (Oracle) wrote:
> > > > This function accesses the pagepool members of struct page directly,
> > > > so it needs to become netmem. Add page_pool_put_full_netmem() and
> > > > page_pool_recycle_netmem().
> > > >
> > > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > > > ---
> > > > include/net/page_pool.h | 14 +++++++++++++-
> > > > net/core/page_pool.c | 13 ++++++-------
> > > > 2 files changed, 19 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > > > index fbb653c9f1da..126c04315929 100644
> > > > --- a/include/net/page_pool.h
> > > > +++ b/include/net/page_pool.h
> > > > @@ -464,10 +464,16 @@ static inline void page_pool_put_page(struct page_pool *pool,
> > > > }
> > > > /* Same as above but will try to sync the entire area pool->max_len */
> > > > +static inline void page_pool_put_full_netmem(struct page_pool *pool,
> > > > + struct netmem *nmem, bool allow_direct)
> > > > +{
> > > > + page_pool_put_netmem(pool, nmem, -1, allow_direct);
> > > > +}
> > > > +
> > > > static inline void page_pool_put_full_page(struct page_pool *pool,
> > > > struct page *page, bool allow_direct)
> > > > {
> > > > - page_pool_put_page(pool, page, -1, allow_direct);
> > > > + page_pool_put_full_netmem(pool, page_netmem(page), allow_direct);
> > > > }
> > > > /* Same as above but the caller must guarantee safe context. e.g NAPI */
> > > > @@ -477,6 +483,12 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
> > > > page_pool_put_full_page(pool, page, true);
> > > > }
> > > > +static inline void page_pool_recycle_netmem(struct page_pool *pool,
> > > > + struct netmem *nmem)
> > > > +{
> > > > + page_pool_put_full_netmem(pool, nmem, true);
> > > ^^^^
> > >
> > > It is not clear in what context page_pool_recycle_netmem() will be used,
> > > but I think the 'true' (allow_direct=true) might be wrong here.
> > >
> > > It is only in limited special cases (RX-NAPI context) we can allow
> > > direct return to the RX-alloc-cache.
> >
> > Mmm. It's a c'n'p of the previous function:
> >
> > static inline void page_pool_recycle_direct(struct page_pool *pool,
> > struct page *page)
> > {
> > page_pool_put_full_page(pool, page, true);
> > }
> >
> > so perhaps it's just badly named?
>
> Yes, I think so.
>
> Can we name it:
> page_pool_recycle_netmem_direct
>
> And perhaps add a comment with a warning like:
> /* Caller must guarantee safe context. e.g NAPI */
>
> Like the page_pool_recycle_direct() function has a comment.
I don't really like the new name you're proposing here. Really,
page_pool_recycle_direct() is the perfect name, it just has the wrong
type.
I considered the attached megapatch, but I don't think that's a great
idea.
So here's what I'm planning instead:
page_pool: Allow page_pool_recycle_direct() to take a netmem or a page
With no better name for a variant of page_pool_recycle_direct() which
takes a netmem instead of a page, use _Generic() to allow it to take
either a page or a netmem argument. It's a bit ugly, but maybe not
the worst alternative?
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index abe3822a1125..1eed8ed2dcc1 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -477,12 +477,22 @@ static inline void page_pool_put_full_page(struct page_pool *pool,
}
/* Same as above but the caller must guarantee safe context. e.g NAPI */
-static inline void page_pool_recycle_direct(struct page_pool *pool,
+static inline void __page_pool_recycle_direct(struct page_pool *pool,
+ struct netmem *nmem)
+{
+ page_pool_put_full_netmem(pool, nmem, true);
+}
+
+static inline void __page_pool_recycle_page_direct(struct page_pool *pool,
struct page *page)
{
- page_pool_put_full_page(pool, page, true);
+ page_pool_put_full_netmem(pool, page_netmem(page), true);
}
+#define page_pool_recycle_direct(pool, mem) _Generic((mem), \
+ struct netmem *: __page_pool_recycle_direct(pool, (struct netmem *)mem), \
+ struct page *: __page_pool_recycle_page_direct(pool, (struct page *)mem))
+
#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \
(sizeof(dma_addr_t) > sizeof(unsigned long))
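For anyone unfamiliar with _Generic(), here is a minimal, self-contained
user-space sketch (not part of the patch; the struct and function names are
stand-ins for the kernel types above) showing how the dispatch resolves at
compile time:

#include <stdio.h>

/* Stand-ins for the kernel's struct page / struct netmem. */
struct page { int id; };
struct netmem { int id; };

static void recycle_netmem(struct netmem *nmem)
{
	printf("netmem path: %d\n", nmem->id);
}

static void recycle_page(struct page *page)
{
	printf("page path: %d\n", page->id);
}

/*
 * _Generic() picks the association matching the static type of 'mem',
 * so each call site compiles down to a direct call of the right helper;
 * any other pointer type is a compile-time error.
 */
#define recycle_direct(mem) _Generic((mem),		\
	struct netmem *: recycle_netmem,		\
	struct page *: recycle_page)(mem)

int main(void)
{
	struct page p = { .id = 1 };
	struct netmem n = { .id = 2 };

	recycle_direct(&p);	/* calls recycle_page()   */
	recycle_direct(&n);	/* calls recycle_netmem() */
	return 0;
}

In the kernel macro above, each branch spells out the full call with a cast
because the non-selected branch of a _Generic() selection still has to
type-check.  The upside over the attached megapatch is that existing callers
passing a struct page * keep working unchanged, while netmem-aware callers
can pass a struct netmem * directly.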
[-- Attachment #2: direct-netmem.diff --]
[-- Type: text/plain, Size: 11183 bytes --]
diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 2c3c81473b97..eda5ed12ecee 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -188,10 +188,10 @@ NAPI poller
dma_dir = page_pool_get_dma_dir(dring->page_pool);
while (done < budget) {
if (some error)
- page_pool_recycle_direct(page_pool, page);
+ page_pool_recycle_direct(page_pool, page_netmem(page));
if (packet_is_xdp) {
if XDP_DROP:
- page_pool_recycle_direct(page_pool, page);
+ page_pool_recycle_direct(page_pool, page_netmem(page));
} else (packet_is_skb) {
page_pool_release_page(page_pool, page);
new_page = page_pool_dev_alloc_pages(page_pool);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 16ce7a90610c..088c2b31e450 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -736,7 +736,7 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
*mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, bp->rx_dir,
DMA_ATTR_WEAK_ORDERING);
if (dma_mapping_error(dev, *mapping)) {
- page_pool_recycle_direct(rxr->page_pool, page);
+ page_pool_recycle_direct(rxr->page_pool, page_netmem(page));
return NULL;
}
return page;
@@ -2975,7 +2975,8 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
dma_unmap_page_attrs(&pdev->dev, mapping, PAGE_SIZE,
bp->rx_dir,
DMA_ATTR_WEAK_ORDERING);
- page_pool_recycle_direct(rxr->page_pool, data);
+ page_pool_recycle_direct(rxr->page_pool,
+ page_netmem(data));
} else {
dma_unmap_single_attrs(&pdev->dev, mapping,
bp->rx_buf_use_size, bp->rx_dir,
@@ -3002,7 +3003,8 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
rx_agg_buf->page = NULL;
__clear_bit(i, rxr->rx_agg_bmap);
- page_pool_recycle_direct(rxr->page_pool, page);
+ page_pool_recycle_direct(rxr->page_pool,
+ page_netmem(page));
} else {
dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 36d5202c0aee..df410ce24028 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -156,7 +156,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
for (j = 0; j < frags; j++) {
tx_cons = NEXT_TX(tx_cons);
tx_buf = &txr->tx_buf_ring[tx_cons];
- page_pool_recycle_direct(rxr->page_pool, tx_buf->page);
+ page_pool_recycle_direct(rxr->page_pool,
+ page_netmem(tx_buf->page));
}
}
tx_cons = NEXT_TX(tx_cons);
@@ -209,7 +210,7 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
for (i = 0; i < shinfo->nr_frags; i++) {
struct page *page = skb_frag_page(&shinfo->frags[i]);
- page_pool_recycle_direct(rxr->page_pool, page);
+ page_pool_recycle_direct(rxr->page_pool, page_netmem(page));
}
shinfo->nr_frags = 0;
}
@@ -310,7 +311,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
if (xdp_do_redirect(bp->dev, &xdp, xdp_prog)) {
trace_xdp_exception(bp->dev, xdp_prog, act);
- page_pool_recycle_direct(rxr->page_pool, page);
+ page_pool_recycle_direct(rxr->page_pool,
+ page_netmem(page));
return true;
}
diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index bf0190e1d2ea..ef078a72aa26 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -920,7 +920,8 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
napi_gro_receive(napi, skb);
} else {
- page_pool_recycle_direct(rx->page_pool, entry->page);
+ page_pool_recycle_direct(rx->page_pool,
+ page_netmem(entry->page));
rx->dropped++;
}
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 644f3c963730..cb4406933794 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1675,7 +1675,8 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
*/
skb = build_skb(page_address(page), PAGE_SIZE);
if (unlikely(!skb)) {
- page_pool_recycle_direct(rxq->page_pool, page);
+ page_pool_recycle_direct(rxq->page_pool,
+ page_netmem(page));
ndev->stats.rx_dropped++;
netdev_err_once(ndev, "build_skb failed!\n");
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index c8820ab22169..152cf434102a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -335,7 +335,7 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_u
/* Non-XSK always uses PAGE_SIZE. */
addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir);
if (unlikely(dma_mapping_error(rq->pdev, addr))) {
- page_pool_recycle_direct(rq->page_pool, au->page);
+ page_pool_recycle_direct(rq->page_pool, page_netmem(au->page));
au->page = NULL;
return -ENOMEM;
}
@@ -360,7 +360,7 @@ void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool rec
return;
mlx5e_page_dma_unmap(rq, page);
- page_pool_recycle_direct(rq->page_pool, page);
+ page_pool_recycle_direct(rq->page_pool, page_netmem(page));
} else {
mlx5e_page_dma_unmap(rq, page);
page_pool_release_page(rq->page_pool, page);
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
index 5314c064ceae..8bb172aad9f0 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
@@ -43,7 +43,7 @@ static void lan966x_fdma_rx_free_page(struct lan966x_rx *rx)
if (unlikely(!page))
return;
- page_pool_recycle_direct(rx->page_pool, page);
+ page_pool_recycle_direct(rx->page_pool, page_netmem(page));
}
static void lan966x_fdma_rx_add_dcb(struct lan966x_rx *rx,
@@ -534,7 +534,7 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx,
return skb;
free_page:
- page_pool_recycle_direct(rx->page_pool, page);
+ page_pool_recycle_direct(rx->page_pool, page_netmem(page));
return NULL;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index c6951c976f5d..ce7ff8032038 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -5251,7 +5251,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_rx_extended_status(priv, &priv->dev->stats,
&priv->xstats, rx_q->dma_erx + entry);
if (unlikely(status == discard_frame)) {
- page_pool_recycle_direct(rx_q->page_pool, buf->page);
+ page_pool_recycle_direct(rx_q->page_pool,
+ page_netmem(buf->page));
buf->page = NULL;
error = 1;
if (!priv->hwts_rx_en)
@@ -5357,7 +5358,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
skb_put(skb, buf1_len);
/* Data payload copied into SKB, page ready for recycle */
- page_pool_recycle_direct(rx_q->page_pool, buf->page);
+ page_pool_recycle_direct(rx_q->page_pool,
+ page_netmem(buf->page));
buf->page = NULL;
} else if (buf1_len) {
dma_sync_single_for_cpu(priv->device, buf->addr,
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 13c9c2d6b79b..c2f6ea843fe2 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -380,7 +380,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
}
/* the interface is going down, pages are purged */
- page_pool_recycle_direct(pool, page);
+ page_pool_recycle_direct(pool, page_netmem(page));
return;
}
@@ -417,7 +417,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size));
if (!skb) {
ndev->stats.rx_dropped++;
- page_pool_recycle_direct(pool, page);
+ page_pool_recycle_direct(pool, page_netmem(page));
goto requeue;
}
@@ -447,7 +447,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
pkt_size, 0);
if (ret < 0) {
WARN_ON(ret == -ENOMEM);
- page_pool_recycle_direct(pool, new_page);
+ page_pool_recycle_direct(pool, page_netmem(new_page));
}
}
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index 83596ec0c7cb..7432fd0ec8ee 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -324,7 +324,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
}
/* the interface is going down, pages are purged */
- page_pool_recycle_direct(pool, page);
+ page_pool_recycle_direct(pool, page_netmem(page));
return;
}
@@ -360,7 +360,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size));
if (!skb) {
ndev->stats.rx_dropped++;
- page_pool_recycle_direct(pool, page);
+ page_pool_recycle_direct(pool, page_netmem(page));
goto requeue;
}
@@ -391,7 +391,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
pkt_size, 0);
if (ret < 0) {
WARN_ON(ret == -ENOMEM);
- page_pool_recycle_direct(pool, new_page);
+ page_pool_recycle_direct(pool, page_netmem(new_page));
}
}
diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
index 758295c898ac..c3de972743ba 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.c
+++ b/drivers/net/ethernet/ti/cpsw_priv.c
@@ -1131,7 +1131,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
cpsw_err(priv, ifup,
"cannot submit page to channel %d rx, error %d\n",
ch, ret);
- page_pool_recycle_direct(pool, page);
+ page_pool_recycle_direct(pool,
+ page_netmem(page));
return ret;
}
}
@@ -1378,7 +1379,7 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
out:
return ret;
drop:
- page_pool_recycle_direct(cpsw->page_pool[ch], page);
+ page_pool_recycle_direct(cpsw->page_pool[ch], page_netmem(page));
return ret;
}
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index abe3822a1125..5cff207c33a4 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -478,9 +478,9 @@ static inline void page_pool_put_full_page(struct page_pool *pool,
/* Same as above but the caller must guarantee safe context. e.g NAPI */
static inline void page_pool_recycle_direct(struct page_pool *pool,
- struct page *page)
+ struct netmem *nmem)
{
- page_pool_put_full_page(pool, page, true);
+ page_pool_put_full_netmem(pool, nmem, true);
}
#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \