Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
From: Yunsheng Lin
To: Matteo Croce
Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, David S. Miller,
 Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
 Mirko Lindner, Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
 Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
 Boris Pismenny, Arnd Bergmann, Andrew Morton, Peter Zijlstra (Intel),
 Vlastimil Babka, Yu Zhao, Will Deacon, Michel Lespinasse, Fenghua Yu,
 Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe, Guoqing Jiang,
 Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
 Aleksandr Nogikh, Jakub Sitnicki, Marco Elver, Willem de Bruijn,
 Miaohe Lin, Guillaume Nault, Matthew Wilcox, Eric Dumazet, David Ahern,
 Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn, Paolo Abeni
Date: Thu, 29 Apr 2021 16:27:21 +0800
In-Reply-To: <20210409223801.104657-1-mcroce@linux.microsoft.com>
References: <20210409223801.104657-1-mcroce@linux.microsoft.com>
On 2021/4/10 6:37, Matteo Croce wrote:
> From: Matteo Croce
>
> This is a respin of [1]
>
> This patchset shows the plans for allowing page_pool to handle and
> maintain DMA map/unmap of the pages it serves to the driver. For this
> to work a return hook in the network core is introduced.
>
> The overall purpose is to simplify drivers, by providing a page
> allocation API that does recycling, such that each driver doesn't have
> to reinvent its own recycling scheme. Using page_pool in a driver
> does not require implementing XDP support, but it makes it trivially
> easy to do so. Instead of allocating buffers specifically for SKBs
> we now allocate a generic buffer and either wrap it in an SKB
> (via build_skb) or create an XDP frame.
> The recycling code leverages the XDP recycle APIs.
>
> The Marvell mvpp2 and mvneta drivers are used in this patchset to
> demonstrate how to use the API, and were tested on MacchiatoBIN
> and EspressoBIN boards respectively.
>

Hi, Matteo

I added skb frag page recycling to hns3 based on this patchset, and it
gives a 10%~20% performance improvement for a single-thread iperf TCP
flow (with the IOMMU off; the improvement may be even larger with the
IOMMU on, since the DMA map/unmap is avoided as well). Thanks for the
work.
The skb frag page recycling support in the hns3 driver is not as simple
as in the mvpp2 and mvneta drivers, because:

1. the hns3 driver does not have XDP support yet, so a
   "struct xdp_rxq_info" is added only to bind the relation between
   "struct page" and "struct page_pool".

2. the hns3 driver already does page reuse based on page splitting and
   the page reference count, but that may not work if the upper stack
   cannot process the skb and release the corresponding page fast
   enough.

3. the hns3 driver batches the page reference count updates, see:
   aeda9bf87a45 ("net: hns3: batch the page reference count updates").

So it would be better if:

1. skb frag page recycling did not need a "struct xdp_rxq_info" or
   "struct xdp_mem_info" to bind the relation between "struct page" and
   "struct page_pool", which seems unnecessary at this point if storing
   a "struct page_pool" pointer directly in "struct page" does not
   increase its size.

2. the batching of page reference count updates were done in page_pool
   itself instead of in each driver.

page_pool_atomic_sub_if_positive() is added to decide who may call
page_pool_put_full_page(): the driver and the stack may hold references
to the same page, and only the last holder of a complete reference may
call page_pool_put_full_page() to decide whether recycling is possible;
if not, the page is released. So I am wondering if a similar
page_pool_atomic_sub_if_positive() could be added to the corresponding
user-space address unmapping path to allow skb recycling for RX zerocopy
too?
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index c21dd11..8b01a7d 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2566,7 +2566,10 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
 	unsigned int order = hns3_page_order(ring);
 	struct page *p;
 
-	p = dev_alloc_pages(order);
+	if (ring->page_pool)
+		p = page_pool_dev_alloc_pages(ring->page_pool);
+	else
+		p = dev_alloc_pages(order);
 	if (!p)
 		return -ENOMEM;
 
@@ -2582,13 +2585,32 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
 	return 0;
 }
 
+static void hns3_page_frag_cache_drain(struct hns3_enet_ring *ring,
+				       struct hns3_desc_cb *cb)
+{
+	if (ring->page_pool) {
+		struct page *p = cb->priv;
+
+		if (page_pool_atomic_sub_if_positive(p, cb->pagecnt_bias))
+			return;
+
+		if (cb->pagecnt_bias > 1)
+			page_ref_sub(p, cb->pagecnt_bias - 1);
+
+		page_pool_put_full_page(ring->page_pool, p, false);
+		return;
+	}
+
+	__page_frag_cache_drain(cb->priv, cb->pagecnt_bias);
+}
+
 static void hns3_free_buffer(struct hns3_enet_ring *ring,
 			     struct hns3_desc_cb *cb, int budget)
 {
 	if (cb->type == DESC_TYPE_SKB)
 		napi_consume_skb(cb->priv, budget);
 	else if (!HNAE3_IS_TX_RING(ring) && cb->pagecnt_bias)
-		__page_frag_cache_drain(cb->priv, cb->pagecnt_bias);
+		hns3_page_frag_cache_drain(ring, cb);
 	memset(cb, 0, sizeof(*cb));
 }
 
@@ -2892,13 +2914,15 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
 			size - pull_len, truesize);
 
+	skb_mark_for_recycle(skb, desc_cb->priv, &ring->rxq_info.mem);
+
 	/* Avoid re-using remote and pfmemalloc pages, or the stack is still
 	 * using the page when page_offset rollback to zero, flag default
 	 * unreuse
 	 */
 	if (!dev_page_is_reusable(desc_cb->priv) ||
 	    (!desc_cb->page_offset && !hns3_can_reuse_page(desc_cb))) {
-		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
+		hns3_page_frag_cache_drain(ring, desc_cb);
 		return;
 	}
 
@@ -2911,7 +2935,7 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 		desc_cb->reuse_flag = 1;
 		desc_cb->page_offset = 0;
 	} else if (desc_cb->pagecnt_bias) {
-		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
+		hns3_page_frag_cache_drain(ring, desc_cb);
 		return;
 	}
 
@@ -3156,8 +3180,7 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
 	if (dev_page_is_reusable(desc_cb->priv))
 		desc_cb->reuse_flag = 1;
 	else /* This page cannot be reused so discard it */
-		__page_frag_cache_drain(desc_cb->priv,
-					desc_cb->pagecnt_bias);
+		hns3_page_frag_cache_drain(ring, desc_cb);
 
 	hns3_rx_ring_move_fw(ring);
 	return 0;
@@ -4028,6 +4051,33 @@ static int hns3_alloc_ring_memory(struct hns3_enet_ring *ring)
 		goto out_with_desc_cb;
 
 	if (!HNAE3_IS_TX_RING(ring)) {
+		struct page_pool_params pp_params = {
+			/* internal DMA mapping in page_pool */
+			.flags = 0,
+			.order = 0,
+			.pool_size = 1024,
+			.nid = dev_to_node(ring_to_dev(ring)),
+			.dev = ring_to_dev(ring),
+			.dma_dir = DMA_FROM_DEVICE,
+			.offset = 0,
+			.max_len = 0,
+		};
+
+		ring->page_pool = page_pool_create(&pp_params);
+		if (IS_ERR(ring->page_pool)) {
+			dev_err(ring_to_dev(ring), "page pool creation failed\n");
+			ring->page_pool = NULL;
+		}
+
+		ret = xdp_rxq_info_reg(&ring->rxq_info, ring_to_netdev(ring),
+				       ring->queue_index, 0);
+		if (ret)
+			dev_err(ring_to_dev(ring), "xdp_rxq_info_reg failed\n");
+
+		ret = xdp_rxq_info_reg_mem_model(&ring->rxq_info, MEM_TYPE_PAGE_POOL,
+						 ring->page_pool);
+		if (ret)
+			dev_err(ring_to_dev(ring), "xdp_rxq_info_reg_mem_model failed\n");
+
 		ret = hns3_alloc_ring_buffers(ring);
 		if (ret)
 			goto out_with_desc;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index daa04ae..fd53fcc 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -6,6 +6,9 @@
 
 #include
+#include
+#include
+
 #include "hnae3.h"
 
 enum hns3_nic_state {
@@ -408,6 +411,8 @@ struct hns3_enet_ring {
 	struct hnae3_queue *tqp;
 	int queue_index;
 	struct device *dev; /* will be used for DMA mapping of descriptors */
+	struct page_pool *page_pool;
+	struct xdp_rxq_info rxq_info;
 
 	/* statistic */
 	struct ring_stats stats;
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 75fffc1..70c310e 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -195,6 +195,8 @@ static inline void page_pool_put_full_page(struct page_pool *pool,
 #endif
 }
 
+bool page_pool_atomic_sub_if_positive(struct page *page, int i);
+
 /* Same as above but the caller must guarantee safe context. e.g NAPI */
 static inline void page_pool_recycle_direct(struct page_pool *pool,
 					    struct page *page)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 43bfd2e..8bc8b7e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -596,6 +596,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 }
 EXPORT_SYMBOL(page_pool_update_nid);
 
+bool page_pool_atomic_sub_if_positive(struct page *page, int i)
+{
+	atomic_t *v = &page->_refcount;
+	int dec, c;
+
+	do {
+		c = atomic_read(v);
+
+		dec = c - i;
+		if (unlikely(dec == 0))
+			return false;
+		else if (unlikely(dec < 0)) {
+			pr_err("c: %d, dec: %d, i: %d\n", c, dec, i);
+			return false;
+		}
+	} while (!atomic_try_cmpxchg(v, &c, dec));
+
+	return true;
+}
+
 bool page_pool_return_skb_page(void *data)
 {
 	struct xdp_mem_info mem_info;
@@ -606,6 +626,9 @@ bool page_pool_return_skb_page(void *data)
 	if (unlikely(page->signature != PP_SIGNATURE))
 		return false;
 
+	if (page_pool_atomic_sub_if_positive(page, 1))
+		return true;
+
 	info.raw = page_private(page);
 	mem_info = info.mem_info;