From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Date: Mon, 11 Dec 2023 09:43:19 +0200
Subject: Re: [PATCH net-next v8 1/4] page_pool: transition to reference count management after page draining
To: Liang Chen
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, hawk@kernel.org, linyunsheng@huawei.com, netdev@vger.kernel.org, linux-mm@kvack.org, jasowang@redhat.com, almasrymina@google.com
In-Reply-To: <20231211035243.15774-2-liangchen.linux@gmail.com>
References: <20231211035243.15774-1-liangchen.linux@gmail.com> <20231211035243.15774-2-liangchen.linux@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Mon, 11 Dec 2023 at 05:53, Liang Chen wrote:
>
> To support multiple users referencing the same fragment,
> 'pp_frag_count' is renamed to 'pp_ref_count', transitioning pp pages
> from fragment management to reference count management after draining
> based on the suggestion from [1].
>
> The idea is that the concept of fragmenting exists before the page is
> drained, and all related functions retain their current names.
> However, once the page is drained, its management shifts to being
> governed by 'pp_ref_count'. Therefore, all functions associated with
> that lifecycle stage of a pp page are renamed.
>
> [1]
> http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com
>
> Signed-off-by: Liang Chen
> Reviewed-by: Yunsheng Lin

Reviewed-by: Ilias Apalodimas
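
Not part of the patch, but since the rename is the whole point here, a
minimal driver-side sketch of how the new helpers are meant to fit
together, loosely modeled on the mlx5e hunk below. struct my_frag_page,
MY_PAGECNT_BIAS_MAX and the my_* functions are made up for illustration;
only the page_pool_* calls come from this series:

#include <linux/mm.h>
#include <net/page_pool/helpers.h>

/* Hypothetical per-ring fragment bookkeeping, mirroring mlx5e's frag_page. */
struct my_frag_page {
        struct page *page;
        u16 frags;                      /* fragments handed out so far */
};

#define MY_PAGECNT_BIAS_MAX     (1UL << 15)     /* arbitrary illustrative bias */

static int my_frag_page_alloc(struct page_pool *pool, struct my_frag_page *fp)
{
        fp->page = page_pool_dev_alloc_pages(pool);
        if (!fp->page)
                return -ENOMEM;

        /* Take the maximum number of pp_ref_count references in one go,
         * so handing out individual fragments needs no atomic operation.
         */
        page_pool_fragment_page(fp->page, MY_PAGECNT_BIAS_MAX);
        fp->frags = 0;
        return 0;
}

static void *my_frag_page_get(struct my_frag_page *fp, unsigned int offset)
{
        fp->frags++;                    /* one more consumer of this page */
        return page_address(fp->page) + offset;
}

static void my_frag_page_release(struct page_pool *pool, struct my_frag_page *fp)
{
        long drain_count = MY_PAGECNT_BIAS_MAX - fp->frags;

        /* Return the unused references with a single atomic op; once the
         * last reference is gone, hand the page back through the "unrefed"
         * path so the pool can recycle or unmap it.
         */
        if (page_pool_unref_page(fp->page, drain_count) == 0)
                page_pool_put_unrefed_page(pool, fp->page, -1, true);
}

The producer fast path only bumps a plain counter; the single
atomic_long_sub_return() (or the atomic_read() fast path) is paid once
at release time.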

> ---
>  .../net/ethernet/mellanox/mlx5/core/en_rx.c |  4 +-
>  include/linux/mm_types.h                    |  2 +-
>  include/net/page_pool/helpers.h             | 60 +++++++++++--------
>  include/net/page_pool/types.h               |  6 +-
>  net/core/page_pool.c                        | 12 ++--
>  5 files changed, 46 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 8d9743a5e42c..98d33ac7ec64 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
>          u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
>          struct page *page = frag_page->page;
>
> -        if (page_pool_defrag_page(page, drain_count) == 0)
> -                page_pool_put_defragged_page(rq->page_pool, page, -1, true);
> +        if (page_pool_unref_page(page, drain_count) == 0)
> +                page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
>  }
>
>  static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 957ce38768b2..64e4572ef06d 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -125,7 +125,7 @@ struct page {
>                          struct page_pool *pp;
>                          unsigned long _pp_mapping_pad;
>                          unsigned long dma_addr;
> -                        atomic_long_t pp_frag_count;
> +                        atomic_long_t pp_ref_count;
>                  };
>                  struct {        /* Tail pages of compound page */
>                          unsigned long compound_head;    /* Bit zero is set */
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 4ebd544ae977..d0c5e7e6857a 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -29,7 +29,7 @@
>   * page allocated from page pool. Page splitting enables memory saving and thus
>   * avoids TLB/cache miss for data access, but there also is some cost to
>   * implement page splitting, mainly some cache line dirtying/bouncing for
> - * 'struct page' and atomic operation for page->pp_frag_count.
> + * 'struct page' and atomic operation for page->pp_ref_count.
>   *
>   * The API keeps track of in-flight pages, in order to let API users know when
>   * it is safe to free a page_pool object, the API users must call
> @@ -214,69 +214,77 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
>          return pool->p.dma_dir;
>  }
>
> -/* pp_frag_count represents the number of writers who can update the page
> - * either by updating skb->data or via DMA mappings for the device.
> - * We can't rely on the page refcnt for that as we don't know who might be
> - * holding page references and we can't reliably destroy or sync DMA mappings
> - * of the fragments.
> +/**
> + * page_pool_fragment_page() - split a fresh page into fragments
> + * @page: page to split
> + * @nr: references to set
> + *
> + * pp_ref_count represents the number of outstanding references to the page,
> + * which will be freed using page_pool APIs (rather than page allocator APIs
> + * like put_page()). Such references are usually held by page_pool-aware
> + * objects like skbs marked for page pool recycling.
>   *
> - * When pp_frag_count reaches 0 we can either recycle the page if the page
> - * refcnt is 1 or return it back to the memory allocator and destroy any
> - * mappings we have.
> + * This helper allows the caller to take (set) multiple references to a
> + * freshly allocated page. The page must be freshly allocated (have a
> + * pp_ref_count of 1). This is commonly done by drivers and
> + * "fragment allocators" to save atomic operations - either when they know
> + * upfront how many references they will need; or to take MAX references and
> + * return the unused ones with a single atomic dec(), instead of performing
> + * multiple atomic inc() operations.
>   */
>  static inline void page_pool_fragment_page(struct page *page, long nr)
>  {
> -        atomic_long_set(&page->pp_frag_count, nr);
> +        atomic_long_set(&page->pp_ref_count, nr);
>  }
>
> -static inline long page_pool_defrag_page(struct page *page, long nr)
> +static inline long page_pool_unref_page(struct page *page, long nr)
>  {
>          long ret;
>
> -        /* If nr == pp_frag_count then we have cleared all remaining
> +        /* If nr == pp_ref_count then we have cleared all remaining
>           * references to the page:
>           * 1. 'n == 1': no need to actually overwrite it.
>           * 2. 'n != 1': overwrite it with one, which is the rare case
> -         *    for pp_frag_count draining.
> +         *    for pp_ref_count draining.
>           *
>           * The main advantage to doing this is that not only we avoid a atomic
>           * update, as an atomic_read is generally a much cheaper operation than
>           * an atomic update, especially when dealing with a page that may be
> -         * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
> +         * referenced by only 2 or 3 users; but also unify the pp_ref_count
>           * handling by ensuring all pages have partitioned into only 1 piece
>           * initially, and only overwrite it when the page is partitioned into
>           * more than one piece.
>           */
> -        if (atomic_long_read(&page->pp_frag_count) == nr) {
> +        if (atomic_long_read(&page->pp_ref_count) == nr) {
>                  /* As we have ensured nr is always one for constant case using
>                   * the BUILD_BUG_ON(), only need to handle the non-constant case
> -                 * here for pp_frag_count draining, which is a rare case.
> +                 * here for pp_ref_count draining, which is a rare case.
>                   */
>                  BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
>                  if (!__builtin_constant_p(nr))
> -                        atomic_long_set(&page->pp_frag_count, 1);
> +                        atomic_long_set(&page->pp_ref_count, 1);
>
>                  return 0;
>          }
>
> -        ret = atomic_long_sub_return(nr, &page->pp_frag_count);
> +        ret = atomic_long_sub_return(nr, &page->pp_ref_count);
>          WARN_ON(ret < 0);
>
> -        /* We are the last user here too, reset pp_frag_count back to 1 to
> +        /* We are the last user here too, reset pp_ref_count back to 1 to
>           * ensure all pages have been partitioned into 1 piece initially,
>           * this should be the rare case when the last two fragment users call
> -         * page_pool_defrag_page() currently.
> +         * page_pool_unref_page() currently.
>           */
>          if (unlikely(!ret))
> -                atomic_long_set(&page->pp_frag_count, 1);
> +                atomic_long_set(&page->pp_ref_count, 1);
>
>          return ret;
>  }
>
> -static inline bool page_pool_is_last_frag(struct page *page)
> +static inline bool page_pool_is_last_ref(struct page *page)
>  {
> -        /* If page_pool_defrag_page() returns 0, we were the last user */
> -        return page_pool_defrag_page(page, 1) == 0;
> +        /* If page_pool_unref_page() returns 0, we were the last user */
> +        return page_pool_unref_page(page, 1) == 0;
>  }
>
>  /**
> @@ -301,10 +309,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
>           * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
>           */
>  #ifdef CONFIG_PAGE_POOL
> -        if (!page_pool_is_last_frag(page))
> +        if (!page_pool_is_last_ref(page))
>                  return;
>
> -        page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
> +        page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
>  #endif
>  }
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index e1bb92c192de..6a5323619f6e 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -224,9 +224,9 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  }
>  #endif
>
> -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> -                                  unsigned int dma_sync_size,
> -                                  bool allow_direct);
> +void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
> +                                unsigned int dma_sync_size,
> +                                bool allow_direct);
>
>  static inline bool is_page_pool_compiled_in(void)
>  {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index df2a06d7da52..106220b1f89c 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
>          return NULL;
>  }
>
> -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> -                                  unsigned int dma_sync_size, bool allow_direct)
> +void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
> +                                unsigned int dma_sync_size, bool allow_direct)
>  {
>          page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
>          if (page && !page_pool_recycle_in_ring(pool, page)) {
> @@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>                  page_pool_return_page(pool, page);
>          }
>  }
> -EXPORT_SYMBOL(page_pool_put_defragged_page);
> +EXPORT_SYMBOL(page_pool_put_unrefed_page);
>
>  /**
>   * page_pool_put_page_bulk() - release references on multiple pages
> @@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>                  struct page *page = virt_to_head_page(data[i]);
>
>                  /* It is not the last user for the page frag case */
> -                if (!page_pool_is_last_frag(page))
> +                if (!page_pool_is_last_ref(page))
>                          continue;
>
>                  page = __page_pool_put_page(pool, page, -1, false);
> @@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
>          long drain_count = BIAS_MAX - pool->frag_users;
>
>          /* Some user is still using the page frag */
> -        if (likely(page_pool_defrag_page(page, drain_count)))
> +        if (likely(page_pool_unref_page(page, drain_count)))
>                  return NULL;
>
>          if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
> @@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
>
>          pool->frag_page = NULL;
>
> -        if (!page || page_pool_defrag_page(page, drain_count))
> +        if (!page || page_pool_unref_page(page, drain_count))
>                  return;
>
>          page_pool_return_page(pool, page);
> --
> 2.31.1
>
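
A quick walk-through of the counting with the new names, to spell out how
page_pool_unref_page() behaves (not from the patch, just restating what
its comments describe):

        page_pool_fragment_page(page, 3);       /* pp_ref_count == 3: three users */
        page_pool_unref_page(page, 1);          /* returns 2, page still in use */
        page_pool_unref_page(page, 1);          /* returns 1, page still in use */
        page_pool_is_last_ref(page);            /* returns true: the last reference
                                                 * is gone and pp_ref_count is left
                                                 * at 1, ready for the page to be
                                                 * recycled or fragmented again */

At that point the caller hands the page to page_pool_put_unrefed_page(),
exactly as page_pool_put_page() does in the hunk above.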