From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org,
	linyunsheng@huawei.com
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, liangchen.linux@gmail.com
Subject: [PATCH net-next v4 1/4] page_pool: Rename pp_frag_count to pp_ref_count
Date: Wed, 29 Nov 2023 11:11:58 +0800
Message-Id: <20231129031201.32014-2-liangchen.linux@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231129031201.32014-1-liangchen.linux@gmail.com>
References: <20231129031201.32014-1-liangchen.linux@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To support multiple users referencing the same fragment, pp_frag_count
is renamed to pp_ref_count to better reflect its actual meaning, as
suggested in [1].

[1] http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c |  4 +-
 include/linux/mm_types.h                    |  2 +-
 include/net/page_pool/helpers.h             | 45 ++++++++++---------
 include/net/page_pool/types.h               |  2 +-
 net/core/page_pool.c                        | 12 ++---
 5 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 8d9743a5e42c..98d33ac7ec64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
 	struct page *page = frag_page->page;
 
-	if (page_pool_defrag_page(page, drain_count) == 0)
-		page_pool_put_defragged_page(rq->page_pool, page, -1, true);
+	if (page_pool_unref_page(page, drain_count) == 0)
+		page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
 }
 
 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..64e4572ef06d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,7 +125,7 @@ struct page {
 			struct page_pool *pp;
 			unsigned long _pp_mapping_pad;
 			unsigned long dma_addr;
-			atomic_long_t pp_frag_count;
+			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4ebd544ae977..9dc8eaf8a959 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -29,7 +29,7 @@
 * page allocated from page pool. Page splitting enables memory saving and thus
 * avoids TLB/cache miss for data access, but there also is some cost to
 * implement page splitting, mainly some cache line dirtying/bouncing for
- * 'struct page' and atomic operation for page->pp_frag_count.
+ * 'struct page' and atomic operation for page->pp_ref_count.
 *
 * The API keeps track of in-flight pages, in order to let API users know when
 * it is safe to free a page_pool object, the API users must call
@@ -214,69 +214,74 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
-/* pp_frag_count represents the number of writers who can update the page
+/* pp_ref_count represents the number of writers who can update the page
  * either by updating skb->data or via DMA mappings for the device.
  * We can't rely on the page refcnt for that as we don't know who might be
  * holding page references and we can't reliably destroy or sync DMA mappings
  * of the fragments.
  *
- * When pp_frag_count reaches 0 we can either recycle the page if the page
+ * pp_ref_count initially corresponds to the number of fragments. However,
+ * when multiple users start to reference a single fragment, for example in
+ * skb_try_coalesce, the pp_ref_count will become greater than the number of
+ * fragments.
+ *
+ * When pp_ref_count reaches 0 we can either recycle the page if the page
  * refcnt is 1 or return it back to the memory allocator and destroy any
  * mappings we have.
  */
 static inline void page_pool_fragment_page(struct page *page, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&page->pp_ref_count, nr);
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
+static inline long page_pool_unref_page(struct page *page, long nr)
 {
 	long ret;
 
-	/* If nr == pp_frag_count then we have cleared all remaining
+	/* If nr == pp_ref_count then we have cleared all remaining
 	 * references to the page:
 	 * 1. 'n == 1': no need to actually overwrite it.
 	 * 2. 'n != 1': overwrite it with one, which is the rare case
-	 *              for pp_frag_count draining.
+	 *              for pp_ref_count draining.
 	 *
 	 * The main advantage to doing this is that not only we avoid a atomic
 	 * update, as an atomic_read is generally a much cheaper operation than
 	 * an atomic update, especially when dealing with a page that may be
-	 * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
+	 * referenced by only 2 or 3 users; but also unify the pp_ref_count
 	 * handling by ensuring all pages have partitioned into only 1 piece
 	 * initially, and only overwrite it when the page is partitioned into
 	 * more than one piece.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr) {
+	if (atomic_long_read(&page->pp_ref_count) == nr) {
 		/* As we have ensured nr is always one for constant case using
 		 * the BUILD_BUG_ON(), only need to handle the non-constant case
-		 * here for pp_frag_count draining, which is a rare case.
+		 * here for pp_ref_count draining, which is a rare case.
 		 */
 		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
 		if (!__builtin_constant_p(nr))
-			atomic_long_set(&page->pp_frag_count, 1);
+			atomic_long_set(&page->pp_ref_count, 1);
 
 		return 0;
 	}
 
-	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+	ret = atomic_long_sub_return(nr, &page->pp_ref_count);
 	WARN_ON(ret < 0);
 
-	/* We are the last user here too, reset pp_frag_count back to 1 to
+	/* We are the last user here too, reset pp_ref_count back to 1 to
 	 * ensure all pages have been partitioned into 1 piece initially,
 	 * this should be the rare case when the last two fragment users call
-	 * page_pool_defrag_page() currently.
+	 * page_pool_unref_page() currently.
 	 */
 	if (unlikely(!ret))
-		atomic_long_set(&page->pp_frag_count, 1);
+		atomic_long_set(&page->pp_ref_count, 1);
 
 	return ret;
 }
 
-static inline bool page_pool_is_last_frag(struct page *page)
+static inline bool page_pool_is_last_ref(struct page *page)
 {
-	/* If page_pool_defrag_page() returns 0, we were the last user */
-	return page_pool_defrag_page(page, 1) == 0;
+	/* If page_pool_unref_page() returns 0, we were the last user */
+	return page_pool_unref_page(page, 1) == 0;
 }
 
 /**
@@ -301,10 +306,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(page))
+	if (!page_pool_is_last_ref(page))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
 #endif
 }
 
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..f0a9689074a0 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -224,7 +224,7 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 #endif
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index df2a06d7da52..106220b1f89c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	return NULL;
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
-				  unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+				unsigned int dma_sync_size, bool allow_direct)
 {
 	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
 	if (page && !page_pool_recycle_in_ring(pool, page)) {
@@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		page_pool_return_page(pool, page);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_unrefed_page);
 
 /**
  * page_pool_put_page_bulk() - release references on multiple pages
@@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		struct page *page = virt_to_head_page(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(page))
+		if (!page_pool_is_last_ref(page))
 			continue;
 
 		page = __page_pool_put_page(pool, page, -1, false);
@@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_unref_page(page, drain_count)))
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
@@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
 
 	pool->frag_page = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!page || page_pool_unref_page(page, drain_count))
 		return;
 
 	page_pool_return_page(pool, page);
-- 
2.31.1
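
For readers following the pp_ref_count life cycle described in the
helpers.h comment above, it can be modeled outside the kernel in a few
lines of C11. The sketch below is illustrative only, not kernel code:
struct mock_page, mock_fragment_page() and mock_unref_page() are
hypothetical stand-ins for struct page and the real helpers, and the
BUILD_BUG_ON()/__builtin_constant_p() constant-folding of the real fast
path is simplified to an unconditional reset.

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for the pp_ref_count field of struct page. */
struct mock_page {
	atomic_long pp_ref_count;
};

/* Models page_pool_fragment_page(): take nr references in one store. */
static void mock_fragment_page(struct mock_page *page, long nr)
{
	atomic_store(&page->pp_ref_count, nr);
}

/* Models page_pool_unref_page(): drop nr references and return how many
 * remain. Whenever the count reaches 0, it is reset to 1 so every page
 * starts its next cycle "partitioned into one piece".
 */
static long mock_unref_page(struct mock_page *page, long nr)
{
	long ret;

	/* Fast path: an atomic read is cheaper than an atomic RMW. */
	if (atomic_load(&page->pp_ref_count) == nr) {
		atomic_store(&page->pp_ref_count, 1);
		return 0;
	}

	/* atomic_fetch_sub() returns the old value, so subtract nr. */
	ret = atomic_fetch_sub(&page->pp_ref_count, nr) - nr;
	assert(ret >= 0);	/* stands in for WARN_ON(ret < 0) */
	if (ret == 0)
		atomic_store(&page->pp_ref_count, 1);
	return ret;
}

int main(void)
{
	struct mock_page page;
	long i;

	mock_fragment_page(&page, 3);	/* page split into 3 fragments */
	for (i = 0; i < 3; i++) {
		if (mock_unref_page(&page, 1) == 0)
			printf("fragment %ld held the last reference\n", i);
	}
	return 0;
}

Built with, say, cc -std=c11, only the third release reports holding the
last reference, at which point the count is already back at 1. That
reset is what lets fragmented and non-fragmented pages share a single
release path via page_pool_is_last_ref() in the patch above.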