From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 10/24] page_pool: Convert page_pool_put_defragged_page() to netmem
Date: Thu, 5 Jan 2023 21:46:17 +0000
Message-Id: <20230105214631.3939268-11-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Also convert page_pool_is_last_frag(), page_pool_put_page(),
page_pool_recycle_in_ring() and use netmem in page_pool_put_page_bulk().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 23 ++++++++++++++++-------
 net/core/page_pool.c    | 29 +++++++++++++++--------------
 2 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8fe494166427..8b826da3b8b0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -384,7 +384,7 @@ static inline void page_pool_release_page(struct page_pool *pool,
 	page_pool_release_netmem(pool, page_netmem(page));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
@@ -420,15 +420,15 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 }
 
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
-					  struct page *page)
+					  struct netmem *nmem)
 {
 	/* If fragments aren't enabled or count is 0 we were the last user */
 	return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
-	       (page_pool_defrag_page(page, 1) == 0);
+	       (page_pool_defrag_netmem(nmem, 1) == 0);
 }
 
-static inline void page_pool_put_page(struct page_pool *pool,
-				      struct page *page,
+static inline void page_pool_put_netmem(struct page_pool *pool,
+					struct netmem *nmem,
 					unsigned int dma_sync_size,
 					bool allow_direct)
 {
@@ -436,13 +436,22 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(pool, page))
+	if (!page_pool_is_last_frag(pool, nmem))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_defragged_netmem(pool, nmem, dma_sync_size, allow_direct);
 #endif
 }
 
+static inline void page_pool_put_page(struct page_pool *pool,
+				      struct page *page,
+				      unsigned int dma_sync_size,
+				      bool allow_direct)
+{
+	page_pool_put_netmem(pool, page_netmem(page), dma_sync_size,
+			     allow_direct);
+}
+
 /* Same as above but will try to sync the entire area pool->max_len */
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c54217ce6b77..e727a74504c2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -516,14 +516,15 @@ static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem)
 	 */
 }
 
-static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
+static bool page_pool_recycle_in_ring(struct page_pool *pool,
+				      struct netmem *nmem)
 {
 	int ret;
 	/* BH protection not needed if current is serving softirq */
 	if (in_serving_softirq())
-		ret = ptr_ring_produce(&pool->ring, page);
+		ret = ptr_ring_produce(&pool->ring, nmem);
 	else
-		ret = ptr_ring_produce_bh(&pool->ring, page);
+		ret = ptr_ring_produce_bh(&pool->ring, nmem);
 
 	if (!ret) {
 		recycle_stat_inc(pool, ring);
@@ -615,17 +616,17 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 					 dma_sync_size, allow_direct));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size, bool allow_direct)
 {
-	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
-	if (page && !page_pool_recycle_in_ring(pool, page)) {
+	nmem = __page_pool_put_netmem(pool, nmem, dma_sync_size, allow_direct);
+	if (nmem && !page_pool_recycle_in_ring(pool, nmem)) {
 		/* Cache full, fallback to free pages */
 		recycle_stat_inc(pool, ring_full);
-		page_pool_return_page(pool, page);
+		page_pool_return_netmem(pool, nmem);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_defragged_netmem);
 
 /* Caller must not use data area after call, as this function overwrites it */
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
@@ -634,16 +635,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	int i, bulk_len = 0;
 
 	for (i = 0; i < count; i++) {
-		struct page *page = virt_to_head_page(data[i]);
+		struct netmem *nmem = virt_to_netmem(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(pool, page))
+		if (!page_pool_is_last_frag(pool, nmem))
 			continue;
 
-		page = __page_pool_put_page(pool, page, -1, false);
+		nmem = __page_pool_put_netmem(pool, nmem, -1, false);
 
 		/* Approved for bulk recycling in ptr_ring cache */
-		if (page)
-			data[bulk_len++] = page;
+		if (nmem)
+			data[bulk_len++] = nmem;
 	}
 
 	if (unlikely(!bulk_len))
@@ -669,7 +670,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	 * since put_page() with refcnt == 1 can be an expensive operation
 	 */
 	for (; i < bulk_len; i++)
-		page_pool_return_page(pool, data[i]);
+		page_pool_return_netmem(pool, data[i]);
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
-- 
2.35.1