From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, netdev@vger.kernel.org,
	linux-mm@kvack.org, Shakeel Butt, Jesper Dangaard Brouer,
	Jesse Brandeburg
Subject: [PATCH v3 10/26] page_pool: Convert page_pool_put_defragged_page() to netmem
Date: Wed, 11 Jan 2023 04:21:58 +0000
Message-Id: <20230111042214.907030-11-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230111042214.907030-1-willy@infradead.org>
References: <20230111042214.907030-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also convert page_pool_is_last_frag(), page_pool_put_page() and
page_pool_recycle_in_ring(), and use netmem in page_pool_put_page_bulk().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Jesse Brandeburg
---
 include/net/page_pool.h | 24 +++++++++++++++++-------
 net/core/page_pool.c    | 29 +++++++++++++++--------------
 2 files changed, 32 insertions(+), 21 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 72e241ebed0a..60354e771fdd 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -385,7 +385,7 @@ static inline void page_pool_release_page(struct page_pool *pool,
 	page_pool_release_netmem(pool, page_netmem(page));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
@@ -422,15 +422,15 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 }
 
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
-					  struct page *page)
+					  struct netmem *nmem)
 {
 	/* If fragments aren't enabled or count is 0 we were the last user */
 	return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
-	       (page_pool_defrag_page(page, 1) == 0);
+	       (page_pool_defrag_netmem(nmem, 1) == 0);
 }
 
-static inline void page_pool_put_page(struct page_pool *pool,
-				      struct page *page,
+static inline void page_pool_put_netmem(struct page_pool *pool,
+				      struct netmem *nmem,
 				      unsigned int dma_sync_size,
 				      bool allow_direct)
 {
@@ -438,13 +438,23 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
#ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(pool, page))
+	if (!page_pool_is_last_frag(pool, nmem))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_defragged_netmem(pool, nmem, dma_sync_size, allow_direct);
#endif
 }
 
+/* Compat, remove when all users gone */
+static inline void page_pool_put_page(struct page_pool *pool,
+				      struct page *page,
+				      unsigned int dma_sync_size,
+				      bool allow_direct)
+{
+	page_pool_put_netmem(pool, page_netmem(page), dma_sync_size,
+			     allow_direct);
+}
+
 /* Same as above but will try to sync the entire area pool->max_len */
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c54217ce6b77..e727a74504c2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -516,14 +516,15 @@ static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem)
 	 */
 }
 
-static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
+static bool page_pool_recycle_in_ring(struct page_pool *pool,
+				      struct netmem *nmem)
 {
 	int ret;
 	/* BH protection not needed if current is serving softirq */
 	if (in_serving_softirq())
-		ret = ptr_ring_produce(&pool->ring, page);
+		ret = ptr_ring_produce(&pool->ring, nmem);
 	else
-		ret = ptr_ring_produce_bh(&pool->ring, page);
+		ret = ptr_ring_produce_bh(&pool->ring, nmem);
 
 	if (!ret) {
 		recycle_stat_inc(pool, ring);
@@ -615,17 +616,17 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 					dma_sync_size, allow_direct));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size, bool allow_direct)
 {
-	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
-	if (page && !page_pool_recycle_in_ring(pool, page)) {
+	nmem = __page_pool_put_netmem(pool, nmem, dma_sync_size, allow_direct);
+	if (nmem && !page_pool_recycle_in_ring(pool, nmem)) {
 		/* Cache full, fallback to free pages */
 		recycle_stat_inc(pool, ring_full);
-		page_pool_return_page(pool, page);
+		page_pool_return_netmem(pool, nmem);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_defragged_netmem);
 
 /* Caller must not use data area after call, as this function overwrites it */
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
@@ -634,16 +635,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	int i, bulk_len = 0;
 
 	for (i = 0; i < count; i++) {
-		struct page *page = virt_to_head_page(data[i]);
+		struct netmem *nmem = virt_to_netmem(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(pool, page))
+		if (!page_pool_is_last_frag(pool, nmem))
 			continue;
 
-		page = __page_pool_put_page(pool, page, -1, false);
+		nmem = __page_pool_put_netmem(pool, nmem, -1, false);
 
 		/* Approved for bulk recycling in ptr_ring cache */
-		if (page)
-			data[bulk_len++] = page;
+		if (nmem)
+			data[bulk_len++] = nmem;
 	}
 
 	if (unlikely(!bulk_len))
@@ -669,7 +670,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	 * since put_page() with refcnt == 1 can be an expensive operation
 	 */
 	for (; i < bulk_len; i++)
-		page_pool_return_page(pool, data[i]);
+		page_pool_return_netmem(pool, data[i]);
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
-- 
2.35.1