From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 05/24] page_pool: Start using netmem in allocation path.
Date: Thu, 5 Jan 2023 21:46:12 +0000
Message-Id: <20230105214631.3939268-6-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Convert __page_pool_alloc_page_order() and __page_pool_alloc_pages_slow()
to use netmem internally.  This removes a couple of calls to compound_head()
that are hidden inside put_page().  Convert trace_page_pool_state_hold(),
page_pool_dma_map() and page_pool_set_pp_info() to take a netmem argument.

Saves 83 bytes of text in __page_pool_alloc_page_order() and 98 in
__page_pool_alloc_pages_slow() for a total of 181 bytes.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/trace/events/page_pool.h | 14 +++++------
 net/core/page_pool.c             | 42 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index 113aad0c9e5b..d1237a7ce481 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -67,26 +67,26 @@ TRACE_EVENT(page_pool_state_release,
 TRACE_EVENT(page_pool_state_hold,
 
 	TP_PROTO(const struct page_pool *pool,
-		 const struct page *page, u32 hold),
+		 const struct netmem *nmem, u32 hold),
 
-	TP_ARGS(pool, page, hold),
+	TP_ARGS(pool, nmem, hold),
 
 	TP_STRUCT__entry(
 		__field(const struct page_pool *, pool)
-		__field(const struct page *, page)
+		__field(const struct netmem *, nmem)
 		__field(u32, hold)
 		__field(unsigned long, pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool = pool;
-		__entry->page = page;
+		__entry->nmem = nmem;
 		__entry->hold = hold;
-		__entry->pfn = page_to_pfn(page);
+		__entry->pfn = netmem_pfn(nmem);
 	),
 
-	TP_printk("page_pool=%p page=%p pfn=0x%lx hold=%u",
-		  __entry->pool, __entry->page, __entry->pfn, __entry->hold)
+	TP_printk("page_pool=%p netmem=%p pfn=0x%lx hold=%u",
+		  __entry->pool, __entry->nmem, __entry->pfn, __entry->hold)
 );
 
 TRACE_EVENT(page_pool_update_nid,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 437241aba5a7..4e985502c569 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -304,8 +304,9 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					 pool->p.dma_dir);
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem)
 {
+	struct page *page = netmem_page(nmem);
 	dma_addr_t dma;
 
 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
@@ -328,12 +329,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 }
 
 static void page_pool_set_pp_info(struct page_pool *pool,
-				  struct page *page)
+				  struct netmem *nmem)
 {
-	page->pp = pool;
-	page->pp_magic |= PP_SIGNATURE;
+	nmem->pp = pool;
+	nmem->pp_magic |= PP_SIGNATURE;
 	if (pool->p.init_callback)
-		pool->p.init_callback(page, pool->p.init_arg);
+		pool->p.init_callback(netmem_page(nmem), pool->p.init_arg);
 }
 
 static void page_pool_clear_pp_info(struct netmem *nmem)
@@ -345,26 +346,26 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 						 gfp_t gfp)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	gfp |= __GFP_COMP;
-	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
-	if (unlikely(!page))
+	nmem = page_netmem(alloc_pages_node(pool->p.nid, gfp, pool->p.order));
+	if (unlikely(!nmem))
 		return NULL;
 
 	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
-		put_page(page);
+	    unlikely(!page_pool_dma_map(pool, nmem))) {
+		netmem_put(nmem);
 		return NULL;
 	}
 
 	alloc_stat_inc(pool, slow_high_order);
-	page_pool_set_pp_info(pool, page);
+	page_pool_set_pp_info(pool, nmem);
 
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
-	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
-	return page;
+	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
+	return netmem_page(nmem);
 }
 
 /* slow path */
@@ -398,18 +399,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 * page element have not been (possibly) DMA mapped.
 	 */
 	for (i = 0; i < nr_pages; i++) {
-		page = pool->alloc.cache[i];
+		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
-		    unlikely(!page_pool_dma_map(pool, page))) {
-			put_page(page);
+		    unlikely(!page_pool_dma_map(pool, nmem))) {
+			netmem_put(nmem);
 			continue;
 		}
 
-		page_pool_set_pp_info(pool, page);
-		pool->alloc.cache[pool->alloc.count++] = page;
+		page_pool_set_pp_info(pool, nmem);
+		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
-		trace_page_pool_state_hold(pool, page,
+		trace_page_pool_state_hold(pool, nmem,
 					   pool->pages_state_hold_cnt);
 	}
 
@@ -421,7 +422,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		page = NULL;
 	}
 
-	/* When page just alloc'ed is should/must have refcnt 1. */
+	/* When page just allocated it should have refcnt 1 (but may have
+	 * speculative references) */
 	return page;
 }
-- 
2.35.1