References: <20230111042214.907030-1-willy@infradead.org>
 <20230111042214.907030-9-willy@infradead.org>
User-agent: mu4e 1.6.10; emacs 28.0.91
From: Shay Agroskin
To: "Matthew Wilcox (Oracle)"
CC: Jesper Dangaard Brouer, Ilias Apalodimas, Shakeel Butt, Jesse Brandeburg
Subject: Re: [PATCH v3 08/26] page_pool: Convert pp_alloc_cache to contain netmem
Date: Sat, 14 Jan 2023 14:28:50 +0200
In-Reply-To: <20230111042214.907030-9-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; format=flowed

"Matthew Wilcox (Oracle)" writes:

> Change the type here from page to netmem.  It works out well to
> convert page_pool_refill_alloc_cache() to return a netmem instead
> of a page as part of this commit.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> Acked-by: Jesper Dangaard Brouer
> Reviewed-by: Ilias Apalodimas
> Reviewed-by: Jesse Brandeburg
> ---
>  include/net/page_pool.h |  2 +-
>  net/core/page_pool.c    | 52 ++++++++++++++++++++---------------------
>  2 files changed, 27 insertions(+), 27 deletions(-)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 34d47c10550e..583c13f6f2ab 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
>  #define PP_ALLOC_CACHE_REFILL 64
>  struct pp_alloc_cache {
>          u32 count;
> -        struct page *cache[PP_ALLOC_CACHE_SIZE];
> +        struct netmem *cache[PP_ALLOC_CACHE_SIZE];
>  };
>
>  struct page_pool_params {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 8f3f7cc5a2d5..c54217ce6b77 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
>  }
>
>  noinline
> -static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> +static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
>  {
>          struct ptr_ring *r = &pool->ring;
> -        struct page *page;
> +        struct netmem *nmem;
>          int pref_nid; /* preferred NUMA node */
>
>          /* Quicker fallback, avoid locks when ring is empty */
> @@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
>
>          /* Refill alloc array, but only if NUMA match */
>          do {
> -                page = __ptr_ring_consume(r);
> -                if (unlikely(!page))
> +                nmem = __ptr_ring_consume(r);
> +                if (unlikely(!nmem))
>                          break;
>
> -                if (likely(page_to_nid(page) == pref_nid)) {
> -                        pool->alloc.cache[pool->alloc.count++] = page;
> +                if (likely(netmem_nid(nmem) == pref_nid)) {
> +                        pool->alloc.cache[pool->alloc.count++] = nmem;
>                  } else {
>                          /* NUMA mismatch;
>                           * (1) release 1 page to page-allocator and
>                           * (2) break out to fallthrough to alloc_pages_node.
>                           * This limit stress on page buddy alloactor.
>                           */
> -                        page_pool_return_page(pool, page);
> +                        page_pool_return_netmem(pool, nmem);
>                          alloc_stat_inc(pool, waive);
> -                        page = NULL;
> +                        nmem = NULL;
>                          break;
>                  }
>          } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
>
>          /* Return last page */
>          if (likely(pool->alloc.count > 0)) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                nmem = pool->alloc.cache[--pool->alloc.count];
>                  alloc_stat_inc(pool, refill);
>          }
>
> -        return page;
> +        return nmem;
>  }
>
>  /* fast path */
>  static struct page *__page_pool_get_cached(struct page_pool *pool)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>
>          /* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
>          if (likely(pool->alloc.count)) {
>                  /* Fast-path */
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                nmem = pool->alloc.cache[--pool->alloc.count];
>                  alloc_stat_inc(pool, fast);
>          } else {
> -                page = page_pool_refill_alloc_cache(pool);
> +                nmem = page_pool_refill_alloc_cache(pool);
>          }
>
> -        return page;
> +        return netmem_page(nmem);
>  }
>
>  static void page_pool_dma_sync_for_device(struct page_pool *pool,
> @@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>
>          /* Unnecessary as alloc cache is empty, but guarantees zero count */
>          if (unlikely(pool->alloc.count > 0))
> -                return pool->alloc.cache[--pool->alloc.count];
> +                return netmem_page(pool->alloc.cache[--pool->alloc.count]);
>
>          /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>          memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
>
>          nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
> -                                               pool->alloc.cache);
> +                                (struct page **)pool->alloc.cache);

Can you fix the alignment here (so that the '(struct page **)' would
align with the 'gfp' argument one line above)? A whitespace-only example
of what I mean follows the quoted patch.

Shay

>          if (unlikely(!nr_pages))
>                  return NULL;
>
> @@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>           * page element have not been (possibly) DMA mapped.
>           */
>          for (i = 0; i < nr_pages; i++) {
> -                struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
> +                struct netmem *nmem = pool->alloc.cache[i];
>                  if ((pp_flags & PP_FLAG_DMA_MAP) &&
>                      unlikely(!page_pool_dma_map(pool, nmem))) {
>                          netmem_put(nmem);
> @@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>                  }
>
>                  page_pool_set_pp_info(pool, nmem);
> -                pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
> +                pool->alloc.cache[pool->alloc.count++] = nmem;
>                  /* Track how many pages are held 'in-flight' */
>                  pool->pages_state_hold_cnt++;
>                  trace_page_pool_state_hold(pool, nmem,
> @@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>
>          /* Return last page */
>          if (likely(pool->alloc.count > 0)) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
>                  alloc_stat_inc(pool, slow);
>          } else {
>                  page = NULL;
> @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
>          }
>
>          /* Caller MUST have verified/know (page_ref_count(page) == 1) */
> -        pool->alloc.cache[pool->alloc.count++] = page;
> +        pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
>          recycle_stat_inc(pool, cached);
>          return true;
>  }
> @@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool)
>
>  static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>
>          if (pool->destroy_cnt)
>                  return;
>
> @@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>           * call concurrently.
>           */
>          while (pool->alloc.count) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> -                page_pool_return_page(pool, page);
> +                nmem = pool->alloc.cache[--pool->alloc.count];
> +                page_pool_return_netmem(pool, nmem);
>          }
>  }
>
> @@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy);
>  /* Caller must provide appropriate safe context, e.g. NAPI. */
>  void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>
>          trace_page_pool_update_nid(pool, new_nid);
>          pool->p.nid = new_nid;
>
>          /* Flush pool alloc cache, as refill will check NUMA node */
>          while (pool->alloc.count) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> -                page_pool_return_page(pool, page);
> +                nmem = pool->alloc.cache[--pool->alloc.count];
> +                page_pool_return_netmem(pool, nmem);
>          }
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
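
For reference, this is roughly the whitespace-only change being asked for
above. It is an illustration only, not part of the patch, and simply
re-indents the same statement so the cast lines up under the open
parenthesis (the usual kernel continuation style):

        nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
                                               (struct page **)pool->alloc.cache);

i.e. the '(struct page **)' cast starts in the same column as the 'gfp'
argument on the preceding line.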