From: Ryan Roberts <ryan.roberts@arm.com>
To: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: Re: [RFC PATCH 02/14] mm: Convert free_unref_page_list() to use folios
Date: Thu, 31 Aug 2023 15:29:59 +0100
Message-ID: <035c50f0-5bbf-4e57-bef1-aa4b55c7cfd7@arm.com>
In-Reply-To: <20230825135918.4164671-3-willy@infradead.org>
References: <20230825135918.4164671-1-willy@infradead.org> <20230825135918.4164671-3-willy@infradead.org>

On 25/08/2023 14:59, Matthew Wilcox (Oracle) wrote:
> Most of its callees are not yet ready to accept a folio, but we know
> all of the pages passed in are actually folios because they're linked
> through ->lru.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  mm/page_alloc.c | 38 ++++++++++++++++++++------------------
>  1 file changed, 20 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 442c1b3480aa..f1ee96fd9bef 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2469,17 +2469,17 @@ void free_unref_page(struct page *page, unsigned int order)
>  void free_unref_page_list(struct list_head *list)

Given that the function has *page* in its name and this conversion to folios internally doesn't actually change any behaviour, I don't personally see a lot of value in doing this conversion. But if the aim is that everything eventually becomes a folio, then fair enough.

Reviewed-by: Ryan Roberts

>  {
>  	unsigned long __maybe_unused UP_flags;
> -	struct page *page, *next;
> +	struct folio *folio, *next;
>  	struct per_cpu_pages *pcp = NULL;
>  	struct zone *locked_zone = NULL;
>  	int batch_count = 0;
>  	int migratetype;
> 
>  	/* Prepare pages for freeing */
> -	list_for_each_entry_safe(page, next, list, lru) {
> -		unsigned long pfn = page_to_pfn(page);
> -		if (!free_unref_page_prepare(page, pfn, 0)) {
> -			list_del(&page->lru);
> +	list_for_each_entry_safe(folio, next, list, lru) {
> +		unsigned long pfn = folio_pfn(folio);
> +		if (!free_unref_page_prepare(&folio->page, pfn, 0)) {
> +			list_del(&folio->lru);
>  			continue;
>  		}
> 
> @@ -2487,24 +2487,25 @@ void free_unref_page_list(struct list_head *list)
>  		 * Free isolated pages directly to the allocator, see
>  		 * comment in free_unref_page.
>  		 */
> -		migratetype = get_pcppage_migratetype(page);
> +		migratetype = get_pcppage_migratetype(&folio->page);
>  		if (unlikely(is_migrate_isolate(migratetype))) {
> -			list_del(&page->lru);
> -			free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
> +			list_del(&folio->lru);
> +			free_one_page(folio_zone(folio), &folio->page, pfn,
> +					0, migratetype, FPI_NONE);
>  			continue;
>  		}
>  	}
> 
> -	list_for_each_entry_safe(page, next, list, lru) {
> -		struct zone *zone = page_zone(page);
> +	list_for_each_entry_safe(folio, next, list, lru) {
> +		struct zone *zone = folio_zone(folio);
> 
> -		list_del(&page->lru);
> -		migratetype = get_pcppage_migratetype(page);
> +		list_del(&folio->lru);
> +		migratetype = get_pcppage_migratetype(&folio->page);
> 
>  		/*
>  		 * Either different zone requiring a different pcp lock or
>  		 * excessive lock hold times when freeing a large list of
> -		 * pages.
> +		 * folios.
>  		 */
>  		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
>  			if (pcp) {
> @@ -2515,15 +2516,16 @@ void free_unref_page_list(struct list_head *list)
>  			batch_count = 0;
> 
>  			/*
> -			 * trylock is necessary as pages may be getting freed
> +			 * trylock is necessary as folios may be getting freed
>  			 * from IRQ or SoftIRQ context after an IO completion.
>  			 */
>  			pcp_trylock_prepare(UP_flags);
>  			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
>  			if (unlikely(!pcp)) {
>  				pcp_trylock_finish(UP_flags);
> -				free_one_page(zone, page, page_to_pfn(page),
> -						0, migratetype, FPI_NONE);
> +				free_one_page(zone, &folio->page,
> +						folio_pfn(folio), 0,
> +						migratetype, FPI_NONE);
>  				locked_zone = NULL;
>  				continue;
>  			}
> @@ -2537,8 +2539,8 @@ void free_unref_page_list(struct list_head *list)
> 
>  		if (unlikely(migratetype >= MIGRATE_PCPTYPES))
>  			migratetype = MIGRATE_MOVABLE;
> 
> -		trace_mm_page_free_batched(page);
> -		free_unref_page_commit(zone, pcp, page, migratetype, 0);
> +		trace_mm_page_free_batched(&folio->page);
> +		free_unref_page_commit(zone, pcp, &folio->page, migratetype, 0);
>  		batch_count++;
>  	}
> 