From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 13 Jun 2024 09:07:46 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Wei Yang
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	"Kirill A. Shutemov", David Hildenbrand
Subject: Re: [PATCH] mm/mm_init.c: simplify logic of deferred_[init|free]_pages
References: <20240612020421.31975-1-richard.weiyang@gmail.com>
In-Reply-To: <20240612020421.31975-1-richard.weiyang@gmail.com>
On Wed, Jun 12, 2024 at 02:04:21AM +0000, Wei Yang wrote:
> Functions deferred_[init|free]_pages are only used in
> deferred_init_maxorder(), which makes sure the range to init/free is
> within MAX_ORDER_NR_PAGES size.
> 
> With this knowledge, we can simplify these two functions, since:
> 
> * only the first pfn could be IS_MAX_ORDER_ALIGNED()
> 
> Also, since the range passed to deferred_[init|free]_pages is always from
> memblock.memory for those we have already allocated memmap to cover,
> pfn_valid() always returns true, so we can remove the related check.
> 
> Signed-off-by: Wei Yang
> CC: Kirill A. Shutemov
> CC: Mike Rapoport (IBM)
> CC: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  mm/mm_init.c | 63 +++++++---------------------------------------------
>  1 file changed, 8 insertions(+), 55 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index c152c60eca3d..63d70fc60705 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1911,7 +1911,7 @@ unsigned long __init node_map_pfn_alignment(void)
>  }
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> -static void __init deferred_free_range(unsigned long pfn,
> +static void __init deferred_free_pages(unsigned long pfn,
>  				       unsigned long nr_pages)
>  {
>  	struct page *page;
> @@ -1950,69 +1950,22 @@ static inline void __init pgdat_init_report_one_done(void)
>  	complete(&pgdat_init_all_done_comp);
>  }
>  
> -/*
> - * Returns true if page needs to be initialized or freed to buddy allocator.
> - *
> - * We check if a current MAX_PAGE_ORDER block is valid by only checking the
> - * validity of the head pfn.
> - */
> -static inline bool __init deferred_pfn_valid(unsigned long pfn)
> -{
> -	if (IS_MAX_ORDER_ALIGNED(pfn) && !pfn_valid(pfn))
> -		return false;
> -	return true;
> -}
> -
> -/*
> - * Free pages to buddy allocator. Try to free aligned pages in
> - * MAX_ORDER_NR_PAGES sizes.
> - */
> -static void __init deferred_free_pages(unsigned long pfn,
> -				       unsigned long end_pfn)
> -{
> -	unsigned long nr_free = 0;
> -
> -	for (; pfn < end_pfn; pfn++) {
> -		if (!deferred_pfn_valid(pfn)) {
> -			deferred_free_range(pfn - nr_free, nr_free);
> -			nr_free = 0;
> -		} else if (IS_MAX_ORDER_ALIGNED(pfn)) {
> -			deferred_free_range(pfn - nr_free, nr_free);
> -			nr_free = 1;
> -		} else {
> -			nr_free++;
> -		}
> -	}
> -	/* Free the last block of pages to allocator */
> -	deferred_free_range(pfn - nr_free, nr_free);
> -}
> -
>  /*
>   * Initialize struct pages. We minimize pfn page lookups and scheduler checks
>   * by performing it only once every MAX_ORDER_NR_PAGES.
>   * Return number of pages initialized.
>   */
> -static unsigned long __init deferred_init_pages(struct zone *zone,
> -						unsigned long pfn,
> -						unsigned long end_pfn)
> +static unsigned long __init deferred_init_pages(struct zone *zone,
> +						unsigned long pfn,
> +						unsigned long end_pfn)
>  {
>  	int nid = zone_to_nid(zone);
> -	unsigned long nr_pages = 0;
> +	unsigned long nr_pages = end_pfn - pfn;
>  	int zid = zone_idx(zone);
> -	struct page *page = NULL;
> +	struct page *page = pfn_to_page(pfn);
>  
> -	for (; pfn < end_pfn; pfn++) {
> -		if (!deferred_pfn_valid(pfn)) {
> -			page = NULL;
> -			continue;
> -		} else if (!page || IS_MAX_ORDER_ALIGNED(pfn)) {
> -			page = pfn_to_page(pfn);
> -		} else {
> -			page++;
> -		}
> +	for (; pfn < end_pfn; pfn++, page++)
>  		__init_single_page(page, pfn, zid, nid);
> -		nr_pages++;
> -	}
>  	return nr_pages;
>  }
>  
> @@ -2096,7 +2049,7 @@ deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
>  			break;
>  
>  		t = min(mo_pfn, epfn);
> -		deferred_free_pages(spfn, t);
> +		deferred_free_pages(spfn, t - spfn);
>  
>  		if (mo_pfn <= epfn)
>  			break;
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.