From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 29 Sep 2023 11:30:18 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Yajun Deng <yajun.deng@linux.dev>
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
	willy@infradead.org, david@redhat.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 2/2] mm: Init page count in reserve_bootmem_region when MEMINIT_EARLY
Message-ID: <20230929083018.GU3303@kernel.org>
References: <20230928083302.386202-1-yajun.deng@linux.dev>
 <20230928083302.386202-3-yajun.deng@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230928083302.386202-3-yajun.deng@linux.dev>

On Thu, Sep 28, 2023 at 04:33:02PM +0800, Yajun Deng wrote:
> memmap_init_range() would init page count of all pages, but the free
> pages count would be reset in __free_pages_core(). There are opposite
> operations. It's unnecessary and time-consuming when it's MEMINIT_EARLY
> context.
> 
> Init page count in reserve_bootmem_region when in MEMINIT_EARLY context,
> and check the page count before reset it.
> 
> At the same time, the INIT_LIST_HEAD in reserve_bootmem_region isn't
> need, as it already done in __init_single_page.
> 
> The following data was tested on an x86 machine with 190GB of RAM.
> 
> before:
> free_low_memory_core_early()    341ms
> 
> after:
> free_low_memory_core_early()    285ms
> 
> Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
> ---
> v4: same with v2.
> v3: same with v2.
> v2: check page count instead of check context before reset it.
> v1: https://lore.kernel.org/all/20230922070923.355656-1-yajun.deng@linux.dev/
> ---
>  mm/mm_init.c    | 18 +++++++++++++-----
>  mm/page_alloc.c | 20 ++++++++++++--------
>  2 files changed, 25 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 9716c8a7ade9..3ab8861e1ef3 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -718,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
>  		if (zone_spans_pfn(zone, pfn))
>  			break;
>  	}
> -	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, INIT_PAGE_COUNT);
> +	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, 0);
>  }
>  #else
>  static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
> @@ -756,8 +756,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
>  
>  		init_reserved_page(start_pfn, nid);
>  
> -		/* Avoid false-positive PageTail() */
> -		INIT_LIST_HEAD(&page->lru);
> +		/* Init page count for reserved region */

Please add a comment that describes _why_ we initialize the page count here.

> +		init_page_count(page);
>  
>  		/*
>  		 * no need for atomic set_bit because the struct
> @@ -888,9 +888,17 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>  		}
>  
>  		page = pfn_to_page(pfn);
> -		__init_single_page(page, pfn, zone, nid, INIT_PAGE_COUNT);
> -		if (context == MEMINIT_HOTPLUG)
> +
> +		/* If the context is MEMINIT_EARLY, we will init page count and
> +		 * mark page reserved in reserve_bootmem_region, the free region
> +		 * wouldn't have page count and we will check the pages count
> +		 * in __free_pages_core.
> +		 */
> +		__init_single_page(page, pfn, zone, nid, 0);
> +		if (context == MEMINIT_HOTPLUG) {
> +			init_page_count(page);
>  			__SetPageReserved(page);

Rather than calling init_page_count() and __SetPageReserved() for
MEMINIT_HOTPLUG you can set flags to INIT_PAGE_COUNT | INIT_PAGE_RESERVED
and call __init_single_page() after the check for MEMINIT_HOTPLUG
(a rough sketch is appended at the end of this mail).

But more generally, I wonder if we have to differentiate HOTPLUG here
at all.

@David, can you comment please?

> +		}
>  
>  		/*
>  		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 06be8821d833..b868caabe8dc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1285,18 +1285,22 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	unsigned int loop;
>  
>  	/*
> -	 * When initializing the memmap, __init_single_page() sets the refcount
> -	 * of all pages to 1 ("allocated"/"not free"). We have to set the
> -	 * refcount of all involved pages to 0.
> +	 * When initializing the memmap, memmap_init_range sets the refcount
> +	 * of all pages to 1 ("reserved" and "free") in hotplug context. We
> +	 * have to set the refcount of all involved pages to 0. Otherwise,
> +	 * we don't do it, as reserve_bootmem_region only set the refcount on
> +	 * reserve region ("reserved") in early context.
>  	 */

Again, why should hotplug and early init be different here?

> -	prefetchw(p);
> -	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
> -		prefetchw(p + 1);
> +	if (page_count(page)) {
> +		prefetchw(p);
> +		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
> +			prefetchw(p + 1);
> +			__ClearPageReserved(p);
> +			set_page_count(p, 0);
> +		}
>  		__ClearPageReserved(p);
>  		set_page_count(p, 0);
>  	}
> -	__ClearPageReserved(p);
> -	set_page_count(p, 0);
>  
>  	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
>  
> -- 
> 2.25.1
> 

-- 
Sincerely yours,
Mike.
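P.S. To illustrate the suggestion above, here is a rough, untested sketch of
the loop body in memmap_init_range(). It assumes the INIT_PAGE_COUNT and
INIT_PAGE_RESERVED flags from patch 1/2 of this series, and that passing
INIT_PAGE_RESERVED makes __init_single_page() do the __SetPageReserved();
the local 'flags' variable is only for illustration:

		/*
		 * For MEMINIT_EARLY pass no flags: the page count is
		 * initialized and the page is marked reserved later, in
		 * reserve_bootmem_region().
		 */
		unsigned int flags = 0;

		if (context == MEMINIT_HOTPLUG)
			flags = INIT_PAGE_COUNT | INIT_PAGE_RESERVED;

		page = pfn_to_page(pfn);
		__init_single_page(page, pfn, zone, nid, flags);

That way there is a single __init_single_page() call and the HOTPLUG special
case is reduced to choosing the flags.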