Date: Mon, 1 Feb 2021 17:39:58 +0800
From: Baoquan He
To: David Hildenbrand
Cc: Mike Rapoport, Andrew Morton, Andrea Arcangeli, Borislav Petkov,
 "H. Peter Anvin", Ingo Molnar, Mel Gorman, Michal Hocko, Mike Rapoport,
 Qian Cai, Thomas Gleixner, Vlastimil Babka, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, x86@kernel.org
Subject: Re: [PATCH v3 2/2] mm: fix initialization of struct page for holes in memory layout
Message-ID: <20210201093958.GD28734@MiWiFi-R3L-srv>
References: <20210111194017.22696-1-rppt@kernel.org>
 <20210111194017.22696-3-rppt@kernel.org>

On 02/01/21 at 10:14am, David Hildenbrand wrote:
> On 11.01.21 20:40, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > There could be struct pages that are not backed by actual physical memory.
> > This can happen when the actual memory bank is not a multiple of
> > SECTION_SIZE or when an architecture does not register memory holes
> > reserved by the firmware as memblock.memory.
> >
> > Such pages are currently initialized using init_unavailable_mem() function
> > that iterates through PFNs in holes in memblock.memory and if there is a
> > struct page corresponding to a PFN, the fields if this page are set to
> > default values and the page is marked as Reserved.
> >
> > init_unavailable_mem() does not take into account zone and node the page
> > belongs to and sets both zone and node links in struct page to zero.
> >
> > On a system that has firmware reserved holes in a zone above ZONE_DMA, for
> > instance in a configuration below:
> >
> > 	# grep -A1 E820 /proc/iomem
> > 	7a17b000-7a216fff : Unknown E820 type
> > 	7a217000-7bffffff : System RAM
> >
> > unset zone link in struct page will trigger
> >
> > 	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
> >
> > because there are pages in both ZONE_DMA32 and ZONE_DMA (unset zone link in
> > struct page) in the same pageblock.
> >
> > Update init_unavailable_mem() to use zone constraints defined by an
> > architecture to properly setup the zone link and use node ID of the
> > adjacent range in memblock.memory to set the node link.
> >
> > Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
> > Reported-by: Andrea Arcangeli
> > Signed-off-by: Mike Rapoport
> > ---
> >  mm/page_alloc.c | 84 +++++++++++++++++++++++++++++--------------------
> >  1 file changed, 50 insertions(+), 34 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index bdbec4c98173..0b56c3ca354e 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7077,23 +7077,26 @@ void __init free_area_init_memoryless_node(int nid)
> >   * Initialize all valid struct pages in the range [spfn, epfn) and mark them
> >   * PageReserved(). Return the number of struct pages that were initialized.
> >   */
> > -static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
> > +static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn,
> > +					 int zone, int nid)
> >  {
> > -	unsigned long pfn;
> > +	unsigned long pfn, zone_spfn, zone_epfn;
> >  	u64 pgcnt = 0;
> >
> > +	zone_spfn = arch_zone_lowest_possible_pfn[zone];
> > +	zone_epfn = arch_zone_highest_possible_pfn[zone];
> > +
> > +	spfn = clamp(spfn, zone_spfn, zone_epfn);
> > +	epfn = clamp(epfn, zone_spfn, zone_epfn);
> > +
> >  	for (pfn = spfn; pfn < epfn; pfn++) {
> >  		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
> >  			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
> >  				+ pageblock_nr_pages - 1;
> >  			continue;
> >  		}
> > -		/*
> > -		 * Use a fake node/zone (0) for now. Some of these pages
> > -		 * (in memblock.reserved but not in memblock.memory) will
> > -		 * get re-initialized via reserve_bootmem_region() later.
> > -		 */
> > -		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
> > +
> > +		__init_single_page(pfn_to_page(pfn), pfn, zone, nid);
> >  		__SetPageReserved(pfn_to_page(pfn));
> >  		pgcnt++;
> >  	}
> > @@ -7102,51 +7105,64 @@ static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
> >  }
> >
> >  /*
> > - * Only struct pages that are backed by physical memory are zeroed and
> > - * initialized by going through __init_single_page(). But, there are some
> > - * struct pages which are reserved in memblock allocator and their fields
> > - * may be accessed (for example page_to_pfn() on some configuration accesses
> > - * flags). We must explicitly initialize those struct pages.
> > + * Only struct pages that correspond to ranges defined by memblock.memory
> > + * are zeroed and initialized by going through __init_single_page() during
> > + * memmap_init().
> > + *
> > + * But, there could be struct pages that correspond to holes in
> > + * memblock.memory. This can happen because of the following reasons:
> > + *  - phyiscal memory bank size is not necessarily the exact multiple of the
> > + *    arbitrary section size
> > + *  - early reserved memory may not be listed in memblock.memory
> > + *  - memory layouts defined with memmap= kernel parameter may not align
> > + *    nicely with memmap sections
> >   *
> > - * This function also addresses a similar issue where struct pages are left
> > - * uninitialized because the physical address range is not covered by
> > - * memblock.memory or memblock.reserved. That could happen when memblock
> > - * layout is manually configured via memmap=, or when the highest physical
> > - * address (max_pfn) does not end on a section boundary.
> > + * Explicitly initialize those struct pages so that:
> > + *  - PG_Reserved is set
> > + *  - zone link is set accorging to the architecture constrains
> > + *  - node is set to node id of the next populated region except for the
> > + *    trailing hole where last node id is used
> >   */
> > -static void __init init_unavailable_mem(void)
> > +static void __init init_zone_unavailable_mem(int zone)
> >  {
> > -	phys_addr_t start, end;
> > -	u64 i, pgcnt;
> > -	phys_addr_t next = 0;
> > +	unsigned long start, end;
> > +	int i, nid;
> > +	u64 pgcnt;
> > +	unsigned long next = 0;
> >
> >  	/*
> > -	 * Loop through unavailable ranges not covered by memblock.memory.
> > +	 * Loop through holes in memblock.memory and initialize struct
> > +	 * pages corresponding to these holes
> >  	 */
> >  	pgcnt = 0;
> > -	for_each_mem_range(i, &start, &end) {
> > +	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, &nid) {
> >  		if (next < start)
> > -			pgcnt += init_unavailable_range(PFN_DOWN(next),
> > -							PFN_UP(start));
> > +			pgcnt += init_unavailable_range(next, start, zone, nid);
> >  		next = end;
> >  	}
> >
> >  	/*
> > -	 * Early sections always have a fully populated memmap for the whole
> > -	 * section - see pfn_valid(). If the last section has holes at the
> > -	 * end and that section is marked "online", the memmap will be
> > -	 * considered initialized. Make sure that memmap has a well defined
> > -	 * state.
> > +	 * Last section may surpass the actual end of memory (e.g. we can
> > +	 * have 1Gb section and 512Mb of RAM pouplated).
> > +	 * Make sure that memmap has a well defined state in this case.
> >  	 */
> > -	pgcnt += init_unavailable_range(PFN_DOWN(next),
> > -					round_up(max_pfn, PAGES_PER_SECTION));
> > +	end = round_up(max_pfn, PAGES_PER_SECTION);
> > +	pgcnt += init_unavailable_range(next, end, zone, nid);
> >
> >  	/*
> >  	 * Struct pages that do not have backing memory. This could be because
> >  	 * firmware is using some of this memory, or for some other reasons.
> >  	 */
> >  	if (pgcnt)
> > -		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
> > +		pr_info("Zone %s: zeroed struct page in unavailable ranges: %lld pages", zone_names[zone], pgcnt);
> > +}
> > +
> > +static void __init init_unavailable_mem(void)
> > +{
> > +	int zone;
> > +
> > +	for (zone = 0; zone < ZONE_MOVABLE; zone++)
> > +		init_zone_unavailable_mem(zone);
>
> Why < ZONE_MOVABLE?
>
> I remember we can have memory holes inside the movable zone when messing
> with "movablecore" cmdline parameter.

Maybe because we haven't initialized the MOVABLE zone info at this time.
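
FWIW, below is a minimal userspace sketch of the clamping idea the patch uses in
init_unavailable_range(): a hole is intersected with one zone's possible PFN span,
so the loop over zones splits the same hole per zone. This is not kernel code; the
zone boundaries and the hole range are made-up example values, and clamp() is
redefined locally instead of using the kernel macro.

	#include <stdio.h>

	/* Local stand-in for the kernel's clamp() macro. */
	#define clamp(val, lo, hi) \
		((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

	int main(void)
	{
		/*
		 * Hypothetical zone PFN span, playing the role of
		 * arch_zone_lowest/highest_possible_pfn[zone].
		 */
		unsigned long zone_spfn = 0x1000;
		unsigned long zone_epfn = 0x100000;

		/* Hypothetical hole in memblock.memory starting below the zone. */
		unsigned long spfn = 0x800;
		unsigned long epfn = 0x2000;

		/* Intersect the hole with the zone span, as the patch does. */
		spfn = clamp(spfn, zone_spfn, zone_epfn);
		epfn = clamp(epfn, zone_spfn, zone_epfn);

		if (spfn < epfn)
			printf("init struct pages for pfns [%#lx, %#lx) in this zone\n",
			       spfn, epfn);
		else
			printf("hole does not intersect this zone, nothing to do\n");

		return 0;
	}

The point is that after clamping, each zone only initializes the part of a hole
that falls inside its own span, which is how the zone link ends up consistent
with the zone_spans_pfn() check mentioned in the changelog.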