From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 20 Oct 2020 23:18:14 +0800
From: "bhe@redhat.com"
To: Rahul Gopakumar
Cc: "linux-mm@kvack.org", "linux-kernel@vger.kernel.org",
	"akpm@linux-foundation.org", "natechancellor@gmail.com",
	"ndesaulniers@google.com", "clang-built-linux@googlegroups.com",
	"rostedt@goodmis.org", Rajender M, Yiu Cho Lau,
	Peter Jonasson, Venkatesh Rajaram
Subject: Re: Performance regressions in "boot_time" tests in Linux 5.8 Kernel
Message-ID: <20201020151814.GU25604@MiWiFi-R3L-srv>
References: <20201010061124.GE25604@MiWiFi-R3L-srv> <20201013131735.GL25604@MiWiFi-R3L-srv>

On 10/20/20 at 01:45pm, Rahul Gopakumar wrote:
> Hi Baoquan,
> 
> We had some trouble applying the patch to the problem commit and to the
> latest upstream commit. Steven (CC'ed) helped us by providing the updated
> draft patch. We applied it on the latest commit
> (3e4fb4346c781068610d03c12b16c0cfb0fd24a3), and it does not look like it
> improves the performance numbers.

Thanks for your feedback.

From the code, I am sure what the problem is, but I didn't test it on a
system with huge memory. I forgot to mention that my draft patch is based
on the akpm/master branch since it's a mm issue, so it might differ a
little from Linus's mainline kernel; sorry for the inconvenience. I will
test and debug this on a server with 4T of memory in our lab, and will
update if there is any progress.

> 
> Patch on latest commit - 20.161 secs
> Vanilla latest commit - 19.50 secs

Here, do you mean it even cost more time with the patch applied?
> 
> Here is the draft patch we tried
> 
> ------------------------
> 
> diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
> index 8e7b8c6c576e..ff5fa4c3889e 100644
> --- a/arch/ia64/mm/init.c
> +++ b/arch/ia64/mm/init.c
> @@ -537,7 +537,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
>  
>  	if (map_start < map_end)
>  		memmap_init_zone((unsigned long)(map_end - map_start),
> -				 args->nid, args->zone, page_to_pfn(map_start),
> +				 args->nid, args->zone, page_to_pfn(map_start), page_to_pfn(map_end),
>  				 MEMINIT_EARLY, NULL);
>  	return 0;
>  }
> @@ -547,7 +547,7 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
>  	     unsigned long start_pfn)
>  {
>  	if (!vmem_map) {
> -		memmap_init_zone(size, nid, zone, start_pfn,
> +		memmap_init_zone(size, nid, zone, start_pfn, start_pfn + size,
>  				 MEMINIT_EARLY, NULL);
>  	} else {
>  		struct page *start;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 16b799a0522c..65e34b370e33 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2416,7 +2416,7 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
>  
>  extern void set_dma_reserve(unsigned long new_dma_reserve);
>  extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
> -		enum meminit_context, struct vmem_altmap *);
> +		unsigned long, enum meminit_context, struct vmem_altmap *);
>  extern void setup_per_zone_wmarks(void);
>  extern int __meminit init_per_zone_wmark_min(void);
>  extern void mem_init(void);
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index ce3e73e3a5c1..03fddd8f4b11 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -728,7 +728,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  	 * expects the zone spans the pfn range. All the pages in the range
>  	 * are reserved so nobody should be touching them so we should be safe
>  	 */
> -	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
> +	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>  			 MEMINIT_HOTPLUG, altmap);
>  
>  	set_zone_contiguous(zone);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 780c8f023b28..fe80055ea59c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5989,8 +5989,8 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>   * done. Non-atomic initialization, single-pass.
>   */
>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> -		unsigned long start_pfn, enum meminit_context context,
> -		struct vmem_altmap *altmap)
> +		unsigned long start_pfn, unsigned long zone_end_pfn,
> +		enum meminit_context context, struct vmem_altmap *altmap)
>  {
>  	unsigned long pfn, end_pfn = start_pfn + size;
>  	struct page *page;
> @@ -6024,7 +6024,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		if (context == MEMINIT_EARLY) {
>  			if (overlap_memmap_init(zone, &pfn))
>  				continue;
> -			if (defer_init(nid, pfn, end_pfn))
> +			if (defer_init(nid, pfn, zone_end_pfn))
>  				break;
>  		}
>  
> @@ -6150,7 +6150,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
>  
>  		if (end_pfn > start_pfn) {
>  			size = end_pfn - start_pfn;
> -			memmap_init_zone(size, nid, zone, start_pfn,
> +			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
>  					 MEMINIT_EARLY, NULL);
>  		}
>  	}
> 
> 
> ------------------------
> 
> We have attached default dmesg logs, and also dmesg logs collected with
> memblock=debug on the kernel cmdline, for both the vanilla and patched
> kernels.
> Let me know if you need more info.
> 
> 
> 
> From: bhe@redhat.com
> Sent: 13 October 2020 6:47 PM
> To: Rahul Gopakumar
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org;
>     akpm@linux-foundation.org; natechancellor@gmail.com;
>     ndesaulniers@google.com; clang-built-linux@googlegroups.com;
>     rostedt@goodmis.org; Rajender M; Yiu Cho Lau; Peter Jonasson;
>     Venkatesh Rajaram
> Subject: Re: Performance regressions in "boot_time" tests in Linux 5.8 Kernel
>  
> Hi Rahul,
> 
> On 10/12/20 at 05:21pm, Rahul Gopakumar wrote:
> > Hi Baoquan,
> > 
> > Attached collected dmesg logs for with and without
> > the commit after adding memblock=debug to the kernel cmdline.
> 
> Can you test the draft patch below and see if it works for you?
> 
> From a2ea6caef3c73ad9efb2dd2b48039065fe430bb2 Mon Sep 17 00:00:00 2001
> From: Baoquan He
> Date: Tue, 13 Oct 2020 20:05:30 +0800
> Subject: [PATCH] mm: make memmap defer init only take effect per zone
> 
> Deferred struct page init is designed to work per zone. However, since
> commit 73a6e474cb376 ("mm: memmap_init: iterate over memblock regions
> rather that check each PFN"), the handling is mistakenly done per memory
> range inside a zone. Especially in unmovable zones spanning multiple
> nodes, memblock reservations split the zone into many memory ranges.
> This initializes more struct pages than expected in the early stage,
> which increases boot time considerably.
> 
> Let's fix it so that memmap defer init is handled zone-wide, not per
> memory range within a zone.
> 
> Signed-off-by: Baoquan He
> ---
>  arch/ia64/mm/init.c | 4 ++--
>  include/linux/mm.h  | 5 +++--
>  mm/memory_hotplug.c | 2 +-
>  mm/page_alloc.c     | 6 +++---
>  4 files changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
> index ef12e097f318..27ca549ff47e 100644
> --- a/arch/ia64/mm/init.c
> +++ b/arch/ia64/mm/init.c
> @@ -536,7 +536,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
>  
>  	if (map_start < map_end)
>  		memmap_init_zone((unsigned long)(map_end - map_start),
> -				 args->nid, args->zone, page_to_pfn(map_start),
> +				 args->nid, args->zone, page_to_pfn(map_start), page_to_pfn(map_end),
>  				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
>  	return 0;
>  }
> @@ -546,7 +546,7 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
>  	     unsigned long start_pfn)
>  {
>  	if (!vmem_map) {
> -		memmap_init_zone(size, nid, zone, start_pfn,
> +		memmap_init_zone(size, nid, zone, start_pfn, start_pfn + size,
>  				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
>  	} else {
>  		struct page *start;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ef360fe70aaf..5f9fc61d5be2 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2439,8 +2439,9 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
>  #endif
>  
>  extern void set_dma_reserve(unsigned long new_dma_reserve);
> -extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
> -		enum meminit_context, struct vmem_altmap *, int migratetype);
> +extern void memmap_init_zone(unsigned long, int, unsigned long,
> +		unsigned long, unsigned long, enum meminit_context,
> +		struct vmem_altmap *, int migratetype);
>  extern void setup_per_zone_wmarks(void);
>  extern int __meminit init_per_zone_wmark_min(void);
>  extern void mem_init(void);
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index b44d4c7ba73b..f9a37e6abc1c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -732,7 +732,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  	 * expects the zone spans the pfn range. All the pages in the range
>  	 * are reserved so nobody should be touching them so we should be safe
>  	 */
> -	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
> +	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>  			 MEMINIT_HOTPLUG, altmap, migratetype);
>  
>  	set_zone_contiguous(zone);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2ebf9ddafa3a..e8b19fdd18ec 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6044,7 +6044,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>   * zone stats (e.g., nr_isolate_pageblock) are touched.
>   */
>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> -		unsigned long start_pfn,
> +		unsigned long start_pfn, unsigned long zone_end_pfn,
>  		enum meminit_context context,
>  		struct vmem_altmap *altmap, int migratetype)
>  {
> @@ -6080,7 +6080,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		if (context == MEMINIT_EARLY) {
>  			if (overlap_memmap_init(zone, &pfn))
>  				continue;
> -			if (defer_init(nid, pfn, end_pfn))
> +			if (defer_init(nid, pfn, zone_end_pfn))
>  				break;
>  		}
>  
> @@ -6194,7 +6194,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
>  
>  		if (end_pfn > start_pfn) {
>  			size = end_pfn - start_pfn;
> -			memmap_init_zone(size, nid, zone, start_pfn,
> +			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
>  					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
>  		}
>  	}
> -- 
> 2.17.2