Date: Thu, 28 Nov 2019 11:15:24 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	Oscar Salvador
Subject: Re: [PATCH v1] mm/memory_hotplug: don't check the nid in find_(smallest|biggest)_section_pfn
Message-ID: <20191128101524.GH26807@dhcp22.suse.cz>
References: <20191127174158.28226-1-david@redhat.com>
In-Reply-To: <20191127174158.28226-1-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 27-11-19 18:41:58, David Hildenbrand wrote:
> Now that we always check against a zone, we can stop checking against
> the nid; it is implicitly covered by the zone.
> 
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Signed-off-by: David Hildenbrand

OK, this makes sense to me. The node really is superfluous and does not
add any clarity; quite the contrary, it just raises the question of why
we check it at all. If there is ever a need to check the node, it is
available in struct zone, which would be the much more robust approach
because it rules out an accidental mismatch between the two parameters.

Acked-by: Michal Hocko
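As an aside, to make that robustness point concrete: if a node filter
were ever needed again, the nid could be derived inside the helper
instead of being passed in. A minimal sketch against the new signature
(illustrative only, not part of this patch):

/*
 * Illustrative sketch only: a hypothetical future node check, with the
 * nid derived from the zone rather than supplied by the caller.
 */
static unsigned long find_smallest_section_pfn(struct zone *zone,
					       unsigned long start_pfn,
					       unsigned long end_pfn)
{
	int nid = zone_to_nid(zone);	/* node is implied by the zone */

	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
		if (unlikely(!pfn_to_online_page(start_pfn)))
			continue;

		/* hypothetical node filter */
		if (unlikely(pfn_to_nid(start_pfn) != nid))
			continue;

		if (zone != page_zone(pfn_to_page(start_pfn)))
			continue;

		return start_pfn;
	}

	return 0;
}

Because nid comes from zone_to_nid(zone), a caller-side mismatch
between the node and zone arguments is impossible by construction.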
> ---
>  mm/memory_hotplug.c | 23 ++++++++---------------
>  1 file changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 46b2e056a43f..602f753c662c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -344,17 +344,14 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>  }
>  
>  /* find the smallest valid pfn in the range [start_pfn, end_pfn) */
> -static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
> -					       unsigned long start_pfn,
> -					       unsigned long end_pfn)
> +static unsigned long find_smallest_section_pfn(struct zone *zone,
> +					       unsigned long start_pfn,
> +					       unsigned long end_pfn)
>  {
>  	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
>  		if (unlikely(!pfn_to_online_page(start_pfn)))
>  			continue;
>  
> -		if (unlikely(pfn_to_nid(start_pfn) != nid))
> -			continue;
> -
>  		if (zone != page_zone(pfn_to_page(start_pfn)))
>  			continue;
>  
> @@ -365,9 +362,9 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
>  }
>  
>  /* find the biggest valid pfn in the range [start_pfn, end_pfn). */
> -static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
> -					      unsigned long start_pfn,
> -					      unsigned long end_pfn)
> +static unsigned long find_biggest_section_pfn(struct zone *zone,
> +					      unsigned long start_pfn,
> +					      unsigned long end_pfn)
>  {
>  	unsigned long pfn;
>  
> @@ -377,9 +374,6 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(!pfn_to_online_page(pfn)))
>  			continue;
>  
> -		if (unlikely(pfn_to_nid(pfn) != nid))
> -			continue;
> -
>  		if (zone != page_zone(pfn_to_page(pfn)))
>  			continue;
>  
> @@ -393,7 +387,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  			     unsigned long end_pfn)
>  {
>  	unsigned long pfn;
> -	int nid = zone_to_nid(zone);
>  
>  	zone_span_writelock(zone);
>  	if (zone->zone_start_pfn == start_pfn) {
> @@ -403,7 +396,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  		 * In this case, we find second smallest valid mem_section
>  		 * for shrinking zone.
>  		 */
> -		pfn = find_smallest_section_pfn(nid, zone, end_pfn,
> +		pfn = find_smallest_section_pfn(zone, end_pfn,
>  						zone_end_pfn(zone));
>  		if (pfn) {
>  			zone->spanned_pages = zone_end_pfn(zone) - pfn;
> @@ -419,7 +412,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  		 * In this case, we find second biggest valid mem_section for
>  		 * shrinking zone.
>  		 */
> -		pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn,
> +		pfn = find_biggest_section_pfn(zone, zone->zone_start_pfn,
>  					       start_pfn);
>  		if (pfn)
>  			zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs