Subject: Re: [PATCH] mm: sparse: pass section_nr to section_mark_present
To: ohoono.kwon@samsung.com, akpm@linux-foundation.org, mhocko@suse.com
Cc: bhe@redhat.com, rppt@linux.ibm.com, ohkwon1043@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20210701135543epcms1p84a043bf49757bafada0a773372611d69@epcms1p8>
 <20210701154146epcms1p4398db5708796ae291b09db29240e5ed1@epcms1p4>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Message-ID: <4550295f-0358-9c41-5655-7274f89f6c0a@redhat.com>
Date: Thu, 1 Jul 2021 18:12:20 +0200
In-Reply-To: <20210701154146epcms1p4398db5708796ae291b09db29240e5ed1@epcms1p4>

On 01.07.21 17:41, 권오훈 wrote:
> On Thu, Jul 01, 2021 at 04:34:13PM +0200, David Hildenbrand wrote:
>> On 01.07.21 15:55, 권오훈 wrote:
>>> With CONFIG_SPARSEMEM_EXTREME enabled, __section_nr() which converts
>>> mem_section to section_nr could be costly since it iterates all
>>> sections to check if the given mem_section is in its range.
>>
>> It actually iterates all section roots.
>>
>>>
>>> On the other hand, __nr_to_section which converts section_nr to
>>> mem_section can be done in O(1).
>>>
>>> Let's pass section_nr instead of mem_section ptr to section_mark_present
>>> in order to reduce needless iterations.
>>
>> I'd expect this to be mostly noise, especially as we iterate section
>> roots and for most (smallish) machines we might just work on the lowest
>> section roots only.
>>
>> Can you actually observe an improvement regarding boot times?
>>
>> Anyhow, looks straightforward to me, although we might just reintroduce
>> similar patterns again easily if it's really just noise (see
>> find_memory_block() as used by). And it might allow for a nice cleanup
>> (see below).
>>
>> Reviewed-by: David Hildenbrand
>>
>>
>> Can you send 1) a patch to convert find_memory_block() as well and 2) a
>> patch to rip out __section_nr() completely?
>>
>>>
>>> Signed-off-by: Ohhoon Kwon
>>> ---
>>>  mm/sparse.c | 9 +++++----
>>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/sparse.c b/mm/sparse.c
>>> index 55c18aff3e42..4a2700e9a65f 100644
>>> --- a/mm/sparse.c
>>> +++ b/mm/sparse.c
>>> @@ -186,13 +186,14 @@ void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
>>>   * those loops early.
>>>   */
>>>  unsigned long __highest_present_section_nr;
>>> -static void section_mark_present(struct mem_section *ms)
>>> +static void section_mark_present(unsigned long section_nr)
>>>  {
>>> -	unsigned long section_nr = __section_nr(ms);
>>> +	struct mem_section *ms;
>>>  
>>>  	if (section_nr > __highest_present_section_nr)
>>>  		__highest_present_section_nr = section_nr;
>>>  
>>> +	ms = __nr_to_section(section_nr);
>>>  	ms->section_mem_map |= SECTION_MARKED_PRESENT;
>>>  }
>>>  
>>> @@ -279,7 +280,7 @@ static void __init memory_present(int nid, unsigned long start, unsigned long end)
>>>  		if (!ms->section_mem_map) {
>>>  			ms->section_mem_map = sparse_encode_early_nid(nid) |
>>>  							SECTION_IS_ONLINE;
>>> -			section_mark_present(ms);
>>> +			section_mark_present(section);
>>>  		}
>>>  	}
>>>  }
>>> @@ -933,7 +934,7 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>>>  
>>>  	ms = __nr_to_section(section_nr);
>>>  	set_section_nid(section_nr, nid);
>>> -	section_mark_present(ms);
>>> +	section_mark_present(section_nr);
>>>  
>>>  	/* Align memmap to section boundary in the subsection case */
>>>  	if (section_nr_to_pfn(section_nr) != start_pfn)
>>>
>>
>>
>> --
>> Thanks,
>>
>> David / dhildenb
>>
> Dear David,
>
> I tried to check the time spent in memblocks_present, but when I tested with
> mobile phones with 8GB of RAM, the original binary took 0us, and so did the
> patched binary.
> I'm not sure how the results would differ on huge systems with more RAM.
> I agree that it could turn out to be just noise, as you expected.
>
> However, as you also mentioned, the patches will be straightforward once all
> code using __section_nr() is cleaned up nicely.
>
> Below are the two patches that you asked for.
> Please tell me if you need me to send the patches in separate e-mails.

Yes, please send them separately. Maybe send all 3 patches combined in a
single series, so Andrew can pick them up easily and reviewers can review
more easily.

Thanks!

-- 
Thanks,

David / dhildenb
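
For context, the lookup asymmetry discussed above can be illustrated with a
simplified, self-contained model of the CONFIG_SPARSEMEM_EXTREME two-level
section table. The constants, struct layout, and helper names below are
illustrative stand-ins, not the kernel's actual definitions; they only mirror
the shape of __nr_to_section() and __section_nr():

#include <stddef.h>

/* Stand-in sizes; the real values depend on architecture and config. */
#define NR_SECTION_ROOTS	2048
#define SECTIONS_PER_ROOT	128

struct mem_section {
	unsigned long section_mem_map;
};

/* One pointer per root; roots are only populated where memory exists. */
static struct mem_section *section_roots[NR_SECTION_ROOTS];

/* section_nr -> mem_section: two array indexings, O(1). */
static struct mem_section *nr_to_section(unsigned long nr)
{
	struct mem_section *root = section_roots[nr / SECTIONS_PER_ROOT];

	return root ? &root[nr % SECTIONS_PER_ROOT] : NULL;
}

/* mem_section -> section_nr: has to scan every populated root. */
static unsigned long section_to_nr(const struct mem_section *ms)
{
	unsigned long root_nr;

	for (root_nr = 0; root_nr < NR_SECTION_ROOTS; root_nr++) {
		const struct mem_section *root = section_roots[root_nr];

		if (root && ms >= root && ms < root + SECTIONS_PER_ROOT)
			return root_nr * SECTIONS_PER_ROOT + (ms - root);
	}
	return ~0UL;	/* not found */
}

Going from a section number to its mem_section is two array indexings, while
going from a mem_section pointer back to its number scans the populated roots,
which is why passing section_nr into section_mark_present() (and, per the
requested follow-up patches, find_memory_block()) avoids that walk entirely.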