From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 4 May 2025 14:09:12 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Donet Tom
Cc: David Hildenbrand, Oscar Salvador, Zi Yan, Greg Kroah-Hartman,
	Andrew Morton, rafael@kernel.org, Danilo Krummrich, Ritesh Harjani,
	Jonathan Cameron, Alison Schofield, Yury Norov, Dave Jiang,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/3] driver/base: Optimize memory block registration
 to reduce boot time

On Sat, May 03, 2025 at
11:10:12AM +0530, Donet Tom wrote:
> During node device initialization, `memory blocks` are registered under
> each NUMA node. The `memory blocks` to be registered are identified using
> the node's start and end PFNs, which are obtained from the node's pg_data.
>
> However, not all PFNs within this range necessarily belong to the same
> node; some may belong to other nodes. Additionally, due to the
> discontiguous nature of physical memory, certain sections within a
> `memory block` may be absent.
>
> As a result, `memory blocks` that fall between a node's start and end
> PFNs may span multiple nodes, and some sections within those blocks
> may be missing. `Memory blocks` have a fixed size, which is architecture
> dependent.
>
> Due to these considerations, the memory block registration is currently
> performed as follows:
>
> for_each_online_node(nid):
>     start_pfn = pgdat->node_start_pfn;
>     end_pfn = pgdat->node_start_pfn + node_spanned_pages;
>     for_each_memory_block_between(PFN_PHYS(start_pfn), PFN_PHYS(end_pfn)):
>         mem_blk = memory_block_id(pfn_to_section_nr(pfn));
>         pfn_mb_start = section_nr_to_pfn(mem_blk->start_section_nr)
>         pfn_mb_end = pfn_mb_start + memory_block_pfns - 1
>         for (pfn = pfn_mb_start; pfn < pfn_mb_end; pfn++):
>             if (get_nid_for_pfn(pfn) != nid):
>                 continue;
>             else
>                 do_register_memory_block_under_node(nid, mem_blk,
>                                                     MEMINIT_EARLY);
>
> Here, we derive the start and end PFNs from the node's pg_data, then
> determine the memory blocks that may belong to the node. For each
> `memory block` in this range, we inspect all PFNs it contains and check
> their associated NUMA node ID. If a PFN within the block matches the
> current node, the memory block is registered under that node.
>
> If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, get_nid_for_pfn() performs
> a binary search in the `memblock regions` to determine the NUMA node ID
> for a given PFN. If it is not enabled, the node ID is retrieved directly
> from the struct page.
>
> On large systems, this process can become time-consuming, especially since
> we iterate over each `memory block` and all PFNs within it until a match is
> found. When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, the additional
> overhead of the binary search increases the execution time significantly,
> potentially leading to soft lockups during boot.
>
> In this patch, we iterate over `memblock regions` to identify the
> `memory blocks` that belong to the current NUMA node. `memblock regions`
> are contiguous memory ranges, each associated with a single NUMA node, and
> they do not span multiple nodes.
>
> for_each_online_node(nid):
>     for_each_memory_region(r): // r => region
>         if (r->nid != nid):
>             continue;
>         else
>             for_each_memory_block_between(r->base, r->base + r->size - 1):
>                 do_register_memory_block_under_node(nid, mem_blk, MEMINIT_EARLY);
>
> We iterate over all `memblock regions` and identify those that belong to
> the current NUMA node. For each `memblock region` associated with the
> current node, we calculate the start and end `memory blocks` based on the
> region's start and end PFNs. We then register all `memory blocks` within
> that range under the current node.
>
> Test results on my system with 32TB RAM
> =======================================
> 1. Boot time with CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled.
>
> Without this patch
> ------------------
> Startup finished in 1min 16.528s (kernel)
>
> With this patch
> ---------------
> Startup finished in 17.236s (kernel) - 78% Improvement
>
> 2. Boot time with CONFIG_DEFERRED_STRUCT_PAGE_INIT disabled.
>
> Without this patch
> ------------------
> Startup finished in 28.320s (kernel)
>
> With this patch
> ---------------
> Startup finished in 15.621s (kernel) - 46% Improvement
>
> Acked-by: David Hildenbrand
> Signed-off-by: Donet Tom
>
> ---
> v2 -> v3
>
> Fixed indentation issues, made `start_block_id` and `end_block_id` constants,
> and moved variable declarations to the places where they are needed.
>
> v2 - https://lore.kernel.org/all/fbe1e0c7d91bf3fa9a64ff5d84b53ded1d0d5ac7.1745852397.git.donettom@linux.ibm.com/
> v1 - https://lore.kernel.org/all/50142a29010463f436dc5c4feb540e5de3bb09df.1744175097.git.donettom@linux.ibm.com/
> ---
>  drivers/base/memory.c  |  4 ++--
>  drivers/base/node.c    | 38 ++++++++++++++++++++++++++++++++++++++
>  include/linux/memory.h |  2 ++
>  include/linux/node.h   | 11 +++++------
>  4 files changed, 47 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> index 19469e7f88c2..7f1d266ae593 100644
> --- a/drivers/base/memory.c
> +++ b/drivers/base/memory.c
> @@ -60,7 +60,7 @@ static inline unsigned long pfn_to_block_id(unsigned long pfn)
>  	return memory_block_id(pfn_to_section_nr(pfn));
>  }
>
> -static inline unsigned long phys_to_block_id(unsigned long phys)
> +unsigned long phys_to_block_id(unsigned long phys)
>  {
>  	return pfn_to_block_id(PFN_DOWN(phys));
>  }
> @@ -632,7 +632,7 @@ int __weak arch_get_memory_phys_device(unsigned long start_pfn)
>   *
>   * Called under device_hotplug_lock.
>   */
> -static struct memory_block *find_memory_block_by_id(unsigned long block_id)
> +struct memory_block *find_memory_block_by_id(unsigned long block_id)
>  {
>  	struct memory_block *mem;
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index cd13ef287011..0f8a4645b26c 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -20,6 +20,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  static const struct bus_type node_subsys = {
>  	.name = "node",
> @@ -850,6 +851,43 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
>  		kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
>  }
>
> +/*
> + * register_memory_blocks_under_node_early : Register the memory
> + * blocks under the current node.
> + * @nid : Current node under registration
> + *
> + * This function iterates over all memblock regions and identifies the regions
> + * that belong to the current node. For each region which belongs to current
> + * node, it calculates the start and end memory blocks based on the region's
> + * start and end PFNs. It then registers all memory blocks within that range
> + * under the current node.
> + */
> +void register_memory_blocks_under_node_early(int nid)
> +{
> +	struct memblock_region *r;
> +
> +	for_each_mem_region(r) {
> +		if (r->nid != nid)
> +			continue;
> +
> +		const unsigned long start_block_id = phys_to_block_id(r->base);
> +		const unsigned long end_block_id = phys_to_block_id(r->base + r->size - 1);
> +		unsigned long block_id;
> +
> +		for (block_id = start_block_id; block_id <= end_block_id; block_id++) {
> +			struct memory_block *mem;
> +
> +			mem = find_memory_block_by_id(block_id);
> +			if (!mem)
> +				continue;
> +
> +			do_register_memory_block_under_node(nid, mem, MEMINIT_EARLY);
> +			put_device(&mem->dev);
> +		}
> +	}
> +}
> +
>  void register_memory_blocks_under_node(int nid, unsigned long start_pfn,
>  				       unsigned long end_pfn,
>  				       enum meminit_context context)
> diff --git a/include/linux/memory.h b/include/linux/memory.h
> index 12daa6ec7d09..cb8579226536 100644
> --- a/include/linux/memory.h
> +++ b/include/linux/memory.h
> @@ -171,6 +171,8 @@ struct memory_group *memory_group_find_by_id(int mgid);
>  typedef int (*walk_memory_groups_func_t)(struct memory_group *, void *);
>  int walk_dynamic_memory_groups(int nid, walk_memory_groups_func_t func,
>  			       struct memory_group *excluded, void *arg);
> +unsigned long phys_to_block_id(unsigned long phys);
> +struct memory_block *find_memory_block_by_id(unsigned long block_id);
>  #define hotplug_memory_notifier(fn, pri) ({ \
>  	static __meminitdata struct notifier_block fn##_mem_nb =\
>  		{ .notifier_call = fn, .priority = pri };\
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 2b7517892230..93beefe8f179 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -114,12 +114,16 @@ extern struct node *node_devices[];
>  void register_memory_blocks_under_node(int nid, unsigned long start_pfn,
>  				       unsigned long end_pfn,
>  				       enum meminit_context context);
> +void register_memory_blocks_under_node_early(int nid);
>  #else
>  static inline void register_memory_blocks_under_node(int nid,
> 						     unsigned long start_pfn,
> 						     unsigned long end_pfn,
> 						     enum meminit_context context)
>  {
>  }
> +static inline void register_memory_blocks_under_node_early(int nid)
> +{
> +}
>  #endif
>
>  extern void unregister_node(struct node *node);
> @@ -134,15 +138,10 @@ static inline int register_one_node(int nid)
>  	int error = 0;
>
>  	if (node_online(nid)) {
> -		struct pglist_data *pgdat = NODE_DATA(nid);
> -		unsigned long start_pfn = pgdat->node_start_pfn;
> -		unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
> -
>  		error = __register_one_node(nid);
>  		if (error)
>  			return error;
> -		register_memory_blocks_under_node(nid, start_pfn, end_pfn,
> -						  MEMINIT_EARLY);
> +		register_memory_blocks_under_node_early(nid);

Doesn't that change mean that when register_one_node() is called from memory
hotplug it will always try to iterate memblock regions? That would be a
problem on architectures that don't keep memblock around after boot.

I think the for_each_mem_region() loop should be in node_dev_init(), where we
know for sure that memblock is still available.

>  	}
>
>  	return error;
> --
> 2.48.1
>

--
Sincerely yours,
Mike.