Date: Fri, 16 May 2025 13:09:11 +0300
From: Mike Rapoport <rppt@kernel.org>
To: David Hildenbrand
Cc: Donet Tom, Andrew Morton, Oscar Salvador, Zi Yan, Ritesh Harjani,
 rafael@kernel.org, Danilo Krummrich, Greg Kroah-Hartman,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Jonathan Cameron,
 Alison Schofield, Yury Norov, Dave Jiang
Subject: Re: [PATCH v4 1/4]
 driver/base: Optimize memory block registration to reduce boot time
In-Reply-To: <56cb2494-56ba-4895-9dd1-23243c2eecdb@redhat.com>

On Fri, May 16, 2025 at 11:15:29AM +0200, David Hildenbrand wrote:
> On 16.05.25 10:19, Donet Tom wrote:
> > During node device initialization, `memory blocks` are registered under
> > each NUMA node. The `memory blocks` to be registered are identified using
> > the node's start and end PFNs, which are obtained from the node's pg_data.
> > 
> > However, not all PFNs within this range necessarily belong to the same
> > node; some may belong to other nodes. Additionally, due to the
> > discontiguous nature of physical memory, certain sections within a
> > `memory block` may be absent.
> > 
> > As a result, `memory blocks` that fall between a node's start and end
> > PFNs may span multiple nodes, and some sections within those blocks
> > may be missing. `Memory blocks` have a fixed size, which is architecture
> > dependent.
> > 
> > Due to these considerations, the memory block registration is currently
> > performed as follows:
> > 
> > for_each_online_node(nid):
> >     start_pfn = pgdat->node_start_pfn;
> >     end_pfn = pgdat->node_start_pfn + node_spanned_pages;
> >     for_each_memory_block_between(PFN_PHYS(start_pfn), PFN_PHYS(end_pfn)):
> >         mem_blk = memory_block_id(pfn_to_section_nr(pfn));
> >         pfn_mb_start = section_nr_to_pfn(mem_blk->start_section_nr);
> >         pfn_mb_end = pfn_mb_start + memory_block_pfns - 1;
> >         for (pfn = pfn_mb_start; pfn <= pfn_mb_end; pfn++):
> >             if (get_nid_for_pfn(pfn) != nid):
> >                 continue;
> >             else:
> >                 do_register_memory_block_under_node(nid, mem_blk,
> >                                                     MEMINIT_EARLY);
> > 
> > Here, we derive the start and end PFNs from the node's pg_data, then
> > determine the memory blocks that may belong to the node. For each
> > `memory block` in this range, we inspect all PFNs it contains and check
> > their associated NUMA node ID. If a PFN within the block matches the
> > current node, the memory block is registered under that node.
> > 
> > If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, get_nid_for_pfn() performs
> > a binary search in the `memblock regions` to determine the NUMA node ID
> > for a given PFN. If it is not enabled, the node ID is retrieved directly
> > from the struct page.
> > 
> > On large systems, this process can become time-consuming, especially since
> > we iterate over each `memory block` and all PFNs within it until a match is
> > found. When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, the additional
> > overhead of the binary search increases the execution time significantly,
> > potentially leading to soft lockups during boot.
> > 
> > In this patch, we iterate over the `memblock regions` to identify the
> > `memory blocks` that belong to the current NUMA node. `memblock regions`
> > are contiguous memory ranges, each associated with a single NUMA node;
> > they do not span multiple nodes.
> > 
> > for_each_online_node(nid):
> >     for_each_memory_region(r):  // r => region
> >         if (r->nid != nid):
> >             continue;
> >         else:
> >             for_each_memory_block_between(r->base, r->base + r->size - 1):
> >                 do_register_memory_block_under_node(nid, mem_blk, MEMINIT_EARLY);
> > 
> > We iterate over all `memblock regions` and identify those that belong to
> > the current NUMA node. For each `memblock region` associated with the
> > current node, we calculate the start and end `memory blocks` from the
> > region's start and end PFNs, and then register all `memory blocks` within
> > that range under the current node.
> > 
> > Test results on my system with 32TB RAM
> > =======================================
> > 1. Boot time with CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled.
> > 
> > Without this patch
> > ------------------
> > Startup finished in 1min 16.528s (kernel)
> > 
> > With this patch
> > ---------------
> > Startup finished in 17.236s (kernel) - 78% improvement
> > 
> > 2. Boot time with CONFIG_DEFERRED_STRUCT_PAGE_INIT disabled.
> > 
> > Without this patch
> > ------------------
> > Startup finished in 28.320s (kernel)
> > 
> > With this patch
> > ---------------
> > Startup finished in 15.621s (kernel) - 46% improvement
> > 
> > Acked-by: David Hildenbrand
> > Acked-by: Zi Yan
> > Signed-off-by: Donet Tom
> > 
> > ---
> > v3 -> v4
> > 
> > Addressed Mike's comment by making node_dev_init() call __register_one_node().
> > 
> > v3 - https://lore.kernel.org/all/b49ed289096643ff5b5fbedcf1d1c1be42845a74.1746250339.git.donettom@linux.ibm.com/
> > v2 - https://lore.kernel.org/all/fbe1e0c7d91bf3fa9a64ff5d84b53ded1d0d5ac7.1745852397.git.donettom@linux.ibm.com/
> > v1 - https://lore.kernel.org/all/50142a29010463f436dc5c4feb540e5de3bb09df.1744175097.git.donettom@linux.ibm.com/
> > ---
> >  drivers/base/memory.c  |  4 ++--
> >  drivers/base/node.c    | 41 ++++++++++++++++++++++++++++++++++++++++-
> >  include/linux/memory.h |  2 ++
> >  include/linux/node.h   |  3 +++
> >  4 files changed, 47 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> > index 19469e7f88c2..7f1d266ae593 100644
> > --- a/drivers/base/memory.c
> > +++ b/drivers/base/memory.c
> > @@ -60,7 +60,7 @@ static inline unsigned long pfn_to_block_id(unsigned long pfn)
> >  	return memory_block_id(pfn_to_section_nr(pfn));
> >  }
> > 
> > -static inline unsigned long phys_to_block_id(unsigned long phys)
> > +unsigned long phys_to_block_id(unsigned long phys)
> >  {
> >  	return pfn_to_block_id(PFN_DOWN(phys));
> >  }
> 
> I was wondering whether we should move all these helpers into a header, and
> export sections_per_block instead. Probably doesn't really matter for your
> use case.
> 
> > @@ -632,7 +632,7 @@ int __weak arch_get_memory_phys_device(unsigned long start_pfn)
> >   *
> >   * Called under device_hotplug_lock.
> >   */
> > -static struct memory_block *find_memory_block_by_id(unsigned long block_id)
> > +struct memory_block *find_memory_block_by_id(unsigned long block_id)
> >  {
> >  	struct memory_block *mem;
> > 
> > diff --git a/drivers/base/node.c b/drivers/base/node.c
> > index cd13ef287011..f8cafd8c8fb1 100644
> > --- a/drivers/base/node.c
> > +++ b/drivers/base/node.c
> > @@ -20,6 +20,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> > 
> >  static const struct bus_type node_subsys = {
> >  	.name = "node",
> > 
> > @@ -850,6 +851,43 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
> >  		  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
> >  }
> > 
> > +/*
> > + * register_memory_blocks_under_node_early: Register the memory
> > + * blocks under the current node.
> > + * @nid: Current node under registration
> > + *
> > + * This function iterates over all memblock regions and identifies the regions
> > + * that belong to the current node. For each region that belongs to the current
> > + * node, it calculates the start and end memory blocks based on the region's
> > + * start and end PFNs. It then registers all memory blocks within that range
> > + * under the current node.
> > + */
> > +static void register_memory_blocks_under_node_early(int nid)
> > +{
> > +	struct memblock_region *r;
> > +
> > +	for_each_mem_region(r) {
> > +		if (r->nid != nid)
> > +			continue;
> > +
> > +		const unsigned long start_block_id = phys_to_block_id(r->base);
> > +		const unsigned long end_block_id = phys_to_block_id(r->base + r->size - 1);
> > +		unsigned long block_id;
> 
> This should definitely be above the if().
> > > + > > + for (block_id = start_block_id; block_id <= end_block_id; block_id++) { > > + struct memory_block *mem; > > + > > + mem = find_memory_block_by_id(block_id); > > + if (!mem) > > + continue; > > + > > + do_register_memory_block_under_node(nid, mem, MEMINIT_EARLY); > > + put_device(&mem->dev); > > + } > > + > > + } > > +} > > + > > void register_memory_blocks_under_node(int nid, unsigned long start_pfn, > > unsigned long end_pfn, > > enum meminit_context context) > > @@ -974,8 +1012,9 @@ void __init node_dev_init(void) > > * to applicable memory block devices and already created cpu devices. > > */ > > for_each_online_node(i) { > > - ret = register_one_node(i); > > + ret = __register_one_node(i); > > if (ret) > > panic("%s() failed to add node: %d\n", __func__, ret); > > + register_memory_blocks_under_node_early(i); > > } > > In general, LGTM. > > > BUT :) > > I was wondering whether having a register_memory_blocks_early() call *after* > the for_each_online_node(), and walking all memory regions only once would > make a difference. I don't know how many nodes there should be to see measurable performance difference, but having register_memory_blocks_under_node_early() after for_each_online_node() is definitely nicer. There's no real need to run for_each_mem_region() for every online node. > We'd have to be smart about memory blocks that fall into multiple regions, > but it should be a corner case and doable. This is a corner case that should be handled regardless of the loop order. And I don't think it's handled today at all. If we have a block that crosses node boundaries, current implementation of register_mem_block_under_node_early() will register it under the first node. > OTOH, we usually don't expect having a lot of regions, so iterating over > them is probably not a big bottleneck? Anyhow, just wanted to raise it. 
There would be at least one region per node, and nesting
for_each_mem_region() inside for_each_online_node() makes the loop O(n²)
for no good reason.

> -- 
> Cheers,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.