From: "Huang, Ying" <ying.huang@intel.com>
To: "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com>
Cc: Gregory Price, aneesh.kumar@linux.ibm.com, mhocko@suse.com,
	tj@kernel.org, john@jagalactic.com, Eishan Mirakhur,
	Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple,
	"Rafael J. Wysocki", Len Brown, Dan Williams, Vishal Verma,
	Dave Jiang, Andrew Morton, Jonathan Cameron,
	linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org,
	nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
	linux-mm@kvack.org, qemu-devel@nongnu.org, Hao Xiang
Subject: Re: [PATCH v2 1/1] memory tier: acpi/hmat: create CPUless memory
	tiers after obtaining HMAT info
In-Reply-To: <20240312061729.1997111-2-horenchuang@bytedance.com>
References: <20240312061729.1997111-1-horenchuang@bytedance.com>
	<20240312061729.1997111-2-horenchuang@bytedance.com>
Date: Tue, 12 Mar 2024 17:19:19 +0800
Message-ID: <874jdb4xk8.fsf@yhuang6-desk2.ccr.corp.intel.com>

"Ho-Ren (Jack) Chuang" writes:

> The current implementation treats emulated memory devices, such as
> CXL 1.1 type 3 memory, as normal DRAM when they are emulated as normal
> memory (E820_TYPE_RAM). However, these emulated devices have different
> characteristics than traditional DRAM, making it important to
> distinguish them.
> Thus, we modify the tiered memory initialization process to introduce
> a delay specifically for CPUless NUMA nodes. This delay ensures that
> the memory tier initialization for these nodes is deferred until HMAT
> information is obtained during the boot process. Finally, demotion
> tables are recalculated at the end.
>
> * Abstract common functions into `find_alloc_memory_type()`

We should move kmem_put_memory_types() (renamed to
mt_put_memory_types()?) too. This can be put in a separate patch.

> Since different memory devices require finding or allocating a memory
> type, these common steps are abstracted into a single function,
> `find_alloc_memory_type()`, enhancing code scalability and conciseness.
>
> * Handle cases where there is no HMAT when creating memory tiers
> There is a scenario where a CPUless node does not provide HMAT
> information. If no HMAT is specified, it falls back to using the
> default DRAM tier.
>
> * Change the adist calculation code to use a new lock, mt_perf_lock.
> In the current implementation, iterating through CPUless nodes
> requires holding the `memory_tier_lock`. However, `mt_calc_adistance()`
> will end up trying to acquire the same lock, leading to a potential
> deadlock. Therefore, we propose introducing a standalone `mt_perf_lock`
> to protect `default_dram_perf`. This approach not only avoids the
> deadlock but also avoids holding a single large lock for unrelated
> data.
>
> Signed-off-by: Ho-Ren (Jack) Chuang
> Signed-off-by: Hao Xiang
> ---
>  drivers/acpi/numa/hmat.c     | 11 ++++++
>  drivers/dax/kmem.c           | 13 +------
>  include/linux/acpi.h         |  6 ++++
>  include/linux/memory-tiers.h |  8 +++++
>  mm/memory-tiers.c            | 70 +++++++++++++++++++++++++++++++++---
>  5 files changed, 92 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index d6b85f0f6082..28812ec2c793 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -38,6 +38,8 @@ static LIST_HEAD(targets);
>  static LIST_HEAD(initiators);
>  static LIST_HEAD(localities);
>
> +static LIST_HEAD(hmat_memory_types);
> +

HMAT isn't a device driver for memory devices, so I don't think we
should manage memory types in HMAT. Instead, if the memory_type of a
node isn't set by a driver, we should manage it in memory-tier.c as a
fallback.

>  static DEFINE_MUTEX(target_lock);
>
>  /*
> @@ -149,6 +151,12 @@ int acpi_get_genport_coordinates(u32 uid,
>  }
>  EXPORT_SYMBOL_NS_GPL(acpi_get_genport_coordinates, CXL);
>
> +struct memory_dev_type *hmat_find_alloc_memory_type(int adist)
> +{
> +        return find_alloc_memory_type(adist, &hmat_memory_types);
> +}
> +EXPORT_SYMBOL_GPL(hmat_find_alloc_memory_type);
> +
>  static __init void alloc_memory_initiator(unsigned int cpu_pxm)
>  {
>  	struct memory_initiator *initiator;
> @@ -1038,6 +1046,9 @@ static __init int hmat_init(void)
>  	if (!hmat_set_default_dram_perf())
>  		register_mt_adistance_algorithm(&hmat_adist_nb);
>
> +	/* Post-create CPUless memory tiers after getting HMAT info */
> +	memory_tier_late_init();
> +

This should be called in memory-tier.c via
late_initcall(memory_tier_late_init). Then, we don't need hmat to call
it.
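For illustration, a minimal sketch of that alternative in
mm/memory-tiers.c. Note that late_initcall() expects an int (*)(void),
so memory_tier_late_init() would either need its return type changed
or a small wrapper like the hypothetical one below:

static int __init memory_tier_late_init_call(void)
{
	/*
	 * late_initcall() runs after all device initcalls, so by this
	 * point hmat_init() has already parsed the HMAT and registered
	 * its adistance algorithm.
	 */
	memory_tier_late_init();
	return 0;
}
late_initcall(memory_tier_late_init_call);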
>  	return 0;
> out_put:
>  	hmat_free_structures();
> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
> index 42ee360cf4e3..aee17ab59f4f 100644
> --- a/drivers/dax/kmem.c
> +++ b/drivers/dax/kmem.c
> @@ -55,21 +55,10 @@ static LIST_HEAD(kmem_memory_types);
>
>  static struct memory_dev_type *kmem_find_alloc_memory_type(int adist)
>  {
> -	bool found = false;
>  	struct memory_dev_type *mtype;
>
>  	mutex_lock(&kmem_memory_type_lock);
> -	list_for_each_entry(mtype, &kmem_memory_types, list) {
> -		if (mtype->adistance == adist) {
> -			found = true;
> -			break;
> -		}
> -	}
> -	if (!found) {
> -		mtype = alloc_memory_type(adist);
> -		if (!IS_ERR(mtype))
> -			list_add(&mtype->list, &kmem_memory_types);
> -	}
> +	mtype = find_alloc_memory_type(adist, &kmem_memory_types);
>  	mutex_unlock(&kmem_memory_type_lock);
>
>  	return mtype;
> diff --git a/include/linux/acpi.h b/include/linux/acpi.h
> index b7165e52b3c6..3f927ff01f02 100644
> --- a/include/linux/acpi.h
> +++ b/include/linux/acpi.h
> @@ -434,12 +434,18 @@ int thermal_acpi_critical_trip_temp(struct acpi_device *adev, int *ret_temp);
>
>  #ifdef CONFIG_ACPI_HMAT
>  int acpi_get_genport_coordinates(u32 uid, struct access_coordinate *coord);
> +struct memory_dev_type *hmat_find_alloc_memory_type(int adist);
>  #else
>  static inline int acpi_get_genport_coordinates(u32 uid,
>  			struct access_coordinate *coord)
>  {
>  	return -EOPNOTSUPP;
>  }
> +
> +static inline struct memory_dev_type *hmat_find_alloc_memory_type(int adist)
> +{
> +	return NULL;
> +}
>  #endif
>
>  #ifdef CONFIG_ACPI_NUMA
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 69e781900082..4bc2596c5774 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -48,6 +48,9 @@ int mt_calc_adistance(int node, int *adist);
>  int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
>  			     const char *source);
>  int mt_perf_to_adistance(struct access_coordinate *perf, int *adist);
> +struct memory_dev_type *find_alloc_memory_type(int adist,
> +					       struct list_head *memory_types);
> +void memory_tier_late_init(void);
>  #ifdef CONFIG_MIGRATION
>  int next_demotion_node(int node);
>  void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
> @@ -136,5 +139,10 @@ static inline int mt_perf_to_adistance(struct access_coordinate *perf, int *adist)
>  {
>  	return -EIO;
>  }
> +
> +static inline void memory_tier_late_init(void)
> +{
> +
> +}
>  #endif	/* CONFIG_NUMA */
>  #endif	/* _LINUX_MEMORY_TIERS_H */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 0537664620e5..79f748d60e6f 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -6,6 +6,7 @@
>  #include <linux/memory.h>
>  #include <linux/memory-tiers.h>
>  #include <linux/notifier.h>
> +#include <linux/acpi.h>
>
>  #include "internal.h"
>
> @@ -35,6 +36,7 @@ struct node_memory_type_map {
>  };
>
>  static DEFINE_MUTEX(memory_tier_lock);
> +static DEFINE_MUTEX(mt_perf_lock);

Please add a comment about what this lock protects, and put it near
the data structure it protects.
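For example, something along these lines (an illustrative sketch; only
the two variables visible in this patch are shown, but any other
default_dram_perf state should move next to the lock too):

/*
 * Performance of the default DRAM tier, used as the baseline when
 * converting raw performance data into an abstract distance in
 * mt_perf_to_adistance().
 */
static struct access_coordinate default_dram_perf;
static bool default_dram_perf_error;
/* Protects default_dram_perf and default_dram_perf_error. */
static DEFINE_MUTEX(mt_perf_lock);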
>  static LIST_HEAD(memory_tiers);
>  static struct node_memory_type_map node_memory_types[MAX_NUMNODES];
>  struct memory_dev_type *default_dram_type;
> @@ -623,6 +625,58 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
>  }
>  EXPORT_SYMBOL_GPL(clear_node_memory_type);
>
> +struct memory_dev_type *find_alloc_memory_type(int adist, struct list_head *memory_types)
> +{
> +	bool found = false;
> +	struct memory_dev_type *mtype;
> +
> +	list_for_each_entry(mtype, memory_types, list) {
> +		if (mtype->adistance == adist) {
> +			found = true;
> +			break;
> +		}
> +	}
> +	if (!found) {
> +		mtype = alloc_memory_type(adist);
> +		if (!IS_ERR(mtype))
> +			list_add(&mtype->list, memory_types);
> +	}
> +
> +	return mtype;
> +}
> +EXPORT_SYMBOL_GPL(find_alloc_memory_type);
> +
> +static void memory_tier_late_create(int node)
> +{
> +	struct memory_dev_type *mtype = NULL;
> +	int adist = MEMTIER_ADISTANCE_DRAM;
> +
> +	mt_calc_adistance(node, &adist);
> +	if (adist != MEMTIER_ADISTANCE_DRAM) {

We can manage default_dram_type via find_alloc_memory_type() too. And,
if "node_memory_types[node].memtype == NULL", we can call
mt_calc_adistance(node, &adist) and find_alloc_memory_type() in
set_node_memory_tier(). Then, we can cover hotplugged memory nodes too.

> +		mtype = hmat_find_alloc_memory_type(adist);
> +		if (!IS_ERR(mtype))
> +			__init_node_memory_type(node, mtype);
> +		else
> +			pr_err("Failed to allocate a memory type at %s()\n", __func__);
> +	}
> +
> +	set_node_memory_tier(node);
> +}
> +
> +void memory_tier_late_init(void)
> +{
> +	int nid;
> +
> +	mutex_lock(&memory_tier_lock);
> +	for_each_node_state(nid, N_MEMORY)
> +		if (!node_state(nid, N_CPU))

We should also exclude nodes with "node_memory_types[nid].memtype !=
NULL". Some memory nodes may be onlined by device drivers that have
set up their memory tiers already.
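That is, something along these lines (untested sketch):

void memory_tier_late_init(void)
{
	int nid;

	mutex_lock(&memory_tier_lock);
	for_each_node_state(nid, N_MEMORY) {
		if (node_state(nid, N_CPU))
			continue;
		/*
		 * A device driver (e.g. dax/kmem) may have assigned a
		 * memory type and tier to this node already.
		 */
		if (node_memory_types[nid].memtype)
			continue;
		memory_tier_late_create(nid);
	}
	establish_demotion_targets();
	mutex_unlock(&memory_tier_lock);
}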
> +			memory_tier_late_create(nid);
> +
> +	establish_demotion_targets();
> +	mutex_unlock(&memory_tier_lock);
> +}
> +EXPORT_SYMBOL_GPL(memory_tier_late_init);
> +
>  static void dump_hmem_attrs(struct access_coordinate *coord, const char *prefix)
>  {
>  	pr_info(
> @@ -636,7 +690,7 @@ int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
>  {
>  	int rc = 0;
>
> -	mutex_lock(&memory_tier_lock);
> +	mutex_lock(&mt_perf_lock);
>  	if (default_dram_perf_error) {
>  		rc = -EIO;
>  		goto out;
> @@ -684,7 +738,7 @@ int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
>  	}
>
>  out:
> -	mutex_unlock(&memory_tier_lock);
> +	mutex_unlock(&mt_perf_lock);
>  	return rc;
>  }
>
> @@ -700,7 +754,7 @@ int mt_perf_to_adistance(struct access_coordinate *perf, int *adist)
>  	    perf->read_bandwidth + perf->write_bandwidth == 0)
>  		return -EINVAL;
>
> -	mutex_lock(&memory_tier_lock);
> +	mutex_lock(&mt_perf_lock);
>  	/*
>  	 * The abstract distance of a memory node is in direct proportion to
>  	 * its memory latency (read + write) and inversely proportional to its
> @@ -713,7 +767,7 @@ int mt_perf_to_adistance(struct access_coordinate *perf, int *adist)
>  		(default_dram_perf.read_latency + default_dram_perf.write_latency) *
>  		(default_dram_perf.read_bandwidth + default_dram_perf.write_bandwidth) /
>  		(perf->read_bandwidth + perf->write_bandwidth);
> -	mutex_unlock(&memory_tier_lock);
> +	mutex_unlock(&mt_perf_lock);
>
>  	return 0;
>  }
> @@ -836,6 +890,14 @@ static int __init memory_tier_init(void)
>  	 * types assigned.
>  	 */
>  	for_each_node_state(node, N_MEMORY) {
> +		if (!node_state(node, N_CPU))
> +			/*
> +			 * Defer memory tier initialization on CPUless numa nodes.
> +			 * These will be initialized when HMAT information is

HMAT is platform specific; we should avoid mentioning it in generic
code if possible.

> +			 * available.
> +			 */
> +			continue;
> +
>  		memtier = set_node_memory_tier(node);
>  		if (IS_ERR(memtier))
>  			/*

--
Best Regards,
Huang, Ying