From: "Huang, Ying" <ying.huang@intel.com>
To: Hao Xiang
Cc: aneesh.kumar@linux.ibm.com, Jonathan Cameron, Gregory Price,
 Srinivasulu Thanneeru, Srinivasulu Opensrc, linux-cxl@vger.kernel.org,
 linux-mm@kvack.org, dan.j.williams@intel.com, mhocko@suse.com,
 tj@kernel.org, john@jagalactic.com, Eishan Mirakhur,
 Vinicius Tavares Petrucci, Ravis OpenSrc, linux-kernel@vger.kernel.org,
 Johannes Weiner, Wei Xu, "Ho-Ren (Jack) Chuang"
Subject: Re: [External] Re: [EXT] Re: [RFC PATCH v2 0/2] Node migration
 between memory tiers
In-Reply-To: (Hao Xiang's message of "Fri, 12 Jan 2024 00:14:04 -0800")
References: <87fs00njft.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87edezc5l1.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87a5pmddl5.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87wmspbpma.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87o7dv897s.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <20240109155049.00003f13@Huawei.com>
 <20240110141821.0000370d@Huawei.com>
 <87il3z2g03.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 15 Jan 2024 09:24:13 +0800
Message-ID: <871qaj2xtu.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)

Hao Xiang writes:
> On Thu, Jan 11, 2024 at 11:02 PM Huang, Ying wrote:
>>
>> Hao Xiang writes:
>>
>> > On Wed, Jan 10, 2024 at 6:18 AM Jonathan Cameron wrote:
>> >>
>> >> On Tue, 9 Jan 2024 16:28:15 -0800
>> >> Hao Xiang wrote:
>> >>
>> >> > On Tue, Jan 9, 2024 at 9:59 AM Gregory Price wrote:
>> >> > >
>> >> > > On Tue, Jan 09, 2024 at 03:50:49PM +0000, Jonathan Cameron wrote:
>> >> > > > On Tue, 09 Jan 2024 11:41:11 +0800
>> >> > > > "Huang, Ying" wrote:
>> >> > > > > Gregory Price writes:
>> >> > > > > > On Thu, Jan 04, 2024 at 02:05:01PM +0800, Huang, Ying wrote:
>> >> > > > > It's possible for the performance of a NUMA node to change, if
>> >> > > > > we hot-remove a memory device, then hot-add another different
>> >> > > > > memory device.  It's hoped that the CDAT changes too.
>> >> > > >
>> >> > > > Not supported, but ACPI has _HMA methods to, in theory, allow
>> >> > > > changing HMAT values based on firmware notifications...  So we
>> >> > > > 'could' make it work for HMAT based description.
>> >> > > >
>> >> > > > Ultimately my current thinking is we'll end up emulating CXL type3
>> >> > > > devices (hiding topology complexity) and you can update CDAT, but
>> >> > > > IIRC that is only meant to be for degraded situations - so if you
>> >> > > > want multiple performance regions, CDAT should describe them from
>> >> > > > the start.
>> >> > >
>> >> > > That was my thought. I don't think it's particularly *realistic* for
>> >> > > HMAT/CDAT values to change at runtime, but I can imagine a case where
>> >> > > it could be valuable.
>> >> > >
>> >> > > > > > https://lore.kernel.org/linux-cxl/CAAYibXjZ0HSCqMrzXGv62cMLncS_81R3e1uNV5Fu4CPm0zAtYw@mail.gmail.com/
>> >> > > > > >
>> >> > > > > > This group wants to enable passing CXL memory through to
>> >> > > > > > KVM/QEMU (i.e. host CXL expander memory passed through to the
>> >> > > > > > guest), and allow the guest to apply memory tiering.
>> >> > > > > >
>> >> > > > > > There are multiple issues with this, presently:
>> >> > > > > >
>> >> > > > > > 1. The QEMU CXL virtual device is not and probably never will
>> >> > > > > >    be performant enough to be a commodity class virtualization.
>> >> > > >
>> >> > > > I'd flex that a bit - we will end up with a solution for
>> >> > > > virtualization, but it isn't the emulation that is there today,
>> >> > > > because it's not possible to emulate some of the topology in a
>> >> > > > performant manner (interleaving with sub-page granularity /
>> >> > > > interleaving at all (to a lesser degree)). There are ways to do
>> >> > > > better than we are today, but they start to look like software
>> >> > > > disaggregated memory setups (think lots of page faults in the host).
>> >> > >
>> >> > > Agreed, the emulated device as-is can't be the virtualization device,
>> >> > > but it doesn't mean it can't be the basis for it.
>> >> > >
>> >> > > My thought is, if you want to pass host CXL *memory* through to the
>> >> > > guest, you don't actually care to pass CXL *control* through to the
>> >> > > guest.  That control lies pretty squarely with the host/hypervisor.
>> >> > >
>> >> > > So, at least in theory, you can just cut the type3 device out of the
>> >> > > QEMU configuration entirely and just pass it through as a distinct
>> >> > > numa node with specific hmat qualities.
>> >> > >
>> >> > > Barring that, if we must go through the type3 device, the question is
>> >> > > how difficult would it be to just make a stripped down type3 device
>> >> > > to provide the informational components, but hack off anything
>> >> > > topology/interleave related? Then you just do direct passthrough as
>> >> > > you described below.
>> >> > >
>> >> > > qemu/kvm would report errors if you tried to touch the naughty bits.
>> >> > >
>> >> > > The second question is... is that device "compliant" or does it need
>> >> > > super special handling from the kernel driver :D?  If what I described
>> >> > > is not "compliant", then it's probably a bad idea, and KVM/QEMU should
>> >> > > just hide the CXL device entirely from the guest (for this use case)
>> >> > > and just pass the memory through as a numa node.
>> >> > >
>> >> > > Which gets us back to: the memory-tiering component needs a way to
>> >> > > place nodes in different tiers based on HMAT/CDAT/user whim. All
>> >> > > three of those seem like totally valid ways to go about it.
>> >> > >
>> >> > > > > >
>> >> > > > > > 2. When passing memory through as an explicit NUMA node, but
>> >> > > > > >    not as part of a CXL memory device, the nodes are lumped
>> >> > > > > >    together in the DRAM tier.
>> >> > > > > >
>> >> > > > > > None of this has to do with firmware.
>> >> > > > > >
>> >> > > > > > Memory-type is an awful way of denoting membership of a tier,
>> >> > > > > > but we have HMAT information that can be passed through via
>> >> > > > > > QEMU:
>> >> > > > > >
>> >> > > > > > -object memory-backend-ram,size=4G,id=ram-node0 \
>> >> > > > > > -object memory-backend-ram,size=4G,id=ram-node1 \
>> >> > > > > > -numa node,nodeid=0,cpus=0-4,memdev=ram-node0 \
>> >> > > > > > -numa node,initiator=0,nodeid=1,memdev=ram-node1 \
>> >> > > > > > -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10 \
>> >> > > > > > -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
>> >> > > > > > -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=20 \
>> >> > > > > > -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880
>> >> > > > > >
>> >> > > > > > Not only would it be nice if we could change tier membership
>> >> > > > > > based on this data, it's realistically the only way to allow
>> >> > > > > > guests to accomplish memory tiering w/ KVM/QEMU and CXL memory
>> >> > > > > > passed through to the guest.
>> >> > > >
>> >> > > > This I fully agree with.  There will be systems with a bunch of
>> >> > > > normal DDR with different access characteristics irrespective of
>> >> > > > CXL, and likely HMAT solutions will be used before we get anything
>> >> > > > more complex in place for CXL.
>> >> > >
>> >> > > Had not even considered this, but that's completely accurate as well.
>> >> > >
>> >> > > And more discretely: what of devices that don't provide HMAT/CDAT?
>> >> > > That isn't necessarily a violation of any standard.  There probably
>> >> > > could be a release valve for us to still make those devices useful.
>> >> > >
>> >> > > The concern I have with not implementing a movement mechanism *at
>> >> > > all* is that a one-size-fits-all initial-placement heuristic feels
>> >> > > gross when we're, at least ideologically, moving toward "software
>> >> > > defined memory".
>> >> > >
>> >> > > Personally I think the movement mechanism is a good idea that gets
>> >> > > folks where they're going sooner, and it doesn't hurt anything by
>> >> > > existing. We can change the initial placement mechanism too.
>> >> >
>> >> > I think providing users a way to "FIX" the memory tiering is a backup
>> >> > option. Given that DDRs with different access characteristics provide
>> >> > the relevant CDAT/HMAT information, the kernel should be able to
>> >> > correctly establish memory tiering on boot.
>> >>
>> >> Include hotplug and I'll be happier! I know that's messy though.
>> >>
>> >> > Current memory tiering code has
>> >> > 1) memory_tier_init() to iterate through all boot-onlined memory
>> >> > nodes. All nodes are assumed to be fast tier (adistance
>> >> > MEMTIER_ADISTANCE_DRAM is used).
>> >> > 2) dev_dax_kmem_probe() to iterate through all devdax-controlled
>> >> > memory nodes. This is the place the kernel reads the memory attributes
>> >> > from HMAT and puts the memory nodes into the correct tier
>> >> > (devdax-controlled CXL, pmem, etc.).
>> >> > If we want DDRs with different memory characteristics to be put into
>> >> > the correct tier (as in the guest VM memory tiering case), we probably
>> >> > need a third path to iterate the boot-onlined memory nodes and also be
>> >> > able to read their memory attributes. I don't think we can do that in
>> >> > 1) because the ACPI subsystem is not yet initialized.
>> >>
>> >> Can we move it later in general? Or drag HMAT parsing earlier?
>> >> ACPI table availability is pretty early; it's just that we don't bother
>> >> with HMAT because nothing early uses it.
>> >> IIRC SRAT parsing occurs way before memory_tier_init() will be called.
>> >
>> > I tested the call sequence under a debugger earlier. hmat_init() is
>> > called after memory_tier_init(). Let me poke around and see what our
>> > options are.
>>
>> This sounds reasonable.
>>
>> Please keep in mind that we need a way to identify the baseline memory
>> type (default_dram_type).  A simple method is to use NUMA nodes with CPU
>> attached.  But I remember that Aneesh said that some NUMA nodes without
>> CPU will need to be put in default_dram_type too on their systems.  We
>> need a way to identify that.
>
> Yes, I am doing some prototyping the way you described. In
> memory_tier_init(), we will just set the memory tier for the NUMA
> nodes with CPU. In hmat_init(), I am trying to call back to mm to
> finish the memory tier initialization for the CPUless NUMA nodes. If a
> CPUless NUMA node can't get an effective adistance from
> mt_calc_adistance(), we will fall back to adding that node to
> default_dram_type.

Sounds reasonable to me.

> The other thing I want to experiment with is to call mt_calc_adistance()
> on a memory node with CPU and see what kind of adistance will be
> returned. Anyway, we need a baseline to start.

The abstract distance is calculated based on the ratio of the
performance of a node to that of the default DRAM node.
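To make that concrete, below is a rough pseudo-C sketch of the flow we
have been discussing -- a sketch under stated assumptions, not the
actual implementation.  memory_tier_init(), hmat_init(),
mt_calc_adistance(), default_dram_type and MEMTIER_ADISTANCE_DRAM are
the identifiers from this thread; struct node_perf, its fields,
assign_node_to_tier() and the exact calling conventions are made up
for illustration.

/* Hypothetical per-node performance data, as read from HMAT/CDAT. */
struct node_perf {
	unsigned int read_latency, write_latency;	/* nsec */
	unsigned int read_bandwidth, write_bandwidth;	/* MB/s */
};

/* Baseline: performance of the default (CPU-attached) DRAM nodes. */
static struct node_perf default_dram_perf;

/*
 * Abstract distance as a ratio of a node's performance to the DRAM
 * baseline: higher latency and lower bandwidth give a larger
 * adistance, i.e. a slower tier.  Something like this would sit
 * behind mt_calc_adistance().
 */
static int perf_to_adistance(struct node_perf *perf)
{
	return MEMTIER_ADISTANCE_DRAM *
		(perf->read_latency + perf->write_latency) /
		(default_dram_perf.read_latency +
		 default_dram_perf.write_latency) *
		(default_dram_perf.read_bandwidth +
		 default_dram_perf.write_bandwidth) /
		(perf->read_bandwidth + perf->write_bandwidth);
}

/*
 * Late pass, called back from hmat_init() once the ACPI tables have
 * been parsed, to place the CPUless nodes that memory_tier_init()
 * could not classify at boot.
 */
static void memory_tier_late_init(void)
{
	int nid, adist;

	for_each_node_state(nid, N_MEMORY) {
		if (node_state(nid, N_CPU))
			continue;	/* already in default_dram_type */
		/* Assume a nonzero return means no effective adistance. */
		if (mt_calc_adistance(nid, &adist))
			adist = MEMTIER_ADISTANCE_DRAM;	/* fall back */
		assign_node_to_tier(nid, adist);
	}
}

With the numbers from the QEMU example above (node 1 with twice the
latency and half the bandwidth of node 0), such a ratio would give
node 1 an adistance of roughly 4 times the DRAM baseline, placing it
in a slower tier.

--
Best Regards,
Huang, Ying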