From: Wei Xu <weixugc@google.com>
Date: Tue, 12 Jul 2022 23:38:21 -0700
Subject: Re: [PATCH v8 00/12] mm/demotion: Memory tiers and demotion
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi, Aneesh Kumar K V, Linux MM, Andrew Morton, Davidlohr Bueso, Tim C Chen, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, Johannes Weiner, jvgediya.oss@gmail.com
In-Reply-To: <87a69d65ls.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20220704070612.299585-1-aneesh.kumar@linux.ibm.com> <87r130b2rh.fsf@yhuang6-desk2.ccr.corp.intel.com> <60e97fa2-0b89-cf42-5307-5a57c956f741@linux.ibm.com> <87r12r5dwu.fsf@yhuang6-desk2.ccr.corp.intel.com> <0a55e48a-b4b7-4477-a72f-73644b5fc4cb@linux.ibm.com> <80e5308f-bd83-609e-0f23-33cb89fe9141@linux.ibm.com> <87a69d65ls.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Tue, Jul 12, 2022 at 8:42 PM Huang, Ying <ying.huang@intel.com> wrote:
> Yang Shi writes:
>
> > On Mon, Jul 11, 2022 at 10:10 PM Aneesh Kumar K V wrote:
> >>
> >> On 7/12/22 10:12 AM, Aneesh Kumar K V wrote:
> >> > On 7/12/22 6:46 AM, Huang, Ying wrote:
> >> >> Aneesh Kumar K V writes:
> >> >>
> >> >>> On 7/5/22 9:59 AM, Huang, Ying wrote:
> >> >>>> Hi, Aneesh,
> >> >>>>
> >> >>>> "Aneesh Kumar K.V" writes:
> >> >>>>
> >> >>>>> The current kernel has basic memory tiering support: Inactive
> >> >>>>> pages on a higher tier NUMA node can be migrated (demoted) to a lower
> >> >>>>> tier NUMA node to make room for new allocations on the higher tier
> >> >>>>> NUMA node.  Frequently accessed pages on a lower tier NUMA node can be
> >> >>>>> migrated (promoted) to a higher tier NUMA node to improve
> >> >>>>> performance.
> >> >>>>>
> >> >>>>> In the current kernel, memory tiers are defined implicitly via a
> >> >>>>> demotion path relationship between NUMA nodes, which is created during
> >> >>>>> kernel initialization and updated when a NUMA node is hot-added or
> >> >>>>> hot-removed.  The current implementation puts all nodes with a CPU into
> >> >>>>> the top tier, and builds the tier hierarchy tier-by-tier by establishing
> >> >>>>> the per-node demotion targets based on the distances between nodes.
> >> >>>>>
> >> >>>>> The current memory tier kernel interface needs to be improved for
> >> >>>>> several important use cases:
> >> >>>>>
> >> >>>>> * The current tier initialization code always initializes
> >> >>>>>   each memory-only NUMA node into a lower tier.  But a memory-only
> >> >>>>>   NUMA node may have a high-performance memory device (e.g. a DRAM
> >> >>>>>   device attached via CXL.mem or a DRAM-backed memory-only node on
> >> >>>>>   a virtual machine) and should be put into a higher tier.
> >> >>>>>
> >> >>>>> * The current tier hierarchy always puts CPU nodes into the top
> >> >>>>>   tier.  But on a system with HBM (e.g. GPU memory) devices, these
> >> >>>>>   memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
> >> >>>>>   with CPUs are better placed into the next lower tier.
> >> >>>>>
> >> >>>>> * Also, because the current tier hierarchy always puts CPU nodes
> >> >>>>>   into the top tier, when a CPU hot-add (or hot-remove) turns a
> >> >>>>>   memory node from CPU-less into a CPU node (or vice versa), the
> >> >>>>>   memory tier hierarchy gets changed, even though no memory node
> >> >>>>>   is added or removed.  This can make the tier hierarchy unstable
> >> >>>>>   and make it difficult to support tier-based memory accounting.
> >> >>>>>
> >> >>>>> * A higher tier node can only be demoted to selected nodes on the
> >> >>>>>   next lower tier as defined by the demotion path, not to any other
> >> >>>>>   node from any lower tier.  This strict, hard-coded demotion order
> >> >>>>>   does not work in all use cases (e.g. some use cases may want to
> >> >>>>>   allow cross-socket demotion to another node in the same demotion
> >> >>>>>   tier as a fallback when the preferred demotion node is out of
> >> >>>>>   space), and has resulted in the feature request for an interface to
> >> >>>>>   override the system-wide, per-node demotion order from
> >> >>>>>   userspace.  This demotion order is also inconsistent with the page
> >> >>>>>   allocation fallback order when all the nodes in a higher tier are
> >> >>>>>   out of space: the page allocation can fall back to any node from
> >> >>>>>   any lower tier, whereas the demotion order doesn't allow that.
> >> >>>>>
> >> >>>>> * There are no interfaces for userspace to learn about the memory
> >> >>>>>   tier hierarchy in order to optimize its memory allocations.
> >> >>>>>
> >> >>>>> This patch series makes the creation of memory tiers explicit, under
> >> >>>>> the control of userspace or device drivers.
> >> >>>>>
> >> >>>>> Memory Tier Initialization
> >> >>>>> ==========================
> >> >>>>>
> >> >>>>> By default, all memory nodes are assigned to the default tier with
> >> >>>>> tier ID value 200.
> >> >>>>>
> >> >>>>> A device driver can move its memory nodes up or down from the default
> >> >>>>> tier.  For example, PMEM can move its memory nodes below the
> >> >>>>> default tier, whereas GPU can move its memory nodes above the
> >> >>>>> default tier.
> >> >>>>>
> >> >>>>> The kernel initialization code decides which exact tier a memory node
> >> >>>>> should be assigned to, based on the requests from the device drivers
> >> >>>>> as well as the memory device hardware information provided by the
> >> >>>>> firmware.
> >> >>>>>
> >> >>>>> Hot-adding/removing CPUs doesn't affect the memory tier hierarchy.
> >> >>>>>
> >> >>>>> Memory Allocation for Demotion
> >> >>>>> ==============================
> >> >>>>> This patch series keeps the demotion target page allocation logic the
> >> >>>>> same.  The demotion page allocation picks the closest NUMA node in the
> >> >>>>> next lower tier to the current NUMA node that is allocating pages.
> >> >>>>>
> >> >>>>> This will later be improved to use the same page allocation strategy
> >> >>>>> with a fallback list.
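The demotion allocation policy quoted above (pick the closest NUMA node in the next lower tier) can be sketched in a few lines. This is an illustrative model only, not the kernel implementation; the function name, tier IDs, and NUMA distances below are all made up for the example:

```python
def pick_demotion_target(node, tiers, distance):
    """Pick the closest node in the next lower memory tier.

    tiers: dict mapping tier ID -> set of node IDs (larger ID = higher tier)
    distance: dict mapping (src, dst) -> NUMA distance
    Returns the chosen node ID, or None if there is no lower tier.
    """
    # Find the tier that the source node currently belongs to.
    current = next(t for t, nodes in tiers.items() if node in nodes)
    # Candidate tiers are the non-empty tiers below it; the demotion
    # target tier is the highest of those (the "next lower" tier).
    lower = [t for t, nodes in tiers.items() if t < current and nodes]
    if not lower:
        return None  # already in the lowest tier; nothing to demote to
    next_lower = max(lower)
    # Among the nodes in that tier, pick the one closest to the source.
    return min(tiers[next_lower], key=lambda n: distance[(node, n)])


# Hypothetical topology: nodes 0-1 are DRAM (tier 200), nodes 2-3 are
# PMEM (tier 100); each DRAM node has one near and one far PMEM node.
tiers = {200: {0, 1}, 100: {2, 3}}
distance = {(0, 2): 17, (0, 3): 28, (1, 2): 28, (1, 3): 17}
print(pick_demotion_target(0, tiers, distance))  # prints 2 (the closer PMEM node)
```

Note that demotion never crosses more than one tier boundary in this policy, matching the "next lower tier" wording above; the fallback-list improvement mentioned in the cover letter would relax that.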
> >> >>>>>
> >> >>>>> Sysfs Interface:
> >> >>>>> ----------------
> >> >>>>> Listing the current memory tiers:
> >> >>>>>
> >> >>>>> :/sys/devices/system/memtier$ ls
> >> >>>>> default_tier  max_tier  memtier1  power  uevent
> >> >>>>> :/sys/devices/system/memtier$ cat default_tier
> >> >>>>> memtier200
> >> >>>>> :/sys/devices/system/memtier$ cat max_tier
> >> >>>>> 400
> >> >>>>> :/sys/devices/system/memtier$
> >> >>>>>
> >> >>>>> Per-node memory tier details:
> >> >>>>>
> >> >>>>> For a CPU-only NUMA node:
> >> >>>>>
> >> >>>>> :/sys/devices/system/node# cat node0/memtier
> >> >>>>> :/sys/devices/system/node# echo 1 > node0/memtier
> >> >>>>> :/sys/devices/system/node# cat node0/memtier
> >> >>>>> :/sys/devices/system/node#
> >> >>>>>
> >> >>>>> For a NUMA node with memory:
> >> >>>>> :/sys/devices/system/node# cat node1/memtier
> >> >>>>> 1
> >> >>>>> :/sys/devices/system/node# ls ../memtier/
> >> >>>>> default_tier  max_tier  memtier1  power  uevent
> >> >>>>> :/sys/devices/system/node# echo 2 > node1/memtier
> >> >>>>> :/sys/devices/system/node#
> >> >>>>> :/sys/devices/system/node# ls ../memtier/
> >> >>>>> default_tier  max_tier  memtier1  memtier2  power  uevent
> >> >>>>> :/sys/devices/system/node# cat node1/memtier
> >> >>>>> 2
> >> >>>>> :/sys/devices/system/node#
> >> >>>>>
> >> >>>>> Removing a memory tier:
> >> >>>>> :/sys/devices/system/node# cat node1/memtier
> >> >>>>> 2
> >> >>>>> :/sys/devices/system/node# echo 1 > node1/memtier
> >> >>>>
> >> >>>> Thanks a lot for your patchset.
> >> >>>>
> >> >>>> Per my understanding, we haven't reached consensus on
> >> >>>>
> >> >>>> - how to create the default memory tiers in the kernel (via abstract
> >> >>>>   distance provided by drivers?  Or use SLIT as the first step?)
> >> >>>>
> >> >>>> - how to override the default memory tiers from user space
> >> >>>>
> >> >>>> As in the following thread and email,
> >> >>>>
> >> >>>> https://lore.kernel.org/lkml/YqjZyP11O0yCMmiO@cmpxchg.org/
> >> >>>>
> >> >>>> I think we need to finalize that first.
> >> >>>
> >> >>> I did list the proposal here:
> >> >>>
> >> >>> https://lore.kernel.org/linux-mm/7b72ccf4-f4ae-cb4e-f411-74d055482026@linux.ibm.com
> >> >>>
> >> >>> So both the kernel default and driver-specific default tiers now become
> >> >>> kernel parameters that can be updated if the user wants a different
> >> >>> tier topology.
> >> >>>
> >> >>> All memory that is not managed by a driver gets added to
> >> >>> default_memory_tier, which has a default value of 200.
> >> >>>
> >> >>> For now, the only driver that is updated is dax kmem, which adds the
> >> >>> memory it manages to memory tier 100.  Later, as we learn more about
> >> >>> the device attributes (HMAT or something similar) that we might want to
> >> >>> use to control the tier assignment, this can become a range of memory
> >> >>> tiers.
> >> >>>
> >> >>> Based on the above, I guess we can merge what is posted in this series
> >> >>> and later fine-tune/update the memory tier assignment based on device
> >> >>> attributes.
> >> >>
> >> >> Sorry for the late reply.
> >> >>
> >> >> As the first step, it may be better to skip the parts we haven't
> >> >> reached consensus on yet, for example, the user space interface to
> >> >> override the default memory tiers.  And we can use 0, 1, 2 as the
> >> >> default memory tier IDs.  We can refine/revise the in-kernel
> >> >> implementation, but we cannot change the user space ABI.
> >> >>
> >> >
> >> > Can you help list the use cases that would be broken by using tier IDs
> >> > as outlined in this series?
> >> > One of the details mentioned earlier was the need to track top-tier
> >> > memory usage in a memcg, and IIUC the patchset posted at
> >> > https://lore.kernel.org/linux-mm/cover.1655242024.git.tim.c.chen@linux.intel.com
> >> > can work with tier IDs too.  Let me know if you think otherwise.  So at
> >> > this point I am not sure which area we are still debating w.r.t. the
> >> > userspace interface.
> >> >
> >> > I will still keep the default tier IDs with a large range between them.
> >> > That will allow us to go back to a tier-ID-based demotion order if we
> >> > can.  That is much simpler than using tier ID and rank together.  If we
> >> > still want to go back to the rank-based approach, the tier ID value
> >> > won't have much meaning anyway.
> >> >
> >> > Any feedback on patches 1 - 5, so that I can request Andrew to merge
> >> > them?
> >> >
> >>
> >> Looking at this again, I guess we just need to drop patch 7,
> >> "mm/demotion: Add per node memory tier attribute to sysfs"?
> >>
> >> We do agree to use the device model to expose memory tiers to userspace,
> >> so patch 6 can still be included.  It also exposes max_tier, default_tier,
> >> and the node list of a memory tier.  All these are useful and agreed
> >> upon.  Hence patch 6 can be merged?
> >>
> >> Patches 8 - 10 are done based on requests from others and are independent
> >> of how memory tiers are exposed/created from userspace.  Hence they can
> >> be merged?
> >>
> >> If you agree, I can rebase the series moving patches 7, 11, and 12 to the
> >> end of the series so that we can skip merging them based on what we
> >> conclude w.r.t. the usage of rank.
> >
> > I think the most controversial part is the user visible interfaces so
> > far.  And IIUC the series could be split roughly into two parts: patches
> > 1 - 5 and the others.  Patches 1 - 5 added the explicit memory tier
> > support and fixed the issue reported by Jagdish.  I think we are on the
> > same page for this part.
> > But I haven't seen any thorough review on
> > those patches yet, since we got distracted by spending most of the time
> > discussing the user visible interfaces.
> >
> > So would it help to move things forward to submit patches 1 - 5 as a
> > standalone series to get a thorough review and then get merged?
>
> Yes.  I think this is a good idea.  We can discuss the in-kernel
> implementation (without the user space interface) in detail and try to
> get it merged.
>
> And we can continue our discussion of the user space interface in a
> separate thread.
>
> Best Regards,
> Huang, Ying

I also agree that it is a good idea to split this patch series into the
kernel and userspace parts.  The current sysfs interface provides more
dynamic memtiers than I had expected.  Let's have more discussions on that
after the kernel space changes are finalized.

Wei