From mboxrd@z Thu Jan 1 00:00:00 1970
From: Miaohe Lin <linmiaohe@huawei.com>
To: Huang Ying <ying.huang@intel.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Feng Tang,
 Baolin Wang, Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi,
 Zi Yan, Wei Xu, Oscar Salvador, Shakeel Butt, zhongjiang-ali,
 Randy Dunlap, Johannes Weiner, Peter Zijlstra, Mel Gorman,
 Andrew Morton
Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for
 memory tiering system
Date: Tue, 1 Mar 2022 14:28:13 +0800
Message-ID: <4652446e-2089-a3c4-fbdb-321322887392@huawei.com>
In-Reply-To: <20220221084529.1052339-3-ying.huang@intel.com>
References: <20220221084529.1052339-1-ying.huang@intel.com>
 <20220221084529.1052339-3-ying.huang@intel.com>

On 2022/2/21 16:45, Huang Ying wrote:
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).
> The memory subsystem of these machines can be called a memory tiering
> system, because the performance of the different types of memory is
> usually different.
> 
> In such a system, because memory access patterns change over time,
> some pages in the slow memory may become hot globally. So in this
> patch, the NUMA balancing mechanism is enhanced to dynamically
> optimize the page placement among the different memory types
> according to hot/cold.
> 
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node. The CPUs and the fast memory
> will be put in one logical node (called the fast memory node), while
> the slow memory will be put in another (faked) logical node (called
> the slow memory node). That is, the fast memory is regarded as local
> while the slow memory is regarded as remote. So it's possible for the
> recently accessed pages in the slow memory node to be promoted to the
> fast memory node via the existing NUMA balancing mechanism.
> 
> The original NUMA balancing mechanism will stop migrating pages if
> the free memory of the target node falls below the high watermark.
> This is a reasonable policy if there's only one memory type. But it
> makes the original NUMA balancing mechanism almost useless for
> optimizing page placement among different memory types. Details are
> as follows.
> 
> It is common for the working-set size of the workload to be larger
> than the size of the fast memory nodes; otherwise, it would be
> unnecessary to use the slow memory at all. So, there are almost never
> enough free pages in the fast memory nodes, and the globally hot
> pages in the slow memory node cannot be promoted to the fast memory
> node. To solve the issue, we have two choices:
> 
> a. Ignore the free pages watermark checking when promoting hot pages
>    from the slow memory node to the fast memory node. This will
>    create some memory pressure in the fast memory node and thus
>    trigger memory reclaim, so that the cold pages in the fast memory
>    node will be demoted to the slow memory node.
> 
> b. Make kswapd of the fast memory node reclaim pages until the free
>    pages are a little above the high watermark (namely, the promo
>    watermark). Then, if the free pages of the fast memory node reach
>    the high watermark and some hot pages need to be promoted, kswapd
>    of the fast memory node will be woken up to demote more cold pages
>    in the fast memory node to the slow memory node. This will free
>    some extra space in the fast memory node, so the hot pages in the
>    slow memory node can be promoted to the fast memory node.
> 
> Choice "a" may create high memory pressure in the fast memory node.
> If the memory pressure of the workload is high, the memory pressure
> may become so high that the memory allocation latency of the workload
> is affected, e.g. direct reclaim may be triggered.
> 
> Choice "b" works much better in this respect. If the memory pressure
> of the workload is high, the hot page promotion will stop earlier
> because its allocation watermark is higher than that of the normal
> memory allocation. So in this patch, choice "b" is implemented. A new
> zone watermark (WMARK_PROMO) is added, which is larger than the high
> watermark and can be controlled via watermark_scale_factor.

Many thanks for your patch. It looks good to me, but I have a
question: WMARK_PROMO is only used inside pgdat_balanced() when
NUMA_BALANCING_MEMORY_TIERING is set, so the allocation watermark
seems to be the same as for normal memory allocation. How should I
understand the above sentence? Am I missing something? Many
thanks. :)
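To illustrate what I mean, here is a minimal user-space sketch of my
reading of that check (my own code with made-up page counts, not the
kernel code; compare with the pgdat_balanced() hunk below):

        /*
         * Sketch: when NUMA_BALANCING_MEMORY_TIERING is set, the
         * balance check compares free pages against WMARK_PROMO
         * instead of WMARK_HIGH, so kswapd keeps reclaiming a bit
         * longer and leaves headroom for promoted pages.
         */
        #include <stdio.h>

        #define NUMA_BALANCING_MEMORY_TIERING 0x2

        static int balanced(unsigned long free_pages, unsigned long high,
                            unsigned long promo, int mode)
        {
                unsigned long mark = (mode & NUMA_BALANCING_MEMORY_TIERING) ?
                                     promo : high;

                return free_pages >= mark;
        }

        int main(void)
        {
                /* hypothetical page counts: high = 1536, promo = 1792 */
                printf("%d\n", balanced(1600, 1536, 1792, 0)); /* 1: balanced */
                printf("%d\n", balanced(1600, 1536, 1792, 2)); /* 0: keep reclaiming */
                return 0;
        }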
> In addition to the original page placement optimization among
> sockets, the NUMA balancing mechanism is extended to optimize page
> placement according to hot/cold among different memory types. So the
> sysctl user space interface (numa_balancing) is extended in a
> backward compatible way as follows, so that users can enable/disable
> these functionalities individually.
> 
> The sysctl is converted from a Boolean value to a bit field. The
> definition of the flags is:
> 
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
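One usage note from my side (a hedged sketch of mine, not part of the
patch): since the modes are flag bits, writing the OR of both values,
i.e. 3, to /proc/sys/kernel/numa_balancing should enable both
behaviors at once. Something like the following, which needs root
privilege to actually write:

        /*
         * User-space sketch: set the numa_balancing mode bit field.
         * NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING == 3
         * enables both cross-socket balancing and tiering promotion.
         */
        #include <stdio.h>

        static int set_numa_balancing_mode(int mode)
        {
                FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

                if (!f)
                        return -1;
                fprintf(f, "%d\n", mode);
                return fclose(f);
        }

        int main(void)
        {
                return set_numa_balancing_mode(0x1 | 0x2) ? 1 : 0;
        }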
> We have tested the patch with the pmbench memory accessing benchmark
> with an 80:20 read/write ratio and a Gauss access address
> distribution on a 2-socket Intel server with Optane DC Persistent
> Memory. The test results show that the pmbench score can improve up
> to 95.9%.
> 
> Thanks to Andrew Morton for helping to fix the document format
> error.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Baolin Wang
> Reviewed-by: Baolin Wang
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Peter Zijlstra
> Cc: Dave Hansen
> Cc: Yang Shi
> Cc: Zi Yan
> Cc: Wei Xu
> Cc: Oscar Salvador
> Cc: Shakeel Butt
> Cc: zhongjiang-ali
> Cc: Randy Dunlap
> Cc: Johannes Weiner
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
>  include/linux/mmzone.h                      |  1 +
>  include/linux/sched/sysctl.h                | 10 +++++++
>  kernel/sched/core.c                         | 21 ++++++++++++---
>  kernel/sysctl.c                             |  2 +-
>  mm/migrate.c                                | 16 ++++++++++--
>  mm/page_alloc.c                             |  3 ++-
>  mm/vmscan.c                                 |  6 ++++-
>  8 files changed, 70 insertions(+), 18 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index d359bcfadd39..fdfd2b684822 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>  numa_balancing
>  ==============
>  
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configures automatic page fault based NUMA memory
> +balancing. Memory is moved automatically to nodes that access it often.
> +The value to set can be the result of ORing the following:
>  
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0 NUMA_BALANCING_DISABLED
> +1 NUMA_BALANCING_NORMAL
> +2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accessing. On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>  
>  The unmapping of pages and trapping faults incur additional overhead that
>  ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>  
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory. This is implemented based on
> +unmapping and page fault too.
>  
>  numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
>  ===============================================================================================================================
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 44bd054ca12b..06bc55db19bf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -342,6 +342,7 @@ enum zone_watermarks {
>  	WMARK_MIN,
>  	WMARK_LOW,
>  	WMARK_HIGH,
> +	WMARK_PROMO,
>  	NR_WMARK
>  };
>  
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index c19dd5a2c05c..b5eec8854c5a 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -23,6 +23,16 @@ enum sched_tunable_scaling {
>  	SCHED_TUNABLESCALING_END,
>  };
>  
> +#define NUMA_BALANCING_DISABLED		0x0
> +#define NUMA_BALANCING_NORMAL		0x1
> +#define NUMA_BALANCING_MEMORY_TIERING	0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode	0
> +#endif
> +
>  /*
>   * control realtime throttling:
>   *
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fcf0c180617c..c25348e9ae3a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4280,7 +4280,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>  
>  #ifdef CONFIG_NUMA_BALANCING
>  
> -void set_numabalancing_state(bool enabled)
> +int sysctl_numa_balancing_mode;
> +
> +static void __set_numabalancing_state(bool enabled)
>  {
>  	if (enabled)
>  		static_branch_enable(&sched_numa_balancing);
> @@ -4288,13 +4290,22 @@ void set_numabalancing_state(bool enabled)
>  		static_branch_disable(&sched_numa_balancing);
>  }
>  
> +void set_numabalancing_state(bool enabled)
> +{
> +	if (enabled)
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
> +	else
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
> +	__set_numabalancing_state(enabled);
> +}
> +
>  #ifdef CONFIG_PROC_SYSCTL
>  int sysctl_numa_balancing(struct ctl_table *table, int write,
>  			  void *buffer, size_t *lenp, loff_t *ppos)
>  {
>  	struct ctl_table t;
>  	int err;
> -	int state = static_branch_likely(&sched_numa_balancing);
> +	int state = sysctl_numa_balancing_mode;
>  
>  	if (write && !capable(CAP_SYS_ADMIN))
>  		return -EPERM;
> @@ -4304,8 +4315,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
>  	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
>  	if (err < 0)
>  		return err;
> -	if (write)
> -		set_numabalancing_state(state);
> +	if (write) {
> +		sysctl_numa_balancing_mode = state;
> +		__set_numabalancing_state(state);
> +	}
>  	return err;
>  }
>  #endif
> 
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 5ae443b2882e..c90a564af720 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
>  		.mode		= 0644,
>  		.proc_handler	= sysctl_numa_balancing,
>  		.extra1		= SYSCTL_ZERO,
> -		.extra2		= SYSCTL_ONE,
> +		.extra2		= SYSCTL_FOUR,
>  	},
>  #endif /* CONFIG_NUMA_BALANCING */
>  	{
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cdeaf01e601a..08ca9b9b142e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,6 +51,7 @@
>  #include
>  #include
>  #include
> +#include <linux/sched/sysctl.h>
>  
>  #include
>  
> @@ -2034,16 +2035,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>  	int page_lru;
>  	int nr_pages = thp_nr_pages(page);
> +	int order = compound_order(page);
>  
> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
>  	/* Do not migrate THP mapped by multiple processes */
>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +		int z;
> +
> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> +			return 0;
> +		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +			if (populated_zone(pgdat->node_zones + z))
> +				break;
> +		}
> +		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>  		return 0;
> +	}
>  
>  	if (isolate_lru_page(page))
>  		return 0;
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..295b8f1fc31d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8474,7 +8474,8 @@ static void __setup_per_zone_wmarks(void)
>  
>  		zone->watermark_boost = 0;
>  		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
> -		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
> +		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
>  
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
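If I read this hunk correctly, the watermarks now form an evenly
spaced ladder, min < low < high < promo, each step being tmp (which is
derived from watermark_scale_factor), and WMARK_HIGH keeps the same
value as before. A quick sketch with hypothetical numbers of my own:

        /*
         * Sketch of the resulting watermark ladder with made-up
         * values: min_wmark = 1024 pages, tmp = 256 pages.
         */
        #include <stdio.h>

        int main(void)
        {
                unsigned long min = 1024, tmp = 256;
                unsigned long low = min + tmp;    /* 1280, unchanged */
                unsigned long high = low + tmp;   /* 1536, same value as min + tmp * 2 */
                unsigned long promo = high + tmp; /* 1792, the new watermark */

                printf("min=%lu low=%lu high=%lu promo=%lu\n",
                       min, low, high, promo);
                return 0;
        }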
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6dd8f455bb82..199b8aadbdd6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>  
>  #include
>  #include
> +#include <linux/sched/sysctl.h>
>  
>  #include "internal.h"
>  
> @@ -3988,7 +3989,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  		if (!managed_zone(zone))
>  			continue;
>  
> -		mark = high_wmark_pages(zone);
> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
> +			mark = wmark_pages(zone, WMARK_PROMO);
> +		else
> +			mark = high_wmark_pages(zone);
>  		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>  			return true;
>  	}
> 
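And to close the loop on the sysctl interface, a companion to the
setter sketch above (again my own user-space code, not part of the
patch): reading the file back and decoding the mode bits that the
patch defines.

        /*
         * Sketch: read /proc/sys/kernel/numa_balancing and decode
         * the bit field (0x1 normal balancing, 0x2 memory tiering).
         */
        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/kernel/numa_balancing", "r");
                int mode = 0;

                if (!f)
                        return 1;
                if (fscanf(f, "%d", &mode) != 1) {
                        fclose(f);
                        return 1;
                }
                fclose(f);
                printf("normal balancing:  %s\n", (mode & 0x1) ? "on" : "off");
                printf("tiering promotion: %s\n", (mode & 0x2) ? "on" : "off");
                return 0;
        }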