From mboxrd@z Thu Jan 1 00:00:00 1970
From: Huang Ying <ying.huang@intel.com>
To: Peter Zijlstra, Mel Gorman, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt
Subject: [PATCH -V2 3/3] memory tiering: adjust hot threshold automatically
Date: Tue, 26 Apr 2022 16:51:05 +0800
Message-Id: <20220426085105.60822-4-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220426085105.60822-1-ying.huang@intel.com>
References: <20220426085105.60822-1-ying.huang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
The promotion hot threshold is workload and system configuration dependent, so this patch implements a method to adjust the hot threshold automatically.  The basic idea is to control the number of candidate promotion pages so that it matches the promotion rate limit.  If the hint page fault latency of a page is less than the hot threshold, we will try to promote the page; such a page is called a candidate promotion page.  If the number of candidate promotion pages in a statistics interval is much larger than the promotion rate limit, the hot threshold will be decreased to reduce the number of candidate promotion pages; otherwise, the hot threshold will be increased.

For the above method to work, the total number of pages checked (that is, the pages on which hint page faults occur) and the hot/cold distribution need to be stable within each statistics interval.  Because the page tables are scanned linearly in NUMA balancing, but the hot/cold distribution usually isn't uniform along the address space, the statistics interval should be larger than the NUMA balancing scan period.  So in this patch, the max scan period is used as the statistics interval, and it works well in our tests.
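To make the feedback loop concrete, here is a small standalone sketch (editorial illustration, not part of the patch; adjust_threshold(), umax(), umin(), and the numbers in main() are hypothetical) that mirrors the arithmetic of numa_promotion_adjust_threshold() in the diff below:

    #include <stdio.h>

    #define NUMA_MIGRATION_ADJUST_STEPS 16

    static unsigned int umax(unsigned int a, unsigned int b) { return a > b ? a : b; }
    static unsigned int umin(unsigned int a, unsigned int b) { return a < b ? a : b; }

    /*
     * One adjustment step of the hot threshold:
     * th        - current hot threshold in ms (0 means "not yet initialized")
     * ref_th    - user-configured hot threshold in ms
     * diff_cand - candidate promotion pages seen in the last interval
     * ref_cand  - candidate pages the rate limit allows per interval
     */
    static unsigned int adjust_threshold(unsigned int th, unsigned int ref_th,
                                         unsigned long diff_cand,
                                         unsigned long ref_cand)
    {
            /* one step is 1/16 of the threshold range [0, 2 * ref_th] */
            unsigned int unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS;

            if (!th)
                    th = ref_th;
            if (diff_cand > ref_cand * 11 / 10)        /* >10% over budget */
                    th = umax(th - unit_th, unit_th);
            else if (diff_cand < ref_cand * 9 / 10)    /* >10% under budget */
                    th = umin(th + unit_th, ref_th * 2);
            return th;
    }

    int main(void)
    {
            /* hypothetical numbers: 1000 ms threshold, 16384 pages/interval
             * allowed by the rate limit, 40000 candidates actually seen */
            printf("%u\n", adjust_threshold(1000, 1000, 40000, 16384));  /* 875 */
            return 0;
    }

In other words, the threshold moves in steps of ref_th / 8, clamped to [ref_th / 8, 2 * ref_th]: it drops when the interval's candidate count overshoots the rate-limit budget by more than 10% and rises when it undershoots by more than 10%.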
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  5 +++++
 kernel/sched/core.c    | 15 ++++++++++++++
 kernel/sched/fair.c    | 46 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f2887b1c9b0b..d542b03b9d5c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -920,6 +920,11 @@ typedef struct pglist_data {
 	unsigned long numa_nr_candidate; /* number of promote candidate pages at
 					  * rate limit start time */
 	unsigned int numa_ts;		/* promote rate limit start time in ms */
+	/* promote threshold adjusting start time in ms */
+	unsigned int numa_threshold_ts;
+	unsigned int numa_threshold;	/* promote threshold in ms */
+	/* number of promote candidate pages at numa_threshold_ts */
+	unsigned long numa_threshold_nr_candidate;
 #endif
 
 	/* Fields commonly accessed by the page reclaim scanner */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..671eef0c6a21 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4364,6 +4364,18 @@ void set_numabalancing_state(bool enabled)
 }
 
 #ifdef CONFIG_PROC_SYSCTL
+static void reset_memory_tiering(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		pgdat->numa_threshold = 0;
+		pgdat->numa_threshold_nr_candidate =
+			node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		pgdat->numa_threshold_ts = jiffies_to_msecs(jiffies);
+	}
+}
+
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -4380,6 +4392,9 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	if (err < 0)
 		return err;
 	if (write) {
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    (state & NUMA_BALANCING_MEMORY_TIERING))
+			reset_memory_tiering();
 		sysctl_numa_balancing_mode = state;
 		__set_numabalancing_state(state);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2975c1cbdb60..e8ba1e977708 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1491,6 +1491,35 @@ static bool numa_promotion_rate_limit(struct pglist_data *pgdat,
 	return false;
 }
 
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_promotion_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned int ref_th)
+{
+	unsigned int now, last_th_ts, th_period, unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	now = jiffies_to_msecs(jiffies);
+	th_period = sysctl_numa_balancing_scan_period_max;
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now - last_th_ts > th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / MSEC_PER_SEC;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th * 2);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1505,19 +1534,26 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit;
+		unsigned int latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
-		if (pgdat_free_space_enough(pgdat))
+		if (pgdat_free_space_enough(pgdat)) {
+			/* workload changed, reset hot threshold */
+			pgdat->numa_threshold = 0;
 			return true;
+		}
+
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
+			(20 - PAGE_SHIFT);
+		numa_promotion_adjust_threshold(pgdat, rate_limit, def_th);
 
-		th = sysctl_numa_balancing_hot_threshold;
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency >= th)
 			return false;
 
-		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
-			(20 - PAGE_SHIFT);
 		return !numa_promotion_rate_limit(pgdat, rate_limit,
 						  thp_nr_pages(page));
 	}
-- 
2.30.2
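A note on the cmpxchg() in numa_promotion_adjust_threshold() above: it makes the adjustment run at most once per statistics interval even when many CPUs take hint page faults concurrently, because only the CPU that successfully swaps numa_threshold_ts from the stale timestamp to `now` performs the update; the losers simply keep using the current threshold. A minimal userspace analogue of that election (a hedged sketch, not kernel code; threshold_ts and try_claim_interval() are illustrative names, using the GCC/Clang __sync builtin):

    #include <stdio.h>

    /* stand-in for pgdat->numa_threshold_ts */
    static unsigned int threshold_ts;

    /*
     * Returns nonzero if the caller won the race to adjust the threshold
     * for this interval, mimicking:
     *     cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts
     */
    static int try_claim_interval(unsigned int last, unsigned int now)
    {
            return __sync_val_compare_and_swap(&threshold_ts, last, now) == last;
    }

    int main(void)
    {
            /* both callers read last == 0; only the first swap succeeds */
            printf("%d\n", try_claim_interval(0, 100));   /* 1 */
            printf("%d\n", try_claim_interval(0, 100));   /* 0 */
            return 0;
    }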