From mboxrd@z Thu Jan  1 00:00:00 1970
From: Honggyu Kim <honggyu.kim@sk.com>
To: SeongJae Park, damon@lists.linux.dev
Cc: Andrew Morton, Masami Hiramatsu, Mathieu Desnoyers, Steven Rostedt,
	Gregory Price, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, 42.hyeyoo@gmail.com,
	art.jeongseob@gmail.com, kernel_team@skhynix.com, Honggyu Kim,
	Hyeongtak Ji
Subject: [PATCH 5/8] mm/damon/paddr: introduce DAMOS_MIGRATE_COLD action for demotion
Date: Thu, 13 Jun 2024 22:17:36 +0900
Message-ID: <20240613131741.513-6-honggyu.kim@sk.com>
X-Mailer: git-send-email 2.43.0.windows.1
In-Reply-To: <20240613131741.513-1-honggyu.kim@sk.com>
References: <20240613131741.513-1-honggyu.kim@sk.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
This patch introduces the DAMOS_MIGRATE_COLD action, which is similar to
DAMOS_PAGEOUT, but migrates folios to the 'target_nid' given via sysfs
instead of swapping them out.  The 'target_nid' sysfs knob specifies the
migration target node ID.

Here is an example usage of this 'migrate_cold' action.
  $ cd /sys/kernel/mm/damon/admin/kdamonds/<N>
  $ cat contexts/<N>/schemes/<N>/action
  migrate_cold
  $ echo 2 > contexts/<N>/schemes/<N>/target_nid
  $ echo commit > state
  $ numactl -p 0 ./hot_cold 500M 600M &
  $ numastat -c -p hot_cold

  Per-node process memory usage (in MBs)
  PID             Node 0 Node 1 Node 2 Total
  --------------  ------ ------ ------ -----
  701 (hot_cold)     501      0    601  1101

Since there are some common routines with pageout, many functions share
similar logic between pageout and migrate_cold.
damon_pa_migrate_folio_list() is a minimized version of
shrink_folio_list().

Signed-off-by: Honggyu Kim
Signed-off-by: Hyeongtak Ji
Signed-off-by: SeongJae Park
---
 include/linux/damon.h    |   2 +
 mm/damon/paddr.c         | 154 +++++++++++++++++++++++++++++++++++++++
 mm/damon/sysfs-schemes.c |   1 +
 3 files changed, 157 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 21d6b69a015c..56714b6eb0d7 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -105,6 +105,7 @@ struct damon_target {
  * @DAMOS_NOHUGEPAGE:	Call ``madvise()`` for the region with MADV_NOHUGEPAGE.
  * @DAMOS_LRU_PRIO:	Prioritize the region on its LRU lists.
  * @DAMOS_LRU_DEPRIO:	Deprioritize the region on its LRU lists.
+ * @DAMOS_MIGRATE_COLD:	Migrate the regions prioritizing colder regions.
  * @DAMOS_STAT:		Do nothing but count the stat.
  * @NR_DAMOS_ACTIONS:	Total number of DAMOS actions
  *
@@ -122,6 +123,7 @@ enum damos_action {
 	DAMOS_NOHUGEPAGE,
 	DAMOS_LRU_PRIO,
 	DAMOS_LRU_DEPRIO,
+	DAMOS_MIGRATE_COLD,
 	DAMOS_STAT,		/* Do nothing but only record the stat */
 	NR_DAMOS_ACTIONS,
 };
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 18797c1b419b..882ae54af829 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -12,6 +12,9 @@
 #include <linux/pagemap.h>
 #include <linux/rmap.h>
 #include <linux/swap.h>
+#include <linux/memory-tiers.h>
+#include <linux/migrate.h>
+#include <linux/mm_inline.h>
 
 #include "../internal.h"
 #include "ops-common.h"
@@ -325,6 +328,153 @@ static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
 	return damon_pa_mark_accessed_or_deactivate(r, s, false);
 }
 
+static unsigned int __damon_pa_migrate_folio_list(
+		struct list_head *migrate_folios, struct pglist_data *pgdat,
+		int target_nid)
+{
+	unsigned int nr_succeeded;
+	nodemask_t allowed_mask = NODE_MASK_NONE;
+	struct migration_target_control mtc = {
+		/*
+		 * Allocate from 'node', or fail quickly and quietly.
+		 * When this happens, 'page' will likely just be discarded
+		 * instead of migrated.
+		 */
+		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
+			__GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
+		.nid = target_nid,
+		.nmask = &allowed_mask
+	};
+
+	if (pgdat->node_id == target_nid || target_nid == NUMA_NO_NODE)
+		return 0;
+
+	if (list_empty(migrate_folios))
+		return 0;
+
+	/* Migration ignores all cpuset and mempolicy settings */
+	migrate_pages(migrate_folios, alloc_migrate_folio, NULL,
+		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DAMON,
+		      &nr_succeeded);
+
+	return nr_succeeded;
+}
+
+static unsigned int damon_pa_migrate_folio_list(struct list_head *folio_list,
+						struct pglist_data *pgdat,
+						int target_nid)
+{
+	unsigned int nr_migrated = 0;
+	struct folio *folio;
+	LIST_HEAD(ret_folios);
+	LIST_HEAD(migrate_folios);
+
+	while (!list_empty(folio_list)) {
+		struct folio *folio;
+
+		cond_resched();
+
+		folio = lru_to_folio(folio_list);
+		list_del(&folio->lru);
+
+		if (!folio_trylock(folio))
+			goto keep;
+
+		/* Relocate its contents to another node. */
+		list_add(&folio->lru, &migrate_folios);
+		folio_unlock(folio);
+		continue;
+keep:
+		list_add(&folio->lru, &ret_folios);
+	}
+	/* 'folio_list' is always empty here */
+
+	/* Migrate folios selected for migration */
+	nr_migrated += __damon_pa_migrate_folio_list(
+			&migrate_folios, pgdat, target_nid);
+	/*
+	 * Folios that could not be migrated are still in @migrate_folios. Add
+	 * those back on @folio_list
+	 */
+	if (!list_empty(&migrate_folios))
+		list_splice_init(&migrate_folios, folio_list);
+
+	try_to_unmap_flush();
+
+	list_splice(&ret_folios, folio_list);
+
+	while (!list_empty(folio_list)) {
+		folio = lru_to_folio(folio_list);
+		list_del(&folio->lru);
+		folio_putback_lru(folio);
+	}
+
+	return nr_migrated;
+}
+
+static unsigned long damon_pa_migrate_pages(struct list_head *folio_list,
+					    int target_nid)
+{
+	int nid;
+	unsigned long nr_migrated = 0;
+	LIST_HEAD(node_folio_list);
+	unsigned int noreclaim_flag;
+
+	if (list_empty(folio_list))
+		return nr_migrated;
+
+	noreclaim_flag = memalloc_noreclaim_save();
+
+	nid = folio_nid(lru_to_folio(folio_list));
+	do {
+		struct folio *folio = lru_to_folio(folio_list);
+
+		if (nid == folio_nid(folio)) {
+			list_move(&folio->lru, &node_folio_list);
+			continue;
+		}
+
+		nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
+							   NODE_DATA(nid),
+							   target_nid);
+		nid = folio_nid(lru_to_folio(folio_list));
+	} while (!list_empty(folio_list));
+
+	nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
+						   NODE_DATA(nid),
+						   target_nid);
+
+	memalloc_noreclaim_restore(noreclaim_flag);
+
+	return nr_migrated;
+}
+
+static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s)
+{
+	unsigned long addr, applied;
+	LIST_HEAD(folio_list);
+
+	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+
+		if (!folio)
+			continue;
+
+		if (damos_pa_filter_out(s, folio))
+			goto put_folio;
+
+		if (!folio_isolate_lru(folio))
+			goto put_folio;
+		list_add(&folio->lru, &folio_list);
+put_folio:
+		folio_put(folio);
+	}
+	applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
+	cond_resched();
+	return applied * PAGE_SIZE;
+}
+
 static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
 		struct damon_target *t, struct damon_region *r,
 		struct damos *scheme)
@@ -336,6 +486,8 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
 		return damon_pa_mark_accessed(r, scheme);
 	case DAMOS_LRU_DEPRIO:
 		return damon_pa_deactivate_pages(r, scheme);
+	case DAMOS_MIGRATE_COLD:
+		return damon_pa_migrate(r, scheme);
 	case DAMOS_STAT:
 		break;
 	default:
@@ -356,6 +508,8 @@ static int damon_pa_scheme_score(struct damon_ctx *context,
 		return damon_hot_score(context, r, scheme);
 	case DAMOS_LRU_DEPRIO:
 		return damon_cold_score(context, r, scheme);
+	case DAMOS_MIGRATE_COLD:
+		return damon_cold_score(context, r, scheme);
 	default:
 		break;
 	}
diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index 0632d28b67f8..880015d5b5ea 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -1458,6 +1458,7 @@ static const char * const damon_sysfs_damos_action_strs[] = {
 	"nohugepage",
 	"lru_prio",
 	"lru_deprio",
+	"migrate_cold",
 	"stat",
 };
-- 
2.34.1