From: Bijan Tabatabai <bijan311@gmail.com>
To: damon@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org
Cc: sj@kernel.org, akpm@linux-foundation.org, corbet@lwn.net,
	joshua.hahnjy@gmail.com, bijantabatab@micron.com, venkataravis@micron.com,
	emirakhur@micron.com, ajayjoshi@micron.com, vtavarespetr@micron.com,
	Ravi Shankar Jonnalagadda
Subject: [RFC PATCH v3 08/13] mm/damon: Move migration helpers from paddr to ops-common
Date: Wed, 2 Jul 2025 15:13:31 -0500
Message-ID: <20250702201337.5780-9-bijan311@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250702201337.5780-1-bijan311@gmail.com>
References: <20250702201337.5780-1-bijan311@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bijan Tabatabai

This patch moves the damon_pa_migrate_pages function along with its
corresponding helper functions from paddr to ops-common. The function
prefix of "damon_pa_" was also changed to just "damon_" accordingly.

This patch will allow page migration to be available to vaddr schemes
as well as paddr schemes.

Co-developed-by: Ravi Shankar Jonnalagadda
Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: Bijan Tabatabai
---
 mm/damon/ops-common.c | 120 +++++++++++++++++++++++++++++++++++++++++
 mm/damon/ops-common.h |   2 +
 mm/damon/paddr.c      | 122 +----------------------------------------
 3 files changed, 123 insertions(+), 121 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index b43620fee6bb..918158ef3d99 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -5,6 +5,7 @@
  * Author: SeongJae Park
  */
 
+#include
 #include
 #include
 #include
@@ -12,6 +13,7 @@
 #include
 #include
 
+#include "../internal.h"
 #include "ops-common.h"
 
 /*
@@ -138,3 +140,121 @@ int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 	/* Return coldness of the region */
 	return DAMOS_MAX_SCORE - hotness;
 }
+
+static unsigned int __damon_migrate_folio_list(
+		struct list_head *migrate_folios, struct pglist_data *pgdat,
+		int target_nid)
+{
+	unsigned int nr_succeeded = 0;
+	struct migration_target_control mtc = {
+		/*
+		 * Allocate from 'node', or fail quickly and quietly.
+		 * When this happens, 'page' will likely just be discarded
+		 * instead of migrated.
+		 */
+		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
+			__GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
+		.nid = target_nid,
+	};
+
+	if (pgdat->node_id == target_nid || target_nid == NUMA_NO_NODE)
+		return 0;
+
+	if (list_empty(migrate_folios))
+		return 0;
+
+	/* Migration ignores all cpuset and mempolicy settings */
+	migrate_pages(migrate_folios, alloc_migration_target, NULL,
+		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DAMON,
+		      &nr_succeeded);
+
+	return nr_succeeded;
+}
+
+static unsigned int damon_migrate_folio_list(struct list_head *folio_list,
+					     struct pglist_data *pgdat,
+					     int target_nid)
+{
+	unsigned int nr_migrated = 0;
+	struct folio *folio;
+	LIST_HEAD(ret_folios);
+	LIST_HEAD(migrate_folios);
+
+	while (!list_empty(folio_list)) {
+		struct folio *folio;
+
+		cond_resched();
+
+		folio = lru_to_folio(folio_list);
+		list_del(&folio->lru);
+
+		if (!folio_trylock(folio))
+			goto keep;
+
+		/* Relocate its contents to another node. */
+		list_add(&folio->lru, &migrate_folios);
+		folio_unlock(folio);
+		continue;
+keep:
+		list_add(&folio->lru, &ret_folios);
+	}
+	/* 'folio_list' is always empty here */
+
+	/* Migrate folios selected for migration */
+	nr_migrated += __damon_migrate_folio_list(
+			&migrate_folios, pgdat, target_nid);
+	/*
+	 * Folios that could not be migrated are still in @migrate_folios. Add
+	 * those back on @folio_list
+	 */
+	if (!list_empty(&migrate_folios))
+		list_splice_init(&migrate_folios, folio_list);
+
+	try_to_unmap_flush();
+
+	list_splice(&ret_folios, folio_list);
+
+	while (!list_empty(folio_list)) {
+		folio = lru_to_folio(folio_list);
+		list_del(&folio->lru);
+		folio_putback_lru(folio);
+	}
+
+	return nr_migrated;
+}
+
+unsigned long damon_migrate_pages(struct list_head *folio_list, int target_nid)
+{
+	int nid;
+	unsigned long nr_migrated = 0;
+	LIST_HEAD(node_folio_list);
+	unsigned int noreclaim_flag;
+
+	if (list_empty(folio_list))
+		return nr_migrated;
+
+	noreclaim_flag = memalloc_noreclaim_save();
+
+	nid = folio_nid(lru_to_folio(folio_list));
+	do {
+		struct folio *folio = lru_to_folio(folio_list);
+
+		if (nid == folio_nid(folio)) {
+			list_move(&folio->lru, &node_folio_list);
+			continue;
+		}
+
+		nr_migrated += damon_migrate_folio_list(&node_folio_list,
+							NODE_DATA(nid),
+							target_nid);
+		nid = folio_nid(lru_to_folio(folio_list));
+	} while (!list_empty(folio_list));
+
+	nr_migrated += damon_migrate_folio_list(&node_folio_list,
+						NODE_DATA(nid),
+						target_nid);
+
+	memalloc_noreclaim_restore(noreclaim_flag);
+
+	return nr_migrated;
+}
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index cc9f5da9c012..54209a7e70e6 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -16,3 +16,5 @@ int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 			struct damos *s);
 int damon_hot_score(struct damon_ctx *c, struct damon_region *r,
 			struct damos *s);
+
+unsigned long damon_migrate_pages(struct list_head *folio_list, int target_nid);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index fcab148e6865..48e3e6fed636 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -13,7 +13,6 @@
 #include
 #include
 #include
-#include
 #include
 
 #include "../internal.h"
@@ -381,125 +380,6 @@ static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
 			sz_filter_passed);
 }
 
-static unsigned int __damon_pa_migrate_folio_list(
-		struct list_head *migrate_folios, struct pglist_data *pgdat,
-		int target_nid)
-{
-	unsigned int nr_succeeded = 0;
-	struct migration_target_control mtc = {
-		/*
-		 * Allocate from 'node', or fail quickly and quietly.
-		 * When this happens, 'page' will likely just be discarded
-		 * instead of migrated.
-		 */
-		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
-			__GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
-		.nid = target_nid,
-	};
-
-	if (pgdat->node_id == target_nid || target_nid == NUMA_NO_NODE)
-		return 0;
-
-	if (list_empty(migrate_folios))
-		return 0;
-
-	/* Migration ignores all cpuset and mempolicy settings */
-	migrate_pages(migrate_folios, alloc_migration_target, NULL,
-		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DAMON,
-		      &nr_succeeded);
-
-	return nr_succeeded;
-}
-
-static unsigned int damon_pa_migrate_folio_list(struct list_head *folio_list,
-						struct pglist_data *pgdat,
-						int target_nid)
-{
-	unsigned int nr_migrated = 0;
-	struct folio *folio;
-	LIST_HEAD(ret_folios);
-	LIST_HEAD(migrate_folios);
-
-	while (!list_empty(folio_list)) {
-		struct folio *folio;
-
-		cond_resched();
-
-		folio = lru_to_folio(folio_list);
-		list_del(&folio->lru);
-
-		if (!folio_trylock(folio))
-			goto keep;
-
-		/* Relocate its contents to another node. */
-		list_add(&folio->lru, &migrate_folios);
-		folio_unlock(folio);
-		continue;
-keep:
-		list_add(&folio->lru, &ret_folios);
-	}
-	/* 'folio_list' is always empty here */
-
-	/* Migrate folios selected for migration */
-	nr_migrated += __damon_pa_migrate_folio_list(
-			&migrate_folios, pgdat, target_nid);
-	/*
-	 * Folios that could not be migrated are still in @migrate_folios. Add
-	 * those back on @folio_list
-	 */
-	if (!list_empty(&migrate_folios))
-		list_splice_init(&migrate_folios, folio_list);
-
-	try_to_unmap_flush();
-
-	list_splice(&ret_folios, folio_list);
-
-	while (!list_empty(folio_list)) {
-		folio = lru_to_folio(folio_list);
-		list_del(&folio->lru);
-		folio_putback_lru(folio);
-	}
-
-	return nr_migrated;
-}
-
-static unsigned long damon_pa_migrate_pages(struct list_head *folio_list,
-					    int target_nid)
-{
-	int nid;
-	unsigned long nr_migrated = 0;
-	LIST_HEAD(node_folio_list);
-	unsigned int noreclaim_flag;
-
-	if (list_empty(folio_list))
-		return nr_migrated;
-
-	noreclaim_flag = memalloc_noreclaim_save();
-
-	nid = folio_nid(lru_to_folio(folio_list));
-	do {
-		struct folio *folio = lru_to_folio(folio_list);
-
-		if (nid == folio_nid(folio)) {
-			list_move(&folio->lru, &node_folio_list);
-			continue;
-		}
-
-		nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
-							   NODE_DATA(nid),
-							   target_nid);
-		nid = folio_nid(lru_to_folio(folio_list));
-	} while (!list_empty(folio_list));
-
-	nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
-						   NODE_DATA(nid),
-						   target_nid);
-
-	memalloc_noreclaim_restore(noreclaim_flag);
-
-	return nr_migrated;
-}
-
 static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
 		unsigned long *sz_filter_passed)
 {
@@ -527,7 +407,7 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
 		addr += folio_size(folio);
 		folio_put(folio);
 	}
-	applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
+	applied = damon_migrate_pages(&folio_list, s->target_nid);
 	cond_resched();
 	s->last_applied = folio;
 	return applied * PAGE_SIZE;
-- 
2.43.5
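
For context, the sketch below illustrates how a vaddr-based DAMOS action
could reuse the helper made common by this patch: isolate a folio it has
already looked up onto a local list and hand the list to
damon_migrate_pages(), mirroring what the existing damon_pa_migrate() path
does. This is only a minimal sketch and not part of the patch; the function
name damon_va_migrate_folio() and the assumption that the caller already
holds a reference on an LRU folio are hypothetical, while
damon_migrate_pages() and folio_isolate_lru() are the symbols the paddr code
already uses.

/*
 * Hypothetical sketch (not part of this patch): queue one folio that a
 * vaddr scheme has already resolved and let the shared helper migrate
 * it to the requested node, as damon_pa_migrate() does for paddr.
 */
static unsigned long damon_va_migrate_folio(struct folio *folio, int target_nid)
{
	unsigned long applied = 0;
	LIST_HEAD(folio_list);

	/* Pull the folio off the LRU so migrate_pages() can move it. */
	if (!folio_isolate_lru(folio))
		return 0;
	list_add(&folio->lru, &folio_list);

	/* damon_migrate_pages() is the helper moved to ops-common here. */
	applied = damon_migrate_pages(&folio_list, target_nid);

	return applied * PAGE_SIZE;
}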