From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bijan Tabatabai <bijan311@gmail.com>
To: damon@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org
Cc: sj@kernel.org, akpm@linux-foundation.org, corbet@lwn.net,
	joshua.hahnjy@gmail.com, bijantabatab@micron.com, venkataravis@micron.com,
	emirakhur@micron.com, ajayjoshi@micron.com, vtavarespetr@micron.com,
	Ravi Shankar Jonnalagadda
Subject: [RFC PATCH v3 12/13] mm/damon: Move folio filtering from paddr to ops-common
Date: Wed, 2 Jul 2025 15:13:35 -0500
Message-ID: <20250702201337.5780-13-bijan311@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250702201337.5780-1-bijan311@gmail.com>
References: <20250702201337.5780-1-bijan311@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bijan Tabatabai

This patch moves damos_pa_filter_match and the functions it calls to
ops-common, renaming it to damos_folio_filter_match. Doing so allows us
to share the filtering logic for the vaddr version of the
migrate_{hot,cold} schemes.
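For reviewers, a minimal sketch of how a vaddr-side user could apply the
relocated helper, mirroring damos_pa_filter_out() below. The helper name
damos_va_filter_out() is illustrative only and not part of this patch;
the actual vaddr caller is introduced separately in this series:

    /* Illustrative sketch, not part of this patch. */
    static bool damos_va_filter_out(struct damos *scheme, struct folio *folio)
    {
    	struct damos_filter *filter;

    	if (scheme->core_filters_allowed)
    		return false;

    	damos_for_each_ops_filter(filter, scheme) {
    		/* shared helper moved to ops-common by this patch */
    		if (damos_folio_filter_match(filter, folio))
    			return !filter->allow;
    	}
    	return scheme->ops_filters_default_reject;
    }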
Co-developed-by: Ravi Shankar Jonnalagadda
Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: Bijan Tabatabai
---
 mm/damon/ops-common.c | 150 +++++++++++++++++++++++++++++++++++++++++
 mm/damon/ops-common.h |   3 +
 mm/damon/paddr.c      | 153 +-----------------------------------------
 3 files changed, 154 insertions(+), 152 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index 918158ef3d99..6a9797d1d7ff 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -141,6 +141,156 @@ int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 	return DAMOS_MAX_SCORE - hotness;
 }
 
+static bool damon_folio_mkold_one(struct folio *folio,
+		struct vm_area_struct *vma, unsigned long addr, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
+
+	while (page_vma_mapped_walk(&pvmw)) {
+		addr = pvmw.address;
+		if (pvmw.pte)
+			damon_ptep_mkold(pvmw.pte, vma, addr);
+		else
+			damon_pmdp_mkold(pvmw.pmd, vma, addr);
+	}
+	return true;
+}
+
+void damon_folio_mkold(struct folio *folio)
+{
+	struct rmap_walk_control rwc = {
+		.rmap_one = damon_folio_mkold_one,
+		.anon_lock = folio_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
+		folio_set_idle(folio);
+		return;
+	}
+
+	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
+	if (need_lock && !folio_trylock(folio))
+		return;
+
+	rmap_walk(folio, &rwc);
+
+	if (need_lock)
+		folio_unlock(folio);
+
+}
+
+static bool damon_folio_young_one(struct folio *folio,
+		struct vm_area_struct *vma, unsigned long addr, void *arg)
+{
+	bool *accessed = arg;
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
+	pte_t pte;
+
+	*accessed = false;
+	while (page_vma_mapped_walk(&pvmw)) {
+		addr = pvmw.address;
+		if (pvmw.pte) {
+			pte = ptep_get(pvmw.pte);
+
+			/*
+			 * PFN swap PTEs, such as device-exclusive ones, that
+			 * actually map pages are "old" from a CPU perspective.
+			 * The MMU notifier takes care of any device aspects.
+			 */
+			*accessed = (pte_present(pte) && pte_young(pte)) ||
+				!folio_test_idle(folio) ||
+				mmu_notifier_test_young(vma->vm_mm, addr);
+		} else {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
+				!folio_test_idle(folio) ||
+				mmu_notifier_test_young(vma->vm_mm, addr);
+#else
+			WARN_ON_ONCE(1);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+		}
+		if (*accessed) {
+			page_vma_mapped_walk_done(&pvmw);
+			break;
+		}
+	}
+
+	/* If accessed, stop walking */
+	return *accessed == false;
+}
+
+bool damon_folio_young(struct folio *folio)
+{
+	bool accessed = false;
+	struct rmap_walk_control rwc = {
+		.arg = &accessed,
+		.rmap_one = damon_folio_young_one,
+		.anon_lock = folio_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
+		if (folio_test_idle(folio))
+			return false;
+		else
+			return true;
+	}
+
+	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
+	if (need_lock && !folio_trylock(folio))
+		return false;
+
+	rmap_walk(folio, &rwc);
+
+	if (need_lock)
+		folio_unlock(folio);
+
+	return accessed;
+}
+
+bool damos_folio_filter_match(struct damos_filter *filter, struct folio *folio)
+{
+	bool matched = false;
+	struct mem_cgroup *memcg;
+	size_t folio_sz;
+
+	switch (filter->type) {
+	case DAMOS_FILTER_TYPE_ANON:
+		matched = folio_test_anon(folio);
+		break;
+	case DAMOS_FILTER_TYPE_ACTIVE:
+		matched = folio_test_active(folio);
+		break;
+	case DAMOS_FILTER_TYPE_MEMCG:
+		rcu_read_lock();
+		memcg = folio_memcg_check(folio);
+		if (!memcg)
+			matched = false;
+		else
+			matched = filter->memcg_id == mem_cgroup_id(memcg);
+		rcu_read_unlock();
+		break;
+	case DAMOS_FILTER_TYPE_YOUNG:
+		matched = damon_folio_young(folio);
+		if (matched)
+			damon_folio_mkold(folio);
+		break;
+	case DAMOS_FILTER_TYPE_HUGEPAGE_SIZE:
+		folio_sz = folio_size(folio);
+		matched = filter->sz_range.min <= folio_sz &&
+			folio_sz <= filter->sz_range.max;
+		break;
+	case DAMOS_FILTER_TYPE_UNMAPPED:
+		matched = !folio_mapped(folio) || !folio_raw_mapping(folio);
+		break;
+	default:
+		break;
+	}
+
+	return matched == filter->matching;
+}
+
 static unsigned int __damon_migrate_folio_list(
 		struct list_head *migrate_folios, struct pglist_data *pgdat,
 		int target_nid)
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index 54209a7e70e6..61ad54aaf256 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -11,10 +11,13 @@ struct folio *damon_get_folio(unsigned long pfn);
 
 void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
 void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
+void damon_folio_mkold(struct folio *folio);
+bool damon_folio_young(struct folio *folio);
 
 int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 		struct damos *s);
 int damon_hot_score(struct damon_ctx *c, struct damon_region *r,
 		struct damos *s);
 
+bool damos_folio_filter_match(struct damos_filter *filter, struct folio *folio);
 unsigned long damon_migrate_pages(struct list_head *folio_list, int target_nid);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 48e3e6fed636..53a55c5114fb 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -18,45 +18,6 @@
 #include "../internal.h"
 #include "ops-common.h"
 
-static bool damon_folio_mkold_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *arg)
-{
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
-
-	while (page_vma_mapped_walk(&pvmw)) {
-		addr = pvmw.address;
-		if (pvmw.pte)
-			damon_ptep_mkold(pvmw.pte, vma, addr);
-		else
-			damon_pmdp_mkold(pvmw.pmd, vma, addr);
-	}
-	return true;
-}
-
-static void damon_folio_mkold(struct folio *folio)
-{
-	struct rmap_walk_control rwc = {
-		.rmap_one = damon_folio_mkold_one,
-		.anon_lock = folio_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
-		folio_set_idle(folio);
-		return;
-	}
-
-	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
-	if (need_lock && !folio_trylock(folio))
-		return;
-
-	rmap_walk(folio, &rwc);
-
-	if (need_lock)
-		folio_unlock(folio);
-
-}
-
 static void damon_pa_mkold(unsigned long paddr)
 {
 	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
@@ -86,75 +47,6 @@ static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
 	}
 }
 
-static bool damon_folio_young_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *arg)
-{
-	bool *accessed = arg;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
-	pte_t pte;
-
-	*accessed = false;
-	while (page_vma_mapped_walk(&pvmw)) {
-		addr = pvmw.address;
-		if (pvmw.pte) {
-			pte = ptep_get(pvmw.pte);
-
-			/*
-			 * PFN swap PTEs, such as device-exclusive ones, that
-			 * actually map pages are "old" from a CPU perspective.
-			 * The MMU notifier takes care of any device aspects.
-			 */
-			*accessed = (pte_present(pte) && pte_young(pte)) ||
-				!folio_test_idle(folio) ||
-				mmu_notifier_test_young(vma->vm_mm, addr);
-		} else {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
-				!folio_test_idle(folio) ||
-				mmu_notifier_test_young(vma->vm_mm, addr);
-#else
-			WARN_ON_ONCE(1);
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-		}
-		if (*accessed) {
-			page_vma_mapped_walk_done(&pvmw);
-			break;
-		}
-	}
-
-	/* If accessed, stop walking */
-	return *accessed == false;
-}
-
-static bool damon_folio_young(struct folio *folio)
-{
-	bool accessed = false;
-	struct rmap_walk_control rwc = {
-		.arg = &accessed,
-		.rmap_one = damon_folio_young_one,
-		.anon_lock = folio_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
-		if (folio_test_idle(folio))
-			return false;
-		else
-			return true;
-	}
-
-	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
-	if (need_lock && !folio_trylock(folio))
-		return false;
-
-	rmap_walk(folio, &rwc);
-
-	if (need_lock)
-		folio_unlock(folio);
-
-	return accessed;
-}
-
 static bool damon_pa_young(unsigned long paddr, unsigned long *folio_sz)
 {
 	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
@@ -205,49 +97,6 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
 	return max_nr_accesses;
 }
 
-static bool damos_pa_filter_match(struct damos_filter *filter,
-		struct folio *folio)
-{
-	bool matched = false;
-	struct mem_cgroup *memcg;
-	size_t folio_sz;
-
-	switch (filter->type) {
-	case DAMOS_FILTER_TYPE_ANON:
-		matched = folio_test_anon(folio);
-		break;
-	case DAMOS_FILTER_TYPE_ACTIVE:
-		matched = folio_test_active(folio);
-		break;
-	case DAMOS_FILTER_TYPE_MEMCG:
-		rcu_read_lock();
-		memcg = folio_memcg_check(folio);
-		if (!memcg)
-			matched = false;
-		else
-			matched = filter->memcg_id == mem_cgroup_id(memcg);
-		rcu_read_unlock();
-		break;
-	case DAMOS_FILTER_TYPE_YOUNG:
-		matched = damon_folio_young(folio);
-		if (matched)
-			damon_folio_mkold(folio);
-		break;
-	case DAMOS_FILTER_TYPE_HUGEPAGE_SIZE:
-		folio_sz = folio_size(folio);
-		matched = filter->sz_range.min <= folio_sz &&
-			folio_sz <= filter->sz_range.max;
-		break;
-	case DAMOS_FILTER_TYPE_UNMAPPED:
-		matched = !folio_mapped(folio) || !folio_raw_mapping(folio);
-		break;
-	default:
-		break;
-	}
-
-	return matched == filter->matching;
-}
-
 /*
  * damos_pa_filter_out - Return true if the page should be filtered out.
  */
@@ -259,7 +108,7 @@ static bool damos_pa_filter_out(struct damos *scheme, struct folio *folio)
 		return false;
 
 	damos_for_each_ops_filter(filter, scheme) {
-		if (damos_pa_filter_match(filter, folio))
+		if (damos_folio_filter_match(filter, folio))
 			return !filter->allow;
 	}
 	return scheme->ops_filters_default_reject;
-- 
2.43.5