From: kaiyang2@cs.cmu.edu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Kaiyang Zhao <kaiyang2@cs.cmu.edu>,
hannes@cmpxchg.org, ziy@nvidia.com, dskarlat@cs.cmu.edu
Subject: [RFC PATCH 6/7] pass the gfp mask of the allocation that woke kswapd to track the number of pages scanned on behalf of each alloc type
Date: Wed, 20 Mar 2024 02:42:17 +0000
Message-ID: <20240320024218.203491-7-kaiyang2@cs.cmu.edu>
In-Reply-To: <20240320024218.203491-1-kaiyang2@cs.cmu.edu>
From: Kaiyang Zhao <kaiyang2@cs.cmu.edu>
In preparation for exporting the number of pages scanned on behalf of
each allocation type, record the gfp mask of the allocation that woke
kswapd in pgdat->kswapd_gfp and pass it down to balance_pgdat(), which
sets __GFP_MOVABLE in the scan control's gfp_mask when the waking
allocation was movable. The counters themselves are exported in the
next patch.
Signed-off-by: Kaiyang Zhao <zh_kaiyang@hotmail.com>
---
include/linux/mmzone.h | 1 +
mm/vmscan.c | 13 +++++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
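Note for reviewers (not part of the patch): below is a minimal,
self-contained user-space sketch of the check balance_pgdat() now
applies to the waking allocation's gfp mask. It re-implements
gfp_migratetype() with constants copied from include/linux/gfp_types.h
at the time of writing; the constants and the example values are
illustrative only. In effect, is_migrate_movable(gfp_migratetype(gfp_mask))
reduces to "did the waking allocation set __GFP_MOVABLE", since CMA
migratetypes are never produced from a gfp mask.

  /*
   * Illustrative user-space re-implementation of gfp_migratetype();
   * the constants mirror the kernel's gfp mobility bits.
   */
  #include <stdio.h>

  #define ___GFP_MOVABLE     0x08u
  #define ___GFP_RECLAIMABLE 0x10u
  #define GFP_MOVABLE_MASK   (___GFP_RECLAIMABLE | ___GFP_MOVABLE)
  #define GFP_MOVABLE_SHIFT  3

  enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

  /* Map the mobility bits of a gfp mask to a migrate type. */
  static int gfp_migratetype(unsigned int gfp_flags)
  {
  	return (gfp_flags & GFP_MOVABLE_MASK) >> GFP_MOVABLE_SHIFT;
  }

  int main(void)
  {
  	unsigned int movable   = ___GFP_MOVABLE; /* e.g. a user page fault */
  	unsigned int unmovable = 0;              /* e.g. a kernel slab allocation */

  	printf("movable wakeup tags sc.gfp_mask:   %d\n",
  	       gfp_migratetype(movable) == MIGRATE_MOVABLE);
  	printf("unmovable wakeup tags sc.gfp_mask: %d\n",
  	       gfp_migratetype(unmovable) == MIGRATE_MOVABLE);
  	return 0;
  }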
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a4889c9d4055..abc9f1623c82 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1288,6 +1288,7 @@ typedef struct pglist_data {
struct task_struct *kswapd; /* Protected by kswapd_lock */
int kswapd_order;
enum zone_type kswapd_highest_zoneidx;
+ gfp_t kswapd_gfp;
int kswapd_failures; /* Number of 'reclaimed == 0' runs */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index aa21da983804..ed0f47e2e810 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7330,7 +7330,7 @@ clear_reclaim_active(pg_data_t *pgdat, int highest_zoneidx)
* or lower is eligible for reclaim until at least one usable zone is
* balanced.
*/
-static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
+static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx, gfp_t gfp_mask)
{
int i;
unsigned long nr_soft_reclaimed;
@@ -7345,6 +7345,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
.order = order,
.may_unmap = 1,
};
+ if (is_migrate_movable(gfp_migratetype(gfp_mask)))
+ sc.gfp_mask |= __GFP_MOVABLE;
set_task_reclaim_state(current, &sc.reclaim_state);
psi_memstall_enter(&pflags);
@@ -7659,6 +7661,7 @@ static int kswapd(void *p)
pg_data_t *pgdat = (pg_data_t *)p;
struct task_struct *tsk = current;
const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
+ gfp_t gfp_mask;
if (!cpumask_empty(cpumask))
set_cpus_allowed_ptr(tsk, cpumask);
@@ -7680,6 +7683,7 @@ static int kswapd(void *p)
WRITE_ONCE(pgdat->kswapd_order, 0);
WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
+ WRITE_ONCE(pgdat->kswapd_gfp, 0);
atomic_set(&pgdat->nr_writeback_throttled, 0);
for ( ; ; ) {
bool ret;
@@ -7687,6 +7691,7 @@ static int kswapd(void *p)
alloc_order = reclaim_order = READ_ONCE(pgdat->kswapd_order);
highest_zoneidx = kswapd_highest_zoneidx(pgdat,
highest_zoneidx);
+ gfp_mask = READ_ONCE(pgdat->kswapd_gfp);
kswapd_try_sleep:
kswapd_try_to_sleep(pgdat, alloc_order, reclaim_order,
@@ -7696,8 +7701,10 @@ static int kswapd(void *p)
alloc_order = READ_ONCE(pgdat->kswapd_order);
highest_zoneidx = kswapd_highest_zoneidx(pgdat,
highest_zoneidx);
+ gfp_mask = READ_ONCE(pgdat->kswapd_gfp);
WRITE_ONCE(pgdat->kswapd_order, 0);
WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
+ WRITE_ONCE(pgdat->kswapd_gfp, 0);
ret = try_to_freeze();
if (kthread_should_stop())
@@ -7721,7 +7728,7 @@ static int kswapd(void *p)
trace_mm_vmscan_kswapd_wake(pgdat->node_id, highest_zoneidx,
alloc_order);
reclaim_order = balance_pgdat(pgdat, alloc_order,
- highest_zoneidx);
+ highest_zoneidx, gfp_mask);
if (reclaim_order < alloc_order)
goto kswapd_try_sleep;
}
@@ -7759,6 +7766,8 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
if (READ_ONCE(pgdat->kswapd_order) < order)
WRITE_ONCE(pgdat->kswapd_order, order);
+ WRITE_ONCE(pgdat->kswapd_gfp, gfp_flags);
+
if (!waitqueue_active(&pgdat->kswapd_wait))
return;
--
2.40.1