From: Raghavendra K T <raghavendra.kt@amd.com>
To: <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Cc: Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
"Mel Gorman" <mgorman@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
"David Hildenbrand" <david@redhat.com>, <rppt@kernel.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Bharata B Rao <bharata@amd.com>,
Raghavendra K T <raghavendra.kt@amd.com>
Subject: [RFC PATCH V1 1/2] sched/numa: Introduce per vma scan counter
Date: Wed, 3 May 2023 07:35:48 +0530 [thread overview]
Message-ID: <abd037023141f25f79c6bbbb801c8405e4c449a1.1683033105.git.raghavendra.kt@amd.com> (raw)
In-Reply-To: <cover.1683033105.git.raghavendra.kt@amd.com>
With the recent numa scan enhancements, only the tasks which had
previously accessed a vma are allowed to scan it.

While this has significantly reduced system time overhead, there are
corner cases which genuinely need some relaxation. For e.g., the
concern raised by PeterZ: unfairness amongst threads belonging to
disjoint sets of VMAs can amplify the side effect of vma regions
belonging to some of the tasks being left unscanned.

To address this, allow scanning for the first few times, tracked with
a per vma counter.
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
include/linux/mm_types.h | 1 +
kernel/sched/fair.c | 30 +++++++++++++++++++++++++++---
2 files changed, 28 insertions(+), 3 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..f66e6b4e0620 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -479,6 +479,7 @@ struct vma_numab_state {
 	unsigned long next_scan;
 	unsigned long next_pid_reset;
 	unsigned long access_pids[2];
+	unsigned int scan_counter;
 };
 
 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a29ca11bead2..3c50dc3893eb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2928,19 +2928,38 @@ static void reset_ptenuma_scan(struct task_struct *p)
 	p->mm->numa_scan_offset = 0;
 }
 
+/* Scan 1GB or 4 * scan_size */
+#define VMA_DISJOINT_SET_ACCESS_THRESH 4U
+
 static bool vma_is_accessed(struct vm_area_struct *vma)
 {
 	unsigned long pids;
+	unsigned int windows = 0;
+	unsigned int scan_size = READ_ONCE(sysctl_numa_balancing_scan_size);
+
+	if (scan_size < MAX_SCAN_WINDOW)
+		windows = MAX_SCAN_WINDOW / scan_size;
+
+	/* Allow only half of the windows for disjoint set cases */
+	windows /= 2;
+
+	windows = max(VMA_DISJOINT_SET_ACCESS_THRESH, windows);
+
 	/*
-	 * Allow unconditional access first two times, so that all the (pages)
-	 * of VMAs get prot_none fault introduced irrespective of accesses.
+	 * Make sure to allow scanning of a disjoint vma set for the
+	 * first few times,
+	 * OR at the mm level allow unconditional access for the first
+	 * two times, so that all the (pages) of VMAs get prot_none
+	 * faults introduced irrespective of accesses.
 	 * This is also done to avoid any side effect of task scanning
 	 * amplifying the unfairness of disjoint set of VMAs' access.
 	 */
-	if (READ_ONCE(current->mm->numa_scan_seq) < 2)
+	if (READ_ONCE(vma->numab_state->scan_counter) < windows ||
+	    READ_ONCE(current->mm->numa_scan_seq) < 2)
 		return true;
 
 	pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
+
 	return test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids);
 }
@@ -3058,6 +3077,8 @@ static void task_numa_work(struct callback_head *work)
 			/* Reset happens after 4 times scan delay of scan start */
 			vma->numab_state->next_pid_reset = vma->numab_state->next_scan +
 				msecs_to_jiffies(VMA_PID_RESET_PERIOD);
+
+			WRITE_ONCE(vma->numab_state->scan_counter, 0);
 		}
 
 		/*
@@ -3084,6 +3105,9 @@ static void task_numa_work(struct callback_head *work)
 			vma->numab_state->access_pids[1] = 0;
 		}
 
+		WRITE_ONCE(vma->numab_state->scan_counter,
+			   READ_ONCE(vma->numab_state->scan_counter) + 1);
+
 		do {
 			start = max(start, vma->vm_start);
 			end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
--
2.34.1
Thread overview: 4+ messages
2023-05-03 2:05 [RFC PATCH V1 0/2] sched/numa: Disjoint set vma scan improvements Raghavendra K T
2023-05-03 2:05 ` Raghavendra K T [this message]
2023-05-03 17:42 ` [RFC PATCH V1 1/2] sched/numa: Introduce per vma scan counter Raghavendra K T
2023-05-03 2:05 ` [RFC PATCH V1 2/2] sched/numa: Introduce per vma numa_scan_seq Raghavendra K T