* [PATCH V3 1/4] sched/numa: Apply the scan delay to every new vma
2023-02-28 4:50 [PATCH V3 0/4] sched/numa: Enhance vma scanning Raghavendra K T
@ 2023-02-28 4:50 ` Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 2/4] sched/numa: Enhance vma scanning logic Raghavendra K T
From: Raghavendra K T @ 2023-02-28 4:50 UTC
To: linux-kernel, linux-mm
Cc: Ingo Molnar, Peter Zijlstra, Mel Gorman, Andrew Morton,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja,
Mel Gorman, Raghavendra K T
From: Mel Gorman <mgorman@techsingularity.net>
Currently, whenever a new task is created, we wait for
sysctl_numa_balancing_scan_delay to avoid unnecessary scanning
overhead. Extend the same logic to new or very short-lived VMAs.
(Raghavendra: Add initialization in vm_area_dup())
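For illustration, a minimal user-space sketch of the per-VMA delay check this
patch introduces. The helper names, the millisecond clock and the 1000ms value
are stand-ins; the kernel code in the diff below uses jiffies, msecs_to_jiffies()
and time_before() instead.

/*
 * Illustrative sketch of the per-VMA scan delay (not the kernel code).
 * Each VMA records, when first seen, the earliest time it may be
 * scanned; the scan loop skips it until that time has passed, just as
 * new tasks already wait out sysctl_numa_balancing_scan_delay.
 */
#include <stdio.h>
#include <stdbool.h>

#define SCAN_DELAY_MS 1000UL	/* stand-in for the sysctl default */

struct demo_vma {
	unsigned long next_scan_ms;	/* 0 means "not initialised yet" */
};

static bool demo_vma_scannable(struct demo_vma *vma, unsigned long now_ms)
{
	/* First sighting: arm the delay, do not scan yet. */
	if (!vma->next_scan_ms)
		vma->next_scan_ms = now_ms + SCAN_DELAY_MS;

	return now_ms >= vma->next_scan_ms;
}

int main(void)
{
	struct demo_vma vma = { 0 };

	printf("t=0ms    scannable: %d\n", demo_vma_scannable(&vma, 0));	/* 0 */
	printf("t=500ms  scannable: %d\n", demo_vma_scannable(&vma, 500));	/* 0 */
	printf("t=1500ms scannable: %d\n", demo_vma_scannable(&vma, 1500));	/* 1 */
	return 0;
}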
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
include/linux/mm.h | 16 ++++++++++++++++
include/linux/mm_types.h | 7 +++++++
kernel/fork.c | 2 ++
kernel/sched/fair.c | 19 +++++++++++++++++++
4 files changed, 44 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 974ccca609d2..41cc8997d4e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -29,6 +29,7 @@
#include <linux/pgtable.h>
#include <linux/kasan.h>
#include <linux/memremap.h>
+#include <linux/slab.h>
struct mempolicy;
struct anon_vma;
@@ -611,6 +612,20 @@ struct vm_operations_struct {
unsigned long addr);
};
+#ifdef CONFIG_NUMA_BALANCING
+static inline void vma_numab_state_init(struct vm_area_struct *vma)
+{
+ vma->numab_state = NULL;
+}
+static inline void vma_numab_state_free(struct vm_area_struct *vma)
+{
+ kfree(vma->numab_state);
+}
+#else
+static inline void vma_numab_state_init(struct vm_area_struct *vma) {}
+static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
+#endif /* CONFIG_NUMA_BALANCING */
+
static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
{
static const struct vm_operations_struct dummy_vm_ops = {};
@@ -619,6 +634,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
+ vma_numab_state_init(vma);
}
static inline void vma_set_anonymous(struct vm_area_struct *vma)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..a4a1093870d3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -435,6 +435,10 @@ struct anon_vma_name {
char name[];
};
+struct vma_numab_state {
+ unsigned long next_scan;
+};
+
/*
* This struct describes a virtual memory area. There is one of these
* per VM-area/task. A VM area is any part of the process virtual memory
@@ -504,6 +508,9 @@ struct vm_area_struct {
#endif
#ifdef CONFIG_NUMA
struct mempolicy *vm_policy; /* NUMA policy for the VMA */
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+ struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} __randomize_layout;
diff --git a/kernel/fork.c b/kernel/fork.c
index 08969f5aa38d..6c19a3305990 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -474,6 +474,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
*/
*new = data_race(*orig);
INIT_LIST_HEAD(&new->anon_vma_chain);
+ vma_numab_state_init(new);
dup_anon_vma_name(orig, new);
}
return new;
@@ -481,6 +482,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
void vm_area_free(struct vm_area_struct *vma)
{
+ vma_numab_state_free(vma);
free_anon_vma_name(vma);
kmem_cache_free(vm_area_cachep, vma);
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4a0b8bd941c..e39c36e71cec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3015,6 +3015,25 @@ static void task_numa_work(struct callback_head *work)
if (!vma_is_accessible(vma))
continue;
+ /* Initialise new per-VMA NUMAB state. */
+ if (!vma->numab_state) {
+ vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
+ GFP_KERNEL);
+ if (!vma->numab_state)
+ continue;
+
+ vma->numab_state->next_scan = now +
+ msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+ }
+
+ /*
+ * Scanning the VMAs of short-lived tasks adds more overhead. So
+ * delay the scan for new VMAs.
+ */
+ if (mm->numa_scan_seq && time_before(jiffies,
+ vma->numab_state->next_scan))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
--
2.34.1
* [PATCH V3 2/4] sched/numa: Enhance vma scanning logic
2023-02-28 4:50 [PATCH V3 0/4] sched/numa: Enhance vma scanning Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 1/4] sched/numa: Apply the scan delay to every new vma Raghavendra K T
@ 2023-02-28 4:50 ` Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 3/4] sched/numa: implement access PID reset logic Raghavendra K T
From: Raghavendra K T @ 2023-02-28 4:50 UTC
To: linux-kernel, linux-mm
Cc: Ingo Molnar, Peter Zijlstra, Mel Gorman, Andrew Morton,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja,
Raghavendra K T
During NUMA scanning, make sure that tasks scan only the VMAs
relevant to them.
Before:
All tasks of a process participate in scanning every VMA, even if
they never access that VMA during their lifespan.
Now:
Apart from the first few unconditional scans, if a task does not
touch a VMA (excluding false positives from PID collisions), it no
longer scans that VMA.
Logic used (see the sketch after this list):
1) 6 bits derived from the PID are used to set an active bit in the
VMA's numab state during a fault, to remember the PIDs accessing
the VMA. (Thanks Mel)
2) Subsequently, in the scan path, scanning of a VMA is skipped if
the current task's PID has not accessed it.
3) The first two scan passes remain unconditional to preserve the
earlier scanning behaviour.
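A minimal user-space sketch of the filter described above (the demo names are
hypothetical; the kernel code in the diff below keeps the bits in
vma->numab_state->access_pids and uses __set_bit()/test_bit()):

/*
 * Illustrative sketch of the per-VMA PID filter (not the kernel
 * implementation). Each VMA keeps one word of "access PID" bits: the
 * fault path sets bit (pid % 64), and the scan path skips the VMA when
 * the current task's bit is clear.
 */
#include <stdio.h>
#include <stdbool.h>

#define PID_BITS (sizeof(unsigned long) * 8)

struct demo_vma {
	unsigned long access_pids;	/* one bit per "recent" PID */
};

/* Fault path: remember that @pid touched @vma. */
static void demo_record_access(struct demo_vma *vma, int pid)
{
	vma->access_pids |= 1UL << (pid % PID_BITS);
}

/* Scan path: should @pid bother scanning @vma? */
static bool demo_vma_is_accessed(const struct demo_vma *vma, int pid)
{
	return vma->access_pids & (1UL << (pid % PID_BITS));
}

int main(void)
{
	struct demo_vma vma = { 0 };

	demo_record_access(&vma, 1234);	/* task 1234 faulted on this VMA */

	printf("pid 1234 scans: %d\n", demo_vma_is_accessed(&vma, 1234)); /* 1 */
	printf("pid 5678 scans: %d\n", demo_vma_is_accessed(&vma, 5678)); /* 0 */
	/* pid 1298 = 1234 + 64 maps to the same bit: a false positive */
	printf("pid 1298 scans: %d\n", demo_vma_is_accessed(&vma, 1298)); /* 1 */
	return 0;
}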
Acknowledgement to Bharata B Rao <bharata@amd.com> for the initial
patch to store PID information, and to Peter Zijlstra
<peterz@infradead.org> for the suggestion to use test and set bit.
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
include/linux/mm.h | 14 ++++++++++++++
include/linux/mm_types.h | 1 +
kernel/sched/fair.c | 19 +++++++++++++++++++
mm/memory.c | 3 +++
4 files changed, 37 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 41cc8997d4e5..097680aaca1e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1388,6 +1388,16 @@ static inline int xchg_page_access_time(struct page *page, int time)
last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
return last_time << PAGE_ACCESS_TIME_BUCKETS;
}
+
+static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+{
+ unsigned int pid_bit;
+
+ pid_bit = current->pid % BITS_PER_LONG;
+ if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids)) {
+ __set_bit(pid_bit, &vma->numab_state->access_pids);
+ }
+}
#else /* !CONFIG_NUMA_BALANCING */
static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
{
@@ -1437,6 +1447,10 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
{
return false;
}
+
+static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+{
+}
#endif /* CONFIG_NUMA_BALANCING */
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a4a1093870d3..582523e73546 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -437,6 +437,7 @@ struct anon_vma_name {
struct vma_numab_state {
unsigned long next_scan;
+ unsigned long access_pids;
};
/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e39c36e71cec..05490cb2d5c6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2916,6 +2916,21 @@ static void reset_ptenuma_scan(struct task_struct *p)
p->mm->numa_scan_offset = 0;
}
+static bool vma_is_accessed(struct vm_area_struct *vma)
+{
+ /*
+ * Allow unconditional scanning for the first two passes, so that all
+ * pages of the VMAs get a prot_none fault introduced irrespective of
+ * accesses. This also avoids the side effect of task scanning
+ * amplifying the unfairness when tasks access disjoint sets of VMAs.
+ */
+ if (READ_ONCE(current->mm->numa_scan_seq) < 2)
+ return true;
+
+ return test_bit(current->pid % BITS_PER_LONG,
+ &vma->numab_state->access_pids);
+}
+
/*
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
@@ -3034,6 +3049,10 @@ static void task_numa_work(struct callback_head *work)
vma->numab_state->next_scan))
continue;
+ /* Do not scan the VMA if the task has not accessed it */
+ if (!vma_is_accessed(vma))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
diff --git a/mm/memory.c b/mm/memory.c
index 8c8420934d60..150c03a3419c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4698,6 +4698,9 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
{
get_page(page);
+ /* Record the current PID accessing the VMA */
+ vma_set_access_pid_bit(vma);
+
count_vm_numa_event(NUMA_HINT_FAULTS);
if (page_nid == numa_node_id()) {
count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
--
2.34.1
* [PATCH V3 3/4] sched/numa: implement access PID reset logic
2023-02-28 4:50 [PATCH V3 0/4] sched/numa: Enhance vma scanning Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 1/4] sched/numa: Apply the scan delay to every new vma Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 2/4] sched/numa: Enhance vma scanning logic Raghavendra K T
@ 2023-02-28 4:50 ` Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 4/4] sched/numa: Use hash_32 to mix up PIDs accessing VMA Raghavendra K T
2023-02-28 21:24 ` [PATCH V3 0/4] sched/numa: Enhance vma scanning Andrew Morton
From: Raghavendra K T @ 2023-02-28 4:50 UTC
To: linux-kernel, linux-mm
Cc: Ingo Molnar, Peter Zijlstra, Mel Gorman, Andrew Morton,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja,
Raghavendra K T
This helps ensure that only PIDs which accessed a VMA recently
continue to scan it.
Current implementation (idea supported by PeterZ):
1. Accessing-PID information is maintained in two windows, with
access_pids[1] holding the newest entries.
2. The old access-PID info, i.e. access_pids[0], is reset every
(4 * sysctl_numa_balancing_scan_delay) interval after the initial
scan delay period expires.
The above interval was found experimentally to be a good trade-off:
it avoids resetting the access info too frequently while still
clearing out stale access info regularly.
The reset logic is implemented in the scan path; a sketch of the
window rotation follows below.
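A minimal user-space sketch of the two-window rotation (names are hypothetical;
the real code keeps access_pids[0]/access_pids[1] in struct vma_numab_state and
rotates them from the scan path, as in the diff below):

/*
 * Illustrative sketch of two-window PID tracking (not the kernel
 * code). New accesses land in window [1]; every reset period the
 * windows rotate, so information older than roughly two periods is
 * forgotten while a recently active PID stays visible via the OR of
 * both windows.
 */
#include <stdio.h>

struct demo_windows {
	unsigned long pids[2];	/* [0] = older window, [1] = newest */
};

/* Fault path: record the access in the newest window only. */
static void demo_record(struct demo_windows *w, int pid)
{
	w->pids[1] |= 1UL << (pid % 64);
}

/* Scan path: a PID counts as "recent" if it is in either window. */
static int demo_recent(const struct demo_windows *w, int pid)
{
	unsigned long pids = w->pids[0] | w->pids[1];

	return !!(pids & (1UL << (pid % 64)));
}

/* Reset period elapsed: age the newest window into the old slot. */
static void demo_rotate(struct demo_windows *w)
{
	w->pids[0] = w->pids[1];
	w->pids[1] = 0;
}

int main(void)
{
	struct demo_windows w = { { 0, 0 } };

	demo_record(&w, 42);
	printf("after access:      %d\n", demo_recent(&w, 42));	/* 1 */
	demo_rotate(&w);
	printf("after one rotate:  %d\n", demo_recent(&w, 42));	/* still 1 */
	demo_rotate(&w);
	printf("after two rotates: %d\n", demo_recent(&w, 42));	/* 0 */
	return 0;
}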
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
include/linux/mm.h | 4 ++--
include/linux/mm_types.h | 3 ++-
kernel/sched/fair.c | 23 +++++++++++++++++++++--
3 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 097680aaca1e..bd07289fc68e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1394,8 +1394,8 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
unsigned int pid_bit;
pid_bit = current->pid % BITS_PER_LONG;
- if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids)) {
- __set_bit(pid_bit, &vma->numab_state->access_pids);
+ if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) {
+ __set_bit(pid_bit, &vma->numab_state->access_pids[1]);
}
}
#else /* !CONFIG_NUMA_BALANCING */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 582523e73546..1f1f8bfeae36 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -437,7 +437,8 @@ struct anon_vma_name {
struct vma_numab_state {
unsigned long next_scan;
- unsigned long access_pids;
+ unsigned long next_pid_reset;
+ unsigned long access_pids[2];
};
/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 05490cb2d5c6..f76d5ecaf345 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2918,6 +2918,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
static bool vma_is_accessed(struct vm_area_struct *vma)
{
+ unsigned long pids;
/*
* Allow unconditional access first two times, so that all the (pages)
* of VMAs get prot_none fault introduced irrespective of accesses.
@@ -2927,10 +2928,12 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
if (READ_ONCE(current->mm->numa_scan_seq) < 2)
return true;
- return test_bit(current->pid % BITS_PER_LONG,
- &vma->numab_state->access_pids);
+ pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
+ return test_bit(current->pid % BITS_PER_LONG, &pids);
}
+#define VMA_PID_RESET_PERIOD (4 * sysctl_numa_balancing_scan_delay)
+
/*
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
@@ -3039,6 +3042,10 @@ static void task_numa_work(struct callback_head *work)
vma->numab_state->next_scan = now +
msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+
+ /* The reset happens 4 scan-delay intervals after the scan start */
+ vma->numab_state->next_pid_reset = vma->numab_state->next_scan +
+ msecs_to_jiffies(VMA_PID_RESET_PERIOD);
}
/*
@@ -3053,6 +3060,18 @@ static void task_numa_work(struct callback_head *work)
if (!vma_is_accessed(vma))
continue;
+ /*
+ * Reset access PIDs regularly for old VMAs. The reset is done after
+ * checking the VMA for recent access, so PID info is not cleared too early.
+ */
+ if (mm->numa_scan_seq &&
+ time_after(jiffies, vma->numab_state->next_pid_reset)) {
+ vma->numab_state->next_pid_reset = vma->numab_state->next_pid_reset +
+ msecs_to_jiffies(VMA_PID_RESET_PERIOD);
+ vma->numab_state->access_pids[0] = READ_ONCE(vma->numab_state->access_pids[1]);
+ vma->numab_state->access_pids[1] = 0;
+ }
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
--
2.34.1
* [PATCH V3 4/4] sched/numa: Use hash_32 to mix up PIDs accessing VMA
2023-02-28 4:50 [PATCH V3 0/4] sched/numa: Enhance vma scanning Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 3/4] sched/numa: implement access PID reset logic Raghavendra K T
@ 2023-02-28 4:50 ` Raghavendra K T
2023-02-28 21:24 ` [PATCH V3 0/4] sched/numa: Enhance vma scanning Andrew Morton
From: Raghavendra K T @ 2023-02-28 4:50 UTC
To: linux-kernel, linux-mm
Cc: Ingo Molnar, Peter Zijlstra, Mel Gorman, Andrew Morton,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja,
Raghavendra K T
Before: the last 6 bits of the PID are used as the index to store
information about tasks accessing VMAs.
After: hash_32() is used to take care of cases where tasks are
created over a period of time, and thus to reduce the collision
probability. A sketch comparing the two schemes follows.
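A small user-space comparison of the two index schemes. The multiplicative
constant and shift mirror the hash_32() definition in include/linux/hash.h as
of this writing; the demo program itself is illustrative only.

/*
 * PIDs that are exactly 64 apart always collide with "pid % 64",
 * while a hash_32()-style multiplicative hash spreads them across the
 * 64 available bits.
 */
#include <stdio.h>
#include <stdint.h>

#define GOLDEN_RATIO_32 0x61C88647u

static unsigned int demo_hash_32(uint32_t val, unsigned int bits)
{
	/* Keep the top @bits bits of the 32-bit product. */
	return (val * GOLDEN_RATIO_32) >> (32 - bits);
}

int main(void)
{
	uint32_t pids[] = { 1000, 1064, 1128 };	/* all equal mod 64 */

	for (unsigned int i = 0; i < 3; i++)
		printf("pid %u: mod bit %2u, hash bit %2u\n",
		       pids[i], pids[i] % 64, demo_hash_32(pids[i], 6));
	return 0;
}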
Result:
The patch series overall improves the autonuma cost by a huge
margin. Kernbench and dbench showed around 5% improvement, and
system time in the mmtests autonuma run showed 80% improvement.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
include/linux/mm.h | 2 +-
kernel/sched/fair.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bd07289fc68e..8493697d1dce 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1393,7 +1393,7 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
{
unsigned int pid_bit;
- pid_bit = current->pid % BITS_PER_LONG;
+ pid_bit = hash_32(current->pid, ilog2(BITS_PER_LONG));
if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) {
__set_bit(pid_bit, &vma->numab_state->access_pids[1]);
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f76d5ecaf345..46fd9b372e4c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2929,7 +2929,7 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
return true;
pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
- return test_bit(current->pid % BITS_PER_LONG, &pids);
+ return test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids);
}
#define VMA_PID_RESET_PERIOD (4 * sysctl_numa_balancing_scan_delay)
--
2.34.1
* Re: [PATCH V3 0/4] sched/numa: Enhance vma scanning
2023-02-28 4:50 [PATCH V3 0/4] sched/numa: Enhance vma scanning Raghavendra K T
2023-02-28 4:50 ` [PATCH V3 4/4] sched/numa: Use hash_32 to mix up PIDs accessing VMA Raghavendra K T
@ 2023-02-28 21:24 ` Andrew Morton
2023-03-01 4:16 ` Raghavendra K T
From: Andrew Morton @ 2023-02-28 21:24 UTC
To: Raghavendra K T
Cc: linux-kernel, linux-mm, Ingo Molnar, Peter Zijlstra, Mel Gorman,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja
On Tue, 28 Feb 2023 10:20:18 +0530 Raghavendra K T <raghavendra.kt@amd.com> wrote:
> The patchset proposes one of the enhancements to numa vma scanning
> suggested by Mel. This is a continuation of [3].
>
> ...
>
> include/linux/mm.h | 30 +++++++++++++++++++++
> include/linux/mm_types.h | 9 +++++++
> kernel/fork.c | 2 ++
> kernel/sched/fair.c | 57 ++++++++++++++++++++++++++++++++++++++++
> mm/memory.c | 3 +++
It's unclear (to me) which tree would normally carry these.
But there are significant textual conflicts with the "Per-VMA locks"
patchset, and there might be functional issues as well. So mm.git
would be the better choice.
Please can you redo and retest against tomorrow's mm-unstable branch
(git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm)? Hopefully the
sched developers can take a look and provide feedback.
Thanks.
* Re: [PATCH V3 0/4] sched/numa: Enhance vma scanning
2023-02-28 21:24 ` [PATCH V3 0/4] sched/numa: Enhance vma scanning Andrew Morton
@ 2023-03-01 4:16 ` Raghavendra K T
2023-03-01 12:32 ` Raghavendra K T
From: Raghavendra K T @ 2023-03-01 4:16 UTC
To: Andrew Morton
Cc: linux-kernel, linux-mm, Ingo Molnar, Peter Zijlstra, Mel Gorman,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja
On 3/1/2023 2:54 AM, Andrew Morton wrote:
> On Tue, 28 Feb 2023 10:20:18 +0530 Raghavendra K T <raghavendra.kt@amd.com> wrote:
>
>> The patchset proposes one of the enhancements to numa vma scanning
>> suggested by Mel. This is a continuation of [3].
>>
>> ...
>>
>> include/linux/mm.h | 30 +++++++++++++++++++++
>> include/linux/mm_types.h | 9 +++++++
>> kernel/fork.c | 2 ++
>> kernel/sched/fair.c | 57 ++++++++++++++++++++++++++++++++++++++++
>> mm/memory.c | 3 +++
>
> It's unclear (to me) which tree would normally carry these.
>
> But there are significant textual conflicts with the "Per-VMA locks"
> patchset, and there might be functional issues as well. So mm.git
> would be the better choice.
>
> Please can you redo and retest against tomorrow's mm-unstable branch
> (git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm)? Hopefully the
> sched developers can take a look and provide feedback.
>
Thank you, Andrew. Sure, will do that.
* Re: [PATCH V3 0/4] sched/numa: Enhance vma scanning
2023-03-01 4:16 ` Raghavendra K T
@ 2023-03-01 12:32 ` Raghavendra K T
From: Raghavendra K T @ 2023-03-01 12:32 UTC
To: Andrew Morton
Cc: linux-kernel, linux-mm, Ingo Molnar, Peter Zijlstra, Mel Gorman,
David Hildenbrand, rppt, Bharata B Rao, Disha Talreja
On 3/1/2023 9:46 AM, Raghavendra K T wrote:
> On 3/1/2023 2:54 AM, Andrew Morton wrote:
>> On Tue, 28 Feb 2023 10:20:18 +0530 Raghavendra K T
>> <raghavendra.kt@amd.com> wrote:
>>
>>> The patchset proposes one of the enhancements to numa vma scanning
>>> suggested by Mel. This is a continuation of [3].
>>>
>>> ...
>>>
>>> include/linux/mm.h | 30 +++++++++++++++++++++
>>> include/linux/mm_types.h | 9 +++++++
>>> kernel/fork.c | 2 ++
>>> kernel/sched/fair.c | 57 ++++++++++++++++++++++++++++++++++++++++
>>> mm/memory.c | 3 +++
>>
>> It's unclear (to me) which tree would normally carry these.
>>
>> But there are significant textual conflicts with the "Per-VMA locks"
>> patchset, and there might be functional issues as well. So mm.git
>> would be the better choice.
>>
>> Please can you redo and retest against tomorrow's mm-unstable branch
>> (git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm)? Hopefully the
>> sched developers can take a look and provide feedback.
>>
>
> Thank you Andrew. Sure will do that.
>
Thanks again. Sent the rebased patches.
Just to record the link, so that new discussion can happen in the new posting:
https://lore.kernel.org/lkml/cover.1677672277.git.raghavendra.kt@amd.com/T/#t