* [Patch v3 0/2] mm_slot: fix the usage of mm_slot_entry
@ 2025-09-24 0:48 Wei Yang
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
0 siblings, 2 replies; 16+ messages in thread
From: Wei Yang @ 2025-09-24 0:48 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Wei Yang
The usage of mm_slot_entry() in ksm/khugepaged is not correct. In case
mm_slot_lookup() returns a NULL slot, mm_slot_entry() should not be called.
To fix this:
Patch 1: check slot before continuing in ksm.c
Patch 2: remove the definition of khugepaged_mm_slot
v3:
fix a page fault (PF) because of slot change
fix uninitialized mm_slot
v2:
fix the error in the code instead of guarding by the compiler
v1:
add a BUILD_BUG_ON_MSG() to make sure slot is the first element
[1]: https://lkml.kernel.org/r/20250914000026.17986-1-richard.weiyang@gmail.com
[2]: https://lkml.kernel.org/r/20250919071244.17020-1-richard.weiyang@gmail.com
Wei Yang (2):
mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
mm/khugepaged: remove definition of struct khugepaged_mm_slot
mm/khugepaged.c | 58 ++++++++++++++++++-------------------------------
mm/ksm.c | 22 ++++++++++---------
2 files changed, 33 insertions(+), 47 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 0:48 [Patch v3 0/2] mm_slot: fix the usage of mm_slot_entry Wei Yang
@ 2025-09-24 0:48 ` Wei Yang
2025-09-24 2:19 ` Chengming Zhou
` (2 more replies)
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
1 sibling, 3 replies; 16+ messages in thread
From: Wei Yang @ 2025-09-24 0:48 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Wei Yang, Kiryl Shutsemau, Dan Carpenter
Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
When using mm_slot in ksm, there is code like:
slot = mm_slot_lookup(mm_slots_hash, mm);
mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
if (mm_slot && ..) {
}
The mm_slot_entry() won't return a valid value if slot is NULL generally.
But currently it works since slot is the first element of struct
ksm_mm_slot.
To reduce the ambiguity and make it robust, access mm_slot_entry() when
slot is !NULL.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
---
v3:
* fix uninitialized mm_slot
---
mm/ksm.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 2dbe92e3dd52..c00a21800067 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
void __ksm_exit(struct mm_struct *mm)
{
- struct ksm_mm_slot *mm_slot;
+ struct ksm_mm_slot *mm_slot = NULL;
struct mm_slot *slot;
int easy_to_free = 0;
@@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
spin_lock(&ksm_mmlist_lock);
slot = mm_slot_lookup(mm_slots_hash, mm);
- mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
- if (mm_slot && ksm_scan.mm_slot != mm_slot) {
- if (!mm_slot->rmap_list) {
- hash_del(&slot->hash);
- list_del(&slot->mm_node);
- easy_to_free = 1;
- } else {
- list_move(&slot->mm_node,
- &ksm_scan.mm_slot->slot.mm_node);
+ if (slot) {
+ mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
+ if (ksm_scan.mm_slot != mm_slot) {
+ if (!mm_slot->rmap_list) {
+ hash_del(&slot->hash);
+ list_del(&slot->mm_node);
+ easy_to_free = 1;
+ } else {
+ list_move(&slot->mm_node,
+ &ksm_scan.mm_slot->slot.mm_node);
+ }
}
}
spin_unlock(&ksm_mmlist_lock);
--
2.34.1
* [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
2025-09-24 0:48 [Patch v3 0/2] mm_slot: fix the usage of mm_slot_entry Wei Yang
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
@ 2025-09-24 0:48 ` Wei Yang
2025-09-24 3:18 ` Lance Yang
` (2 more replies)
1 sibling, 3 replies; 16+ messages in thread
From: Wei Yang @ 2025-09-24 0:48 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Wei Yang, Kiryl Shutsemau, SeongJae Park
Current code is not correct to get struct khugepaged_mm_slot by
mm_slot_entry() without checking mm_slot is !NULL. There is no problem
reported since slot is the first element of struct khugepaged_mm_slot.
Since struct khugepaged_mm_slot is just a wrapper around struct mm_slot,
there is no need to define it.
Remove the definition of struct khugepaged_mm_slot, so there is no chance
to misuse mm_slot_entry().
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: SeongJae Park <sj@kernel.org>
Cc: Nico Pache <npache@redhat.com>
---
v3:
* fix a PF reported by SeongJae, where slot was changed to the next one
---
mm/khugepaged.c | 58 ++++++++++++++++++-------------------------------
1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 204ce3059267..e3f7d1760567 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -103,14 +103,6 @@ struct collapse_control {
nodemask_t alloc_nmask;
};
-/**
- * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned
- * @slot: hash lookup from mm to mm_slot
- */
-struct khugepaged_mm_slot {
- struct mm_slot slot;
-};
-
/**
* struct khugepaged_scan - cursor for scanning
* @mm_head: the head of the mm list to scan
@@ -121,7 +113,7 @@ struct khugepaged_mm_slot {
*/
struct khugepaged_scan {
struct list_head mm_head;
- struct khugepaged_mm_slot *mm_slot;
+ struct mm_slot *mm_slot;
unsigned long address;
};
@@ -384,7 +376,10 @@ int hugepage_madvise(struct vm_area_struct *vma,
int __init khugepaged_init(void)
{
- mm_slot_cache = KMEM_CACHE(khugepaged_mm_slot, 0);
+ mm_slot_cache = kmem_cache_create("khugepaged_mm_slot",
+ sizeof(struct mm_slot),
+ __alignof__(struct mm_slot),
+ 0, NULL);
if (!mm_slot_cache)
return -ENOMEM;
@@ -438,7 +433,6 @@ static bool hugepage_pmd_enabled(void)
void __khugepaged_enter(struct mm_struct *mm)
{
- struct khugepaged_mm_slot *mm_slot;
struct mm_slot *slot;
int wakeup;
@@ -447,12 +441,10 @@ void __khugepaged_enter(struct mm_struct *mm)
if (unlikely(mm_flags_test_and_set(MMF_VM_HUGEPAGE, mm)))
return;
- mm_slot = mm_slot_alloc(mm_slot_cache);
- if (!mm_slot)
+ slot = mm_slot_alloc(mm_slot_cache);
+ if (!slot)
return;
- slot = &mm_slot->slot;
-
spin_lock(&khugepaged_mm_lock);
mm_slot_insert(mm_slots_hash, mm, slot);
/*
@@ -480,14 +472,12 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
void __khugepaged_exit(struct mm_struct *mm)
{
- struct khugepaged_mm_slot *mm_slot;
struct mm_slot *slot;
int free = 0;
spin_lock(&khugepaged_mm_lock);
slot = mm_slot_lookup(mm_slots_hash, mm);
- mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
- if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
+ if (slot && khugepaged_scan.mm_slot != slot) {
hash_del(&slot->hash);
list_del(&slot->mm_node);
free = 1;
@@ -496,9 +486,9 @@ void __khugepaged_exit(struct mm_struct *mm)
if (free) {
mm_flags_clear(MMF_VM_HUGEPAGE, mm);
- mm_slot_free(mm_slot_cache, mm_slot);
+ mm_slot_free(mm_slot_cache, slot);
mmdrop(mm);
- } else if (mm_slot) {
+ } else if (slot) {
/*
* This is required to serialize against
* hpage_collapse_test_exit() (which is guaranteed to run
@@ -1432,9 +1422,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
return result;
}
-static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
+static void collect_mm_slot(struct mm_slot *slot)
{
- struct mm_slot *slot = &mm_slot->slot;
struct mm_struct *mm = slot->mm;
lockdep_assert_held(&khugepaged_mm_lock);
@@ -1451,7 +1440,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
*/
/* khugepaged_mm_lock actually not necessary for the below */
- mm_slot_free(mm_slot_cache, mm_slot);
+ mm_slot_free(mm_slot_cache, slot);
mmdrop(mm);
}
}
@@ -2394,7 +2383,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
__acquires(&khugepaged_mm_lock)
{
struct vma_iterator vmi;
- struct khugepaged_mm_slot *mm_slot;
struct mm_slot *slot;
struct mm_struct *mm;
struct vm_area_struct *vma;
@@ -2405,14 +2393,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
*result = SCAN_FAIL;
if (khugepaged_scan.mm_slot) {
- mm_slot = khugepaged_scan.mm_slot;
- slot = &mm_slot->slot;
+ slot = khugepaged_scan.mm_slot;
} else {
slot = list_first_entry(&khugepaged_scan.mm_head,
struct mm_slot, mm_node);
- mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
khugepaged_scan.address = 0;
- khugepaged_scan.mm_slot = mm_slot;
+ khugepaged_scan.mm_slot = slot;
}
spin_unlock(&khugepaged_mm_lock);
@@ -2510,7 +2496,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
breakouterloop_mmap_lock:
spin_lock(&khugepaged_mm_lock);
- VM_BUG_ON(khugepaged_scan.mm_slot != mm_slot);
+ VM_BUG_ON(khugepaged_scan.mm_slot != slot);
/*
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
@@ -2522,16 +2508,14 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
* mm_slot not pointing to the exiting mm.
*/
if (!list_is_last(&slot->mm_node, &khugepaged_scan.mm_head)) {
- slot = list_next_entry(slot, mm_node);
- khugepaged_scan.mm_slot =
- mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
+ khugepaged_scan.mm_slot = list_next_entry(slot, mm_node);
khugepaged_scan.address = 0;
} else {
khugepaged_scan.mm_slot = NULL;
khugepaged_full_scans++;
}
- collect_mm_slot(mm_slot);
+ collect_mm_slot(slot);
}
return progress;
@@ -2618,7 +2602,7 @@ static void khugepaged_wait_work(void)
static int khugepaged(void *none)
{
- struct khugepaged_mm_slot *mm_slot;
+ struct mm_slot *slot;
set_freezable();
set_user_nice(current, MAX_NICE);
@@ -2629,10 +2613,10 @@ static int khugepaged(void *none)
}
spin_lock(&khugepaged_mm_lock);
- mm_slot = khugepaged_scan.mm_slot;
+ slot = khugepaged_scan.mm_slot;
khugepaged_scan.mm_slot = NULL;
- if (mm_slot)
- collect_mm_slot(mm_slot);
+ if (slot)
+ collect_mm_slot(slot);
spin_unlock(&khugepaged_mm_lock);
return 0;
}
--
2.34.1
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
@ 2025-09-24 2:19 ` Chengming Zhou
2025-09-24 9:35 ` Kiryl Shutsemau
2025-09-24 9:35 ` David Hildenbrand
2 siblings, 0 replies; 16+ messages in thread
From: Chengming Zhou @ 2025-09-24 2:19 UTC (permalink / raw)
To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
xu.xin16
Cc: linux-mm, Kiryl Shutsemau, Dan Carpenter
On 2025/9/24 08:48, Wei Yang wrote:
> Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
>
> When using mm_slot in ksm, there is code like:
>
> slot = mm_slot_lookup(mm_slots_hash, mm);
> mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> if (mm_slot && ..) {
> }
>
> The mm_slot_entry() won't return a valid value if slot is NULL generally.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, access mm_slot_entry() when
> slot is !NULL.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Dev Jain <dev.jain@arm.com>
> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Thanks.
>
> ---
> v3:
> * fix uninitialized mm_slot
> ---
> mm/ksm.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 2dbe92e3dd52..c00a21800067 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
>
> void __ksm_exit(struct mm_struct *mm)
> {
> - struct ksm_mm_slot *mm_slot;
> + struct ksm_mm_slot *mm_slot = NULL;
> struct mm_slot *slot;
> int easy_to_free = 0;
>
> @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
>
> spin_lock(&ksm_mmlist_lock);
> slot = mm_slot_lookup(mm_slots_hash, mm);
> - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> - if (!mm_slot->rmap_list) {
> - hash_del(&slot->hash);
> - list_del(&slot->mm_node);
> - easy_to_free = 1;
> - } else {
> - list_move(&slot->mm_node,
> - &ksm_scan.mm_slot->slot.mm_node);
> + if (slot) {
> + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> + if (ksm_scan.mm_slot != mm_slot) {
> + if (!mm_slot->rmap_list) {
> + hash_del(&slot->hash);
> + list_del(&slot->mm_node);
> + easy_to_free = 1;
> + } else {
> + list_move(&slot->mm_node,
> + &ksm_scan.mm_slot->slot.mm_node);
> + }
> }
> }
> spin_unlock(&ksm_mmlist_lock);
* Re: [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
@ 2025-09-24 3:18 ` Lance Yang
2025-09-24 5:51 ` Dev Jain
2025-09-24 9:39 ` David Hildenbrand
2 siblings, 0 replies; 16+ messages in thread
From: Lance Yang @ 2025-09-24 3:18 UTC (permalink / raw)
To: Wei Yang
Cc: linux-mm, Kiryl Shutsemau, SeongJae Park, dev.jain, akpm, baohua,
ryan.roberts, david, xu.xin16, chengming.zhou, npache,
Liam.Howlett, lorenzo.stoakes, baolin.wang, ziy
On 2025/9/24 08:48, Wei Yang wrote:
> Current code is not correct to get struct khugepaged_mm_slot by
> mm_slot_entry() without checking mm_slot is !NULL. There is no problem
> reported since slot is the first element of struct khugepaged_mm_slot.
>
> While struct khugepaged_mm_slot is just a wrapper of struct mm_slot, there
> is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no chance
> to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Nico Pache <npache@redhat.com>
LGTM.
Acked-by: Lance Yang <lance.yang@linux.dev>
[..]
* Re: [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
2025-09-24 3:18 ` Lance Yang
@ 2025-09-24 5:51 ` Dev Jain
2025-09-24 9:39 ` David Hildenbrand
2 siblings, 0 replies; 16+ messages in thread
From: Dev Jain @ 2025-09-24 5:51 UTC (permalink / raw)
To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
Liam.Howlett, npache, ryan.roberts, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Kiryl Shutsemau, SeongJae Park
On 24/09/25 6:18 am, Wei Yang wrote:
> Current code is not correct to get struct khugepaged_mm_slot by
> mm_slot_entry() without checking mm_slot is !NULL. There is no problem
> reported since slot is the first element of struct khugepaged_mm_slot.
>
> While struct khugepaged_mm_slot is just a wrapper of struct mm_slot, there
> is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no chance
> to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Nico Pache <npache@redhat.com>
>
> ---
Reviewed-by: Dev Jain <dev.jain@arm.com>
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
2025-09-24 2:19 ` Chengming Zhou
@ 2025-09-24 9:35 ` Kiryl Shutsemau
2025-09-24 9:40 ` David Hildenbrand
2025-09-24 9:35 ` David Hildenbrand
2 siblings, 1 reply; 16+ messages in thread
From: Kiryl Shutsemau @ 2025-09-24 9:35 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
On Wed, Sep 24, 2025 at 12:48:53AM +0000, Wei Yang wrote:
> Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
>
> When using mm_slot in ksm, there is code like:
>
> slot = mm_slot_lookup(mm_slots_hash, mm);
> mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> if (mm_slot && ..) {
> }
>
> The mm_slot_entry() won't return a valid value if slot is NULL generally.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, access mm_slot_entry() when
> slot is !NULL.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Dev Jain <dev.jain@arm.com>
> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: Dan Carpenter <dan.carpenter@linaro.org>
>
> ---
> v3:
> * fix uninitialized mm_slot
> ---
> mm/ksm.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 2dbe92e3dd52..c00a21800067 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
>
> void __ksm_exit(struct mm_struct *mm)
> {
> - struct ksm_mm_slot *mm_slot;
> + struct ksm_mm_slot *mm_slot = NULL;
> struct mm_slot *slot;
> int easy_to_free = 0;
>
> @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
>
> spin_lock(&ksm_mmlist_lock);
> slot = mm_slot_lookup(mm_slots_hash, mm);
> - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> - if (!mm_slot->rmap_list) {
> - hash_del(&slot->hash);
> - list_del(&slot->mm_node);
> - easy_to_free = 1;
> - } else {
> - list_move(&slot->mm_node,
> - &ksm_scan.mm_slot->slot.mm_node);
> + if (slot) {
> + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> + if (ksm_scan.mm_slot != mm_slot) {
> + if (!mm_slot->rmap_list) {
> + hash_del(&slot->hash);
> + list_del(&slot->mm_node);
> + easy_to_free = 1;
> + } else {
> + list_move(&slot->mm_node,
> + &ksm_scan.mm_slot->slot.mm_node);
> + }
Indent level gets extreme.
Any reason not to fold the slot check inside mm_slot_entry(), as I suggested
on an earlier version?
> }
> }
> spin_unlock(&ksm_mmlist_lock);
> --
> 2.34.1
>
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
2025-09-24 2:19 ` Chengming Zhou
2025-09-24 9:35 ` Kiryl Shutsemau
@ 2025-09-24 9:35 ` David Hildenbrand
2 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2025-09-24 9:35 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Kiryl Shutsemau, Dan Carpenter
On 24.09.25 02:48, Wei Yang wrote:
> Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
>
That should be dropped.
Subject: "mm/ksm: don't call mm_slot_entry() when the slot is NULL"
> When using mm_slot in ksm, there is code like:
>
> slot = mm_slot_lookup(mm_slots_hash, mm);
> mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> if (mm_slot && ..) {
> }
>
> The mm_slot_entry() won't return a valid value if slot is NULL generally.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, access mm_slot_entry() when
> slot is !NULL.
"... only call mm_slot_entry() when we have a valid slot."
--
Cheers
David / dhildenb
* Re: [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
2025-09-24 3:18 ` Lance Yang
2025-09-24 5:51 ` Dev Jain
@ 2025-09-24 9:39 ` David Hildenbrand
2025-09-24 14:59 ` Wei Yang
2 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-09-24 9:39 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, Kiryl Shutsemau, SeongJae Park
On 24.09.25 02:48, Wei Yang wrote:
> Current code is not correct to get struct khugepaged_mm_slot by
> mm_slot_entry() without checking mm_slot is !NULL.
"Current code calls mm_slot_entry() even when we don't have a valid
slot, which is not future proof."
There is no problem
> reported since slot is the first element of struct khugepaged_mm_slot.
"Currently, this is not a problem because "slot" is the first member in
struct khugepaged_mm_slot."
>
> While struct khugepaged_mm_slot is just a wrapper of struct mm_slot, there
> is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no chance
> to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Nico Pache <npache@redhat.com>
>
> ---
> v3:
> * fix a PF reported by SeongJae, where slot is changed to next one
> ---
> mm/khugepaged.c | 58 ++++++++++++++++++-------------------------------
> 1 file changed, 21 insertions(+), 37 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 204ce3059267..e3f7d1760567 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -103,14 +103,6 @@ struct collapse_control {
> nodemask_t alloc_nmask;
> };
>
> -/**
> - * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned
> - * @slot: hash lookup from mm to mm_slot
> - */
> -struct khugepaged_mm_slot {
> - struct mm_slot slot;
> -};
> -
> /**
> * struct khugepaged_scan - cursor for scanning
> * @mm_head: the head of the mm list to scan
> @@ -121,7 +113,7 @@ struct khugepaged_mm_slot {
> */
> struct khugepaged_scan {
> struct list_head mm_head;
> - struct khugepaged_mm_slot *mm_slot;
> + struct mm_slot *mm_slot;
> unsigned long address;
> };
>
> @@ -384,7 +376,10 @@ int hugepage_madvise(struct vm_area_struct *vma,
>
> int __init khugepaged_init(void)
> {
> - mm_slot_cache = KMEM_CACHE(khugepaged_mm_slot, 0);
> + mm_slot_cache = kmem_cache_create("khugepaged_mm_slot",
> + sizeof(struct mm_slot),
> + __alignof__(struct mm_slot),
> + 0, NULL);
Just wondering: do we really have to maintain the old name? Could we
instead just have
mm_slot_cache = KMEM_CACHE(mm_slot, 0);
?
--
Cheers
David / dhildenb
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 9:35 ` Kiryl Shutsemau
@ 2025-09-24 9:40 ` David Hildenbrand
2025-09-24 10:06 ` Kiryl Shutsemau
0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-09-24 9:40 UTC (permalink / raw)
To: Kiryl Shutsemau, Wei Yang
Cc: akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
On 24.09.25 11:35, Kiryl Shutsemau wrote:
> On Wed, Sep 24, 2025 at 12:48:53AM +0000, Wei Yang wrote:
>> Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
>>
>> When using mm_slot in ksm, there is code like:
>>
>> slot = mm_slot_lookup(mm_slots_hash, mm);
>> mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>> if (mm_slot && ..) {
>> }
>>
>> The mm_slot_entry() won't return a valid value if slot is NULL generally.
>> But currently it works since slot is the first element of struct
>> ksm_mm_slot.
>>
>> To reduce the ambiguity and make it robust, access mm_slot_entry() when
>> slot is !NULL.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> Reviewed-by: Dev Jain <dev.jain@arm.com>
>> Reviewed-by: Lance Yang <lance.yang@linux.dev>
>> Cc: Kiryl Shutsemau <kirill@shutemov.name>
>> Cc: xu xin <xu.xin16@zte.com.cn>
>> Cc: Dan Carpenter <dan.carpenter@linaro.org>
>>
>> ---
>> v3:
>> * fix uninitialized mm_slot
>> ---
>> mm/ksm.c | 22 ++++++++++++----------
>> 1 file changed, 12 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index 2dbe92e3dd52..c00a21800067 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
>>
>> void __ksm_exit(struct mm_struct *mm)
>> {
>> - struct ksm_mm_slot *mm_slot;
>> + struct ksm_mm_slot *mm_slot = NULL;
>> struct mm_slot *slot;
>> int easy_to_free = 0;
>>
>> @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
>>
>> spin_lock(&ksm_mmlist_lock);
>> slot = mm_slot_lookup(mm_slots_hash, mm);
>> - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>> - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>> - if (!mm_slot->rmap_list) {
>> - hash_del(&slot->hash);
>> - list_del(&slot->mm_node);
>> - easy_to_free = 1;
>> - } else {
>> - list_move(&slot->mm_node,
>> - &ksm_scan.mm_slot->slot.mm_node);
>> + if (slot) {
>> + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>> + if (ksm_scan.mm_slot != mm_slot) {
>> + if (!mm_slot->rmap_list) {
>> + hash_del(&slot->hash);
>> + list_del(&slot->mm_node);
>> + easy_to_free = 1;
>> + } else {
>> + list_move(&slot->mm_node,
>> + &ksm_scan.mm_slot->slot.mm_node);
>> + }
>
> Indent level gets extreme.
You call this extreme? :)
--
Cheers
David / dhildenb
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 9:40 ` David Hildenbrand
@ 2025-09-24 10:06 ` Kiryl Shutsemau
2025-09-24 10:09 ` David Hildenbrand
0 siblings, 1 reply; 16+ messages in thread
From: Kiryl Shutsemau @ 2025-09-24 10:06 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
On Wed, Sep 24, 2025 at 11:40:21AM +0200, David Hildenbrand wrote:
> On 24.09.25 11:35, Kiryl Shutsemau wrote:
> > On Wed, Sep 24, 2025 at 12:48:53AM +0000, Wei Yang wrote:
> > > Patch series "mm_slot: fix the usage of mm_slot_entry", v2.
> > >
> > > When using mm_slot in ksm, there is code like:
> > >
> > > slot = mm_slot_lookup(mm_slots_hash, mm);
> > > mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> > > if (mm_slot && ..) {
> > > }
> > >
> > > The mm_slot_entry() won't return a valid value if slot is NULL generally.
> > > But currently it works since slot is the first element of struct
> > > ksm_mm_slot.
> > >
> > > To reduce the ambiguity and make it robust, access mm_slot_entry() when
> > > slot is !NULL.
> > >
> > > Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> > > Acked-by: David Hildenbrand <david@redhat.com>
> > > Reviewed-by: Dev Jain <dev.jain@arm.com>
> > > Reviewed-by: Lance Yang <lance.yang@linux.dev>
> > > Cc: Kiryl Shutsemau <kirill@shutemov.name>
> > > Cc: xu xin <xu.xin16@zte.com.cn>
> > > Cc: Dan Carpenter <dan.carpenter@linaro.org>
> > >
> > > ---
> > > v3:
> > > * fix uninitialized mm_slot
> > > ---
> > > mm/ksm.c | 22 ++++++++++++----------
> > > 1 file changed, 12 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/mm/ksm.c b/mm/ksm.c
> > > index 2dbe92e3dd52..c00a21800067 100644
> > > --- a/mm/ksm.c
> > > +++ b/mm/ksm.c
> > > @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
> > > void __ksm_exit(struct mm_struct *mm)
> > > {
> > > - struct ksm_mm_slot *mm_slot;
> > > + struct ksm_mm_slot *mm_slot = NULL;
> > > struct mm_slot *slot;
> > > int easy_to_free = 0;
> > > @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
> > > spin_lock(&ksm_mmlist_lock);
> > > slot = mm_slot_lookup(mm_slots_hash, mm);
> > > - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> > > - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> > > - if (!mm_slot->rmap_list) {
> > > - hash_del(&slot->hash);
> > > - list_del(&slot->mm_node);
> > > - easy_to_free = 1;
> > > - } else {
> > > - list_move(&slot->mm_node,
> > > - &ksm_scan.mm_slot->slot.mm_node);
> > > + if (slot) {
> > > + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> > > + if (ksm_scan.mm_slot != mm_slot) {
> > > + if (!mm_slot->rmap_list) {
> > > + hash_del(&slot->hash);
> > > + list_del(&slot->mm_node);
> > > + easy_to_free = 1;
> > > + } else {
> > > + list_move(&slot->mm_node,
> > > + &ksm_scan.mm_slot->slot.mm_node);
> > > + }
> >
> > Indent level gets extreme.
>
> You call this extreme? :)
Emphasis on "gets". :)
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 10:06 ` Kiryl Shutsemau
@ 2025-09-24 10:09 ` David Hildenbrand
2025-09-24 10:15 ` Kiryl Shutsemau
0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-09-24 10:09 UTC (permalink / raw)
To: Kiryl Shutsemau
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
>>>> mm/ksm.c | 22 ++++++++++++----------
>>>> 1 file changed, 12 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>>> index 2dbe92e3dd52..c00a21800067 100644
>>>> --- a/mm/ksm.c
>>>> +++ b/mm/ksm.c
>>>> @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
>>>> void __ksm_exit(struct mm_struct *mm)
>>>> {
>>>> - struct ksm_mm_slot *mm_slot;
>>>> + struct ksm_mm_slot *mm_slot = NULL;
>>>> struct mm_slot *slot;
>>>> int easy_to_free = 0;
>>>> @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
>>>> spin_lock(&ksm_mmlist_lock);
>>>> slot = mm_slot_lookup(mm_slots_hash, mm);
>>>> - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>>>> - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>>>> - if (!mm_slot->rmap_list) {
>>>> - hash_del(&slot->hash);
>>>> - list_del(&slot->mm_node);
>>>> - easy_to_free = 1;
>>>> - } else {
>>>> - list_move(&slot->mm_node,
>>>> - &ksm_scan.mm_slot->slot.mm_node);
>>>> + if (slot) {
>>>> + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>>>> + if (ksm_scan.mm_slot != mm_slot) {
>>>> + if (!mm_slot->rmap_list) {
>>>> + hash_del(&slot->hash);
>>>> + list_del(&slot->mm_node);
>>>> + easy_to_free = 1;
>>>> + } else {
>>>> + list_move(&slot->mm_node,
>>>> + &ksm_scan.mm_slot->slot.mm_node);
>>>> + }
>>>
>>> Indent level gets extreme.
>>
>> You call this extreme? :)
>
> Emphasis on "gets". :)
:)
I would prefer to not make it okay to call mm_slot_entry() with a NULL
pointer.
--
Cheers
David / dhildenb
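David's concern can be made concrete with a userspace sketch: a `container_of()`-style macro applied to a NULL pointer only happens to yield NULL when the embedded member sits at offset zero. The names below are illustrative stand-ins, not the kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the structures under discussion. */
struct mm_slot { int hash; };

struct ksm_mm_slot_v1 {            /* slot is the FIRST member (offset 0) */
	struct mm_slot slot;
	void *rmap_list;
};

struct ksm_mm_slot_v2 {            /* slot is NOT the first member */
	void *rmap_list;
	struct mm_slot slot;
};

/* container_of() as mm_slot_entry() uses it: no NULL check of its own. */
#define entry_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
```

With offset 0 the arithmetic is a no-op and the bogus result compares equal to NULL, which is why the current code "works"; with any other layout the same call produces a garbage non-NULL pointer, which is why relying on it is not future proof.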
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 10:09 ` David Hildenbrand
@ 2025-09-24 10:15 ` Kiryl Shutsemau
2025-09-24 10:42 ` David Hildenbrand
0 siblings, 1 reply; 16+ messages in thread
From: Kiryl Shutsemau @ 2025-09-24 10:15 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
On Wed, Sep 24, 2025 at 12:09:03PM +0200, David Hildenbrand wrote:
> > > > > mm/ksm.c | 22 ++++++++++++----------
> > > > > 1 file changed, 12 insertions(+), 10 deletions(-)
> > > > >
> > > > > diff --git a/mm/ksm.c b/mm/ksm.c
> > > > > index 2dbe92e3dd52..c00a21800067 100644
> > > > > --- a/mm/ksm.c
> > > > > +++ b/mm/ksm.c
> > > > > @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
> > > > > void __ksm_exit(struct mm_struct *mm)
> > > > > {
> > > > > - struct ksm_mm_slot *mm_slot;
> > > > > + struct ksm_mm_slot *mm_slot = NULL;
> > > > > struct mm_slot *slot;
> > > > > int easy_to_free = 0;
> > > > > @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
> > > > > spin_lock(&ksm_mmlist_lock);
> > > > > slot = mm_slot_lookup(mm_slots_hash, mm);
> > > > > - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> > > > > - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> > > > > - if (!mm_slot->rmap_list) {
> > > > > - hash_del(&slot->hash);
> > > > > - list_del(&slot->mm_node);
> > > > > - easy_to_free = 1;
> > > > > - } else {
> > > > > - list_move(&slot->mm_node,
> > > > > - &ksm_scan.mm_slot->slot.mm_node);
> > > > > + if (slot) {
> > > > > + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
> > > > > + if (ksm_scan.mm_slot != mm_slot) {
> > > > > + if (!mm_slot->rmap_list) {
> > > > > + hash_del(&slot->hash);
> > > > > + list_del(&slot->mm_node);
> > > > > + easy_to_free = 1;
> > > > > + } else {
> > > > > + list_move(&slot->mm_node,
> > > > > + &ksm_scan.mm_slot->slot.mm_node);
> > > > > + }
> > > >
> > > > Indent level gets extreme.
> > >
> > > You call this extreme? :)
> >
> > Emphasis on "gets". :)
>
> :)
>
> I would prefer to not make it okay to call mm_slot_entry() with a NULL
> pointer.
Maybe invert the checks then?
if (!slot)
goto unlock;
mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
if (ksm_scan.mm_slot == mm_slot)
goto unlock;
if (!mm_slot->rmap_list) {
...
--
Kiryl Shutsemau / Kirill A. Shutemov
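Kiryl's suggestion is the classic guard-clause (early-exit) pattern. A self-contained sketch of the resulting control flow, with mock types and the locking elided, might look like this; `exit_slot()` and `scan_cursor` are stand-ins, not the kernel code:

```c
#include <assert.h>
#include <stddef.h>

struct mm_slot { int on_list; };
struct ksm_mm_slot { struct mm_slot slot; void *rmap_list; };

static struct ksm_mm_slot scan_cursor;  /* stands in for ksm_scan.mm_slot */

/*
 * Returns 1 when the slot can be freed right away (the "easy_to_free"
 * path in __ksm_exit()), 0 otherwise.  The inverted checks keep the
 * main logic at a single indent level and, crucially, the entry()
 * conversion is only ever reached with a non-NULL slot.
 */
static int exit_slot(struct mm_slot *slot)
{
	struct ksm_mm_slot *mm_slot;
	int easy_to_free = 0;

	if (!slot)
		goto unlock;
	mm_slot = (struct ksm_mm_slot *)slot;  /* slot is the first member */
	if (mm_slot == &scan_cursor)
		goto unlock;
	if (!mm_slot->rmap_list) {
		slot->on_list = 0;             /* hash_del()/list_del() stand-in */
		easy_to_free = 1;
	}
unlock:
	return easy_to_free;
}
```

The `goto unlock` labels mirror the kernel idiom where the spinlock must be dropped on every path; in this mock there is no lock, so the label only marks the common exit.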
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 10:15 ` Kiryl Shutsemau
@ 2025-09-24 10:42 ` David Hildenbrand
2025-09-24 14:52 ` Wei Yang
0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-09-24 10:42 UTC (permalink / raw)
To: Kiryl Shutsemau
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Dan Carpenter
On 24.09.25 12:15, Kiryl Shutsemau wrote:
> On Wed, Sep 24, 2025 at 12:09:03PM +0200, David Hildenbrand wrote:
>>>>>> mm/ksm.c | 22 ++++++++++++----------
>>>>>> 1 file changed, 12 insertions(+), 10 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>>>>> index 2dbe92e3dd52..c00a21800067 100644
>>>>>> --- a/mm/ksm.c
>>>>>> +++ b/mm/ksm.c
>>>>>> @@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
>>>>>> void __ksm_exit(struct mm_struct *mm)
>>>>>> {
>>>>>> - struct ksm_mm_slot *mm_slot;
>>>>>> + struct ksm_mm_slot *mm_slot = NULL;
>>>>>> struct mm_slot *slot;
>>>>>> int easy_to_free = 0;
>>>>>> @@ -2936,15 +2936,17 @@ void __ksm_exit(struct mm_struct *mm)
>>>>>> spin_lock(&ksm_mmlist_lock);
>>>>>> slot = mm_slot_lookup(mm_slots_hash, mm);
>>>>>> - mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>>>>>> - if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>>>>>> - if (!mm_slot->rmap_list) {
>>>>>> - hash_del(&slot->hash);
>>>>>> - list_del(&slot->mm_node);
>>>>>> - easy_to_free = 1;
>>>>>> - } else {
>>>>>> - list_move(&slot->mm_node,
>>>>>> - &ksm_scan.mm_slot->slot.mm_node);
>>>>>> + if (slot) {
>>>>>> + mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>>>>>> + if (ksm_scan.mm_slot != mm_slot) {
>>>>>> + if (!mm_slot->rmap_list) {
>>>>>> + hash_del(&slot->hash);
>>>>>> + list_del(&slot->mm_node);
>>>>>> + easy_to_free = 1;
>>>>>> + } else {
>>>>>> + list_move(&slot->mm_node,
>>>>>> + &ksm_scan.mm_slot->slot.mm_node);
>>>>>> + }
>>>>>
>>>>> Indent level gets extreme.
>>>>
>>>> You call this extreme? :)
>>>
>>> Emphasis on "gets". :)
>>
>> :)
>>
>> I would prefer to not make it okay to call mm_slot_entry() with a NULL
>> pointer.
>
> Maybe invert the checks then?
If that cleans it up, sure.
--
Cheers
David / dhildenb
* Re: [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
2025-09-24 10:42 ` David Hildenbrand
@ 2025-09-24 14:52 ` Wei Yang
0 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2025-09-24 14:52 UTC (permalink / raw)
To: David Hildenbrand
Cc: Kiryl Shutsemau, Wei Yang, akpm, lorenzo.stoakes, ziy,
baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain,
baohua, lance.yang, xu.xin16, chengming.zhou, linux-mm,
Dan Carpenter
On Wed, Sep 24, 2025 at 12:42:36PM +0200, David Hildenbrand wrote:
[...]
>> > > > >
>> > > > > Indent level gets extreme.
>> > > >
>> > > > You call this extreme? :)
>> > >
>> > > Emphasis on "gets". :)
>> >
>> > :)
>> >
>> > I would prefer to not make it okay to call mm_slot_entry() with a NULL
>> > pointer.
>>
>> Maybe invert the checks then?
>
>If that cleans it up, sure.
>
Ok, I will send a new version with this style.
>--
>Cheers
>
>David / dhildenb
--
Wei Yang
Help you, Help me

* Re: [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
2025-09-24 9:39 ` David Hildenbrand
@ 2025-09-24 14:59 ` Wei Yang
0 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2025-09-24 14:59 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou, linux-mm, Kiryl Shutsemau, SeongJae Park
On Wed, Sep 24, 2025 at 11:39:44AM +0200, David Hildenbrand wrote:
>On 24.09.25 02:48, Wei Yang wrote:
>> Current code is not correct to get struct khugepaged_mm_slot by
>> mm_slot_entry() without checking mm_slot is !NULL.
>
>"Current code calls mm_slot_entry() even when we don't have a valid slot,
>which is not future proof."
>
>There is no problem
>> reported since slot is the first element of struct khugepaged_mm_slot.
>
>"Currently, this is not a problem because "slot" is the first member in
>struct khugepaged_mm_slot."
>
>>
>> While struct khugepaged_mm_slot is just a wrapper of struct mm_slot, there
>> is no need to define it.
>>
>> Remove the definition of struct khugepaged_mm_slot, so there is no chance
>> to misuse mm_slot_entry().
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Lance Yang <lance.yang@linux.dev>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Kiryl Shutsemau <kirill@shutemov.name>
>> Cc: xu xin <xu.xin16@zte.com.cn>
>> Cc: SeongJae Park <sj@kernel.org>
>> Cc: Nico Pache <npache@redhat.com>
>>
>> ---
>> v3:
>> * fix a PF reported by SeongJae, where slot is changed to next one
>> ---
>> mm/khugepaged.c | 58 ++++++++++++++++++-------------------------------
>> 1 file changed, 21 insertions(+), 37 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 204ce3059267..e3f7d1760567 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -103,14 +103,6 @@ struct collapse_control {
>> nodemask_t alloc_nmask;
>> };
>> -/**
>> - * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned
>> - * @slot: hash lookup from mm to mm_slot
>> - */
>> -struct khugepaged_mm_slot {
>> - struct mm_slot slot;
>> -};
>> -
>> /**
>> * struct khugepaged_scan - cursor for scanning
>> * @mm_head: the head of the mm list to scan
>> @@ -121,7 +113,7 @@ struct khugepaged_mm_slot {
>> */
>> struct khugepaged_scan {
>> struct list_head mm_head;
>> - struct khugepaged_mm_slot *mm_slot;
>> + struct mm_slot *mm_slot;
>> unsigned long address;
>> };
>> @@ -384,7 +376,10 @@ int hugepage_madvise(struct vm_area_struct *vma,
>> int __init khugepaged_init(void)
>> {
>> - mm_slot_cache = KMEM_CACHE(khugepaged_mm_slot, 0);
>> + mm_slot_cache = kmem_cache_create("khugepaged_mm_slot",
>> + sizeof(struct mm_slot),
>> + __alignof__(struct mm_slot),
>> + 0, NULL);
>
>
>Just wondering: do we really have to maintain the old name? Could we instead
>just have
>
>mm_slot_cache = KMEM_CACHE(mm_slot, 0);
>
Hmm... I am fine with this, if no other objections.
--
Wei Yang
Help you, Help me
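The rationale for dropping the wrapper can be checked mechanically: when `struct khugepaged_mm_slot` contains nothing but the embedded `struct mm_slot`, the two types have identical size, alignment, and member offset, so a cache sized for `struct mm_slot` is a drop-in replacement. The definitions below are illustrative mocks, not the kernel ones:

```c
#include <assert.h>
#include <stddef.h>
#include <stdalign.h>

/* Mock of the base type; the real one lives in mm/mm_slot.h. */
struct mm_slot {
	unsigned long hash;
	void *mm;
};

/* The wrapper patch 2 removes: a single embedded member, nothing else. */
struct khugepaged_mm_slot {
	struct mm_slot slot;
};
```

Because all three properties match, `kmem_cache_create()` over `sizeof(struct mm_slot)` (or, as David suggests, simply `KMEM_CACHE(mm_slot, 0)`) allocates objects with the exact layout the old wrapper cache produced.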
Thread overview: 16+ messages
2025-09-24 0:48 [Patch v3 0/2] mm_slot: fix the usage of mm_slot_entry Wei Yang
2025-09-24 0:48 ` [Patch v3 1/2] mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL Wei Yang
2025-09-24 2:19 ` Chengming Zhou
2025-09-24 9:35 ` Kiryl Shutsemau
2025-09-24 9:40 ` David Hildenbrand
2025-09-24 10:06 ` Kiryl Shutsemau
2025-09-24 10:09 ` David Hildenbrand
2025-09-24 10:15 ` Kiryl Shutsemau
2025-09-24 10:42 ` David Hildenbrand
2025-09-24 14:52 ` Wei Yang
2025-09-24 9:35 ` David Hildenbrand
2025-09-24 0:48 ` [Patch v3 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot Wei Yang
2025-09-24 3:18 ` Lance Yang
2025-09-24 5:51 ` Dev Jain
2025-09-24 9:39 ` David Hildenbrand
2025-09-24 14:59 ` Wei Yang