* [Patch v4 0/2] mm_slot: fix the usage of mm_slot_entry()
From: Wei Yang @ 2025-09-27  0:45 UTC
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Wei Yang

The usage of mm_slot_entry() in ksm/khugepaged is not correct: when
mm_slot_lookup() returns a NULL slot, mm_slot_entry() should not be
called on it.

To fix this:

  Patch 1: check the slot for NULL before using it in ksm.c
  Patch 2: remove the definition of struct khugepaged_mm_slot

v4:
  * adjust change log
  * use the inverted style in patch 1
  * rename the slab cache to "mm_slot"
v3:
  * fix a page fault caused by the slot change
  * fix uninitialized mm_slot
v2:
  * fix the error in the code instead of guarding it with the compiler
v1:
  * add a BUILD_BUG_ON_MSG() to make sure slot is the first element

[1]: https://lkml.kernel.org/r/20250914000026.17986-1-richard.weiyang@gmail.com
[2]: https://lkml.kernel.org/r/20250919071244.17020-1-richard.weiyang@gmail.com
[3]: https://lkml.kernel.org/r/20250924004854.29889-1-richard.weiyang@gmail.com

Wei Yang (2):
  mm/ksm: don't call mm_slot_entry() when the slot is NULL
  mm/khugepaged: remove definition of struct khugepaged_mm_slot

 mm/khugepaged.c | 55 ++++++++++++++++---------------------------------
 mm/ksm.c        | 23 ++++++++++++---------
 2 files changed, 31 insertions(+), 47 deletions(-)

-- 
2.34.1

* [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: Wei Yang @ 2025-09-27  0:45 UTC
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Wei Yang, Kiryl Shutsemau, Dan Carpenter

When using mm_slot in ksm, there is code like:

	slot = mm_slot_lookup(mm_slots_hash, mm);
	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
	if (mm_slot && ..) {
	}

Generally, mm_slot_entry() won't return a valid value if slot is NULL.
But currently it works since slot is the first element of struct
ksm_mm_slot.

To reduce the ambiguity and make it robust, only call mm_slot_entry()
when we have a valid slot.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Chengming Zhou <chengming.zhou@linux.dev>

---
v3:
  * adjust subject and changelog based on David's comment
  * use the inverted style suggested by Kiryl
  * drop the Reviewed-by and Acked-by tags
v2:
  * fix uninitialized mm_slot
---
 mm/ksm.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 2dbe92e3dd52..7bc726b50b2f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
 
 void __ksm_exit(struct mm_struct *mm)
 {
-	struct ksm_mm_slot *mm_slot;
+	struct ksm_mm_slot *mm_slot = NULL;
 	struct mm_slot *slot;
 	int easy_to_free = 0;
 
@@ -2936,17 +2936,20 @@ void __ksm_exit(struct mm_struct *mm)
 
 	spin_lock(&ksm_mmlist_lock);
 	slot = mm_slot_lookup(mm_slots_hash, mm);
+	if (!slot)
+		goto unlock;
 	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
-	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
-		if (!mm_slot->rmap_list) {
-			hash_del(&slot->hash);
-			list_del(&slot->mm_node);
-			easy_to_free = 1;
-		} else {
-			list_move(&slot->mm_node,
-				  &ksm_scan.mm_slot->slot.mm_node);
-		}
+	if (ksm_scan.mm_slot == mm_slot)
+		goto unlock;
+	if (!mm_slot->rmap_list) {
+		hash_del(&slot->hash);
+		list_del(&slot->mm_node);
+		easy_to_free = 1;
+	} else {
+		list_move(&slot->mm_node,
+			  &ksm_scan.mm_slot->slot.mm_node);
 	}
+unlock:
 	spin_unlock(&ksm_mmlist_lock);
 
 	if (easy_to_free) {
-- 
2.34.1

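Background on why the old pattern only appeared to work: mm_slot_entry()
is a container_of()-style helper (see mm/mm_slot.h), i.e. it subtracts
the member offset from the passed-in pointer. At offset 0, NULL maps
back to NULL and the "if (mm_slot && ...)" check still fires; at any
non-zero offset, NULL minus the offset is a bogus non-NULL pointer and
the check silently stops protecting anything. A minimal user-space
sketch (the wrapper struct layouts here are illustrative, not the
kernel's definitions):

    #include <stddef.h>
    #include <stdio.h>

    /* container_of()-style conversion, as mm_slot_entry() performs */
    #define mm_slot_entry(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct mm_slot { int hash; };

    struct wrapper_first { struct mm_slot slot; };            /* offset 0 */
    struct wrapper_later { long pad; struct mm_slot slot; };  /* offset > 0 */

    int main(void)
    {
            struct mm_slot *slot = NULL;    /* what a failed lookup returns */

            /* offset 0: the result is still NULL, a NULL check "works" */
            struct wrapper_first *a =
                    mm_slot_entry(slot, struct wrapper_first, slot);

            /*
             * non-zero offset: the result is a bogus non-NULL pointer
             * (strictly undefined behavior in C), so a later NULL check
             * no longer guards anything
             */
            struct wrapper_later *b =
                    mm_slot_entry(slot, struct wrapper_later, slot);

            printf("offset 0 -> %p, offset > 0 -> %p\n",
                   (void *)a, (void *)b);
            return 0;
    }

Checking the slot for NULL before calling mm_slot_entry(), as this patch
does, removes the hidden dependency on member order.
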
* Re: [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: Muhammad Usama Anjum @ 2025-09-27  8:39 UTC
  To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
	xu.xin16, chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, Dan Carpenter

On 9/27/25 5:45 AM, Wei Yang wrote:
> When using mm_slot in ksm, there is code like:
>
>	slot = mm_slot_lookup(mm_slots_hash, mm);
>	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>	if (mm_slot && ..) {
>	}
>
> Generally, mm_slot_entry() won't return a valid value if slot is NULL.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, only call mm_slot_entry()
> when we have a valid slot.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: Dan Carpenter <dan.carpenter@linaro.org>
> Cc: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>

> [...]

* Re: [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: Dev Jain @ 2025-09-28 13:53 UTC
  To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, Dan Carpenter

On 27/09/25 6:15 am, Wei Yang wrote:
> When using mm_slot in ksm, there is code like:
>
>	slot = mm_slot_lookup(mm_slots_hash, mm);
>	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>	if (mm_slot && ..) {
>	}
>
> Generally, mm_slot_entry() won't return a valid value if slot is NULL.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, only call mm_slot_entry()
> when we have a valid slot.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: Dan Carpenter <dan.carpenter@linaro.org>
> Cc: Chengming Zhou <chengming.zhou@linux.dev>
>
> ---
> v3:
>   * adjust subject and changelog based on David's comment
>   * use the inverted style suggested by Kiryl
>   * drop the Reviewed-by and Acked-by tags
> v2:
>   * fix uninitialized mm_slot
> ---

Reviewed-by: Dev Jain <dev.jain@arm.com>

* Re: [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: David Hildenbrand @ 2025-09-29  8:13 UTC
  To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, Dan Carpenter

On 27.09.25 02:45, Wei Yang wrote:
> When using mm_slot in ksm, there is code like:
>
>	slot = mm_slot_lookup(mm_slots_hash, mm);
>	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>	if (mm_slot && ..) {
>	}
>
> Generally, mm_slot_entry() won't return a valid value if slot is NULL.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, only call mm_slot_entry()
> when we have a valid slot.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> [...]

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb

* Re: [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: Kiryl Shutsemau @ 2025-09-29 10:58 UTC
  To: Wei Yang
  Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou, linux-mm, Dan Carpenter

On Sat, Sep 27, 2025 at 12:45:38AM +0000, Wei Yang wrote:
> When using mm_slot in ksm, there is code like:
>
>	slot = mm_slot_lookup(mm_slots_hash, mm);
>	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>	if (mm_slot && ..) {
>	}
>
> Generally, mm_slot_entry() won't return a valid value if slot is NULL.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, only call mm_slot_entry()
> when we have a valid slot.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> [...]

Acked-by: Kiryl Shutsemau <kas@kernel.org>

-- 
Kiryl Shutsemau / Kirill A. Shutemov

* Re: [Patch v4 1/2] mm/ksm: don't call mm_slot_entry() when the slot is NULL
From: Zi Yan @ 2025-09-29 15:14 UTC
  To: Wei Yang
  Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou, linux-mm, Kiryl Shutsemau, Dan Carpenter

On 26 Sep 2025, at 20:45, Wei Yang wrote:
> When using mm_slot in ksm, there is code like:
>
>	slot = mm_slot_lookup(mm_slots_hash, mm);
>	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
>	if (mm_slot && ..) {
>	}
>
> Generally, mm_slot_entry() won't return a valid value if slot is NULL.
> But currently it works since slot is the first element of struct
> ksm_mm_slot.
>
> To reduce the ambiguity and make it robust, only call mm_slot_entry()
> when we have a valid slot.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> [...]
>
>  mm/ksm.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi

* [Patch v4 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
From: Wei Yang @ 2025-09-27  0:45 UTC
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Wei Yang, Kiryl Shutsemau, SeongJae Park

Current code calls mm_slot_entry() even when we don't have a valid slot,
which is not future-proof. Currently, this is not a problem because
"slot" is the first member in struct khugepaged_mm_slot.

Since struct khugepaged_mm_slot is just a wrapper of struct mm_slot,
there is no need to define it.

Remove the definition of struct khugepaged_mm_slot, so there is no
chance to misuse mm_slot_entry().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: SeongJae Park <sj@kernel.org>
Cc: Nico Pache <npache@redhat.com>

---
v3:
  * adjust changelog
  * rename the slab cache to "mm_slot"
v2:
  * fix a page fault reported by SeongJae, where slot was advanced to
    the next one
---
 mm/khugepaged.c | 55 ++++++++++++++++---------------------------------
 1 file changed, 18 insertions(+), 37 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 204ce3059267..67540078083b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -103,14 +103,6 @@ struct collapse_control {
 	nodemask_t alloc_nmask;
 };
 
-/**
- * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned
- * @slot: hash lookup from mm to mm_slot
- */
-struct khugepaged_mm_slot {
-	struct mm_slot slot;
-};
-
 /**
  * struct khugepaged_scan - cursor for scanning
  * @mm_head: the head of the mm list to scan
@@ -121,7 +113,7 @@ struct khugepaged_mm_slot {
  */
 struct khugepaged_scan {
 	struct list_head mm_head;
-	struct khugepaged_mm_slot *mm_slot;
+	struct mm_slot *mm_slot;
 	unsigned long address;
 };
 
@@ -384,7 +376,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 
 int __init khugepaged_init(void)
 {
-	mm_slot_cache = KMEM_CACHE(khugepaged_mm_slot, 0);
+	mm_slot_cache = KMEM_CACHE(mm_slot, 0);
 	if (!mm_slot_cache)
 		return -ENOMEM;
 
@@ -438,7 +430,6 @@ static bool hugepage_pmd_enabled(void)
 
 void __khugepaged_enter(struct mm_struct *mm)
 {
-	struct khugepaged_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	int wakeup;
 
@@ -447,12 +438,10 @@ void __khugepaged_enter(struct mm_struct *mm)
 	if (unlikely(mm_flags_test_and_set(MMF_VM_HUGEPAGE, mm)))
 		return;
 
-	mm_slot = mm_slot_alloc(mm_slot_cache);
-	if (!mm_slot)
+	slot = mm_slot_alloc(mm_slot_cache);
+	if (!slot)
 		return;
 
-	slot = &mm_slot->slot;
-
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot_insert(mm_slots_hash, mm, slot);
 	/*
@@ -480,14 +469,12 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 
 void __khugepaged_exit(struct mm_struct *mm)
 {
-	struct khugepaged_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	int free = 0;
 
 	spin_lock(&khugepaged_mm_lock);
 	slot = mm_slot_lookup(mm_slots_hash, mm);
-	mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
-	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
+	if (slot && khugepaged_scan.mm_slot != slot) {
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
 		free = 1;
@@ -496,9 +483,9 @@ void __khugepaged_exit(struct mm_struct *mm)
 
 	if (free) {
 		mm_flags_clear(MMF_VM_HUGEPAGE, mm);
-		mm_slot_free(mm_slot_cache, mm_slot);
+		mm_slot_free(mm_slot_cache, slot);
 		mmdrop(mm);
-	} else if (mm_slot) {
+	} else if (slot) {
 		/*
 		 * This is required to serialize against
 		 * hpage_collapse_test_exit() (which is guaranteed to run
@@ -1432,9 +1419,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 	return result;
 }
 
-static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
+static void collect_mm_slot(struct mm_slot *slot)
 {
-	struct mm_slot *slot = &mm_slot->slot;
 	struct mm_struct *mm = slot->mm;
 
 	lockdep_assert_held(&khugepaged_mm_lock);
@@ -1451,7 +1437,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
 	 */
 
 	/* khugepaged_mm_lock actually not necessary for the below */
-	mm_slot_free(mm_slot_cache, mm_slot);
+	mm_slot_free(mm_slot_cache, slot);
 	mmdrop(mm);
 	}
 }
@@ -2394,7 +2380,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	__acquires(&khugepaged_mm_lock)
 {
 	struct vma_iterator vmi;
-	struct khugepaged_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
@@ -2405,14 +2390,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	*result = SCAN_FAIL;
 
 	if (khugepaged_scan.mm_slot) {
-		mm_slot = khugepaged_scan.mm_slot;
-		slot = &mm_slot->slot;
+		slot = khugepaged_scan.mm_slot;
 	} else {
 		slot = list_first_entry(&khugepaged_scan.mm_head,
					struct mm_slot, mm_node);
-		mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
 		khugepaged_scan.address = 0;
-		khugepaged_scan.mm_slot = mm_slot;
+		khugepaged_scan.mm_slot = slot;
 	}
 	spin_unlock(&khugepaged_mm_lock);
 
@@ -2510,7 +2493,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 breakouterloop_mmap_lock:
 
 	spin_lock(&khugepaged_mm_lock);
-	VM_BUG_ON(khugepaged_scan.mm_slot != mm_slot);
+	VM_BUG_ON(khugepaged_scan.mm_slot != slot);
 	/*
 	 * Release the current mm_slot if this mm is about to die, or
 	 * if we scanned all vmas of this mm.
@@ -2522,16 +2505,14 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	 * mm_slot not pointing to the exiting mm.
 	 */
 	if (!list_is_last(&slot->mm_node, &khugepaged_scan.mm_head)) {
-		slot = list_next_entry(slot, mm_node);
-		khugepaged_scan.mm_slot =
-			mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
+		khugepaged_scan.mm_slot = list_next_entry(slot, mm_node);
 		khugepaged_scan.address = 0;
 	} else {
 		khugepaged_scan.mm_slot = NULL;
 		khugepaged_full_scans++;
 	}
 
-	collect_mm_slot(mm_slot);
+	collect_mm_slot(slot);
 	}
 
 	return progress;
@@ -2618,7 +2599,7 @@ static void khugepaged_wait_work(void)
 
 static int khugepaged(void *none)
 {
-	struct khugepaged_mm_slot *mm_slot;
+	struct mm_slot *slot;
 
 	set_freezable();
 	set_user_nice(current, MAX_NICE);
@@ -2629,10 +2610,10 @@ static int khugepaged(void *none)
 	}
 
 	spin_lock(&khugepaged_mm_lock);
-	mm_slot = khugepaged_scan.mm_slot;
+	slot = khugepaged_scan.mm_slot;
 	khugepaged_scan.mm_slot = NULL;
-	if (mm_slot)
-		collect_mm_slot(mm_slot);
+	if (slot)
+		collect_mm_slot(slot);
 	spin_unlock(&khugepaged_mm_lock);
 	return 0;
 }
-- 
2.34.1

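The removal is a pure simplification because the wrapper carries no
state of its own. A small user-space sketch (field types simplified,
not the kernel's definitions) of the invariant the patch relies on:

    #include <assert.h>
    #include <stddef.h>

    struct mm_slot { void *hash; void *mm_node; void *mm; };

    /* the one-member wrapper that patch 2 removes */
    struct khugepaged_mm_slot { struct mm_slot slot; };

    int main(void)
    {
            /* a single-member wrapper adds no size and no state ... */
            assert(sizeof(struct khugepaged_mm_slot) ==
                   sizeof(struct mm_slot));
            assert(offsetof(struct khugepaged_mm_slot, slot) == 0);
            /*
             * ... so every mm_slot_entry() conversion it forces is
             * noise, and the slab cache allocates an object of the
             * same size either way. Using struct mm_slot directly
             * drops the conversions without changing behavior.
             */
            return 0;
    }

If khugepaged ever grows per-mm state again, the wrapper can be brought
back together with NULL-checked mm_slot_entry() calls, as ksm does
after patch 1.
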
* Re: [Patch v4 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
From: Muhammad Usama Anjum @ 2025-09-27  8:40 UTC
  To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
	xu.xin16, chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, SeongJae Park

On 9/27/25 5:45 AM, Wei Yang wrote:
> Current code calls mm_slot_entry() even when we don't have a valid slot,
> which is not future-proof. Currently, this is not a problem because
> "slot" is the first member in struct khugepaged_mm_slot.
>
> Since struct khugepaged_mm_slot is just a wrapper of struct mm_slot,
> there is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no
> chance to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Acked-by: Lance Yang <lance.yang@linux.dev>
> Reviewed-by: Dev Jain <dev.jain@arm.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Kiryl Shutsemau <kirill@shutemov.name>
> Cc: xu xin <xu.xin16@zte.com.cn>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Nico Pache <npache@redhat.com>

Acked-by: Muhammad Usama Anjum <usama.anjum@collabora.com>

> [...]

* Re: [Patch v4 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
From: David Hildenbrand @ 2025-09-29  8:13 UTC
  To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, SeongJae Park

On 27.09.25 02:45, Wei Yang wrote:
> Current code calls mm_slot_entry() even when we don't have a valid slot,
> which is not future-proof. Currently, this is not a problem because
> "slot" is the first member in struct khugepaged_mm_slot.
>
> Since struct khugepaged_mm_slot is just a wrapper of struct mm_slot,
> there is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no
> chance to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> [...]
>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb

* Re: [Patch v4 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
From: Zi Yan @ 2025-09-29 15:16 UTC
  To: Wei Yang
  Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
	chengming.zhou, linux-mm, Kiryl Shutsemau, SeongJae Park

On 26 Sep 2025, at 20:45, Wei Yang wrote:
> Current code calls mm_slot_entry() even when we don't have a valid slot,
> which is not future-proof. Currently, this is not a problem because
> "slot" is the first member in struct khugepaged_mm_slot.
>
> Since struct khugepaged_mm_slot is just a wrapper of struct mm_slot,
> there is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no
> chance to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> [...]
>
>  mm/khugepaged.c | 55 ++++++++++++++++---------------------------------
>  1 file changed, 18 insertions(+), 37 deletions(-)

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi

* Re: [Patch v4 2/2] mm/khugepaged: remove definition of struct khugepaged_mm_slot
From: Raghavendra K T @ 2025-09-30  6:01 UTC
  To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
	xu.xin16, chengming.zhou
  Cc: linux-mm, Kiryl Shutsemau, SeongJae Park

On 9/27/2025 6:15 AM, Wei Yang wrote:
> Current code calls mm_slot_entry() even when we don't have a valid slot,
> which is not future-proof. Currently, this is not a problem because
> "slot" is the first member in struct khugepaged_mm_slot.
>
> Since struct khugepaged_mm_slot is just a wrapper of struct mm_slot,
> there is no need to define it.
>
> Remove the definition of struct khugepaged_mm_slot, so there is no
> chance to misuse mm_slot_entry().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Acked-by: Lance Yang <lance.yang@linux.dev>
> Reviewed-by: Dev Jain <dev.jain@arm.com>
> [...]

Reviewed-by: Raghavendra K T <raghavendra.kt@amd.com>

> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> [...]
>  struct khugepaged_scan {
>  	struct list_head mm_head;
> -	struct khugepaged_mm_slot *mm_slot;
> +	struct mm_slot *mm_slot;
>  	unsigned long address;
>  };

Thanks. The change definitely makes sense since there are no other
members in khugepaged_mm_slot. Also, the slot => mm_slot name change
makes the code clearer (from my experience using this code as a
reference in kscand).

> [...]