* [Patch v2 0/2] mm_slot: following fixup for usage of mm_slot_entry()
@ 2025-10-01 9:18 Wei Yang
2025-10-01 9:18 ` [Patch v2 1/2] mm/ksm: cleanup mm_slot_entry() invocation Wei Yang
2025-10-01 9:19 ` [Patch v2 2/2] mm/khugepaged: use KMEM_CACHE() Wei Yang
0 siblings, 2 replies; 4+ messages in thread
From: Wei Yang @ 2025-10-01 9:18 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, kirill, raghavendra.kt, Wei Yang
We got some late review comments during the review of "mm_slot: fix the usage of
mm_slot_entry()" in [1].
The v2 has been merged into mm-stable; here are follow-up fixups based on current
mm-stable, whose last commit is 1367da7eb875 ("mm: swap: check for stable address
space before operating on the VMA").
Since there is no code change since the latest review in [1], I have preserved the
Reviewed-by and Acked-by tags.
[1]: https://lkml.kernel.org/r/20250927004539.19308-1-richard.weiyang@gmail.com
Wei Yang (2):
mm/ksm: cleanup mm_slot_entry() invocation
mm/khugepaged: use KMEM_CACHE()
mm/khugepaged.c | 5 +----
mm/ksm.c | 27 ++++++++++++++-------------
2 files changed, 15 insertions(+), 17 deletions(-)
--
2.34.1
* [Patch v2 1/2] mm/ksm: cleanup mm_slot_entry() invocation
2025-10-01 9:18 [Patch v2 0/2] mm_slot: following fixup for usage of mm_slot_entry() Wei Yang
@ 2025-10-01 9:18 ` Wei Yang
2025-10-01 9:19 ` [Patch v2 2/2] mm/khugepaged: use KMEM_CACHE() Wei Yang
1 sibling, 0 replies; 4+ messages in thread
From: Wei Yang @ 2025-10-01 9:18 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, kirill, raghavendra.kt, Wei Yang, Dan Carpenter
We got some late review comments during the review of commit 08498be43ee6
("mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL"). Let's
reduce the indentation level and make the code easier to follow by
using gotos to a new label.
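As a minimal illustrative sketch (not the actual KSM code; some_lock, lookup(),
obj and busy_obj are made-up names), the pattern is to bail out early via goto
so that the main work stays at a single indentation level:

        spin_lock(&some_lock);
        obj = lookup(key);
        if (!obj)
                goto unlock;            /* nothing to do */
        if (obj == busy_obj)
                goto unlock;            /* skip this one */
        /* ... main work at a single indentation level ... */
unlock:
        spin_unlock(&some_lock);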
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Kiryl Shutsemau <kas@kernel.org>
Acked-by: Zi Yan <ziy@nvidia.com>
---
mm/ksm.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 04019a15b25d..7bc726b50b2f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2921,7 +2921,7 @@ int __ksm_enter(struct mm_struct *mm)
 void __ksm_exit(struct mm_struct *mm)
 {
-        struct ksm_mm_slot *mm_slot;
+        struct ksm_mm_slot *mm_slot = NULL;
         struct mm_slot *slot;
         int easy_to_free = 0;
@@ -2936,19 +2936,20 @@ void __ksm_exit(struct mm_struct *mm)
         spin_lock(&ksm_mmlist_lock);
         slot = mm_slot_lookup(mm_slots_hash, mm);
-        if (slot) {
-                mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
-                if (ksm_scan.mm_slot != mm_slot) {
-                        if (!mm_slot->rmap_list) {
-                                hash_del(&slot->hash);
-                                list_del(&slot->mm_node);
-                                easy_to_free = 1;
-                        } else {
-                                list_move(&slot->mm_node,
-                                          &ksm_scan.mm_slot->slot.mm_node);
-                        }
-                }
+        if (!slot)
+                goto unlock;
+        mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
+        if (ksm_scan.mm_slot == mm_slot)
+                goto unlock;
+        if (!mm_slot->rmap_list) {
+                hash_del(&slot->hash);
+                list_del(&slot->mm_node);
+                easy_to_free = 1;
+        } else {
+                list_move(&slot->mm_node,
+                          &ksm_scan.mm_slot->slot.mm_node);
         }
+unlock:
         spin_unlock(&ksm_mmlist_lock);
         if (easy_to_free) {
--
2.34.1
* [Patch v2 2/2] mm/khugepaged: use KMEM_CACHE()
2025-10-01 9:18 [Patch v2 0/2] mm_slot: following fixup for usage of mm_slot_entry() Wei Yang
2025-10-01 9:18 ` [Patch v2 1/2] mm/ksm: cleanup mm_slot_entry() invocation Wei Yang
@ 2025-10-01 9:19 ` Wei Yang
1 sibling, 0 replies; 4+ messages in thread
From: Wei Yang @ 2025-10-01 9:19 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, xu.xin16,
chengming.zhou
Cc: linux-mm, kirill, raghavendra.kt, Wei Yang, SeongJae Park
We got some late review comments during the review of commit b4c9ffb54b32
("mm/khugepaged: remove definition of struct khugepaged_mm_slot"). There is
no need to keep the old cache name "khugepaged_mm_slot"; let's simply
use KMEM_CACHE().
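For reference, KMEM_CACHE(struct_name, flags) creates a slab cache named
after the struct and sized/aligned for it. In <linux/slab.h> it has
historically been defined roughly as below (the exact definition in the
current tree may differ):

        #define KMEM_CACHE(__struct, __flags)                           \
                kmem_cache_create(#__struct, sizeof(struct __struct),   \
                        __alignof__(struct __struct), (__flags), NULL)

So KMEM_CACHE(mm_slot, 0) matches the open-coded call being removed, except
that the cache is now named "mm_slot" rather than "khugepaged_mm_slot".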
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Kiryl Shutsemau <kirill@shutemov.name>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: SeongJae Park <sj@kernel.org>
Cc: Nico Pache <npache@redhat.com>
Acked-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Raghavendra K T <raghavendra.kt@amd.com>
---
mm/khugepaged.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7ab2d1a42df3..abe54f0043c7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -376,10 +376,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 int __init khugepaged_init(void)
 {
-        mm_slot_cache = kmem_cache_create("khugepaged_mm_slot",
-                                          sizeof(struct mm_slot),
-                                          __alignof__(struct mm_slot),
-                                          0, NULL);
+        mm_slot_cache = KMEM_CACHE(mm_slot, 0);
         if (!mm_slot_cache)
                 return -ENOMEM;
--
2.34.1