* [PATCH 0/5] mm: swap: small fixes and comment cleanups
@ 2025-10-29 8:56 Youngjun Park
2025-10-29 8:56 ` [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
Hello,
This patch series contains small fixes and cleanups in the swap code.
It includes:
- A memory leak fix on the error path
- Minor logic adjustments
- Removal of redundant or outdated comments
Thank you,
Youngjun Park
Youngjun Park (5):
mm, swap: Fix memory leak in setup_clusters() error path
mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
mm, swap: Remove redundant comment for read_swap_cache_async
mm: swap: change swap_alloc_slow() to void
mm: swap: remove scan_swap_map_slots() references from comments
mm/swap_state.c | 4 ----
mm/swapfile.c | 35 ++++++++++++++++-------------------
2 files changed, 16 insertions(+), 23 deletions(-)
base-commit: f30d294530d939fa4b77d61bc60f25c4284841fa (mm-new)
--
2.34.1
* [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-29 8:56 [PATCH 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
@ 2025-10-29 8:56 ` Youngjun Park
2025-10-29 15:41 ` Kairui Song
2025-10-29 8:56 ` [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
Some error paths neglect to free the allocated 'cluster_info',
causing a memory leak.
Change these error jumps to the 'err_free' label to ensure proper
cleanup.
Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
---
mm/swapfile.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c35bb8593f50..6dc0e7a738bc 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3339,7 +3339,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
*/
err = swap_cluster_setup_bad_slot(cluster_info, 0);
if (err)
- goto err;
+ goto err_free;
for (i = 0; i < swap_header->info.nr_badpages; i++) {
unsigned int page_nr = swap_header->info.badpages[i];
@@ -3347,12 +3347,12 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
continue;
err = swap_cluster_setup_bad_slot(cluster_info, page_nr);
if (err)
- goto err;
+ goto err_free;
}
for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++) {
err = swap_cluster_setup_bad_slot(cluster_info, i);
if (err)
- goto err;
+ goto err_free;
}
INIT_LIST_HEAD(&si->free_clusters);
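
For illustration, a self-contained userspace sketch of the goto-cleanup
pattern at play (hypothetical names, not the actual kernel code): jumping
to the label that sits below the free skips the cleanup, which is
presumably what the old "goto err" did here.

	#include <stdlib.h>

	struct cluster { int bad; };

	/* stand-in for swap_cluster_setup_bad_slot(); may fail */
	static int setup_bad_slot(struct cluster *ci, unsigned long slot,
				  unsigned long nr)
	{
		if (slot >= nr)
			return -1;
		ci[slot].bad = 1;
		return 0;
	}

	static struct cluster *setup_clusters_sketch(unsigned long nr,
						     unsigned long bad_slot)
	{
		struct cluster *ci = calloc(nr, sizeof(*ci));

		if (!ci)
			goto err;		/* nothing to free yet */

		if (setup_bad_slot(ci, bad_slot, nr))
			goto err_free;		/* "goto err" here would leak ci */

		return ci;

	err_free:
		free(ci);		/* the cleanup the buggy jumps skipped */
	err:
		return NULL;
	}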
--
2.34.1
* [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-29 8:56 [PATCH 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
2025-10-29 8:56 ` [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
@ 2025-10-29 8:56 ` Youngjun Park
2025-10-29 16:09 ` Kairui Song
2025-10-29 8:56 ` [PATCH 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
The current non-rotational check is unreliable, as the device's
rotational status can be changed by a user via sysfs.
Use the more reliable SWP_SOLIDSTATE flag, which is set at swapon time,
to ensure the nr_rotate_swap count remains consistent. It also makes
the check simpler and easier to read.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
---
mm/swapfile.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6dc0e7a738bc..b5d42918c01b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
if (p->flags & SWP_CONTINUED)
free_swap_count_continuations(p);
- if (!p->bdev || !bdev_nonrot(p->bdev))
+ if (!(p->flags & SWP_SOLIDSTATE))
atomic_dec(&nr_rotate_swap);
mutex_lock(&swapon_mutex);
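
For context, the swapon-side accounting this check pairs with looks
roughly like the following (a simplified sketch, not a verbatim quote of
the kernel code): the flag and the counter are decided together at swapon
time, so swapoff should test the flag rather than re-query the bdev.

	if (p->bdev && bdev_nonrot(p->bdev)) {
		p->flags |= SWP_SOLIDSTATE;	/* recorded once, at swapon */
	} else {
		/* rotational (or no bdev): balanced by the atomic_dec() above */
		atomic_inc(&nr_rotate_swap);
	}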
--
2.34.1
* [PATCH 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-10-29 8:56 [PATCH 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
2025-10-29 8:56 ` [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
2025-10-29 8:56 ` [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
@ 2025-10-29 8:56 ` Youngjun Park
2025-10-29 8:56 ` [PATCH 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
2025-10-29 8:56 ` [PATCH 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
4 siblings, 0 replies; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
The function now manages get/put_swap_device() internally, so the
comment explaining this behavior to callers is no longer necessary.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
---
mm/swap_state.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b13e9c4baa90..d20d238109f9 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -509,10 +509,6 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* and reading the disk if it is not already cached.
* A failure return means that either the page allocation failed or that
* the swap entry is no longer in use.
- *
- * get/put_swap_device() aren't needed to call this function, because
- * __read_swap_cache_async() call them and swap_read_folio() holds the
- * swap cache folio lock.
*/
struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct vm_area_struct *vma, unsigned long addr,
--
2.34.1
* [PATCH 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-29 8:56 [PATCH 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
` (2 preceding siblings ...)
2025-10-29 8:56 ` [PATCH 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
@ 2025-10-29 8:56 ` Youngjun Park
2025-10-29 16:13 ` Kairui Song
2025-10-29 8:56 ` [PATCH 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
4 siblings, 1 reply; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
swap_alloc_slow() does not need to return a bool, as all callers
handle allocation results via the entry parameter. Update the
function signature and remove return statements accordingly.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
---
mm/swapfile.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b5d42918c01b..89eb57eee7f7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
}
/* Rotate the device and switch to a new cluster */
-static bool swap_alloc_slow(swp_entry_t *entry,
+static void swap_alloc_slow(swp_entry_t *entry,
int order)
{
unsigned long offset;
@@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
put_swap_device(si);
if (offset) {
*entry = swp_entry(si->type, offset);
- return true;
+ return;
}
if (order)
- return false;
+ return;
}
spin_lock(&swap_avail_lock);
@@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
goto start_over;
}
spin_unlock(&swap_avail_lock);
- return false;
}
/*
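
For reference, a hypothetical caller sketch (simplified; the actual
folio_alloc_swap() logic differs): success is reported only through
*entry, so the bool return carried no extra information.

	swp_entry_t entry = {};

	if (!swap_alloc_fast(&entry, order))
		swap_alloc_slow(&entry, order);

	if (!entry.val)		/* only *entry says whether allocation worked */
		return -ENOMEM;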
--
2.34.1
* [PATCH 5/5] mm: swap: remove scan_swap_map_slots() references from comments
2025-10-29 8:56 [PATCH 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
` (3 preceding siblings ...)
2025-10-29 8:56 ` [PATCH 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
@ 2025-10-29 8:56 ` Youngjun Park
4 siblings, 0 replies; 12+ messages in thread
From: Youngjun Park @ 2025-10-29 8:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
youngjun.park, gunho.lee
The scan_swap_map_slots() helper has been removed, but several comments
in the swap allocation and reclaim paths still refer to it. Clean up
those outdated references and reflow the affected comment blocks to
match kernel coding style.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
---
mm/swapfile.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 89eb57eee7f7..1dace4356bd1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -236,11 +236,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
ret = -nr_pages;
/*
- * When this function is called from scan_swap_map_slots() and it's
- * called by vmscan.c at reclaiming folios. So we hold a folio lock
- * here. We have to use trylock for avoiding deadlock. This is a special
- * case and you should use folio_free_swap() with explicit folio_lock()
- * in usual operations.
+ * We hold a folio lock here, so we have to use trylock to
+ * avoid a deadlock. This is a special case; you should use
+ * folio_free_swap() with an explicit folio_lock() in usual
+ * operations.
*/
if (!folio_trylock(folio))
goto out;
@@ -1365,14 +1364,13 @@ static void swap_alloc_slow(swp_entry_t *entry,
spin_lock(&swap_avail_lock);
/*
* if we got here, it's likely that si was almost full before,
- * and since scan_swap_map_slots() can drop the si->lock,
* multiple callers probably all tried to get a page from the
* same si and it filled up before we could get one; or, the si
- * filled up between us dropping swap_avail_lock and taking
- * si->lock. Since we dropped the swap_avail_lock, the
- * swap_avail_head list may have been modified; so if next is
- * still in the swap_avail_head list then try it, otherwise
- * start over if we have not gotten any slots.
+ * filled up after we dropped the swap_avail_lock. Since the
+ * lock was dropped, the swap_avail_head list may have been
+ * modified; so if next is still in the swap_avail_head list
+ * then try it, otherwise start over if we have not gotten any
+ * slots.
*/
if (plist_node_empty(&si->avail_list))
goto start_over;
--
2.34.1
* Re: [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-29 8:56 ` [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
@ 2025-10-29 15:41 ` Kairui Song
2025-10-30 14:32 ` YoungJun Park
0 siblings, 1 reply; 12+ messages in thread
From: Kairui Song @ 2025-10-29 15:41 UTC (permalink / raw)
To: Youngjun Park
Cc: akpm, linux-mm, shikemeng, nphamcs, bhe, baohua, chrisl, gunho.lee
On Wed, Oct 29, 2025 at 8:44 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> Some error paths neglect to free the allocated 'cluster_info',
> causing a memory leak.
> Change these error jumps to the 'err_free' label to ensure proper
> cleanup.
> cleanup.
>
> Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> ---
> mm/swapfile.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c35bb8593f50..6dc0e7a738bc 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3339,7 +3339,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> */
> err = swap_cluster_setup_bad_slot(cluster_info, 0);
> if (err)
> - goto err;
> + goto err_free;
> for (i = 0; i < swap_header->info.nr_badpages; i++) {
> unsigned int page_nr = swap_header->info.badpages[i];
>
> @@ -3347,12 +3347,12 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> continue;
> err = swap_cluster_setup_bad_slot(cluster_info, page_nr);
> if (err)
> - goto err;
> + goto err_free;
> }
> for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++) {
> err = swap_cluster_setup_bad_slot(cluster_info, i);
> if (err)
> - goto err;
> + goto err_free;
> }
>
> INIT_LIST_HEAD(&si->free_clusters);
> --
> 2.34.1
>
>
Nice catch.
Maybe it's better to just move free_cluster_info() under "err:"? That
might help avoid more issues like this. free_cluster_info() already
checks whether cluster_info is NULL; we just need to initialize
*cluster_info with "= NULL".
Just a nit pick, I'm fine either way.
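A rough sketch of that shape (the exact labels and the
free_cluster_info() arguments in setup_clusters() may differ):

	struct swap_cluster_info *cluster_info = NULL;	/* lets err: free unconditionally */
	...
	err_free:	/* the separate label could then go away */
	err:
		free_cluster_info(cluster_info, maxpages);	/* assumed to handle NULL */
		return ERR_PTR(err);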
Reviewed-by: Kairui Song <kasong@tencent.com>
* Re: [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-29 8:56 ` [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
@ 2025-10-29 16:09 ` Kairui Song
2025-10-30 14:35 ` YoungJun Park
0 siblings, 1 reply; 12+ messages in thread
From: Kairui Song @ 2025-10-29 16:09 UTC (permalink / raw)
To: Youngjun Park
Cc: akpm, linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
gunho.lee
On Wed, Oct 29, 2025 at 05:56:56PM +0800, Youngjun Park wrote:
> The current non-rotational check is unreliable, as the device's
> rotational status can be changed by a user via sysfs.
>
> Use the more reliable SWP_SOLIDSTATE flag, which is set at swapon time,
> to ensure the nr_rotate_swap count remains consistent. It also makes
> the check simpler and easier to read.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> ---
> mm/swapfile.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6dc0e7a738bc..b5d42918c01b 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
> if (p->flags & SWP_CONTINUED)
> free_swap_count_continuations(p);
>
> - if (!p->bdev || !bdev_nonrot(p->bdev))
> + if (!(p->flags & SWP_SOLIDSTATE))
> atomic_dec(&nr_rotate_swap);
Good catch.
Was this introduced by 81a0298bdfab? If so, I think we need a Fixes: tag and a backport to stable.
>
> mutex_lock(&swapon_mutex);
> --
> 2.34.1
>
>
* Re: [PATCH 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-29 8:56 ` [PATCH 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
@ 2025-10-29 16:13 ` Kairui Song
2025-10-30 14:39 ` YoungJun Park
0 siblings, 1 reply; 12+ messages in thread
From: Kairui Song @ 2025-10-29 16:13 UTC (permalink / raw)
To: Youngjun Park
Cc: akpm, linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
gunho.lee
On Wed, Oct 29, 2025 at 05:56:58PM +0800, Youngjun Park wrote:
> swap_alloc_slow() does not need to return a bool, as all callers
> handle allocation results via the entry parameter. Update the
> function signature and remove return statements accordingly.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> ---
> mm/swapfile.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index b5d42918c01b..89eb57eee7f7 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
> }
>
> /* Rotate the device and switch to a new cluster */
> -static bool swap_alloc_slow(swp_entry_t *entry,
> +static void swap_alloc_slow(swp_entry_t *entry,
> int order)
> {
> unsigned long offset;
> @@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> put_swap_device(si);
> if (offset) {
> *entry = swp_entry(si->type, offset);
> - return true;
> + return;
> }
> if (order)
> - return false;
> + return;
> }
>
> spin_lock(&swap_avail_lock);
> @@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> goto start_over;
> }
> spin_unlock(&swap_avail_lock);
> - return false;
> }
>
Hi Youngjun,
Thanks for the patch.
I just found that a patch in my series is doing the same thing:
https://lore.kernel.org/linux-mm/20251029-swap-table-p2-v1-15-3d43f3b6ec32@tencent.com/
I'm fine merging your cleanup first, the conflict is really trivial and easy to resolve.
So:
Reviewed-by: Kairui Song <kasong@tencent.com>
> /*
> --
> 2.34.1
>
>
* Re: [PATCH 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-29 15:41 ` Kairui Song
@ 2025-10-30 14:32 ` YoungJun Park
0 siblings, 0 replies; 12+ messages in thread
From: YoungJun Park @ 2025-10-30 14:32 UTC (permalink / raw)
To: Kairui Song
Cc: akpm, linux-mm, shikemeng, nphamcs, bhe, baohua, chrisl, gunho.lee
On Wed, Oct 29, 2025 at 11:41:27PM +0800, Kairui Song wrote:
> On Wed, Oct 29, 2025 at 8:44 PM Youngjun Park <youngjun.park@lge.com> wrote:
> >
> > Some error paths neglect to free the allocated 'cluster_info',
> > causing a memory leak.
> > Change these error jumps to the 'err_free' label to ensure proper
> > cleanup.
> > cleanup.
> >
> > Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
> > Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> > ---
> > mm/swapfile.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index c35bb8593f50..6dc0e7a738bc 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -3339,7 +3339,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> > */
> > err = swap_cluster_setup_bad_slot(cluster_info, 0);
> > if (err)
> > - goto err;
> > + goto err_free;
> > for (i = 0; i < swap_header->info.nr_badpages; i++) {
> > unsigned int page_nr = swap_header->info.badpages[i];
> >
> > @@ -3347,12 +3347,12 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> > continue;
> > err = swap_cluster_setup_bad_slot(cluster_info, page_nr);
> > if (err)
> > - goto err;
> > + goto err_free;
> > }
> > for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++) {
> > err = swap_cluster_setup_bad_slot(cluster_info, i);
> > if (err)
> > - goto err;
> > + goto err_free;
> > }
> >
> > INIT_LIST_HEAD(&si->free_clusters);
> > --
> > 2.34.1
> >
> >
>
> Nice catch.
Hello Kairui.
> Maybe it's better to just move free_cluster_info under "err:"? That
> might help to avoid more issues like this. free_cluster_info checks if
> cluster_info is null already, we just need to initialize *cluster_info
> with "= NULL".
You're right - moving free_cluster_info under 'err:' is cleaner.
I'll update the code accordingly.
Thanks!
Youngjun Park
> Just a nit pick, I'm fine either way.
>
> Reviewed-by: Kairui Song <kasong@tencent.com>
* Re: [PATCH 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-29 16:09 ` Kairui Song
@ 2025-10-30 14:35 ` YoungJun Park
0 siblings, 0 replies; 12+ messages in thread
From: YoungJun Park @ 2025-10-30 14:35 UTC (permalink / raw)
To: Kairui Song
Cc: akpm, linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
gunho.lee
On Thu, Oct 30, 2025 at 12:09:29AM +0800, Kairui Song wrote:
> On Wed, Oct 29, 2025 at 05:56:56PM +0800, Youngjun Park wrote:
> > The current non-rotational check is unreliable, as the device's
> > rotational status can be changed by a user via sysfs.
> >
> > Use the more reliable SWP_SOLIDSTATE flag, which is set at swapon time,
> > to ensure the nr_rotate_swap count remains consistent. It also makes
> > the check simpler and easier to read.
> >
> > Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> > ---
> > mm/swapfile.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 6dc0e7a738bc..b5d42918c01b 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
> > if (p->flags & SWP_CONTINUED)
> > free_swap_count_continuations(p);
> >
> > - if (!p->bdev || !bdev_nonrot(p->bdev))
> > + if (!(p->flags & SWP_SOLIDSTATE))
> > atomic_dec(&nr_rotate_swap);
>
> Good catch.
>
> Was this introduced by 81a0298bdfab? If so, I think we need a Fixes: tag and a backport to stable.
Okay, I'll add a Fixes tag :)
Thanks,
Youngjun Park
* Re: [PATCH 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-29 16:13 ` Kairui Song
@ 2025-10-30 14:39 ` YoungJun Park
0 siblings, 0 replies; 12+ messages in thread
From: YoungJun Park @ 2025-10-30 14:39 UTC (permalink / raw)
To: Kairui Song
Cc: akpm, linux-mm, shikemeng, kasong, nphamcs, bhe, baohua, chrisl,
gunho.lee
On Thu, Oct 30, 2025 at 12:13:20AM +0800, Kairui Song wrote:
> On Wed, Oct 29, 2025 at 05:56:58PM +0800, Youngjun Park wrote:
> > swap_alloc_slow() does not need to return a bool, as all callers
> > handle allocation results via the entry parameter. Update the
> > function signature and remove return statements accordingly.
> >
> > Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> > ---
> > mm/swapfile.c | 7 +++----
> > 1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index b5d42918c01b..89eb57eee7f7 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
> > }
> >
> > /* Rotate the device and switch to a new cluster */
> > -static bool swap_alloc_slow(swp_entry_t *entry,
> > +static void swap_alloc_slow(swp_entry_t *entry,
> > int order)
> > {
> > unsigned long offset;
> > @@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> > put_swap_device(si);
> > if (offset) {
> > *entry = swp_entry(si->type, offset);
> > - return true;
> > + return;
> > }
> > if (order)
> > - return false;
> > + return;
> > }
> >
> > spin_lock(&swap_avail_lock);
> > @@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> > goto start_over;
> > }
> > spin_unlock(&swap_avail_lock);
> > - return false;
> > }
> >
>
> Hi Youngjun,
>
> Thanks for the patch.
>
> I just found that a patch in my series is doing the same thing:
> https://lore.kernel.org/linux-mm/20251029-swap-table-p2-v1-15-3d43f3b6ec32@tencent.com/
>
> I'm fine merging your cleanup first, the conflict is really trivial and easy to resolve.
>
> So:
>
> Reviewed-by: Kairui Song <kasong@tencent.com>
>
Hello Kairui,
Ah, I see you're working on the same area! Thanks for the review and
for being flexible about the merge order. Much appreciated!
Best,
Youngjun Park