* [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-31 6:50 [PATCH v2 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
@ 2025-10-31 6:50 ` Youngjun Park
2025-11-01 0:01 ` Baoquan He
2025-11-05 4:30 ` Chris Li
2025-10-31 6:50 ` [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
` (3 subsequent siblings)
4 siblings, 2 replies; 22+ messages in thread
From: Youngjun Park @ 2025-10-31 6:50 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
Barry Song, Chris Li, youngjun.park
The setup_clusters() function could leak 'cluster_info' memory if an
error occurred on a path that did not jump to the 'err_free' label.
This patch simplifies the error handling by removing the goto label
and instead calling free_cluster_info() on all error exit paths.
The new logic is safe, as free_cluster_info() already handles NULL
pointer inputs.
Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c35bb8593f50..ed2c8e4edb71 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3324,7 +3324,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
si->global_cluster = kmalloc(sizeof(*si->global_cluster),
GFP_KERNEL);
if (!si->global_cluster)
- goto err_free;
+ goto err;
for (i = 0; i < SWAP_NR_ORDERS; i++)
si->global_cluster->next[i] = SWAP_ENTRY_INVALID;
spin_lock_init(&si->global_cluster_lock);
@@ -3377,9 +3377,8 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
}
return cluster_info;
-err_free:
- free_cluster_info(cluster_info, maxpages);
err:
+ free_cluster_info(cluster_info, maxpages);
return ERR_PTR(err);
}
--
2.34.1
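The error-handling shape this patch settles on — a single error label plus a NULL-tolerant free — can be sketched in plain userspace C. Names here (`free_cluster_info`, `setup`) are stand-ins for illustration, not the kernel helpers:

```c
#include <assert.h>
#include <stdlib.h>

static int frees_called;

/* NULL-safe free, mirroring the property the patch relies on:
 * free_cluster_info() already handles NULL pointer inputs. */
static void free_cluster_info(int *info)
{
	if (!info)
		return;
	frees_called++;
	free(info);
}

static int *setup(int fail_early)
{
	int *info = NULL;

	if (fail_early)		/* error before allocation: info is still NULL */
		goto err;

	info = malloc(sizeof(*info));
	if (!info)
		goto err;
	return info;

err:
	free_cluster_info(info);	/* safe on every path, allocated or not */
	return NULL;
}
```

Because the free is NULL-safe, every early-exit path can share one label without tracking which allocations have happened yet, which is why the separate `err_free` label becomes unnecessary.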
^ permalink raw reply [flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-31 6:50 ` [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
@ 2025-11-01 0:01 ` Baoquan He
2025-11-05 4:30 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Baoquan He @ 2025-11-01 0:01 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Barry Song, Chris Li
On 10/31/25 at 03:50pm, Youngjun Park wrote:
> The setup_clusters() function could leak 'cluster_info' memory if an
> error occurred on a path that did not jump to the 'err_free' label.
>
> This patch simplifies the error handling by removing the goto label
> and instead calling free_cluster_info() on all error exit paths.
>
> The new logic is safe, as free_cluster_info() already handles NULL
> pointer inputs.
>
> Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> Reviewed-by: Kairui Song <kasong@tencent.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c35bb8593f50..ed2c8e4edb71 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
LGTM,
Reviewed-by: Baoquan He <bhe@redhat.com>
> @@ -3324,7 +3324,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> si->global_cluster = kmalloc(sizeof(*si->global_cluster),
> GFP_KERNEL);
> if (!si->global_cluster)
> - goto err_free;
> + goto err;
> for (i = 0; i < SWAP_NR_ORDERS; i++)
> si->global_cluster->next[i] = SWAP_ENTRY_INVALID;
> spin_lock_init(&si->global_cluster_lock);
> @@ -3377,9 +3377,8 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> }
>
> return cluster_info;
> -err_free:
> - free_cluster_info(cluster_info, maxpages);
> err:
> + free_cluster_info(cluster_info, maxpages);
> return ERR_PTR(err);
> }
>
> --
> 2.34.1
>
* Re: [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path
2025-10-31 6:50 ` [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
2025-11-01 0:01 ` Baoquan He
@ 2025-11-05 4:30 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Chris Li @ 2025-11-05 4:30 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
Sorry for the late reply. I have been super busy.
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> The setup_clusters() function could leak 'cluster_info' memory if an
> error occurred on a path that did not jump to the 'err_free' label.
>
> This patch simplifies the error handling by removing the goto label
> and instead calling free_cluster_info() on all error exit paths.
>
> The new logic is safe, as free_cluster_info() already handles NULL
> pointer inputs.
>
> Fixes: 07adc4cf1ecd ("mm, swap: implement dynamic allocation of swap table")
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> Reviewed-by: Kairui Song <kasong@tencent.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c35bb8593f50..ed2c8e4edb71 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3324,7 +3324,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> si->global_cluster = kmalloc(sizeof(*si->global_cluster),
> GFP_KERNEL);
> if (!si->global_cluster)
> - goto err_free;
> + goto err;
> for (i = 0; i < SWAP_NR_ORDERS; i++)
> si->global_cluster->next[i] = SWAP_ENTRY_INVALID;
> spin_lock_init(&si->global_cluster_lock);
> @@ -3377,9 +3377,8 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
> }
>
> return cluster_info;
> -err_free:
> - free_cluster_info(cluster_info, maxpages);
> err:
> + free_cluster_info(cluster_info, maxpages);
> return ERR_PTR(err);
> }
>
> --
> 2.34.1
>
* [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-31 6:50 [PATCH v2 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
2025-10-31 6:50 ` [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
@ 2025-10-31 6:50 ` Youngjun Park
2025-11-01 0:02 ` Baoquan He
2025-11-05 4:30 ` Chris Li
2025-10-31 6:50 ` [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
` (2 subsequent siblings)
4 siblings, 2 replies; 22+ messages in thread
From: Youngjun Park @ 2025-10-31 6:50 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
Barry Song, Chris Li, youngjun.park
The current non-rotational check is unreliable, as the device's
rotational status can be changed by a user via sysfs.
Use the more reliable SWP_SOLIDSTATE flag, which is set at swapon time,
to ensure the nr_rotate_swap count remains consistent.
Plus, it is simpler and easier to read.
Fixes: 81a0298bdfab ("mm, swap: don't use VMA based swap readahead if HDD is used as swap")
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ed2c8e4edb71..b7cde8ef6742 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
if (p->flags & SWP_CONTINUED)
free_swap_count_continuations(p);
- if (!p->bdev || !bdev_nonrot(p->bdev))
+ if (!(p->flags & SWP_SOLIDSTATE))
atomic_dec(&nr_rotate_swap);
mutex_lock(&swapon_mutex);
--
2.34.1
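The race this patch avoids — a live device property changing between swapon and swapoff, unbalancing a counter — can be sketched in userspace C. The struct, flag value, and function names below are hypothetical stand-ins for illustration, not the kernel definitions:

```c
#include <assert.h>
#include <stdbool.h>

#define SWP_SOLIDSTATE 0x1	/* hypothetical flag bit */

struct swap_dev {
	unsigned int flags;
	bool rotational;	/* live property, user-writable via "sysfs" */
};

static int nr_rotate;

/* swapon: snapshot the property into a flag exactly once */
static void swapon(struct swap_dev *d)
{
	if (!d->rotational)
		d->flags |= SWP_SOLIDSTATE;
	if (!(d->flags & SWP_SOLIDSTATE))
		nr_rotate++;
}

/* swapoff: consult the snapshot, not the live property, so the
 * decrement always mirrors the increment taken at swapon time */
static void swapoff(struct swap_dev *d)
{
	if (!(d->flags & SWP_SOLIDSTATE))
		nr_rotate--;
}
```

Had swapoff re-read the live `rotational` bit (the old `bdev_nonrot()`-style check), a user flipping the knob after swapon would leave `nr_rotate` permanently skewed.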
* Re: [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-31 6:50 ` [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
@ 2025-11-01 0:02 ` Baoquan He
2025-11-05 4:30 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Baoquan He @ 2025-11-01 0:02 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Barry Song, Chris Li
On 10/31/25 at 03:50pm, Youngjun Park wrote:
> The current non rotational check is unreliable
> as the device's rotational status can be changed by a user via sysfs.
>
> Use the more reliable SWP_SOLIDSTATE flag which is set at swapon time,
> to ensure the nr_rotate_swap count remains consistent.
> Plus, it is easy to read and simple.
>
> Fixes: 81a0298bdfab ("mm, swap: don't use VMA based swap readahead if HDD is used as swap")
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index ed2c8e4edb71..b7cde8ef6742 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
> if (p->flags & SWP_CONTINUED)
> free_swap_count_continuations(p);
>
> - if (!p->bdev || !bdev_nonrot(p->bdev))
> + if (!(p->flags & SWP_SOLIDSTATE))
> atomic_dec(&nr_rotate_swap);
Reviewed-by: Baoquan He <bhe@redhat.com>
>
> mutex_lock(&swapon_mutex);
> --
> 2.34.1
>
* Re: [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational
2025-10-31 6:50 ` [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
2025-11-01 0:02 ` Baoquan He
@ 2025-11-05 4:30 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Chris Li @ 2025-11-05 4:30 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> The current non rotational check is unreliable
> as the device's rotational status can be changed by a user via sysfs.
>
> Use the more reliable SWP_SOLIDSTATE flag which is set at swapon time,
> to ensure the nr_rotate_swap count remains consistent.
> Plus, it is easy to read and simple.
>
> Fixes: 81a0298bdfab ("mm, swap: don't use VMA based swap readahead if HDD is used as swap")
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index ed2c8e4edb71..b7cde8ef6742 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2913,7 +2913,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
> if (p->flags & SWP_CONTINUED)
> free_swap_count_continuations(p);
>
> - if (!p->bdev || !bdev_nonrot(p->bdev))
> + if (!(p->flags & SWP_SOLIDSTATE))
> atomic_dec(&nr_rotate_swap);
>
> mutex_lock(&swapon_mutex);
> --
> 2.34.1
>
* [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-10-31 6:50 [PATCH v2 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
2025-10-31 6:50 ` [PATCH v2 1/5] mm, swap: Fix memory leak in setup_clusters() error path Youngjun Park
2025-10-31 6:50 ` [PATCH v2 2/5] mm, swap: Use SWP_SOLIDSTATE to determine if swap is rotational Youngjun Park
@ 2025-10-31 6:50 ` Youngjun Park
2025-11-01 0:02 ` Baoquan He
2025-11-05 6:03 ` Chris Li
2025-10-31 6:50 ` [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
2025-10-31 6:50 ` [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
4 siblings, 2 replies; 22+ messages in thread
From: Youngjun Park @ 2025-10-31 6:50 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
Barry Song, Chris Li, youngjun.park
The function now manages get/put_swap_device() internally, making the
comment explaining this behavior to callers unnecessary.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b13e9c4baa90..d20d238109f9 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -509,10 +509,6 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* and reading the disk if it is not already cached.
* A failure return means that either the page allocation failed or that
* the swap entry is no longer in use.
- *
- * get/put_swap_device() aren't needed to call this function, because
- * __read_swap_cache_async() call them and swap_read_folio() holds the
- * swap cache folio lock.
*/
struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct vm_area_struct *vma, unsigned long addr,
--
2.34.1
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-10-31 6:50 ` [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
@ 2025-11-01 0:02 ` Baoquan He
2025-11-05 6:03 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Baoquan He @ 2025-11-01 0:02 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Barry Song, Chris Li
On 10/31/25 at 03:50pm, Youngjun Park wrote:
> The function now manages get/put_swap_device() internally, making the
> comment explaining this behavior to callers unnecessary.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b13e9c4baa90..d20d238109f9 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -509,10 +509,6 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> * and reading the disk if it is not already cached.
> * A failure return means that either the page allocation failed or that
> * the swap entry is no longer in use.
> - *
> - * get/put_swap_device() aren't needed to call this function, because
> - * __read_swap_cache_async() call them and swap_read_folio() holds the
> - * swap cache folio lock.
> */
> struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct vm_area_struct *vma, unsigned long addr,
Reviewed-by: Baoquan He <bhe@redhat.com>
> --
> 2.34.1
>
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-10-31 6:50 ` [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
2025-11-01 0:02 ` Baoquan He
@ 2025-11-05 6:03 ` Chris Li
2025-11-05 7:58 ` Chris Li
1 sibling, 1 reply; 22+ messages in thread
From: Chris Li @ 2025-11-05 6:03 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> The function now manages get/put_swap_device() internally, making the
> comment explaining this behavior to callers unnecessary.
This commit message is not precise. The previous comment explaining
this behavior is not just unnecessary; it is plain wrong and does not
match the code. __read_swap_cache_async() does not internally call
get/put_swap_device() any more. So this comment regarding
get/put_swap_device() should move to __read_swap_cache_async(): the
caller of __read_swap_cache_async() needs to manage
get/put_swap_device(), or otherwise avoid racing with swapoff.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b13e9c4baa90..d20d238109f9 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -509,10 +509,6 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> * and reading the disk if it is not already cached.
> * A failure return means that either the page allocation failed or that
> * the swap entry is no longer in use.
> - *
> - * get/put_swap_device() aren't needed to call this function, because
> - * __read_swap_cache_async() call them and swap_read_folio() holds the
> - * swap cache folio lock.
I don't think we should simply delete this comment. We can delete from
this function but please add a comment to __read_swap_cache_async()
regarding the caller's responsibility.
Chris
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-05 6:03 ` Chris Li
@ 2025-11-05 7:58 ` Chris Li
2025-11-05 9:27 ` YoungJun Park
2025-11-06 0:29 ` Andrew Morton
0 siblings, 2 replies; 22+ messages in thread
From: Chris Li @ 2025-11-05 7:58 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
Actually this patch can be dropped. The swap table phase II 1/19
already addresses those comments and fits my description of what needs
to be done for __read_swap_cache_async().
Also, the comment adjustment is much better done within a series that
actually overhauls the code rather than as an out-of-series pure
comment cleanup.
Hi Youngjun and Andrew,
Do you mind putting this 5-patch series AFTER the swap table phase
II? It is comment adjustment anyway. If there is a conflict with
phase II, I think phase II should take priority.
Chris
On Tue, Nov 4, 2025 at 10:03 PM Chris Li <chrisl@kernel.org> wrote:
>
> On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
> >
> > The function now manages get/put_swap_device() internally, making the
> > comment explaining this behavior to callers unnecessary.
>
> This commit message is not precise. The previous comment explaining
> this behavior is not just unnecessary. The comment is plain wrong. It
> does not match the code. __read_swap_cache_async() does not internally
> call get/put_swap_device() any more. So this comment regarding
> get/put_swap_device should move to __read_swap_cache_async(), the
> caller of __read_swap_cache_async() needs to manage the
> get/put_swap_device() or by other means not to race with swap off.
>
> >
> > Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> >
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index b13e9c4baa90..d20d238109f9 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -509,10 +509,6 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > * and reading the disk if it is not already cached.
> > * A failure return means that either the page allocation failed or that
> > * the swap entry is no longer in use.
> > - *
> > - * get/put_swap_device() aren't needed to call this function, because
> > - * __read_swap_cache_async() call them and swap_read_folio() holds the
> > - * swap cache folio lock.
>
>
> I don't think we should simply delete this comment. We can delete from
> this function but please add a comment to __read_swap_cache_async()
> regarding the caller's responsibility.
>
> Chris
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-05 7:58 ` Chris Li
@ 2025-11-05 9:27 ` YoungJun Park
2025-11-06 0:29 ` Andrew Morton
1 sibling, 0 replies; 22+ messages in thread
From: YoungJun Park @ 2025-11-05 9:27 UTC (permalink / raw)
To: Chris Li
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Tue, Nov 04, 2025 at 11:58:17PM -0800, Chris Li wrote:
> Actually this patch can be dropped. The swap table phase II 1/19
> already addresses those comments and fits my description of what needs
> to be done for __read_swap_cache_async().
>
> Also, the comment adjustment is much better done within a series that
> actually overhaul the code rather than out of series pure comment
> clean up.
>
> Hi Youngjun and Andrew,
>
> Do you mind putting this 5 patches series AFTER the swap table phase
> II? It is the comment adjustment anyway. If there is a conflict with
> phase II, I think phase II should take priority.
>
> Chris
Hi Chris,
Sounds good to me. Happy to have these queued with adjustment after phase II.
Thank you,
Youngjun
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-05 7:58 ` Chris Li
2025-11-05 9:27 ` YoungJun Park
@ 2025-11-06 0:29 ` Andrew Morton
2025-11-20 5:00 ` Chris Li
1 sibling, 1 reply; 22+ messages in thread
From: Andrew Morton @ 2025-11-06 0:29 UTC (permalink / raw)
To: Chris Li
Cc: Youngjun Park, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Tue, 4 Nov 2025 23:58:17 -0800 Chris Li <chrisl@kernel.org> wrote:
> Do you mind putting this 5 patches series AFTER the swap table phase
> II? It is the comment adjustment anyway. If there is a conflict with
> phase II, I think phase II should take priority.
OK, I'll drop this series "mm: swap: small fixes and comment cleanups", v2.
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-06 0:29 ` Andrew Morton
@ 2025-11-20 5:00 ` Chris Li
2025-11-20 16:54 ` Andrew Morton
0 siblings, 1 reply; 22+ messages in thread
From: Chris Li @ 2025-11-20 5:00 UTC (permalink / raw)
To: Andrew Morton
Cc: Youngjun Park, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Wed, Nov 5, 2025 at 4:29 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 4 Nov 2025 23:58:17 -0800 Chris Li <chrisl@kernel.org> wrote:
>
> > Do you mind putting this 5 patches series AFTER the swap table phase
> > II? It is the comment adjustment anyway. If there is a conflict with
> > phase II, I think phase II should take priority.
>
> OK, I'll drop this series "mm: swap: small fixes and comment cleanups", v2.
Hi Andrew,
I discussed with Kairui; his swap table phase II series will need a
V3 refresh anyway.
Please re-apply this 5-patch cleanup series from Youngjun; it is
already reviewed. Kairui's phase II V3 will use it as a base, so we
don't hold up this cleanup series for too long.
Thanks
Chris
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-20 5:00 ` Chris Li
@ 2025-11-20 16:54 ` Andrew Morton
2025-11-20 18:34 ` Chris Li
0 siblings, 1 reply; 22+ messages in thread
From: Andrew Morton @ 2025-11-20 16:54 UTC (permalink / raw)
To: Chris Li
Cc: Youngjun Park, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Wed, 19 Nov 2025 21:00:50 -0800 Chris Li <chrisl@kernel.org> wrote:
> On Wed, Nov 5, 2025 at 4:29 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Tue, 4 Nov 2025 23:58:17 -0800 Chris Li <chrisl@kernel.org> wrote:
> >
> > > Do you mind putting this 5 patches series AFTER the swap table phase
> > > II? It is the comment adjustment anyway. If there is a conflict with
> > > phase II, I think phase II should take priority.
> >
> > OK, I'll drop this series "mm: swap: small fixes and comment cleanups", v2.
>
> Hi Andrew,
>
> I discuss with Kairui, his swap table phase II series will need to
> refresh a V3 anyway.
> Please re-apply this cleanup 5 patch series from YongJun, it is
> already reviewed. Kairui's phase II V3 will use it as base so we don't
> hold up this cleanup series for too long.
OK...
* Re: [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async
2025-11-20 16:54 ` Andrew Morton
@ 2025-11-20 18:34 ` Chris Li
0 siblings, 0 replies; 22+ messages in thread
From: Chris Li @ 2025-11-20 18:34 UTC (permalink / raw)
To: Andrew Morton
Cc: Youngjun Park, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
On Thu, Nov 20, 2025 at 8:54 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> > Hi Andrew,
> >
> > I discuss with Kairui, his swap table phase II series will need to
> > refresh a V3 anyway.
> > Please re-apply this cleanup 5 patch series from YongJun, it is
> > already reviewed. Kairui's phase II V3 will use it as base so we don't
> > hold up this cleanup series for too long.
>
> OK...
Thank you so much, and sorry for the back and forth; the phase II
review took longer than expected.
Chris
* [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-31 6:50 [PATCH v2 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
` (2 preceding siblings ...)
2025-10-31 6:50 ` [PATCH v2 3/5] mm, swap: Remove redundant comment for read_swap_cache_async Youngjun Park
@ 2025-10-31 6:50 ` Youngjun Park
2025-11-01 0:03 ` Baoquan He
2025-11-05 6:04 ` Chris Li
2025-10-31 6:50 ` [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
4 siblings, 2 replies; 22+ messages in thread
From: Youngjun Park @ 2025-10-31 6:50 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
Barry Song, Chris Li, youngjun.park
swap_alloc_slow() does not need to return a bool, as all callers
handle allocation results via the entry parameter. Update the
function signature and remove return statements accordingly.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b7cde8ef6742..4f8a1843262e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
}
/* Rotate the device and switch to a new cluster */
-static bool swap_alloc_slow(swp_entry_t *entry,
+static void swap_alloc_slow(swp_entry_t *entry,
int order)
{
unsigned long offset;
@@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
put_swap_device(si);
if (offset) {
*entry = swp_entry(si->type, offset);
- return true;
+ return;
}
if (order)
- return false;
+ return;
}
spin_lock(&swap_avail_lock);
@@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
goto start_over;
}
spin_unlock(&swap_avail_lock);
- return false;
}
/*
--
2.34.1
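The signature change above follows a common pattern: when every caller already inspects an out-parameter, a boolean return carries no extra information. A minimal userspace C sketch of that pattern — the sentinel, function name, and slot value are hypothetical, not the kernel's:

```c
#include <assert.h>

#define ENTRY_INVALID 0u	/* hypothetical sentinel, like SWP_ENTRY_INVALID */

/* The result is reported solely through *entry; a bool return would
 * be redundant since every caller checks the out-parameter anyway. */
static void alloc_slow(unsigned int *entry, int succeed)
{
	if (!succeed)
		return;		/* *entry keeps its sentinel value */
	*entry = 42;		/* hypothetical allocated slot */
}
```

A caller distinguishes success from failure exactly as before the signature change, by testing `*entry` against the sentinel rather than the dropped return value.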
* Re: [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-31 6:50 ` [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
@ 2025-11-01 0:03 ` Baoquan He
2025-11-05 6:04 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Baoquan He @ 2025-11-01 0:03 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Barry Song, Chris Li
On 10/31/25 at 03:50pm, Youngjun Park wrote:
> swap_alloc_slow() does not need to return a bool, as all callers
> handle allocation results via the entry parameter. Update the
> function signature and remove return statements accordingly.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> Reviewed-by: Kairui Song <kasong@tencent.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index b7cde8ef6742..4f8a1843262e 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
Reviewed-by: Baoquan He <bhe@redhat.com>
> }
>
> /* Rotate the device and switch to a new cluster */
> -static bool swap_alloc_slow(swp_entry_t *entry,
> +static void swap_alloc_slow(swp_entry_t *entry,
> int order)
> {
> unsigned long offset;
> @@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> put_swap_device(si);
> if (offset) {
> *entry = swp_entry(si->type, offset);
> - return true;
> + return;
> }
> if (order)
> - return false;
> + return;
> }
>
> spin_lock(&swap_avail_lock);
> @@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> goto start_over;
> }
> spin_unlock(&swap_avail_lock);
> - return false;
> }
>
> /*
> --
> 2.34.1
>
* Re: [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void
2025-10-31 6:50 ` [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
2025-11-01 0:03 ` Baoquan He
@ 2025-11-05 6:04 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Chris Li @ 2025-11-05 6:04 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> swap_alloc_slow() does not need to return a bool, as all callers
> handle allocation results via the entry parameter. Update the
> function signature and remove return statements accordingly.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
> Reviewed-by: Kairui Song <kasong@tencent.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index b7cde8ef6742..4f8a1843262e 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1339,7 +1339,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
> }
>
> /* Rotate the device and switch to a new cluster */
> -static bool swap_alloc_slow(swp_entry_t *entry,
> +static void swap_alloc_slow(swp_entry_t *entry,
> int order)
> {
> unsigned long offset;
> @@ -1356,10 +1356,10 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> put_swap_device(si);
> if (offset) {
> *entry = swp_entry(si->type, offset);
> - return true;
> + return;
> }
> if (order)
> - return false;
> + return;
> }
>
> spin_lock(&swap_avail_lock);
> @@ -1378,7 +1378,6 @@ static bool swap_alloc_slow(swp_entry_t *entry,
> goto start_over;
> }
> spin_unlock(&swap_avail_lock);
> - return false;
> }
>
> /*
> --
> 2.34.1
>
* [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments
2025-10-31 6:50 [PATCH v2 0/5] mm: swap: small fixes and comment cleanups Youngjun Park
` (3 preceding siblings ...)
2025-10-31 6:50 ` [PATCH v2 4/5] mm: swap: change swap_alloc_slow() to void Youngjun Park
@ 2025-10-31 6:50 ` Youngjun Park
2025-11-01 0:03 ` Baoquan He
2025-11-05 6:50 ` Chris Li
4 siblings, 2 replies; 22+ messages in thread
From: Youngjun Park @ 2025-10-31 6:50 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
Barry Song, Chris Li, youngjun.park
The scan_swap_map_slots() helper has been removed, but several comments
still referred to it in swap allocation and reclaim paths. This patch
cleans up those outdated references and reflows the affected comment
blocks to match kernel coding style.
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4f8a1843262e..543f303f101d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -236,11 +236,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
ret = -nr_pages;
/*
- * When this function is called from scan_swap_map_slots() and it's
- * called by vmscan.c at reclaiming folios. So we hold a folio lock
- * here. We have to use trylock for avoiding deadlock. This is a special
- * case and you should use folio_free_swap() with explicit folio_lock()
- * in usual operations.
+ * We hold a folio lock here. We have to use trylock for
+ * avoiding deadlock. This is a special case and you should
+ * use folio_free_swap() with explicit folio_lock() in usual
+ * operations.
*/
if (!folio_trylock(folio))
goto out;
@@ -1365,14 +1364,13 @@ static void swap_alloc_slow(swp_entry_t *entry,
spin_lock(&swap_avail_lock);
/*
* if we got here, it's likely that si was almost full before,
- * and since scan_swap_map_slots() can drop the si->lock,
* multiple callers probably all tried to get a page from the
* same si and it filled up before we could get one; or, the si
- * filled up between us dropping swap_avail_lock and taking
- * si->lock. Since we dropped the swap_avail_lock, the
- * swap_avail_head list may have been modified; so if next is
- * still in the swap_avail_head list then try it, otherwise
- * start over if we have not gotten any slots.
+ * filled up between us dropping swap_avail_lock.
+ * Since we dropped the swap_avail_lock, the swap_avail_list
+ * may have been modified; so if next is still in the
+ * swap_avail_head list then try it, otherwise start over if we
+ * have not gotten any slots.
*/
if (plist_node_empty(&si->avail_list))
goto start_over;
--
2.34.1
* Re: [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments
2025-10-31 6:50 ` [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
@ 2025-11-01 0:03 ` Baoquan He
2025-11-05 6:50 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Baoquan He @ 2025-11-01 0:03 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Barry Song, Chris Li
On 10/31/25 at 03:50pm, Youngjun Park wrote:
> The scan_swap_map_slots() helper has been removed, but several comments
> still referred to it in swap allocation and reclaim paths. This patch
> cleans up those outdated references and reflows the affected comment
> blocks to match kernel coding style.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 4f8a1843262e..543f303f101d 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
Reviewed-by: Baoquan He <bhe@redhat.com>
> @@ -236,11 +236,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> ret = -nr_pages;
>
> /*
> - * When this function is called from scan_swap_map_slots() and it's
> - * called by vmscan.c at reclaiming folios. So we hold a folio lock
> - * here. We have to use trylock for avoiding deadlock. This is a special
> - * case and you should use folio_free_swap() with explicit folio_lock()
> - * in usual operations.
> + * We hold a folio lock here. We have to use trylock to
> + * avoid deadlock. This is a special case and you should
> + * use folio_free_swap() with explicit folio_lock() in usual
> + * operations.
> */
> if (!folio_trylock(folio))
> goto out;
> @@ -1365,14 +1364,13 @@ static void swap_alloc_slow(swp_entry_t *entry,
> spin_lock(&swap_avail_lock);
> /*
> * if we got here, it's likely that si was almost full before,
> - * and since scan_swap_map_slots() can drop the si->lock,
> * multiple callers probably all tried to get a page from the
> * same si and it filled up before we could get one; or, the si
> - * filled up between us dropping swap_avail_lock and taking
> - * si->lock. Since we dropped the swap_avail_lock, the
> - * swap_avail_head list may have been modified; so if next is
> - * still in the swap_avail_head list then try it, otherwise
> - * start over if we have not gotten any slots.
> + * filled up after we dropped the swap_avail_lock.
> + * Since we dropped the swap_avail_lock, the swap_avail_head
> + * list may have been modified; so if next is still in the
> + * swap_avail_head list then try it, otherwise start over if we
> + * have not gotten any slots.
> */
> if (plist_node_empty(&si->avail_list))
> goto start_over;
> --
> 2.34.1
>
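The retry logic the quoted comment describes -- try "next" only if it is still on the avail list, otherwise start over -- can be sketched as a simplified single-threaded model. `struct swap_dev`, `alloc_slot()`, and `free_slots` are invented for illustration and are not the kernel's types; the real code races under `swap_avail_lock` on a priority-sorted plist.

```c
#include <stdbool.h>

/* Hypothetical model of the swap_alloc_slow() comment: while the
 * availability lock is dropped, racing callers may fill a device
 * and remove it from the avail list, so we only advance to "next"
 * if it is still listed; otherwise we start over from the head. */
struct swap_dev {
	bool on_avail_list;	/* still on the avail list? */
	int free_slots;		/* slots left on this device */
};

/* Returns the index of the device a slot was taken from, or -1
 * if no listed device had a free slot. */
static int alloc_slot(struct swap_dev *devs, int ndevs)
{
	int i = 0;

start_over:
	for (; i < ndevs; i++) {
		struct swap_dev *si = &devs[i];

		if (!si->on_avail_list)
			continue;
		if (si->free_slots > 0) {
			si->free_slots--;
			return i;
		}
		/* Device filled up (e.g. by a racing caller while
		 * the lock was dropped): drop it from the list. */
		si->on_avail_list = false;
		if (i + 1 < ndevs && devs[i + 1].on_avail_list)
			i++;	/* next still listed: try it */
		else
			i = 0;	/* otherwise start over */
		goto start_over;
	}
	return -1;
}
```

Each pass through `start_over` either returns a slot or removes a full device from the list, so the loop terminates -- the same reason the kernel's version cannot spin forever.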
* Re: [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments
2025-10-31 6:50 ` [PATCH v2 5/5] mm: swap: remove scan_swap_map_slots() references from comments Youngjun Park
2025-11-01 0:03 ` Baoquan He
@ 2025-11-05 6:50 ` Chris Li
1 sibling, 0 replies; 22+ messages in thread
From: Chris Li @ 2025-11-05 6:50 UTC (permalink / raw)
To: Youngjun Park
Cc: Andrew Morton, linux-mm, Kemeng Shi, Kairui Song, Nhat Pham,
Baoquan He, Barry Song
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Thu, Oct 30, 2025 at 11:54 PM Youngjun Park <youngjun.park@lge.com> wrote:
>
> The scan_swap_map_slots() helper has been removed, but several comments
> still referred to it in swap allocation and reclaim paths. This patch
> cleans up those outdated references and reflows the affected comment
> blocks to match kernel coding style.
>
> Signed-off-by: Youngjun Park <youngjun.park@lge.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 4f8a1843262e..543f303f101d 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -236,11 +236,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> ret = -nr_pages;
>
> /*
> - * When this function is called from scan_swap_map_slots() and it's
> - * called by vmscan.c at reclaiming folios. So we hold a folio lock
> - * here. We have to use trylock for avoiding deadlock. This is a special
> - * case and you should use folio_free_swap() with explicit folio_lock()
> - * in usual operations.
> + * We hold a folio lock here. We have to use trylock to
> + * avoid deadlock. This is a special case and you should
> + * use folio_free_swap() with explicit folio_lock() in usual
> + * operations.
> */
> if (!folio_trylock(folio))
> goto out;
> @@ -1365,14 +1364,13 @@ static void swap_alloc_slow(swp_entry_t *entry,
> spin_lock(&swap_avail_lock);
> /*
> * if we got here, it's likely that si was almost full before,
> - * and since scan_swap_map_slots() can drop the si->lock,
> * multiple callers probably all tried to get a page from the
> * same si and it filled up before we could get one; or, the si
> - * filled up between us dropping swap_avail_lock and taking
> - * si->lock. Since we dropped the swap_avail_lock, the
> - * swap_avail_head list may have been modified; so if next is
> - * still in the swap_avail_head list then try it, otherwise
> - * start over if we have not gotten any slots.
> + * filled up after we dropped the swap_avail_lock.
> + * Since we dropped the swap_avail_lock, the swap_avail_head
> + * list may have been modified; so if next is still in the
> + * swap_avail_head list then try it, otherwise start over if we
> + * have not gotten any slots.
> */
> if (plist_node_empty(&si->avail_list))
> goto start_over;
> --
> 2.34.1
>