Subject: [PATCH hotfix] mm: shmem: fix ShmemHugePages at swapout
From: Hugh Dickins @ 2024-12-05 6:50 UTC
To: Andrew Morton; +Cc: Baolin Wang, linux-kernel, linux-mm
/proc/meminfo ShmemHugePages has been showing overlarge amounts (more
than Shmem) after swapping out THPs: we forgot to update NR_SHMEM_THPS.
Add shmem_update_stats(), to avoid repetition, and risk of making that
mistake again: the call from shmem_delete_from_page_cache() is the bugfix;
the call from shmem_replace_folio() is reassuring, but not really a bugfix
(replace corrects misplaced swapin readahead, but huge swapin readahead
would be a mistake).
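As an illustration only (not part of the patch; the file name and approach are arbitrary), a minimal userspace sketch can watch for the symptom by comparing the two counters in /proc/meminfo: with the bug present, swapping out shmem THPs can leave ShmemHugePages larger than Shmem.

/* check_shmem_meminfo.c: report if ShmemHugePages exceeds Shmem */
#include <stdio.h>

int main(void)
{
        char line[256];
        long shmem = -1, shmem_huge = -1;
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f) {
                perror("/proc/meminfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                /* sscanf only stores a value when the label at the
                 * start of the line matches exactly */
                sscanf(line, "Shmem: %ld kB", &shmem);
                sscanf(line, "ShmemHugePages: %ld kB", &shmem_huge);
        }
        fclose(f);

        printf("Shmem: %ld kB  ShmemHugePages: %ld kB\n", shmem, shmem_huge);
        if (shmem >= 0 && shmem_huge > shmem)
                printf("ShmemHugePages exceeds Shmem: stale NR_SHMEM_THPS?\n");
        return 0;
}

Build with any C compiler and run it after forcing shmem THPs out to swap to observe the over-count.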
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org
---
mm/shmem.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index ccb9629a0f70..f6fb053ac50d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -787,6 +787,14 @@ static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+static void shmem_update_stats(struct folio *folio, int nr_pages)
+{
+ if (folio_test_pmd_mappable(folio))
+ __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
+ __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
+ __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
+}
+
/*
* Somewhat like filemap_add_folio, but error if expected item has gone.
*/
@@ -821,10 +829,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
xas_store(&xas, folio);
if (xas_error(&xas))
goto unlock;
- if (folio_test_pmd_mappable(folio))
- __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
- __lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
+ shmem_update_stats(folio, nr);
mapping->nrpages += nr;
unlock:
xas_unlock_irq(&xas);
@@ -852,8 +857,7 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
error = shmem_replace_entry(mapping, folio->index, folio, radswap);
folio->mapping = NULL;
mapping->nrpages -= nr;
- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
- __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
+ shmem_update_stats(folio, -nr);
xa_unlock_irq(&mapping->i_pages);
folio_put_refs(folio, nr);
BUG_ON(error);
@@ -1969,10 +1973,8 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
}
if (!error) {
mem_cgroup_replace_folio(old, new);
- __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
- __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
- __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
- __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
+ shmem_update_stats(new, nr_pages);
+ shmem_update_stats(old, -nr_pages);
}
xa_unlock_irq(&swap_mapping->i_pages);
--
2.43.0
Subject: Re: [PATCH hotfix] mm: shmem: fix ShmemHugePages at swapout
From: Shakeel Butt @ 2024-12-05 17:00 UTC
To: Hugh Dickins; +Cc: Andrew Morton, Baolin Wang, linux-kernel, linux-mm
On Wed, Dec 04, 2024 at 10:50:06PM -0800, Hugh Dickins wrote:
> /proc/meminfo ShmemHugePages has been showing overlarge amounts (more
> than Shmem) after swapping out THPs: we forgot to update NR_SHMEM_THPS.
>
> Add shmem_update_stats(), to avoid repetition, and risk of making that
> mistake again: the call from shmem_delete_from_page_cache() is the bugfix;
> the call from shmem_replace_folio() is reassuring, but not really a bugfix
> (replace corrects misplaced swapin readahead, but huge swapin readahead
> would be a mistake).
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: stable@vger.kernel.org
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Subject: Re: [PATCH hotfix] mm: shmem: fix ShmemHugePages at swapout
From: Yosry Ahmed @ 2024-12-05 20:48 UTC
To: Hugh Dickins; +Cc: Andrew Morton, Baolin Wang, linux-kernel, linux-mm
On Wed, Dec 4, 2024 at 10:50 PM Hugh Dickins <hughd@google.com> wrote:
>
> /proc/meminfo ShmemHugePages has been showing overlarge amounts (more
> than Shmem) after swapping out THPs: we forgot to update NR_SHMEM_THPS.
>
> Add shmem_update_stats(), to avoid repetition, and risk of making that
> mistake again: the call from shmem_delete_from_page_cache() is the bugfix;
> the call from shmem_replace_folio() is reassuring, but not really a bugfix
> (replace corrects misplaced swapin readahead, but huge swapin readahead
> would be a mistake).
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
> Cc: stable@vger.kernel.org
Subject: Re: [PATCH hotfix] mm: shmem: fix ShmemHugePages at swapout
From: Baolin Wang @ 2024-12-06 1:22 UTC
To: Hugh Dickins, Andrew Morton; +Cc: linux-kernel, linux-mm
On 2024/12/5 14:50, Hugh Dickins wrote:
> /proc/meminfo ShmemHugePages has been showing overlarge amounts (more
> than Shmem) after swapping out THPs: we forgot to update NR_SHMEM_THPS.
>
> Add shmem_update_stats(), to avoid repetition, and risk of making that
> mistake again: the call from shmem_delete_from_page_cache() is the bugfix;
> the call from shmem_replace_folio() is reassuring, but not really a bugfix
> (replace corrects misplaced swapin readahead, but huge swapin readahead
> would be a mistake).
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: stable@vger.kernel.org
Indeed. Thanks for fixing.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>