* [PATCH] mm/migrate: fix shmem xarray update during migration
@ 2025-02-28 15:42 Zi Yan
2025-02-28 16:35 ` Matthew Wilcox
2025-02-28 17:02 ` Shivank Garg
0 siblings, 2 replies; 5+ messages in thread
From: Zi Yan @ 2025-02-28 15:42 UTC (permalink / raw)
To: Liu Shixin, Baolin Wang, linux-mm
Cc: Andrew Morton, Barry Song, David Hildenbrand, Kefeng Wang,
Lance Yang, Ryan Roberts, Matthew Wilcox, Hugh Dickins,
Charan Teja Kalla, linux-kernel, Zi Yan
Pagecache uses multi-index entries for large folios, as does shmem. Only
the swap cache still stores multiple entries for a single large folio.
Commit fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
fixed the swap cache but got shmem wrong by storing multiple entries for
a large shmem folio. Fix it by storing a single entry for a large shmem
folio.
Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
Reported-by: Liu Shixin <liushixin2@huawei.com>
Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/migrate.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 365c6daa8d1b..9db26f5527a8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -44,6 +44,7 @@
#include <linux/sched/sysctl.h>
#include <linux/memory-tiers.h>
#include <linux/pagewalk.h>
+#include <linux/shmem_fs.h>
#include <asm/tlbflush.h>
@@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
folio_set_swapcache(newfolio);
newfolio->private = folio_get_private(folio);
}
- entries = nr;
+ /* shmem uses high-order entry */
+ if (shmem_mapping(mapping))
+ entries = 1;
+ else
+ entries = nr;
} else {
VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
entries = 1;
--
2.47.2
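To make the fixed behavior concrete, here is a minimal userspace sketch (not kernel code; `migrate_entries`, `swap_backed`, and `is_shmem` are illustrative names standing in for the `folio_test_swapbacked()`/`shmem_mapping()` checks in `__folio_migrate_mapping()`) of how many xarray slots each cache type updates after this patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the entries calculation in __folio_migrate_mapping() after
 * this patch: regular pagecache and shmem keep one multi-index entry
 * per large folio, while the swap cache still keeps one order-0 entry
 * per subpage. nr is the number of pages in the folio.
 */
static int migrate_entries(bool swap_backed, bool is_shmem, int nr)
{
	if (swap_backed)
		return is_shmem ? 1 : nr; /* shmem: one high-order entry */
	return 1; /* plain pagecache: one multi-index entry */
}
```

Before the patch, the swap-backed branch unconditionally returned nr, so a large shmem folio's single multi-index slot was updated nr times, corrupting neighboring entries.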
* Re: [PATCH] mm/migrate: fix shmem xarray update during migration
2025-02-28 15:42 [PATCH] mm/migrate: fix shmem xarray update during migration Zi Yan
@ 2025-02-28 16:35 ` Matthew Wilcox
2025-02-28 17:05 ` Zi Yan
2025-02-28 17:02 ` Shivank Garg
1 sibling, 1 reply; 5+ messages in thread
From: Matthew Wilcox @ 2025-02-28 16:35 UTC (permalink / raw)
To: Zi Yan
Cc: Liu Shixin, Baolin Wang, linux-mm, Andrew Morton, Barry Song,
David Hildenbrand, Kefeng Wang, Lance Yang, Ryan Roberts,
Hugh Dickins, Charan Teja Kalla, linux-kernel
On Fri, Feb 28, 2025 at 10:42:19AM -0500, Zi Yan wrote:
> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
> folio_set_swapcache(newfolio);
> newfolio->private = folio_get_private(folio);
> }
> - entries = nr;
> + /* shmem uses high-order entry */
> + if (shmem_mapping(mapping))
It'd be cheaper to check folio_test_anon() here, right?
Also, how did this bug remain unnoticed for almost 4 years?
Our testing is terrible ;-(
> + entries = 1;
> + else
> + entries = nr;
> } else {
> VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> entries = 1;
> --
> 2.47.2
>
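A hedged sketch of why the anon check is cheaper (userspace model, not kernel code; the `model_` names are illustrative): the kernel encodes "anonymous" as a low tag bit in the folio's mapping pointer (`PAGE_MAPPING_ANON`), so `folio_test_anon()` is a single bit test on a word already in hand, whereas `shmem_mapping()` must load `mapping->a_ops` and compare it against `shmem_aops`, an extra dependent load.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Low tag bit marking an anonymous mapping, as in the real kernel's
 * PAGE_MAPPING_ANON; the name is prefixed model_ to flag it as a stand-in. */
#define MODEL_PAGE_MAPPING_ANON 0x1UL

/* One AND on a pointer value, no pointer dereference needed. */
static bool model_folio_test_anon(uintptr_t mapping_word)
{
	return (mapping_word & MODEL_PAGE_MAPPING_ANON) != 0;
}
```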
* Re: [PATCH] mm/migrate: fix shmem xarray update during migration
2025-02-28 15:42 [PATCH] mm/migrate: fix shmem xarray update during migration Zi Yan
2025-02-28 16:35 ` Matthew Wilcox
@ 2025-02-28 17:02 ` Shivank Garg
2025-02-28 17:17 ` Zi Yan
1 sibling, 1 reply; 5+ messages in thread
From: Shivank Garg @ 2025-02-28 17:02 UTC (permalink / raw)
To: Zi Yan, Liu Shixin, Baolin Wang, linux-mm
Cc: Andrew Morton, Barry Song, David Hildenbrand, Kefeng Wang,
Lance Yang, Ryan Roberts, Matthew Wilcox, Hugh Dickins,
Charan Teja Kalla, linux-kernel
On 2/28/2025 9:12 PM, Zi Yan wrote:
> Pagecache uses multi-index entries for large folios, as does shmem. Only
> the swap cache still stores multiple entries for a single large folio.
> Commit fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
> fixed the swap cache but got shmem wrong by storing multiple entries for
> a large shmem folio. Fix it by storing a single entry for a large shmem
> folio.
>
> Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
> Reported-by: Liu Shixin <liushixin2@huawei.com>
> Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> mm/migrate.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 365c6daa8d1b..9db26f5527a8 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -44,6 +44,7 @@
> #include <linux/sched/sysctl.h>
> #include <linux/memory-tiers.h>
> #include <linux/pagewalk.h>
> +#include <linux/shmem_fs.h>
>
> #include <asm/tlbflush.h>
>
> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
> folio_set_swapcache(newfolio);
> newfolio->private = folio_get_private(folio);
> }
> - entries = nr;
> + /* shmem uses high-order entry */
> + if (shmem_mapping(mapping))
> + entries = 1;
> + else
> + entries = nr;
LGTM functionally.
As a minor style suggestion, can we consider using a ternary operator:
entries = shmem_mapping(mapping) ? 1 : nr; /* shmem uses high-order entry */
This looks cleaner to me.
Reviewed-by: Shivank Garg <shivankg@amd.com>
> } else {
> VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> entries = 1;
* Re: [PATCH] mm/migrate: fix shmem xarray update during migration
2025-02-28 16:35 ` Matthew Wilcox
@ 2025-02-28 17:05 ` Zi Yan
0 siblings, 0 replies; 5+ messages in thread
From: Zi Yan @ 2025-02-28 17:05 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Liu Shixin, Baolin Wang, linux-mm, Andrew Morton, Barry Song,
David Hildenbrand, Kefeng Wang, Lance Yang, Ryan Roberts,
Hugh Dickins, Charan Teja Kalla, linux-kernel
On 28 Feb 2025, at 11:35, Matthew Wilcox wrote:
> On Fri, Feb 28, 2025 at 10:42:19AM -0500, Zi Yan wrote:
>> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>> folio_set_swapcache(newfolio);
>> newfolio->private = folio_get_private(folio);
>> }
>> - entries = nr;
>> + /* shmem uses high-order entry */
>> + if (shmem_mapping(mapping))
>
> It'd be cheaper to check folio_test_anon() here, right?
Yes and it gets rid of the new include. Let me send v2.
>
> Also, how did this bug remain unnoticed for almost 4 years?
> Our testing is terrible ;-(
Probably not 4 years. Before fc346d0a70a1 (2023), shmem was right and
the swap cache was wrong. After fc346d0a70a1, shmem is wrong and the
swap cache is right. And before Baolin's patch, only PMD-size shmem
folios used multi-index entries. Maybe getting a PMD-size folio is
really hard when compaction runs a lot?
>
>> + entries = 1;
>> + else
>> + entries = nr;
>> } else {
>> VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>> entries = 1;
>> --
>> 2.47.2
>>
Best Regards,
Yan, Zi
* Re: [PATCH] mm/migrate: fix shmem xarray update during migration
2025-02-28 17:02 ` Shivank Garg
@ 2025-02-28 17:17 ` Zi Yan
0 siblings, 0 replies; 5+ messages in thread
From: Zi Yan @ 2025-02-28 17:17 UTC (permalink / raw)
To: Shivank Garg
Cc: Liu Shixin, Baolin Wang, linux-mm, Andrew Morton, Barry Song,
David Hildenbrand, Kefeng Wang, Lance Yang, Ryan Roberts,
Matthew Wilcox, Hugh Dickins, Charan Teja Kalla, linux-kernel
On 28 Feb 2025, at 12:02, Shivank Garg wrote:
> On 2/28/2025 9:12 PM, Zi Yan wrote:
>> Pagecache uses multi-index entries for large folios, as does shmem. Only
>> the swap cache still stores multiple entries for a single large folio.
>> Commit fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>> fixed the swap cache but got shmem wrong by storing multiple entries for
>> a large shmem folio. Fix it by storing a single entry for a large shmem
>> folio.
>>
>> Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>> Reported-by: Liu Shixin <liushixin2@huawei.com>
>> Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>> mm/migrate.c | 7 ++++++-
>> 1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 365c6daa8d1b..9db26f5527a8 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -44,6 +44,7 @@
>> #include <linux/sched/sysctl.h>
>> #include <linux/memory-tiers.h>
>> #include <linux/pagewalk.h>
>> +#include <linux/shmem_fs.h>
>>
>> #include <asm/tlbflush.h>
>>
>> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>> folio_set_swapcache(newfolio);
>> newfolio->private = folio_get_private(folio);
>> }
>> - entries = nr;
>> + /* shmem uses high-order entry */
>> + if (shmem_mapping(mapping))
>> + entries = 1;
>> + else
>> + entries = nr;
>
> LGTM functionally.
> As a minor style suggestion, can we consider using a ternary operator:
>
> entries = shmem_mapping(mapping) ? 1 : nr; /* shmem uses high-order entry */
>
> This looks cleaner to me.
>
> Reviewed-by: Shivank Garg <shivankg@amd.com>
Thanks.
Best Regards,
Yan, Zi