* [PATCH v2] mm: Fix shmem THP counters on migration
From: Jan Glauber @ 2023-06-19 10:33 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, linux-kernel, Jan Glauber
The per-node numa_stat values for shmem are not updated on page
migration for THP:
grep shmem /sys/fs/cgroup/machine.slice/.../memory.numa_stat:
shmem N0=1092616192 N1=10485760
shmem_thp N0=1092616192 N1=10485760
After migratepages 9181 0 1 (migrate pid 9181 from node 0 to node 1):
shmem N0=0 N1=1103101952
shmem_thp N0=1092616192 N1=10485760
Fix this by updating the shmem_thp counters on page migration in the
same way as the shmem counters.
Signed-off-by: Jan Glauber <jglauber@digitalocean.com>
---
mm/migrate.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/mm/migrate.c b/mm/migrate.c
index 01cac26a3127..d2ba786ea105 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -492,6 +492,11 @@ int folio_migrate_mapping(struct address_space *mapping,
if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
+
+ if (folio_test_transhuge(folio)) {
+ __mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
+ __mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
+ }
}
#ifdef CONFIG_SWAP
if (folio_test_swapcache(folio)) {
--
2.25.1
* Re: [PATCH v2] mm: Fix shmem THP counters on migration
From: Baolin Wang @ 2023-06-21 6:46 UTC (permalink / raw)
To: Jan Glauber, akpm; +Cc: linux-mm, linux-kernel, Huang, Ying
On 6/19/2023 6:33 PM, Jan Glauber wrote:
> The per-node numa_stat values for shmem are not updated on page
> migration for THP:
>
> grep shmem /sys/fs/cgroup/machine.slice/.../memory.numa_stat:
>
> shmem N0=1092616192 N1=10485760
> shmem_thp N0=1092616192 N1=10485760
>
> After migratepages 9181 0 1 (migrate pid 9181 from node 0 to node 1):
>
> shmem N0=0 N1=1103101952
> shmem_thp N0=1092616192 N1=10485760
>
> Fix this by updating the shmem_thp counters on page migration in the
> same way as the shmem counters.
>
> Signed-off-by: Jan Glauber <jglauber@digitalocean.com>
> ---
Please add your change history.
> mm/migrate.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 01cac26a3127..d2ba786ea105 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -492,6 +492,11 @@ int folio_migrate_mapping(struct address_space *mapping,
> if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
> __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> +
> + if (folio_test_transhuge(folio)) {
I am afraid this check is fragile: IIUC, a file-backed folio can
contain various numbers of pages in the future, so
folio_test_pmd_mappable() seems more suitable for testing THP here.
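Something like this untested sketch of what I mean (the accounting is
unchanged, only the THP check differs):

	if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
		__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
		__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);

		/* Only PMD-mappable shmem folios are accounted in NR_SHMEM_THPS. */
		if (folio_test_pmd_mappable(folio)) {
			__mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
			__mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
		}
	}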
> + __mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
> + __mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
> + }
> }
> #ifdef CONFIG_SWAP
> if (folio_test_swapcache(folio)) {
> --
> 2.25.1
>