* [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page()
@ 2024-07-19 14:43 Zi Yan
2024-07-19 14:43 ` [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check Zi Yan
2024-07-20 8:11 ` [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page() Kefeng Wang
0 siblings, 2 replies; 6+ messages in thread
From: Zi Yan @ 2024-07-19 14:43 UTC (permalink / raw)
To: Andrew Morton, linux-mm
Cc: David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel, Zi Yan
last_cpupid is only available when memory tiering is off or the folio
is in a toptier node. Complete the check to read last_cpupid only when
it is available.

Before the fix, the default last_cpupid was used instead of the actual
value even when memory tiering mode was turned off at runtime. This can
prevent task_numa_fault() from getting the right NUMA fault stats, but
should not cause any crash. Users might see performance changes after
the fix.
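
To make the fixed logic concrete, here is a simplified sketch (not the
exact kernel code; the default value follows the convention used by
do_numa_page() in mm/memory.c):

	/* Assume no valid cpupid until proven otherwise. */
	int last_cpupid = (-1 & LAST_CPUPID_MASK);

	/*
	 * _last_cpupid only holds a real cpupid when memory tiering is
	 * off or the folio is on a toptier node; otherwise the field is
	 * repurposed to hold a page access time.
	 */
	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
	    node_is_toptier(nid))
		last_cpupid = folio_last_cpupid(folio);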
Reported-by: David Hildenbrand <david@redhat.com>
Closes: https://lore.kernel.org/linux-mm/9af34a6b-ca56-4a64-8aa6-ade65f109288@redhat.com/
Fixes: 33024536bafd ("memory tiering: hot page selection with hint page fault latency")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/huge_memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f4be468e06a4..825317aee88e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1712,7 +1712,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
-	if (node_is_toptier(nid))
+	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+	    node_is_toptier(nid))
 		last_cpupid = folio_last_cpupid(folio);
 	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE)
--
2.43.0

* [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check
2024-07-19 14:43 [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page() Zi Yan
@ 2024-07-19 14:43 ` Zi Yan
2024-07-20 7:50 ` Kefeng Wang
2024-07-20 8:11 ` [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page() Kefeng Wang
1 sibling, 1 reply; 6+ messages in thread
From: Zi Yan @ 2024-07-19 14:43 UTC (permalink / raw)
To: Andrew Morton, linux-mm
Cc: David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel, Zi Yan

Instead of an open-coded check for whether memory tiering mode is on
and a folio is in the top-tier memory, use a function to encapsulate
the check.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
---
 include/linux/memory-tiers.h |  8 ++++++++
 kernel/sched/fair.c          |  3 +--
 mm/huge_memory.c             |  6 ++----
 mm/memory-tiers.c            | 17 +++++++++++++++++
 mm/memory.c                  |  3 +--
 mm/mprotect.c                |  3 +--
 6 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 0dc0cf2863e2..10c127d461c4 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -73,6 +73,10 @@ static inline bool node_is_toptier(int node)
 }
 #endif

+
+bool folio_has_cpupid(struct folio *folio);
+
+
 #else

 #define numa_demotion_enabled	false
@@ -151,5 +155,9 @@ static inline struct memory_dev_type *mt_find_alloc_memory_type(int adist,
 static inline void mt_put_memory_types(struct list_head *memory_types)
 {
 }
+static inline bool folio_has_cpupid(struct folio *folio)
+{
+	return true;
+}
 #endif	/* CONFIG_NUMA */
 #endif	/* _LINUX_MEMORY_TIERS_H */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8a5b1ae0aa55..03de808cb3cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1840,8 +1840,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
 	 * The pages in slow memory node should be migrated according
 	 * to hot/cold instead of private/shared.
 	 */
-	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
-	    !node_is_toptier(src_nid)) {
+	if (!folio_has_cpupid(folio)) {
 		struct pglist_data *pgdat;
 		unsigned long rate_limit;
 		unsigned int latency, th, def_th;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 825317aee88e..d925a93bb9ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1712,8 +1712,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
-	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
-	    node_is_toptier(nid))
+	if (folio_has_cpupid(folio))
 		last_cpupid = folio_last_cpupid(folio);
 	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE)
@@ -2066,8 +2065,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		    toptier)
 			goto unlock;

-		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
-		    !toptier)
+		if (!folio_has_cpupid(folio))
 			folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
 	}
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 4775b3a3dabe..7f0360d4e3a0 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -6,6 +6,7 @@
 #include <linux/memory.h>
 #include <linux/memory-tiers.h>
 #include <linux/notifier.h>
+#include <linux/sched/sysctl.h>

 #include "internal.h"

@@ -50,6 +51,22 @@ static const struct bus_type memory_tier_subsys = {
 	.dev_name = "memory_tier",
 };

+/**
+ * folio_has_cpupid - check if a folio has cpupid information
+ * @folio: folio to check
+ *
+ * folio's _last_cpupid field is repurposed by memory tiering. In memory
+ * tiering mode, cpupid of slow memory folio (not toptier memory) is used to
+ * record page access time.
+ *
+ * Return: the folio _last_cpupid is used as cpupid
+ */
+bool folio_has_cpupid(struct folio *folio)
+{
+	return !(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+	       node_is_toptier(folio_nid(folio));
+}
+
 #ifdef CONFIG_MIGRATION
 static int top_tier_adistance;
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 802d0d8a40f9..105e1a0157dd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5337,8 +5337,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
-	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
-	    !node_is_toptier(nid))
+	if (!folio_has_cpupid(folio))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
 		last_cpupid = folio_last_cpupid(folio);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 222ab434da54..787c3c2bf1b6 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -161,8 +161,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
 			    toptier)
 				continue;
-			if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
-			    !toptier)
+			if (!folio_has_cpupid(folio))
 				folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
 		}
--
2.43.0
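
Taken together, a minimal sketch of the pattern the helper replaces at
its call sites (simplified from the hunks above, not a verbatim
excerpt):

	/*
	 * folio_has_cpupid() == true:  _last_cpupid holds a real cpupid
	 * and folio_last_cpupid() may be read directly.
	 * folio_has_cpupid() == false: memory tiering mode is on and the
	 * folio is on a slow (non-toptier) node, so _last_cpupid holds a
	 * page access time and the default cpupid must be used instead.
	 */
	if (folio_has_cpupid(folio))
		last_cpupid = folio_last_cpupid(folio);
	else
		last_cpupid = (-1 & LAST_CPUPID_MASK);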

* Re: [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check
2024-07-19 14:43 ` [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check Zi Yan
@ 2024-07-20 7:50 ` Kefeng Wang
2024-07-20 13:47 ` Zi Yan
0 siblings, 1 reply; 6+ messages in thread
From: Kefeng Wang @ 2024-07-20 7:50 UTC (permalink / raw)
To: Zi Yan, Andrew Morton, linux-mm
Cc: David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel

On 2024/7/19 22:43, Zi Yan wrote:
> Instead of an open-coded check for whether memory tiering mode is on
> and a folio is in the top-tier memory, use a function to encapsulate
> the check.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> ---
>  include/linux/memory-tiers.h |  8 ++++++++
>  kernel/sched/fair.c          |  3 +--
>  mm/huge_memory.c             |  6 ++----
>  mm/memory-tiers.c            | 17 +++++++++++++++++
>  mm/memory.c                  |  3 +--
>  mm/mprotect.c                |  3 +--
>  6 files changed, 30 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 0dc0cf2863e2..10c127d461c4 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -73,6 +73,10 @@ static inline bool node_is_toptier(int node)
>  }
>  #endif
>
> +
> +bool folio_has_cpupid(struct folio *folio);
> +
> +
>  #else
>
>  #define numa_demotion_enabled	false
> @@ -151,5 +155,9 @@ static inline struct memory_dev_type *mt_find_alloc_memory_type(int adist,
>  static inline void mt_put_memory_types(struct list_head *memory_types)
>  {
>  }
> +static inline bool folio_has_cpupid(struct folio *folio)
> +{
> +	return true;
> +}

Maybe better to move into mm.h since most folio_foo_cpupid()s are there?

* Re: [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check
2024-07-20 7:50 ` Kefeng Wang
@ 2024-07-20 13:47 ` Zi Yan
0 siblings, 0 replies; 6+ messages in thread
From: Zi Yan @ 2024-07-20 13:47 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-mm, David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel

On 20 Jul 2024, at 3:50, Kefeng Wang wrote:

> On 2024/7/19 22:43, Zi Yan wrote:
>> Instead of an open-coded check for whether memory tiering mode is on
>> and a folio is in the top-tier memory, use a function to encapsulate
>> the check.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>> ---
>>  include/linux/memory-tiers.h |  8 ++++++++
>>  kernel/sched/fair.c          |  3 +--
>>  mm/huge_memory.c             |  6 ++----
>>  mm/memory-tiers.c            | 17 +++++++++++++++++
>>  mm/memory.c                  |  3 +--
>>  mm/mprotect.c                |  3 +--
>>  6 files changed, 30 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
>> index 0dc0cf2863e2..10c127d461c4 100644
>> --- a/include/linux/memory-tiers.h
>> +++ b/include/linux/memory-tiers.h
>> @@ -73,6 +73,10 @@ static inline bool node_is_toptier(int node)
>>  }
>>  #endif
>>
>> +
>> +bool folio_has_cpupid(struct folio *folio);
>> +
>> +
>>  #else
>>
>>  #define numa_demotion_enabled	false
>> @@ -151,5 +155,9 @@ static inline struct memory_dev_type *mt_find_alloc_memory_type(int adist,
>>  static inline void mt_put_memory_types(struct list_head *memory_types)
>>  {
>>  }
>> +static inline bool folio_has_cpupid(struct folio *folio)
>> +{
>> +	return true;
>> +}
>
> Maybe better to move into mm.h since most folio_foo_cpupid()s are there?

Sounds good to me. Will do that in the next version.

--
Best Regards,
Yan, Zi

* Re: [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page()
2024-07-19 14:43 [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page() Zi Yan
2024-07-19 14:43 ` [PATCH 2/2] memory tiering: introduce folio_has_cpupid() check Zi Yan
@ 2024-07-20 8:11 ` Kefeng Wang
2024-07-20 12:36 ` Zi Yan
1 sibling, 1 reply; 6+ messages in thread
From: Kefeng Wang @ 2024-07-20 8:11 UTC (permalink / raw)
To: Zi Yan, Andrew Morton, linux-mm
Cc: David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel

On 2024/7/19 22:43, Zi Yan wrote:
> last_cpupid is only available when memory tiering is off or the folio
> is in a toptier node. Complete the check to read last_cpupid only when
> it is available.
>
> Before the fix, the default last_cpupid was used instead of the actual
> value even when memory tiering mode was turned off at runtime. This can
> prevent task_numa_fault() from getting the right NUMA fault stats, but
> should not cause any crash. Users might see performance changes after
> the fix.
>
> Reported-by: David Hildenbrand <david@redhat.com>
> Closes: https://lore.kernel.org/linux-mm/9af34a6b-ca56-4a64-8aa6-ade65f109288@redhat.com/
> Fixes: 33024536bafd ("memory tiering: hot page selection with hint page fault latency")
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Acked-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>

And we'd better also check the NUMA balancing mode in migrate_misplaced_folio()?

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2630,7 +2630,8 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 		putback_movable_pages(&migratepages);
 	if (nr_succeeded) {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
-		if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
+		if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    !node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
 			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
 					    nr_succeeded);
 	}

> ---
>  mm/huge_memory.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f4be468e06a4..825317aee88e 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1712,7 +1712,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	 * For memory tiering mode, cpupid of slow memory page is used
>  	 * to record page access time. So use default value.
>  	 */
> -	if (node_is_toptier(nid))
> +	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
> +	    node_is_toptier(nid))
>  		last_cpupid = folio_last_cpupid(folio);
>  	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
>  	if (target_nid == NUMA_NO_NODE)

* Re: [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page()
2024-07-20 8:11 ` [PATCH 1/2] memory tiering: read last_cpupid correctly in do_huge_pmd_numa_page() Kefeng Wang
@ 2024-07-20 12:36 ` Zi Yan
0 siblings, 0 replies; 6+ messages in thread
From: Zi Yan @ 2024-07-20 12:36 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-mm, David Hildenbrand, Huang, Ying, Baolin Wang, linux-kernel

On 20 Jul 2024, at 4:11, Kefeng Wang wrote:

> On 2024/7/19 22:43, Zi Yan wrote:
>> last_cpupid is only available when memory tiering is off or the folio
>> is in a toptier node. Complete the check to read last_cpupid only when
>> it is available.
>>
>> Before the fix, the default last_cpupid was used instead of the actual
>> value even when memory tiering mode was turned off at runtime. This can
>> prevent task_numa_fault() from getting the right NUMA fault stats, but
>> should not cause any crash. Users might see performance changes after
>> the fix.
>>
>> Reported-by: David Hildenbrand <david@redhat.com>
>> Closes: https://lore.kernel.org/linux-mm/9af34a6b-ca56-4a64-8aa6-ade65f109288@redhat.com/
>> Fixes: 33024536bafd ("memory tiering: hot page selection with hint page fault latency")
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>
> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> And we'd better also check the NUMA balancing mode in migrate_misplaced_folio()?
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2630,7 +2630,8 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
>  		putback_movable_pages(&migratepages);
>  	if (nr_succeeded) {
>  		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
> -		if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
> +		if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
> +		    !node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
>  			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
>  					    nr_succeeded);
>  	}

Yes, will add this as a separate fix. Thanks.

--
Best Regards,
Yan, Zi