* [PATCH v3 0/8] mm: migrate: more folio conversion and unification
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
Convert more migration functions to use a folio. This is also a
preparation for large folio migration support in NUMA balancing
(a short sketch of the common page-to-folio pattern follows this
summary).

Patches 1~2 remove the unneeded PageTransHuge-specific asserts.

Patches 3~6 convert several more migration functions to use a folio:
1) add_page_for_migration(), used by move_pages()
2) migrate_misplaced_page()/numamigrate_isolate_page(),
   used by NUMA balancing

Patch 7 removes the PageHead() check so that hugetlb migrates the
entire hugetlb page instead of returning -EACCES when passed the
address of a tail page.

Patch 8 is a cleanup to unify and simplify the code a bit in
add_page_for_migration().
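A minimal, illustrative-only sketch of the conversion pattern used
throughout the series (all helpers below already exist and appear in
the individual patches; this is not new code being added):

	/* Look the folio up once, then stay in folio space. */
	struct folio *folio = page_folio(page);	/* instead of compound_head() */
	int nr_pages = folio_nr_pages(folio);	/* instead of thp_nr_pages() */

	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
		goto out;	/* instead of page_is_file_lru()/PageDirty() */

	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
			    nr_pages);	/* instead of mod_node_page_state() */
	folio_put(folio);	/* instead of put_page() */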
Thanks for all comments and suggestions from Matthew, Hugh, Zi, Mike, Huang.
v3:
- update changelog of patch1
- use folio_estimated_sharers and comment it in migrate_misplaced_folio()
- collect RB/ACK
- rebased on 6.6-rc1
v2:
- keep the page_mapcount() check and only remove the specific assert
  for PageTransHuge pages
- split out patch 7 to migrate the entire hugetlb page if a tail page
  is passed, which unifies the behavior between hugetlb and THP in
  move_pages()
Kefeng Wang (8):
mm: migrate: remove PageTransHuge check in numamigrate_isolate_page()
mm: migrate: remove THP mapcount check in numamigrate_isolate_page()
mm: migrate: convert numamigrate_isolate_page() to
numamigrate_isolate_folio()
mm: migrate: convert migrate_misplaced_page() to
migrate_misplaced_folio()
mm: migrate: use __folio_test_movable()
mm: migrate: use a folio in add_page_for_migration()
mm: migrate: remove PageHead() check for HugeTLB in
add_page_for_migration()
mm: migrate: remove isolated variable in add_page_for_migration()
include/linux/migrate.h | 4 +-
mm/huge_memory.c | 2 +-
mm/memory.c | 2 +-
mm/migrate.c | 126 ++++++++++++++++++----------------------
4 files changed, 62 insertions(+), 72 deletions(-)
--
2.27.0
* [PATCH v3 1/8] mm: migrate: remove PageTransHuge check in numamigrate_isolate_page()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
The assert VM_BUG_ON_PAGE(order && !PageTransHuge(page), page) is
not very useful:

1) for a tail/base page, order = 0; for a head page, order > 0 and
   PageTransHuge() is true
2) do_numa_page() has a PageCompound() check and only handles base
   pages, and do_huge_pmd_numa_page() only handles PMD-mapped THPs
3) even if the page were a tail page, isolate_lru_page() would emit
   a warning and fail to isolate it (sketched below)
4) if large folio/pte-mapped THP migration is supported in the
   future, we could migrate the entire folio on a NUMA fault on a
   tail page

So just remove the check.
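For context, the wrapper referred to in 3) looks roughly like the
following. This is a paraphrased sketch of the mm/folio-compat.c
wrapper at the time of this series, not a verbatim quote:

	/* Paraphrased: page-based compat wrapper around folio_isolate_lru() */
	bool isolate_lru_page(struct page *page)
	{
		if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
			return false;
		return folio_isolate_lru(page_folio(page));
	}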
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..646d8ee7f102 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2483,8 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
int nr_pages = thp_nr_pages(page);
int order = compound_order(page);
- VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
-
/* Do not migrate THP mapped by multiple processes */
if (PageTransHuge(page) && total_mapcount(page) > 1)
return 0;
--
2.27.0
* [PATCH v3 2/8] mm: migrate: remove THP mapcount check in numamigrate_isolate_page()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
The check for a THP mapped by multiple processes was introduced by
commit 04fa5d6a6547 ("mm: migrate: check page_count of THP before
migrating") and refactored by commit 340ef3902cf2 ("mm: numa: cleanup
flow of transhuge page migration"). It is out of date: since
migrate_misplaced_page() now uses the standard migrate_pages() for
both small pages and THPs, the reference count checking is done in
folio_migrate_mapping() (sketched below), so let's remove the special
check for THP.
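For reference, a paraphrased (not verbatim) sketch of where that
reference count checking now lives in folio_migrate_mapping():

	/* Paraphrased from folio_migrate_mapping() */
	int expected_count = folio_expected_refs(mapping, folio) + extra_count;

	if (!mapping) {
		/* Anonymous folio without mapping */
		if (folio_ref_count(folio) != expected_count)
			return -EAGAIN;
		/* ... proceed with the migration ... */
	}
	/*
	 * File-backed folios are checked similarly by freezing the
	 * refcount against expected_count under the i_pages lock, and
	 * migration bails out with -EAGAIN if extra references remain.
	 */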
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 646d8ee7f102..f2d86dfd8423 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2483,10 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
int nr_pages = thp_nr_pages(page);
int order = compound_order(page);
- /* Do not migrate THP mapped by multiple processes */
- if (PageTransHuge(page) && total_mapcount(page) > 1)
- return 0;
-
/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
int z;
--
2.27.0
* [PATCH v3 3/8] mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
Rename numamigrate_isolate_page() to numamigrate_isolate_folio(), make
it take a folio, and use the folio API to save compound_head() calls.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index f2d86dfd8423..281eafdf8e63 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2478,10 +2478,9 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
return __folio_alloc_node(gfp, order, nid);
}
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
{
- int nr_pages = thp_nr_pages(page);
- int order = compound_order(page);
+ int nr_pages = folio_nr_pages(folio);
/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
@@ -2493,22 +2492,23 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
if (managed_zone(pgdat->node_zones + z))
break;
}
- wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
+ wakeup_kswapd(pgdat->node_zones + z, 0,
+ folio_order(folio), ZONE_MOVABLE);
return 0;
}
- if (!isolate_lru_page(page))
+ if (!folio_isolate_lru(folio))
return 0;
- mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
+ node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
nr_pages);
/*
- * Isolating the page has taken another reference, so the
- * caller's reference can be safely dropped without the page
+ * Isolating the folio has taken another reference, so the
+ * caller's reference can be safely dropped without the folio
* disappearing underneath us during migration.
*/
- put_page(page);
+ folio_put(folio);
return 1;
}
@@ -2542,7 +2542,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
if (page_is_file_lru(page) && PageDirty(page))
goto out;
- isolated = numamigrate_isolate_page(pgdat, page);
+ isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
if (!isolated)
goto out;
--
2.27.0
* [PATCH v3 4/8] mm: migrate: convert migrate_misplaced_page() to migrate_misplaced_folio()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
At present, NUMA balancing only supports base pages and PMD-mapped
THPs, but we will expand it to support migrating large folios and
pte-mapped THPs in the future, so it is better to make
migrate_misplaced_page() take a folio instead of a page and rename it
to migrate_misplaced_folio(). This is a preparation and also removes
several compound_head() calls.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/migrate.h | 4 ++--
mm/huge_memory.c | 2 +-
mm/memory.c | 2 +-
mm/migrate.c | 39 +++++++++++++++++++++------------------
4 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 711dd9412561..2ce13e8a309b 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -142,10 +142,10 @@ const struct movable_operations *page_movable_ops(struct page *page)
}
#ifdef CONFIG_NUMA_BALANCING
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
int node);
#else
-static inline int migrate_misplaced_page(struct page *page,
+static inline int migrate_misplaced_folio(struct folio *folio,
struct vm_area_struct *vma, int node)
{
return -EAGAIN; /* can't migrate now */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3e9443082035..36075e428a37 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1540,7 +1540,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
spin_unlock(vmf->ptl);
writable = false;
- migrated = migrate_misplaced_page(page, vma, target_nid);
+ migrated = migrate_misplaced_folio(page_folio(page), vma, target_nid);
if (migrated) {
flags |= TNF_MIGRATED;
page_nid = target_nid;
diff --git a/mm/memory.c b/mm/memory.c
index 4c9e6fc2dcf7..983a40f8ee62 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4815,7 +4815,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
writable = false;
/* Migrate to the requested node */
- if (migrate_misplaced_page(page, vma, target_nid)) {
+ if (migrate_misplaced_folio(page_folio(page), vma, target_nid)) {
page_nid = target_nid;
flags |= TNF_MIGRATED;
} else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 281eafdf8e63..caf60b58b44c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2513,55 +2513,58 @@ static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
}
/*
- * Attempt to migrate a misplaced page to the specified destination
+ * Attempt to migrate a misplaced folio to the specified destination
* node. Caller is expected to have an elevated reference count on
- * the page that will be dropped by this function before returning.
+ * the folio that will be dropped by this function before returning.
*/
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
- int node)
+int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
+ int node)
{
pg_data_t *pgdat = NODE_DATA(node);
int isolated;
int nr_remaining;
unsigned int nr_succeeded;
LIST_HEAD(migratepages);
- int nr_pages = thp_nr_pages(page);
+ int nr_pages = folio_nr_pages(folio);
/*
- * Don't migrate file pages that are mapped in multiple processes
+ * Don't migrate file folios that are mapped in multiple processes
* with execute permissions as they are probably shared libraries.
+ * To check if the folio is shared, ideally we want to make sure
+ * every page is mapped to the same process. Doing that is very
+ * expensive, so check the estimated mapcount of the folio instead.
*/
- if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+ if (folio_estimated_sharers(folio) != 1 && folio_is_file_lru(folio) &&
(vma->vm_flags & VM_EXEC))
goto out;
/*
- * Also do not migrate dirty pages as not all filesystems can move
- * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+ * Also do not migrate dirty folios as not all filesystems can move
+ * dirty folios in MIGRATE_ASYNC mode which is a waste of cycles.
*/
- if (page_is_file_lru(page) && PageDirty(page))
+ if (folio_is_file_lru(folio) && folio_test_dirty(folio))
goto out;
- isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
+ isolated = numamigrate_isolate_folio(pgdat, folio);
if (!isolated)
goto out;
- list_add(&page->lru, &migratepages);
+ list_add(&folio->lru, &migratepages);
nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
NULL, node, MIGRATE_ASYNC,
MR_NUMA_MISPLACED, &nr_succeeded);
if (nr_remaining) {
if (!list_empty(&migratepages)) {
- list_del(&page->lru);
- mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
- page_is_file_lru(page), -nr_pages);
- putback_lru_page(page);
+ list_del(&folio->lru);
+ node_stat_mod_folio(folio, NR_ISOLATED_ANON +
+ folio_is_file_lru(folio), -nr_pages);
+ folio_putback_lru(folio);
}
isolated = 0;
}
if (nr_succeeded) {
count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
- if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+ if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
nr_succeeded);
}
@@ -2569,7 +2572,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
return isolated;
out:
- put_page(page);
+ folio_put(folio);
return 0;
}
#endif /* CONFIG_NUMA_BALANCING */
--
2.27.0
* [PATCH v3 5/8] mm: migrate: use __folio_test_movable()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
Use __folio_test_movable(); there is no need to convert from the folio
back to a page again.
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index caf60b58b44c..264923aac04e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -157,8 +157,8 @@ void putback_movable_pages(struct list_head *l)
list_del(&folio->lru);
/*
* We isolated non-lru movable folio so here we can use
- * __PageMovable because LRU folio's mapping cannot have
- * PAGE_MAPPING_MOVABLE.
+ * __folio_test_movable because LRU folio's mapping cannot
+ * have PAGE_MAPPING_MOVABLE.
*/
if (unlikely(__folio_test_movable(folio))) {
VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
@@ -943,7 +943,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int rc = -EAGAIN;
- bool is_lru = !__PageMovable(&src->page);
+ bool is_lru = !__folio_test_movable(src);
VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
@@ -990,7 +990,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
* src is freed; but stats require that PageAnon be left as PageAnon.
*/
if (rc == MIGRATEPAGE_SUCCESS) {
- if (__PageMovable(&src->page)) {
+ if (__folio_test_movable(src)) {
VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
/*
@@ -1082,7 +1082,7 @@ static void migrate_folio_done(struct folio *src,
/*
* Compaction can migrate also non-LRU pages which are
* not accounted to NR_ISOLATED_*. They can be recognized
- * as __PageMovable
+ * as __folio_test_movable
*/
if (likely(!__folio_test_movable(src)))
mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
@@ -1103,7 +1103,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
int rc = -EAGAIN;
int page_was_mapped = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = !__PageMovable(&src->page);
+ bool is_lru = !__folio_test_movable(src);
bool locked = false;
bool dst_locked = false;
@@ -1261,7 +1261,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
int rc;
int page_was_mapped = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = !__PageMovable(&src->page);
+ bool is_lru = !__folio_test_movable(src);
struct list_head *prev;
__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
--
2.27.0
* [PATCH v3 6/8] mm: migrate: use a folio in add_page_for_migration()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
Use a folio in add_page_for_migration() to save compound_head() calls.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 40 +++++++++++++++++++---------------------
1 file changed, 19 insertions(+), 21 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 264923aac04e..cf5c9254fdad 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2057,6 +2057,7 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
struct vm_area_struct *vma;
unsigned long addr;
struct page *page;
+ struct folio *folio;
int err;
bool isolated;
@@ -2079,45 +2080,42 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
if (!page)
goto out;
- if (is_zone_device_page(page))
- goto out_putpage;
+ folio = page_folio(page);
+ if (folio_is_zone_device(folio))
+ goto out_putfolio;
err = 0;
- if (page_to_nid(page) == node)
- goto out_putpage;
+ if (folio_nid(folio) == node)
+ goto out_putfolio;
err = -EACCES;
if (page_mapcount(page) > 1 && !migrate_all)
- goto out_putpage;
+ goto out_putfolio;
- if (PageHuge(page)) {
+ if (folio_test_hugetlb(folio)) {
if (PageHead(page)) {
- isolated = isolate_hugetlb(page_folio(page), pagelist);
+ isolated = isolate_hugetlb(folio, pagelist);
err = isolated ? 1 : -EBUSY;
}
} else {
- struct page *head;
-
- head = compound_head(page);
- isolated = isolate_lru_page(head);
+ isolated = folio_isolate_lru(folio);
if (!isolated) {
err = -EBUSY;
- goto out_putpage;
+ goto out_putfolio;
}
err = 1;
- list_add_tail(&head->lru, pagelist);
- mod_node_page_state(page_pgdat(head),
- NR_ISOLATED_ANON + page_is_file_lru(head),
- thp_nr_pages(head));
+ list_add_tail(&folio->lru, pagelist);
+ node_stat_mod_folio(folio,
+ NR_ISOLATED_ANON + folio_is_file_lru(folio),
+ folio_nr_pages(folio));
}
-out_putpage:
+out_putfolio:
/*
- * Either remove the duplicate refcount from
- * isolate_lru_page() or drop the page ref if it was
- * not isolated.
+ * Either remove the duplicate refcount from folio_isolate_lru()
+ * or drop the folio ref if it was not isolated.
*/
- put_page(page);
+ folio_put(folio);
out:
mmap_read_unlock(mm);
return err;
--
2.27.0
* [PATCH v3 7/8] mm: migrate: remove PageHead() check for HugeTLB in add_page_for_migration()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
HugeTLB and THP behave differently when passed the address of a tail
page: a THP will migrate the entire THP, but HugeTLB will return
-EACCES (or -ENOENT before commit e66f17ff7177 ("mm/hugetlb: take page
table lock in follow_huge_pmd()")). The manual page[1] documents these
errnos as:

  -EACCES  The page is mapped by multiple processes and can be moved
           only if MPOL_MF_MOVE_ALL is specified.
  -ENOENT  The page is not present.

Neither errno is suitable here, and it is better to keep the behavior
consistent between hugetlb and THP when passed the address of a tail
page, so let's just remove the PageHead() check for HugeTLB (a
userspace sketch of such a call follows).

[1] https://man7.org/linux/man-pages/man2/move_pages.2.html
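A minimal userspace sketch of the move_pages(2) call in question,
using the move_pages() wrapper from libnuma's <numaif.h> (link with
-lnuma); the target node and the function name are arbitrary choices
for illustration:

	#include <numaif.h>
	#include <stdio.h>

	/*
	 * Ask the kernel to migrate the page backing 'addr' to node 0.
	 * With this change, an address inside a hugetlb page (i.e. a
	 * tail page address) migrates the whole hugetlb page, matching
	 * the existing THP behavior, instead of failing with -EACCES.
	 */
	static int migrate_one_page(void *addr)
	{
		void *pages[1] = { addr };
		int nodes[1] = { 0 };		/* target node, arbitrary */
		int status[1];
		long ret;

		ret = move_pages(0 /* self */, 1, pages, nodes, status,
				 MPOL_MF_MOVE);
		if (ret < 0)
			perror("move_pages");
		else
			printf("status[0] = %d\n", status[0]);
		return (int)ret;
	}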
Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index cf5c9254fdad..7b07c97f5a6f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2093,10 +2093,8 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
goto out_putfolio;
if (folio_test_hugetlb(folio)) {
- if (PageHead(page)) {
- isolated = isolate_hugetlb(folio, pagelist);
- err = isolated ? 1 : -EBUSY;
- }
+ isolated = isolate_hugetlb(folio, pagelist);
+ err = isolated ? 1 : -EBUSY;
} else {
isolated = folio_isolate_lru(folio);
if (!isolated) {
--
2.27.0
* [PATCH v3 8/8] mm: migrate: remove isolated variable in add_page_for_migration()
From: Kefeng Wang @ 2023-09-13 9:51 UTC (permalink / raw)
To: Andrew Morton
Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
Mike Kravetz, hughd, Kefeng Wang
Directly check the return value of isolate_hugetlb() and
folio_isolate_lru() to remove the isolated variable. Also set
err = -EBUSY in advance before isolation, and only update err when the
folio is successfully queued for migration, which helps to unify and
simplify the code a bit.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 7b07c97f5a6f..a5d739603458 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2059,7 +2059,6 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
struct page *page;
struct folio *folio;
int err;
- bool isolated;
mmap_read_lock(mm);
addr = (unsigned long)untagged_addr_remote(mm, p);
@@ -2092,15 +2091,13 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
if (page_mapcount(page) > 1 && !migrate_all)
goto out_putfolio;
+ err = -EBUSY;
if (folio_test_hugetlb(folio)) {
- isolated = isolate_hugetlb(folio, pagelist);
- err = isolated ? 1 : -EBUSY;
+ if (isolate_hugetlb(folio, pagelist))
+ err = 1;
} else {
- isolated = folio_isolate_lru(folio);
- if (!isolated) {
- err = -EBUSY;
+ if (!folio_isolate_lru(folio))
goto out_putfolio;
- }
err = 1;
list_add_tail(&folio->lru, pagelist);
--
2.27.0