* [PATCH v2 1/6] mm/huge_memory: convert has_hwpoisoned into a pure folio flag
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle)
Let's stop setting it on pages; there is no longer any need to.
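For context, FOLIO_FLAG(has_hwpoisoned, FOLIO_SECOND_PAGE) generates folio-only
accessors that operate on the flags word stored in the folio's second page, so
no per-page variant remains to misuse. A minimal sketch of the generated API
(illustrative usage, not code from this series):

	/* Generated by FOLIO_FLAG(has_hwpoisoned, FOLIO_SECOND_PAGE): */
	folio_set_has_hwpoisoned(folio);	/* set by the hwpoison handler */
	if (folio_test_has_hwpoisoned(folio))
		pr_warn("large folio contains a poisoned page\n");
	folio_clear_has_hwpoisoned(folio);	/* on THP split or free */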
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 6 ++----
mm/huge_memory.c | 2 +-
2 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 14226d6bd6f84..3f6a64ff968a7 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -948,11 +948,9 @@ TESTPAGEFLAG_FALSE(TransCompound, transcompound)
*
* This flag is set by hwpoison handler. Cleared by THP split or free page.
*/
-PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
- TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+FOLIO_FLAG(has_hwpoisoned, FOLIO_SECOND_PAGE)
#else
-PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
- TESTSCFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
+FOLIO_FLAG_FALSE(has_hwpoisoned)
#endif
/*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2654a95487499..3d3ebdc002d59 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3290,7 +3290,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
lruvec = folio_lruvec_lock(folio);
- ClearPageHasHWPoisoned(head);
+ folio_clear_has_hwpoisoned(folio);
for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
struct folio *tail;
--
2.47.1
* [PATCH v2 2/6] mm/hugetlb: rename isolate_hugetlb() to folio_isolate_hugetlb()
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle),
Baolin Wang
Let's make the function name match "folio_isolate_lru()", and add some
kernel doc.
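As a usage sketch (caller and list names are illustrative, not from this
series), the renamed function follows the same calling convention as
folio_isolate_lru():

	LIST_HEAD(isolated_list);

	/*
	 * On success, the folio gains an extra reference and moves from
	 * the hstate's active list onto isolated_list; it stays
	 * non-migratable until putback.
	 */
	if (folio_test_hugetlb(folio) &&
	    !folio_isolate_hugetlb(folio, &isolated_list))
		return -EBUSY;	/* already isolated, or not allocated */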
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/hugetlb.h | 4 ++--
mm/gup.c | 2 +-
mm/hugetlb.c | 23 ++++++++++++++++++++---
mm/mempolicy.c | 2 +-
mm/migrate.c | 6 +++---
5 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 49ec2362ce926..c95ad5cd7894d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -153,7 +153,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
vm_flags_t vm_flags);
long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
long freed);
-bool isolate_hugetlb(struct folio *folio, struct list_head *list);
+bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list);
int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
bool *migratable_cleared);
@@ -414,7 +414,7 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
return NULL;
}
-static inline bool isolate_hugetlb(struct folio *folio, struct list_head *list)
+static inline bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list)
{
return false;
}
diff --git a/mm/gup.c b/mm/gup.c
index 00a1269cbee0a..2cc3a9d28e70e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2344,7 +2344,7 @@ static unsigned long collect_longterm_unpinnable_folios(
continue;
if (folio_test_hugetlb(folio)) {
- isolate_hugetlb(folio, movable_folio_list);
+ folio_isolate_hugetlb(folio, movable_folio_list);
continue;
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9a5596022c4b3..a93b508bc0e0e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2808,7 +2808,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
* Fail with -EBUSY if not possible.
*/
spin_unlock_irq(&hugetlb_lock);
- isolated = isolate_hugetlb(old_folio, list);
+ isolated = folio_isolate_hugetlb(old_folio, list);
ret = isolated ? 0 : -EBUSY;
spin_lock_irq(&hugetlb_lock);
goto free_new;
@@ -2893,7 +2893,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
if (hstate_is_gigantic(h))
return -ENOMEM;
- if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
+ if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list))
ret = 0;
else if (!folio_ref_count(folio))
ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
@@ -7417,7 +7417,24 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
-bool isolate_hugetlb(struct folio *folio, struct list_head *list)
+/**
+ * folio_isolate_hugetlb - try to isolate an allocated hugetlb folio
+ * @folio: the folio to isolate
+ * @list: the list to add the folio to on success
+ *
+ * Isolate an allocated (refcount > 0) hugetlb folio, marking it as
+ * isolated/non-migratable, and moving it from the active list to the
+ * given list.
+ *
+ * Isolation will fail if @folio is not an allocated hugetlb folio, or if
+ * it is already isolated/non-migratable.
+ *
+ * On success, an additional folio reference is taken that must be dropped
+ * using folio_putback_active_hugetlb() to undo the isolation.
+ *
+ * Return: True if isolation worked, otherwise False.
+ */
+bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list)
{
bool ret = true;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f83b73236ffe7..bbaadbeeb2919 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -647,7 +647,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
*/
if ((flags & MPOL_MF_MOVE_ALL) ||
(!folio_likely_mapped_shared(folio) && !hugetlb_pmd_shared(pte)))
- if (!isolate_hugetlb(folio, qp->pagelist))
+ if (!folio_isolate_hugetlb(folio, qp->pagelist))
qp->nr_failed++;
unlock:
spin_unlock(ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index caadbe393aa21..80887cadb2774 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -128,7 +128,7 @@ static void putback_movable_folio(struct folio *folio)
*
* This function shall be used whenever the isolated pageset has been
* built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
- * and isolate_hugetlb().
+ * and folio_isolate_hugetlb().
*/
void putback_movable_pages(struct list_head *l)
{
@@ -169,7 +169,7 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
bool isolated, lru;
if (folio_test_hugetlb(folio))
- return isolate_hugetlb(folio, list);
+ return folio_isolate_hugetlb(folio, list);
lru = !__folio_test_movable(folio);
if (lru)
@@ -2203,7 +2203,7 @@ static int __add_folio_for_migration(struct folio *folio, int node,
return -EACCES;
if (folio_test_hugetlb(folio)) {
- if (isolate_hugetlb(folio, pagelist))
+ if (folio_isolate_hugetlb(folio, pagelist))
return 1;
} else if (folio_isolate_lru(folio)) {
list_add_tail(&folio->lru, pagelist);
--
2.47.1
* [PATCH v2 3/6] mm/migrate: don't call folio_putback_active_hugetlb() on dst hugetlb folio
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle),
Baolin Wang
We replaced a simple put_page() with a putback_active_hugepage() call in
commit 3aaa76e125c1 ("mm: migrate: hugetlb: putback destination hugepage
to active list"), to set the "active" flag on the dst hugetlb folio.

Since then, the "active" list has been decoupled from the flag, which is
now called "migratable".

Calling "putback" on something that was never isolated is weird and not
future-proof, especially if we might reach that path when migration failed
and we just want to free the freshly allocated hugetlb folio.

Let's simply handle the "migratable" flag and the active list in
move_hugetlb_state(), where we know that allocation succeeded and where we
already handle the temporary flag; use a simple folio_put() to return our
reference.
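A simplified sketch of the resulting division of work (not the literal
control flow of unmap_and_move_huge_page()):

	if (rc == MIGRATEPAGE_SUCCESS) {
		/* dst becomes "migratable" and joins the active list here. */
		move_hugetlb_state(src, dst, reason);
	} else if (put_new_folio) {
		/* Return dst to the special allocator that provided it. */
		put_new_folio(dst, private);
	} else {
		/* dst was never isolated; a plain reference drop suffices. */
		folio_put(dst);
	}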
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb.c | 10 ++++++++++
mm/migrate.c | 8 ++++----
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a93b508bc0e0e..95579bd8c4fe0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7529,6 +7529,16 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
}
spin_unlock_irq(&hugetlb_lock);
}
+
+ /*
+ * Our old folio is isolated and has "migratable" cleared until it
+ * is putback. As migration succeeded, set the new folio "migratable"
+ * and add it to the active list.
+ */
+ spin_lock_irq(&hugetlb_lock);
+ folio_set_hugetlb_migratable(new_folio);
+ list_move_tail(&new_folio->lru, &(folio_hstate(new_folio))->hugepage_activelist);
+ spin_unlock_irq(&hugetlb_lock);
}
static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
diff --git a/mm/migrate.c b/mm/migrate.c
index 80887cadb2774..7e23e78f1e57b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1542,14 +1542,14 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
list_move_tail(&src->lru, ret);
/*
- * If migration was not successful and there's a freeing callback, use
- * it. Otherwise, put_page() will drop the reference grabbed during
- * isolation.
+ * If migration was not successful and there's a freeing callback,
+ * return the folio to that special allocator. Otherwise, simply drop
+ * our additional reference.
*/
if (put_new_folio)
put_new_folio(dst, private);
else
- folio_putback_active_hugetlb(dst);
+ folio_put(dst);
return rc;
}
--
2.47.1
* [PATCH v2 4/6] mm/hugetlb: rename folio_putback_active_hugetlb() to folio_putback_hugetlb()
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle),
Baolin Wang
Now that folio_putback_active_hugetlb() is only called on folios that were
previously isolated through folio_isolate_hugetlb(), let's rename it to
match folio_putback_lru().
Add some kernel doc to clarify how this function is supposed to be used.
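The pairing now mirrors the LRU API; a brief sketch, where process_folio()
is a hypothetical placeholder for the work done while isolated:

	if (folio_isolate_hugetlb(folio, &list)) {
		/* Isolated/non-migratable, extra reference held. */
		process_folio(folio);

		/* Re-activate, mark migratable, drop the extra reference. */
		folio_putback_hugetlb(folio);
	}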
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 15 +++++++++++++--
mm/migrate.c | 6 +++---
3 files changed, 18 insertions(+), 7 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c95ad5cd7894d..ec8c0ccc8f959 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -157,7 +157,7 @@ bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list);
int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
bool *migratable_cleared);
-void folio_putback_active_hugetlb(struct folio *folio);
+void folio_putback_hugetlb(struct folio *folio);
void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason);
void hugetlb_fix_reserve_counts(struct inode *inode);
extern struct mutex *hugetlb_fault_mutex_table;
@@ -430,7 +430,7 @@ static inline int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
return 0;
}
-static inline void folio_putback_active_hugetlb(struct folio *folio)
+static inline void folio_putback_hugetlb(struct folio *folio)
{
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95579bd8c4fe0..bc4ab45d9b93b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7430,7 +7430,7 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
* it is already isolated/non-migratable.
*
* On success, an additional folio reference is taken that must be dropped
- * using folio_putback_active_hugetlb() to undo the isolation.
+ * using folio_putback_hugetlb() to undo the isolation.
*
* Return: True if isolation worked, otherwise False.
*/
@@ -7482,7 +7482,18 @@ int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
return ret;
}
-void folio_putback_active_hugetlb(struct folio *folio)
+/**
+ * folio_putback_hugetlb - unisolate a hugetlb folio
+ * @folio: the isolated hugetlb folio
+ *
+ * Putback/un-isolate the hugetlb folio that was previously isolated using
+ * folio_isolate_hugetlb(): marking it non-isolated/migratable and putting it
+ * back onto the active list.
+ *
+ * Will drop the additional folio reference obtained through
+ * folio_isolate_hugetlb().
+ */
+void folio_putback_hugetlb(struct folio *folio)
{
spin_lock_irq(&hugetlb_lock);
folio_set_hugetlb_migratable(folio);
diff --git a/mm/migrate.c b/mm/migrate.c
index 7e23e78f1e57b..be9e3b48cd622 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -137,7 +137,7 @@ void putback_movable_pages(struct list_head *l)
list_for_each_entry_safe(folio, folio2, l, lru) {
if (unlikely(folio_test_hugetlb(folio))) {
- folio_putback_active_hugetlb(folio);
+ folio_putback_hugetlb(folio);
continue;
}
list_del(&folio->lru);
@@ -1454,7 +1454,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
if (folio_ref_count(src) == 1) {
/* page was freed from under us. So we are done. */
- folio_putback_active_hugetlb(src);
+ folio_putback_hugetlb(src);
return MIGRATEPAGE_SUCCESS;
}
@@ -1537,7 +1537,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
folio_unlock(src);
out:
if (rc == MIGRATEPAGE_SUCCESS)
- folio_putback_active_hugetlb(src);
+ folio_putback_hugetlb(src);
else if (rc != -EAGAIN)
list_move_tail(&src->lru, ret);
--
2.47.1
* [PATCH v2 5/6] mm/hugetlb-cgroup: convert hugetlb_cgroup_css_offline() to work on folios
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle)
Let's convert hugetlb_cgroup_css_offline() and
hugetlb_cgroup_move_parent() to work on folios. hugepage_activelist
contains folios, not pages.
While at it, rename page_hcg simply to hcg, removing most of the "page"
terminology.
This removes an unnecessary call to compound_head().
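For context, page_folio() resolves the head page via compound_head(), so
walking the list as folios drops that lookup entirely; a before/after sketch
where move_parent() stands in for hugetlb_cgroup_move_parent():

	/* Before: entries were treated as pages and upgraded per item. */
	list_for_each_entry(page, &h->hugepage_activelist, lru)
		move_parent(page_folio(page));	/* page_folio() -> compound_head() */

	/* After: the active list is known to hold folios already. */
	list_for_each_entry(folio, &h->hugepage_activelist, lru)
		move_parent(folio);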
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb_cgroup.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 89a8ad45a533d..bb9578bd99f98 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -195,24 +195,23 @@ static void hugetlb_cgroup_css_free(struct cgroup_subsys_state *css)
* cannot fail.
*/
static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
- struct page *page)
+ struct folio *folio)
{
unsigned int nr_pages;
struct page_counter *counter;
- struct hugetlb_cgroup *page_hcg;
+ struct hugetlb_cgroup *hcg;
struct hugetlb_cgroup *parent = parent_hugetlb_cgroup(h_cg);
- struct folio *folio = page_folio(page);
- page_hcg = hugetlb_cgroup_from_folio(folio);
+ hcg = hugetlb_cgroup_from_folio(folio);
/*
* We can have pages in active list without any cgroup
* ie, hugepage with less than 3 pages. We can safely
* ignore those pages.
*/
- if (!page_hcg || page_hcg != h_cg)
+ if (!hcg || hcg != h_cg)
goto out;
- nr_pages = compound_nr(page);
+ nr_pages = folio_nr_pages(folio);
if (!parent) {
parent = root_h_cgroup;
/* root has no limit */
@@ -235,13 +234,13 @@ static void hugetlb_cgroup_css_offline(struct cgroup_subsys_state *css)
{
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css);
struct hstate *h;
- struct page *page;
+ struct folio *folio;
do {
for_each_hstate(h) {
spin_lock_irq(&hugetlb_lock);
- list_for_each_entry(page, &h->hugepage_activelist, lru)
- hugetlb_cgroup_move_parent(hstate_index(h), h_cg, page);
+ list_for_each_entry(folio, &h->hugepage_activelist, lru)
+ hugetlb_cgroup_move_parent(hstate_index(h), h_cg, folio);
spin_unlock_irq(&hugetlb_lock);
}
--
2.47.1
* [PATCH v2 6/6] mm/hugetlb: use folio->lru in demote_free_hugetlb_folios()
From: David Hildenbrand @ 2025-01-13 13:16 UTC
To: linux-kernel
Cc: linux-mm, David Hildenbrand, Andrew Morton, Muchun Song,
Matthew Wilcox (Oracle)
We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
messing with pages where we can and handle it more like
__split_huge_page_tail().
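The trick, borrowed from __split_huge_page_tail(), is that a page about to
become the head of a new, smaller compound page may be cast to a folio, as
long as folio fields are only touched after the page has been prepared; a
hedged sketch of that ordering:

	struct page *page = folio_page(folio, i);
	struct folio *new_folio = (struct folio *)page;	/* head-to-be */

	clear_compound_head(page);		/* no longer a tail page */
	prep_compound_page(page, dst->order);	/* now a compound head */

	/* From here on, operate only on the new folio. */
	new_folio->mapping = NULL;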
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc4ab45d9b93b..a94d52dd53728 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3822,13 +3822,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
struct page *page = folio_page(folio, i);
+ /* Careful: see __split_huge_page_tail() */
+ struct folio *new_folio = (struct folio *)page;
- page->mapping = NULL;
clear_compound_head(page);
prep_compound_page(page, dst->order);
- init_new_hugetlb_folio(dst, page_folio(page));
- list_add(&page->lru, &dst_list);
+ new_folio->mapping = NULL;
+ init_new_hugetlb_folio(dst, new_folio);
+ list_add(&new_folio->lru, &dst_list);
}
}
--
2.47.1
* Re: [PATCH v2 6/6] mm/hugetlb: use folio->lru in demote_free_hugetlb_folios()
From: Sidhartha Kumar @ 2025-01-13 15:18 UTC
To: David Hildenbrand, linux-kernel
Cc: linux-mm, Andrew Morton, Muchun Song, Matthew Wilcox (Oracle)
On 1/13/25 8:16 AM, David Hildenbrand wrote:
> We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
> messing with pages where we can and handle it more like
> __split_huge_page_tail().
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/hugetlb.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bc4ab45d9b93b..a94d52dd53728 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3822,13 +3822,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
>
> for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
> struct page *page = folio_page(folio, i);
> + /* Careful: see __split_huge_page_tail() */
> + struct folio *new_folio = (struct folio *)page;
>
> - page->mapping = NULL;
> clear_compound_head(page);
> prep_compound_page(page, dst->order);
>
> - init_new_hugetlb_folio(dst, page_folio(page));
> - list_add(&page->lru, &dst_list);
> + new_folio->mapping = NULL;
> + init_new_hugetlb_folio(dst, new_folio);
> + list_add(&new_folio->lru, &dst_list);
> }
> }
>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>