* [PATCH v3 01/12] mm: page_alloc: remove pcppage migratetype caching fix
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 02/12] hugetlb: Use a folio in free_hpage_workfn() Mike Kravetz
` (11 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Johannes Weiner, Mike Kravetz
From: Johannes Weiner <hannes@cmpxchg.org>
Mike reports the following crash in -next:
[ 28.643019] page:ffffea0004fb4280 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x13ed0a
[ 28.645455] flags: 0x200000000000000(node=0|zone=2)
[ 28.646835] page_type: 0xffffffff()
[ 28.647886] raw: 0200000000000000 dead000000000100 dead000000000122 0000000000000000
[ 28.651170] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 28.653124] page dumped because: VM_BUG_ON_PAGE(is_migrate_isolate(mt))
[ 28.654769] ------------[ cut here ]------------
[ 28.655972] kernel BUG at mm/page_alloc.c:1231!
This VM_BUG_ON() used to check that the cached pcppage_migratetype set
by free_unref_page() wasn't MIGRATE_ISOLATE.
When I removed the caching, I erroneously changed the assert to check
that no isolated pages are on the pcplist. This is quite different,
because pages can be isolated *after* they had been put on the
freelist already (which is handled just fine).
IOW, this was purely a sanity check on the migratetype caching. With
that gone, the check should have been removed as well. Do that now.
Reported-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/page_alloc.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 123494dbd731..1400e674ab86 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1227,9 +1227,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
count -= nr_pages;
pcp->count -= nr_pages;
- /* MIGRATE_ISOLATE page should not go to pcplists */
- VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-
__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
trace_mm_page_pcpu_drain(page, order, mt);
} while (count > 0 && !list_empty(list));
--
2.41.0
* [PATCH v3 02/12] hugetlb: Use a folio in free_hpage_workfn()
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 01/12] mm: page_alloc: remove pcppage migratetype caching fix Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 03/12] hugetlb: Remove a few calls to page_folio() Mike Kravetz
` (10 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz, Sidhartha Kumar
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
update_and_free_hugetlb_folio puts the memory on hpage_freelist as a folio
so we can take it off the list as a folio.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
mm/hugetlb.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af74e83d92aa..6c6f19cc6046 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1780,22 +1780,22 @@ static void free_hpage_workfn(struct work_struct *work)
node = llist_del_all(&hpage_freelist);
while (node) {
- struct page *page;
+ struct folio *folio;
struct hstate *h;
- page = container_of((struct address_space **)node,
- struct page, mapping);
+ folio = container_of((struct address_space **)node,
+ struct folio, mapping);
node = node->next;
- page->mapping = NULL;
+ folio->mapping = NULL;
/*
* The VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio) in
* folio_hstate() is going to trigger because a previous call to
* remove_hugetlb_folio() will clear the hugetlb bit, so do
* not use folio_hstate() directly.
*/
- h = size_to_hstate(page_size(page));
+ h = size_to_hstate(folio_size(folio));
- __update_and_free_hugetlb_folio(h, page_folio(page));
+ __update_and_free_hugetlb_folio(h, folio);
cond_resched();
}
--
2.41.0
* [PATCH v3 03/12] hugetlb: Remove a few calls to page_folio()
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 01/12] mm: page_alloc: remove pcppage migratetype caching fix Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 02/12] hugetlb: Use a folio in free_hpage_workfn() Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 04/12] hugetlb: Convert remove_pool_huge_page() to remove_pool_hugetlb_folio() Mike Kravetz
` (9 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz, Sidhartha Kumar
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Anything found on a linked list threaded through ->lru is guaranteed to
be a folio as the compound_head found in a tail page overlaps the ->lru
member of struct page. So we can pull folios directly off these lists
no matter whether pages or folios were added to the list.
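For illustration only (not part of this patch; 'pos' stands for a hypothetical
struct list_head iterator), the two expressions below name the same object for
any entry on such a list, because struct folio mirrors the layout of struct
page and a tail page can never sit on an ->lru list:
	struct page  *page  = list_entry(pos, struct page,  lru);
	struct folio *folio = list_entry(pos, struct folio, lru);
	/* folio == page_folio(page); no compound_head() lookup is needed */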
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
mm/hugetlb.c | 26 +++++++++++---------------
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c6f19cc6046..7bbdc71fb34d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1829,11 +1829,9 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
{
- struct page *page, *t_page;
- struct folio *folio;
+ struct folio *folio, *t_folio;
- list_for_each_entry_safe(page, t_page, list, lru) {
- folio = page_folio(page);
+ list_for_each_entry_safe(folio, t_folio, list, lru) {
update_and_free_hugetlb_folio(h, folio, false);
cond_resched();
}
@@ -2208,8 +2206,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
bool acct_surplus)
{
int nr_nodes, node;
- struct page *page = NULL;
- struct folio *folio;
+ struct folio *folio = NULL;
lockdep_assert_held(&hugetlb_lock);
for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2219,15 +2216,14 @@ static struct page *remove_pool_huge_page(struct hstate *h,
*/
if ((!acct_surplus || h->surplus_huge_pages_node[node]) &&
!list_empty(&h->hugepage_freelists[node])) {
- page = list_entry(h->hugepage_freelists[node].next,
- struct page, lru);
- folio = page_folio(page);
+ folio = list_entry(h->hugepage_freelists[node].next,
+ struct folio, lru);
remove_hugetlb_folio(h, folio, acct_surplus);
break;
}
}
- return page;
+ return &folio->page;
}
/*
@@ -3343,15 +3339,15 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
* Collect pages to be freed on a list, and free after dropping lock
*/
for_each_node_mask(i, *nodes_allowed) {
- struct page *page, *next;
+ struct folio *folio, *next;
struct list_head *freel = &h->hugepage_freelists[i];
- list_for_each_entry_safe(page, next, freel, lru) {
+ list_for_each_entry_safe(folio, next, freel, lru) {
if (count >= h->nr_huge_pages)
goto out;
- if (PageHighMem(page))
+ if (folio_test_highmem(folio))
continue;
- remove_hugetlb_folio(h, page_folio(page), false);
- list_add(&page->lru, &page_list);
+ remove_hugetlb_folio(h, folio, false);
+ list_add(&folio->lru, &page_list);
}
}
--
2.41.0
* [PATCH v3 04/12] hugetlb: Convert remove_pool_huge_page() to remove_pool_hugetlb_folio()
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (2 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 03/12] hugetlb: Remove a few calls to page_folio() Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 05/12] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles Mike Kravetz
` (8 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz, Sidhartha Kumar
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Convert the callers to expect a folio and remove the unnecessary conversion
back to a struct page.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
mm/hugetlb.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7bbdc71fb34d..744e214c7d9b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1439,7 +1439,7 @@ static int hstate_next_node_to_alloc(struct hstate *h,
}
/*
- * helper for remove_pool_huge_page() - return the previously saved
+ * helper for remove_pool_hugetlb_folio() - return the previously saved
* node ["this node"] from which to free a huge page. Advance the
* next node id whether or not we find a free huge page to free so
* that the next attempt to free addresses the next node.
@@ -2201,9 +2201,8 @@ static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
* an additional call to free the page to low level allocators.
* Called with hugetlb_lock locked.
*/
-static struct page *remove_pool_huge_page(struct hstate *h,
- nodemask_t *nodes_allowed,
- bool acct_surplus)
+static struct folio *remove_pool_hugetlb_folio(struct hstate *h,
+ nodemask_t *nodes_allowed, bool acct_surplus)
{
int nr_nodes, node;
struct folio *folio = NULL;
@@ -2223,7 +2222,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
}
}
- return &folio->page;
+ return folio;
}
/*
@@ -2577,7 +2576,6 @@ static void return_unused_surplus_pages(struct hstate *h,
unsigned long unused_resv_pages)
{
unsigned long nr_pages;
- struct page *page;
LIST_HEAD(page_list);
lockdep_assert_held(&hugetlb_lock);
@@ -2598,15 +2596,17 @@ static void return_unused_surplus_pages(struct hstate *h,
* evenly across all nodes with memory. Iterate across these nodes
* until we can no longer free unreserved surplus pages. This occurs
* when the nodes with surplus pages have no free pages.
- * remove_pool_huge_page() will balance the freed pages across the
+ * remove_pool_hugetlb_folio() will balance the freed pages across the
* on-line nodes with memory and will handle the hstate accounting.
*/
while (nr_pages--) {
- page = remove_pool_huge_page(h, &node_states[N_MEMORY], 1);
- if (!page)
+ struct folio *folio;
+
+ folio = remove_pool_hugetlb_folio(h, &node_states[N_MEMORY], 1);
+ if (!folio)
goto out;
- list_add(&page->lru, &page_list);
+ list_add(&folio->lru, &page_list);
}
out:
@@ -3401,7 +3401,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
nodemask_t *nodes_allowed)
{
unsigned long min_count, ret;
- struct page *page;
LIST_HEAD(page_list);
NODEMASK_ALLOC(nodemask_t, node_alloc_noretry, GFP_KERNEL);
@@ -3523,11 +3522,13 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
* Collect pages to be removed on list without dropping lock
*/
while (min_count < persistent_huge_pages(h)) {
- page = remove_pool_huge_page(h, nodes_allowed, 0);
- if (!page)
+ struct folio *folio;
+
+ folio = remove_pool_hugetlb_folio(h, nodes_allowed, 0);
+ if (!folio)
break;
- list_add(&page->lru, &page_list);
+ list_add(&folio->lru, &page_list);
}
/* free the pages after dropping lock */
spin_unlock_irq(&hugetlb_lock);
--
2.41.0
* [PATCH v3 05/12] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (3 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 04/12] hugetlb: Convert remove_pool_huge_page() to remove_pool_hugetlb_folio() Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 06/12] hugetlb: restructure pool allocations Mike Kravetz
` (7 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz, James Houghton
update_and_free_pages_bulk is designed to free a list of hugetlb pages
back to their associated lower level allocators. This may require
allocating vmemmap pages associated with each hugetlb page. The
hugetlb page destructor must be changed before pages are freed to lower
level allocators. However, the destructor must be changed under the
hugetlb lock. This means there is potentially one lock cycle per page.
Minimize the number of lock cycles in update_and_free_pages_bulk by:
1) allocating the necessary vmemmap for all hugetlb pages on the list
2) taking the hugetlb lock and clearing the destructor for all pages on the list
3) freeing all pages on the list back to the low level allocators
A condensed sketch of this flow follows.
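This sketch is simplified: the vmemmap-optimized checks, error handling and
the surplus fallback are omitted; the actual implementation is in the hunk
below:
	static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
	{
		struct folio *folio, *t_folio;
		/* 1) allocate vmemmap for each folio, without holding the lock */
		list_for_each_entry_safe(folio, t_folio, list, lru)
			hugetlb_vmemmap_restore(h, &folio->page);
		/* 2) one lock cycle to clear the destructor of every folio */
		spin_lock_irq(&hugetlb_lock);
		list_for_each_entry(folio, list, lru)
			__clear_hugetlb_destructor(h, folio);
		spin_unlock_irq(&hugetlb_lock);
		/* 3) free each folio back to the low level allocators, lock not needed */
		list_for_each_entry_safe(folio, t_folio, list, lru)
			update_and_free_hugetlb_folio(h, folio, false);
	}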
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: James Houghton <jthoughton@google.com>
---
mm/hugetlb.c | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 744e214c7d9b..52f695222450 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1830,7 +1830,46 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
{
struct folio *folio, *t_folio;
+ bool clear_dtor = false;
+ /*
+ * First allocate required vmemmap (if necessary) for all folios on
+ * list. If vmemmap can not be allocated, we can not free folio to
+ * lower level allocator, so add back as hugetlb surplus page.
+ * add_hugetlb_folio() removes the page from THIS list.
+ * Use clear_dtor to note if vmemmap was successfully allocated for
+ * ANY page on the list.
+ */
+ list_for_each_entry_safe(folio, t_folio, list, lru) {
+ if (folio_test_hugetlb_vmemmap_optimized(folio)) {
+ if (hugetlb_vmemmap_restore(h, &folio->page)) {
+ spin_lock_irq(&hugetlb_lock);
+ add_hugetlb_folio(h, folio, true);
+ spin_unlock_irq(&hugetlb_lock);
+ } else
+ clear_dtor = true;
+ }
+ }
+
+ /*
+ * If vmemmap allocation was performed on any folio above, take lock
+ * to clear destructor of all folios on list. This avoids the need to
+ * lock/unlock for each individual folio.
+ * The assumption is vmemmap allocation was performed on all or none
+ * of the folios on the list. This is true except in VERY rare cases.
+ */
+ if (clear_dtor) {
+ spin_lock_irq(&hugetlb_lock);
+ list_for_each_entry(folio, list, lru)
+ __clear_hugetlb_destructor(h, folio);
+ spin_unlock_irq(&hugetlb_lock);
+ }
+
+ /*
+ * Free folios back to low level allocators. vmemmap and destructors
+ * were taken care of above, so update_and_free_hugetlb_folio will
+ * not need to take hugetlb lock.
+ */
list_for_each_entry_safe(folio, t_folio, list, lru) {
update_and_free_hugetlb_folio(h, folio, false);
cond_resched();
--
2.41.0
* [PATCH v3 06/12] hugetlb: restructure pool allocations
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (4 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 05/12] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 07/12] hugetlb: perform vmemmap optimization on a list of pages Mike Kravetz
` (6 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
Allocation of a hugetlb page for the hugetlb pool is done by the routine
alloc_pool_huge_page. This routine will allocate contiguous pages from
a low level allocator, prep the pages for usage as a hugetlb page and
then add the resulting hugetlb page to the pool.
In the 'prep' stage, optional vmemmap optimization is done. For
performance reasons we want to perform vmemmap optimization on multiple
hugetlb pages at once. To do this, restructure the hugetlb pool
allocation code such that vmemmap optimization can be isolated and later
batched.
The code to allocate hugetlb pages from bootmem was also modified to
allow batching.
No functional changes, only code restructure.
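The net effect on callers can be sketched as follows (need_more_pages() is a
stand-in for the various loop conditions, and h, nodes_allowed and
node_alloc_noretry are assumed to be in scope; the real code is in the diff
below):
	LIST_HEAD(folio_list);
	struct folio *folio;
	while (need_more_pages()) {	/* hypothetical loop condition */
		folio = alloc_pool_huge_folio(h, nodes_allowed, node_alloc_noretry);
		if (!folio)
			break;
		/* no per-page accounting or enqueue here, just collect the folio */
		list_add(&folio->lru, &folio_list);
	}
	/* one pass (and, with later patches, one vmemmap batch) for the whole list */
	prep_and_add_allocated_folios(h, &folio_list);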
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb.c | 183 ++++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 144 insertions(+), 39 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 52f695222450..77313c9e0fa8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1963,16 +1963,21 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
h->nr_huge_pages_node[nid]++;
}
-static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
+static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
{
folio_set_hugetlb(folio);
- hugetlb_vmemmap_optimize(h, &folio->page);
INIT_LIST_HEAD(&folio->lru);
hugetlb_set_folio_subpool(folio, NULL);
set_hugetlb_cgroup(folio, NULL);
set_hugetlb_cgroup_rsvd(folio, NULL);
}
+static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
+{
+ init_new_hugetlb_folio(h, folio);
+ hugetlb_vmemmap_optimize(h, &folio->page);
+}
+
static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
{
__prep_new_hugetlb_folio(h, folio);
@@ -2169,16 +2174,9 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
return page_folio(page);
}
-/*
- * Common helper to allocate a fresh hugetlb page. All specific allocators
- * should use this function to get new hugetlb pages
- *
- * Note that returned page is 'frozen': ref count of head page and all tail
- * pages is zero.
- */
-static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
- gfp_t gfp_mask, int nid, nodemask_t *nmask,
- nodemask_t *node_alloc_noretry)
+static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,
+ gfp_t gfp_mask, int nid, nodemask_t *nmask,
+ nodemask_t *node_alloc_noretry)
{
struct folio *folio;
bool retry = false;
@@ -2191,6 +2189,7 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
nid, nmask, node_alloc_noretry);
if (!folio)
return NULL;
+
if (hstate_is_gigantic(h)) {
if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
/*
@@ -2205,32 +2204,84 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
return NULL;
}
}
- prep_new_hugetlb_folio(h, folio, folio_nid(folio));
return folio;
}
+static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
+ gfp_t gfp_mask, int nid, nodemask_t *nmask,
+ nodemask_t *node_alloc_noretry)
+{
+ struct folio *folio;
+
+ folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask,
+ node_alloc_noretry);
+ if (folio)
+ init_new_hugetlb_folio(h, folio);
+ return folio;
+}
+
/*
- * Allocates a fresh page to the hugetlb allocator pool in the node interleaved
- * manner.
+ * Common helper to allocate a fresh hugetlb page. All specific allocators
+ * should use this function to get new hugetlb pages
+ *
+ * Note that returned page is 'frozen': ref count of head page and all tail
+ * pages is zero.
*/
-static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
- nodemask_t *node_alloc_noretry)
+static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
+ gfp_t gfp_mask, int nid, nodemask_t *nmask,
+ nodemask_t *node_alloc_noretry)
{
struct folio *folio;
- int nr_nodes, node;
+
+ folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask,
+ node_alloc_noretry);
+ if (!folio)
+ return NULL;
+
+ prep_new_hugetlb_folio(h, folio, folio_nid(folio));
+ return folio;
+}
+
+static void prep_and_add_allocated_folios(struct hstate *h,
+ struct list_head *folio_list)
+{
+ struct folio *folio, *tmp_f;
+
+ /*
+ * Add all new pool pages to free lists in one lock cycle
+ */
+ spin_lock_irq(&hugetlb_lock);
+ list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
+ __prep_account_new_huge_page(h, folio_nid(folio));
+ enqueue_hugetlb_folio(h, folio);
+ }
+ spin_unlock_irq(&hugetlb_lock);
+
+ INIT_LIST_HEAD(folio_list);
+}
+
+/*
+ * Allocates a fresh hugetlb page in a node interleaved manner. The page
+ * will later be added to the appropriate hugetlb pool.
+ */
+static struct folio *alloc_pool_huge_folio(struct hstate *h,
+ nodemask_t *nodes_allowed,
+ nodemask_t *node_alloc_noretry)
+{
gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+ int nr_nodes, node;
for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
- folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node,
+ struct folio *folio;
+
+ folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, node,
nodes_allowed, node_alloc_noretry);
- if (folio) {
- free_huge_folio(folio); /* free it into the hugepage allocator */
- return 1;
- }
+ if (folio)
+ return folio;
}
- return 0;
+ return NULL;
}
/*
@@ -3196,19 +3247,29 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
*/
static void __init gather_bootmem_prealloc(void)
{
+ LIST_HEAD(folio_list);
struct huge_bootmem_page *m;
+ struct hstate *h, *prev_h = NULL;
list_for_each_entry(m, &huge_boot_pages, list) {
struct page *page = virt_to_page(m);
struct folio *folio = page_folio(page);
- struct hstate *h = m->hstate;
+
+ h = m->hstate;
+ /*
+ * It is possible to have multiple huge page sizes (hstates)
+ * in this list. If so, process each size separately.
+ */
+ if (h != prev_h && prev_h != NULL)
+ prep_and_add_allocated_folios(prev_h, &folio_list);
+ prev_h = h;
VM_BUG_ON(!hstate_is_gigantic(h));
WARN_ON(folio_ref_count(folio) != 1);
if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
WARN_ON(folio_test_reserved(folio));
- prep_new_hugetlb_folio(h, folio, folio_nid(folio));
- free_huge_folio(folio); /* add to the hugepage allocator */
+ init_new_hugetlb_folio(h, folio);
+ list_add(&folio->lru, &folio_list);
} else {
/* VERY unlikely inflated ref count on a tail page */
free_gigantic_folio(folio, huge_page_order(h));
@@ -3222,6 +3283,8 @@ static void __init gather_bootmem_prealloc(void)
adjust_managed_page_count(page, pages_per_huge_page(h));
cond_resched();
}
+
+ prep_and_add_allocated_folios(h, &folio_list);
}
static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
{
@@ -3254,9 +3317,22 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
h->max_huge_pages_node[nid] = i;
}
+/*
+ * NOTE: this routine is called in different contexts for gigantic and
+ * non-gigantic pages.
+ * - For gigantic pages, this is called early in the boot process and
+ * pages are allocated from memblock or something similar.
+ * Gigantic pages are actually added to pools later with the routine
+ * gather_bootmem_prealloc.
+ * - For non-gigantic pages, this is called later in the boot process after
+ * all of mm is up and functional. Pages are allocated from buddy and
+ * then added to hugetlb pools.
+ */
static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
{
unsigned long i;
+ struct folio *folio;
+ LIST_HEAD(folio_list);
nodemask_t *node_alloc_noretry;
bool node_specific_alloc = false;
@@ -3298,14 +3374,25 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
for (i = 0; i < h->max_huge_pages; ++i) {
if (hstate_is_gigantic(h)) {
+ /*
+ * gigantic pages not added to list as they are not
+ * added to pools now.
+ */
if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
break;
- } else if (!alloc_pool_huge_page(h,
- &node_states[N_MEMORY],
- node_alloc_noretry))
- break;
+ } else {
+ folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+ node_alloc_noretry);
+ if (!folio)
+ break;
+ list_add(&folio->lru, &folio_list);
+ }
cond_resched();
}
+
+ /* list will be empty if hstate_is_gigantic */
+ prep_and_add_allocated_folios(h, &folio_list);
+
if (i < h->max_huge_pages) {
char buf[32];
@@ -3439,7 +3526,9 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
nodemask_t *nodes_allowed)
{
- unsigned long min_count, ret;
+ unsigned long min_count;
+ unsigned long allocated;
+ struct folio *folio;
LIST_HEAD(page_list);
NODEMASK_ALLOC(nodemask_t, node_alloc_noretry, GFP_KERNEL);
@@ -3516,7 +3605,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
break;
}
- while (count > persistent_huge_pages(h)) {
+ allocated = 0;
+ while (count > (persistent_huge_pages(h) + allocated)) {
/*
* If this allocation races such that we no longer need the
* page, free_huge_folio will handle it by freeing the page
@@ -3527,15 +3617,32 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
/* yield cpu to avoid soft lockup */
cond_resched();
- ret = alloc_pool_huge_page(h, nodes_allowed,
+ folio = alloc_pool_huge_folio(h, nodes_allowed,
node_alloc_noretry);
- spin_lock_irq(&hugetlb_lock);
- if (!ret)
+ if (!folio) {
+ prep_and_add_allocated_folios(h, &page_list);
+ spin_lock_irq(&hugetlb_lock);
goto out;
+ }
+
+ list_add(&folio->lru, &page_list);
+ allocated++;
/* Bail for signals. Probably ctrl-c from user */
- if (signal_pending(current))
+ if (signal_pending(current)) {
+ prep_and_add_allocated_folios(h, &page_list);
+ spin_lock_irq(&hugetlb_lock);
goto out;
+ }
+
+ spin_lock_irq(&hugetlb_lock);
+ }
+
+ /* Add allocated pages to the pool */
+ if (!list_empty(&page_list)) {
+ spin_unlock_irq(&hugetlb_lock);
+ prep_and_add_allocated_folios(h, &page_list);
+ spin_lock_irq(&hugetlb_lock);
}
/*
@@ -3561,8 +3668,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
* Collect pages to be removed on list without dropping lock
*/
while (min_count < persistent_huge_pages(h)) {
- struct folio *folio;
-
folio = remove_pool_hugetlb_folio(h, nodes_allowed, 0);
if (!folio)
break;
--
2.41.0
* [PATCH v3 07/12] hugetlb: perform vmemmap optimization on a list of pages
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (5 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 06/12] hugetlb: restructure pool allocations Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 08/12] hugetlb: perform vmemmap restoration " Mike Kravetz
` (5 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
When adding hugetlb pages to the pool, we first create a list of the
allocated pages before adding to the pool. Pass this list of pages to a
new routine hugetlb_vmemmap_optimize_folios() for vmemmap optimization.
We also modify the routine vmemmap_should_optimize() to check for pages
that are already optimized. There are code paths that might request
vmemmap optimization twice and we want to make sure this is not
attempted.
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/hugetlb.c | 5 +++++
mm/hugetlb_vmemmap.c | 11 +++++++++++
mm/hugetlb_vmemmap.h | 5 +++++
3 files changed, 21 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 77313c9e0fa8..214603898ad0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2248,6 +2248,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
{
struct folio *folio, *tmp_f;
+ /*
+ * Send list for bulk vmemmap optimization processing
+ */
+ hugetlb_vmemmap_optimize_folios(h, folio_list);
+
/*
* Add all new pool pages to free lists in one lock cycle
*/
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 0bde38626d25..c17784f36dc3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -483,6 +483,9 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
/* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
{
+ if (HPageVmemmapOptimized((struct page *)head))
+ return false;
+
if (!READ_ONCE(vmemmap_optimize_enabled))
return false;
@@ -572,6 +575,14 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
SetHPageVmemmapOptimized(head);
}
+void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+{
+ struct folio *folio;
+
+ list_for_each_entry(folio, folio_list, lru)
+ hugetlb_vmemmap_optimize(h, &folio->page);
+}
+
static struct ctl_table hugetlb_vmemmap_sysctls[] = {
{
.procname = "hugetlb_optimize_vmemmap",
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..036494e040ca 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,7 @@
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
+void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
/*
* Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
@@ -47,6 +48,10 @@ static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page
{
}
+static inline void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+{
+}
+
static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
{
return 0;
--
2.41.0
* [PATCH v3 08/12] hugetlb: perform vmemmap restoration on a list of pages
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (6 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 07/12] hugetlb: perform vmemmap optimization on a list of pages Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 09/12] hugetlb: batch freeing of vmemmap pages Mike Kravetz
` (4 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
The routine update_and_free_pages_bulk already performs vmemmap
restoration on the list of hugetlb pages in a separate step. In
preparation for more functionality to be added in this step, create a
new routine hugetlb_vmemmap_restore_folios() that will restore
vmemmap for a list of folios.
This new routine must provide sufficient feedback about errors and
actual restoration performed so that update_and_free_pages_bulk can
perform optimally.
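A sketch of how update_and_free_pages_bulk is expected to consume that
feedback (simplified from the hunk below):
	int ret;
	unsigned long restored;
	ret = hugetlb_vmemmap_restore_folios(h, list, &restored);
	if (ret < 0) {
		/* some folio still has optimized vmemmap: add it back as surplus */
	}
	if (restored) {
		/* vmemmap came back for at least one folio: clear destructors in one lock cycle */
	}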
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb.c | 36 ++++++++++++++++++------------------
mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++++++++++++++++
mm/hugetlb_vmemmap.h | 11 +++++++++++
3 files changed, 66 insertions(+), 18 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 214603898ad0..ccfd0c71f0e7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1829,36 +1829,36 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
{
+ int ret;
+ unsigned long restored;
struct folio *folio, *t_folio;
- bool clear_dtor = false;
/*
- * First allocate required vmemmap (if necessary) for all folios on
- * list. If vmemmap can not be allocated, we can not free folio to
- * lower level allocator, so add back as hugetlb surplus page.
- * add_hugetlb_folio() removes the page from THIS list.
- * Use clear_dtor to note if vmemmap was successfully allocated for
- * ANY page on the list.
+ * First allocate required vmemmap (if necessary) for all folios.
*/
- list_for_each_entry_safe(folio, t_folio, list, lru) {
- if (folio_test_hugetlb_vmemmap_optimized(folio)) {
- if (hugetlb_vmemmap_restore(h, &folio->page)) {
- spin_lock_irq(&hugetlb_lock);
+ ret = hugetlb_vmemmap_restore_folios(h, list, &restored);
+
+ /*
+ * If there was an error restoring vmemmap for ANY folios on the list,
+ * add them back as surplus hugetlb pages. add_hugetlb_folio() removes
+ * the folio from THIS list.
+ */
+ if (ret < 0) {
+ spin_lock_irq(&hugetlb_lock);
+ list_for_each_entry_safe(folio, t_folio, list, lru)
+ if (folio_test_hugetlb_vmemmap_optimized(folio))
add_hugetlb_folio(h, folio, true);
- spin_unlock_irq(&hugetlb_lock);
- } else
- clear_dtor = true;
- }
+ spin_unlock_irq(&hugetlb_lock);
}
/*
- * If vmemmap allocation was performed on any folio above, take lock
- * to clear destructor of all folios on list. This avoids the need to
+ * If vmemmap allocation was performed on ANY folio, take lock to
+ * clear destructor of all folios on list. This avoids the need to
* lock/unlock for each individual folio.
* The assumption is vmemmap allocation was performed on all or none
* of the folios on the list. This is true except in VERY rare cases.
*/
- if (clear_dtor) {
+ if (restored) {
spin_lock_irq(&hugetlb_lock);
list_for_each_entry(folio, list, lru)
__clear_hugetlb_destructor(h, folio);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c17784f36dc3..0eeb503d8a4c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -480,6 +480,43 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
return ret;
}
+/**
+ * hugetlb_vmemmap_restore_folios - restore vmemmap for every folio on the list.
+ * @h: struct hstate.
+ * @folio_list: list of folios.
+ * @restored: Set to number of folios for which vmemmap was restored
+ * successfully if caller passes a non-NULL pointer.
+ *
+ * Return: %0 if vmemmap exists for all folios on the list. If an error is
+ * encountered restoring vmemmap for ANY folio, an error code
+ * will be returned to the caller. It is then the responsibility
+ * of the caller to check the hugetlb vmemmap optimized flag of
+ * each folio to determine if vmemmap was actually restored.
+ */
+int hugetlb_vmemmap_restore_folios(const struct hstate *h,
+ struct list_head *folio_list,
+ unsigned long *restored)
+{
+ unsigned long num_restored;
+ struct folio *folio;
+ int ret = 0, t_ret;
+
+ num_restored = 0;
+ list_for_each_entry(folio, folio_list, lru) {
+ if (folio_test_hugetlb_vmemmap_optimized(folio)) {
+ t_ret = hugetlb_vmemmap_restore(h, &folio->page);
+ if (t_ret)
+ ret = t_ret;
+ else
+ num_restored++;
+ }
+ }
+
+ if (restored)
+ *restored = num_restored;
+ return ret;
+}
+
/* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
{
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 036494e040ca..c8c9125225de 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,6 +12,8 @@
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
+int hugetlb_vmemmap_restore_folios(const struct hstate *h,
+ struct list_head *folio_list, unsigned long *restored);
void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
@@ -44,6 +46,15 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *h
return 0;
}
+static inline int hugetlb_vmemmap_restore_folios(const struct hstate *h,
+ struct list_head *folio_list,
+ unsigned long *restored)
+{
+ if (restored)
+ *restored = 0;
+ return 0;
+}
+
static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
{
}
--
2.41.0
* [PATCH v3 09/12] hugetlb: batch freeing of vmemmap pages
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (7 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 08/12] hugetlb: perform vmemmap restoration " Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 10/12] hugetlb: batch PMD split for bulk vmemmap dedup Mike Kravetz
` (3 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
Now that batching of hugetlb vmemmap optimization processing is possible,
batch the freeing of vmemmap pages. When freeing vmemmap pages for a
hugetlb page, we add them to a list that is freed after the entire batch
has been processed.
This enhances the ability to return contiguous ranges of memory to the
low level allocators.
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb_vmemmap.c | 85 ++++++++++++++++++++++++++++++--------------
1 file changed, 59 insertions(+), 26 deletions(-)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 0eeb503d8a4c..8f8a559ff6ac 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -222,6 +222,9 @@ static void free_vmemmap_page_list(struct list_head *list)
{
struct page *page, *next;
+ if (list_empty(list))
+ return;
+
list_for_each_entry_safe(page, next, list, lru)
free_vmemmap_page(page);
}
@@ -251,7 +254,7 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
}
entry = mk_pte(walk->reuse_page, pgprot);
- list_add_tail(&page->lru, walk->vmemmap_pages);
+ list_add(&page->lru, walk->vmemmap_pages);
set_pte_at(&init_mm, addr, pte, entry);
}
@@ -306,18 +309,20 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
* @end: end address of the vmemmap virtual address range that we want to
* remap.
* @reuse: reuse address.
+ * @vmemmap_pages: list to deposit vmemmap pages to be freed. It is the
+ * caller's responsibility to free the pages.
*
* Return: %0 on success, negative error code otherwise.
*/
static int vmemmap_remap_free(unsigned long start, unsigned long end,
- unsigned long reuse)
+ unsigned long reuse,
+ struct list_head *vmemmap_pages)
{
int ret;
- LIST_HEAD(vmemmap_pages);
struct vmemmap_remap_walk walk = {
.remap_pte = vmemmap_remap_pte,
.reuse_addr = reuse,
- .vmemmap_pages = &vmemmap_pages,
+ .vmemmap_pages = vmemmap_pages,
};
int nid = page_to_nid((struct page *)start);
gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
@@ -334,7 +339,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
if (walk.reuse_page) {
copy_page(page_to_virt(walk.reuse_page),
(void *)walk.reuse_addr);
- list_add(&walk.reuse_page->lru, &vmemmap_pages);
+ list_add(&walk.reuse_page->lru, vmemmap_pages);
}
/*
@@ -365,15 +370,13 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
walk = (struct vmemmap_remap_walk) {
.remap_pte = vmemmap_restore_pte,
.reuse_addr = reuse,
- .vmemmap_pages = &vmemmap_pages,
+ .vmemmap_pages = vmemmap_pages,
};
vmemmap_remap_range(reuse, end, &walk);
}
mmap_read_unlock(&init_mm);
- free_vmemmap_page_list(&vmemmap_pages);
-
return ret;
}
@@ -389,7 +392,7 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
page = alloc_pages_node(nid, gfp_mask, 0);
if (!page)
goto out;
- list_add_tail(&page->lru, list);
+ list_add(&page->lru, list);
}
return 0;
@@ -576,24 +579,17 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
return true;
}
-/**
- * hugetlb_vmemmap_optimize - optimize @head page's vmemmap pages.
- * @h: struct hstate.
- * @head: the head page whose vmemmap pages will be optimized.
- *
- * This function only tries to optimize @head's vmemmap pages and does not
- * guarantee that the optimization will succeed after it returns. The caller
- * can use HPageVmemmapOptimized(@head) to detect if @head's vmemmap pages
- * have been optimized.
- */
-void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
+static int __hugetlb_vmemmap_optimize(const struct hstate *h,
+ struct page *head,
+ struct list_head *vmemmap_pages)
{
+ int ret = 0;
unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
unsigned long vmemmap_reuse;
VM_WARN_ON_ONCE(!PageHuge(head));
if (!vmemmap_should_optimize(h, head))
- return;
+ return ret;
static_branch_inc(&hugetlb_optimize_vmemmap_key);
@@ -603,21 +599,58 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
/*
* Remap the vmemmap virtual address range [@vmemmap_start, @vmemmap_end)
- * to the page which @vmemmap_reuse is mapped to, then free the pages
- * which the range [@vmemmap_start, @vmemmap_end] is mapped to.
+ * to the page which @vmemmap_reuse is mapped to. Add pages previously
+ * mapping the range to vmemmap_pages list so that they can be freed by
+ * the caller.
*/
- if (vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse))
+ ret = vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse, vmemmap_pages);
+ if (ret)
static_branch_dec(&hugetlb_optimize_vmemmap_key);
else
SetHPageVmemmapOptimized(head);
+
+ return ret;
+}
+
+/**
+ * hugetlb_vmemmap_optimize - optimize @head page's vmemmap pages.
+ * @h: struct hstate.
+ * @head: the head page whose vmemmap pages will be optimized.
+ *
+ * This function only tries to optimize @head's vmemmap pages and does not
+ * guarantee that the optimization will succeed after it returns. The caller
+ * can use HPageVmemmapOptimized(@head) to detect if @head's vmemmap pages
+ * have been optimized.
+ */
+void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
+{
+ LIST_HEAD(vmemmap_pages);
+
+ __hugetlb_vmemmap_optimize(h, head, &vmemmap_pages);
+ free_vmemmap_page_list(&vmemmap_pages);
}
void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
{
struct folio *folio;
+ LIST_HEAD(vmemmap_pages);
- list_for_each_entry(folio, folio_list, lru)
- hugetlb_vmemmap_optimize(h, &folio->page);
+ list_for_each_entry(folio, folio_list, lru) {
+ int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
+ &vmemmap_pages);
+
+ /*
+ * Pages may have been accumulated, thus free what we have
+ * and try again.
+ */
+ if (ret == -ENOMEM) {
+ free_vmemmap_page_list(&vmemmap_pages);
+ INIT_LIST_HEAD(&vmemmap_pages);
+ __hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
+ }
+ }
+
+ free_vmemmap_page_list(&vmemmap_pages);
}
static struct ctl_table hugetlb_vmemmap_sysctls[] = {
--
2.41.0
* [PATCH v3 10/12] hugetlb: batch PMD split for bulk vmemmap dedup
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (8 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 09/12] hugetlb: batch freeing of vmemmap pages Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 11/12] hugetlb: batch TLB flushes when freeing vmemmap Mike Kravetz
` (2 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
From: Joao Martins <joao.m.martins@oracle.com>
In an effort to minimize amount of TLB flushes, batch all PMD splits
belonging to a range of pages in order to perform only 1 (global) TLB
flush.
Add a flags field to the walker so callers can indicate whether this is a
bulk operation or a single page remap. The first flag value
(VMEMMAP_SPLIT_NO_TLB_FLUSH) requests that the TLB flush be skipped when
splitting the PMD.
Rebased and updated by Mike Kravetz
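The resulting shape of hugetlb_vmemmap_optimize_folios() can be sketched as
follows (simplified, the ENOMEM fallback is omitted; folio_list and
vmemmap_pages are the local list heads used in the final hunk):
	list_for_each_entry(folio, folio_list, lru)
		hugetlb_vmemmap_split(h, &folio->page);	/* split PMDs, no TLB flush yet */
	flush_tlb_all();				/* one flush covers every split */
	list_for_each_entry(folio, folio_list, lru)
		__hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);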
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb_vmemmap.c | 79 +++++++++++++++++++++++++++++++++++++++++---
1 file changed, 75 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 8f8a559ff6ac..c952e95a829c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -27,6 +27,7 @@
* @reuse_addr: the virtual address of the @reuse_page page.
* @vmemmap_pages: the list head of the vmemmap pages that can be freed
* or is mapped from.
+ * @flags: used to modify behavior in bulk operations
*/
struct vmemmap_remap_walk {
void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -35,9 +36,11 @@ struct vmemmap_remap_walk {
struct page *reuse_page;
unsigned long reuse_addr;
struct list_head *vmemmap_pages;
+#define VMEMMAP_SPLIT_NO_TLB_FLUSH BIT(0)
+ unsigned long flags;
};
-static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
+static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
{
pmd_t __pmd;
int i;
@@ -80,7 +83,8 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
/* Make pte visible before pmd. See comment in pmd_install(). */
smp_wmb();
pmd_populate_kernel(&init_mm, pmd, pgtable);
- flush_tlb_kernel_range(start, start + PMD_SIZE);
+ if (flush)
+ flush_tlb_kernel_range(start, start + PMD_SIZE);
} else {
pte_free_kernel(&init_mm, pgtable);
}
@@ -127,11 +131,20 @@ static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,
do {
int ret;
- ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK);
+ ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK,
+ walk->flags & VMEMMAP_SPLIT_NO_TLB_FLUSH);
if (ret)
return ret;
next = pmd_addr_end(addr, end);
+
+ /*
+ * We are only splitting, not remapping the hugetlb vmemmap
+ * pages.
+ */
+ if (!walk->remap_pte)
+ continue;
+
vmemmap_pte_range(pmd, addr, next, walk);
} while (pmd++, addr = next, addr != end);
@@ -198,7 +211,8 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
return ret;
} while (pgd++, addr = next, addr != end);
- flush_tlb_kernel_range(start, end);
+ if (walk->remap_pte)
+ flush_tlb_kernel_range(start, end);
return 0;
}
@@ -300,6 +314,36 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
}
+/**
+ * vmemmap_remap_split - split the vmemmap virtual address range [@start, @end)
+ * backing PMDs of the directmap into PTEs
+ * @start: start address of the vmemmap virtual address range that we want
+ * to remap.
+ * @end: end address of the vmemmap virtual address range that we want to
+ * remap.
+ * @reuse: reuse address.
+ *
+ * Return: %0 on success, negative error code otherwise.
+ */
+static int vmemmap_remap_split(unsigned long start, unsigned long end,
+ unsigned long reuse)
+{
+ int ret;
+ struct vmemmap_remap_walk walk = {
+ .remap_pte = NULL,
+ .flags = VMEMMAP_SPLIT_NO_TLB_FLUSH,
+ };
+
+ /* See the comment in the vmemmap_remap_free(). */
+ BUG_ON(start - reuse != PAGE_SIZE);
+
+ mmap_read_lock(&init_mm);
+ ret = vmemmap_remap_range(reuse, end, &walk);
+ mmap_read_unlock(&init_mm);
+
+ return ret;
+}
+
/**
* vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
* to the page which @reuse is mapped to, then free vmemmap
@@ -323,6 +367,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
.remap_pte = vmemmap_remap_pte,
.reuse_addr = reuse,
.vmemmap_pages = vmemmap_pages,
+ .flags = 0,
};
int nid = page_to_nid((struct page *)start);
gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
@@ -371,6 +416,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
.remap_pte = vmemmap_restore_pte,
.reuse_addr = reuse,
.vmemmap_pages = vmemmap_pages,
+ .flags = 0,
};
vmemmap_remap_range(reuse, end, &walk);
@@ -422,6 +468,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
.remap_pte = vmemmap_restore_pte,
.reuse_addr = reuse,
.vmemmap_pages = &vmemmap_pages,
+ .flags = 0,
};
/* See the comment in the vmemmap_remap_free(). */
@@ -630,11 +677,35 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
free_vmemmap_page_list(&vmemmap_pages);
}
+static void hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
+{
+ unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+ unsigned long vmemmap_reuse;
+
+ if (!vmemmap_should_optimize(h, head))
+ return;
+
+ vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
+ vmemmap_reuse = vmemmap_start;
+ vmemmap_start += HUGETLB_VMEMMAP_RESERVE_SIZE;
+
+ /*
+ * Split PMDs on the vmemmap virtual address range [@vmemmap_start,
+ * @vmemmap_end]
+ */
+ vmemmap_remap_split(vmemmap_start, vmemmap_end, vmemmap_reuse);
+}
+
void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
{
struct folio *folio;
LIST_HEAD(vmemmap_pages);
+ list_for_each_entry(folio, folio_list, lru)
+ hugetlb_vmemmap_split(h, &folio->page);
+
+ flush_tlb_all();
+
list_for_each_entry(folio, folio_list, lru) {
int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
&vmemmap_pages);
--
2.41.0
* [PATCH v3 11/12] hugetlb: batch TLB flushes when freeing vmemmap
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (9 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 10/12] hugetlb: batch PMD split for bulk vmemmap dedup Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:15 ` [PATCH v3 12/12] hugetlb: batch TLB flushes when restoring vmemmap Mike Kravetz
2023-09-15 22:22 ` [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
From: Joao Martins <joao.m.martins@oracle.com>
Now that a list of pages is deduplicated at once, the TLB
flush can be batched for all vmemmap pages that got remapped.
Expand the flags field value to pass whether to skip the TLB flush
on remap of the PTE.
The TLB flush is global because the caller gives no guarantee that the set
of folios is contiguous, and composing a list of kernel VAs to flush would
add complexity.
Modified by Mike Kravetz to perform TLB flush on single folio if an
error is encountered.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb_vmemmap.c | 44 +++++++++++++++++++++++++++++++++-----------
1 file changed, 33 insertions(+), 11 deletions(-)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c952e95a829c..921f2fa7cf1b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -37,6 +37,7 @@ struct vmemmap_remap_walk {
unsigned long reuse_addr;
struct list_head *vmemmap_pages;
#define VMEMMAP_SPLIT_NO_TLB_FLUSH BIT(0)
+#define VMEMMAP_REMAP_NO_TLB_FLUSH BIT(1)
unsigned long flags;
};
@@ -211,7 +212,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
return ret;
} while (pgd++, addr = next, addr != end);
- if (walk->remap_pte)
+ if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
flush_tlb_kernel_range(start, end);
return 0;
@@ -355,19 +356,21 @@ static int vmemmap_remap_split(unsigned long start, unsigned long end,
* @reuse: reuse address.
* @vmemmap_pages: list to deposit vmemmap pages to be freed. It is the
* caller's responsibility to free the pages.
+ * @flags: modifications to vmemmap_remap_walk flags
*
* Return: %0 on success, negative error code otherwise.
*/
static int vmemmap_remap_free(unsigned long start, unsigned long end,
unsigned long reuse,
- struct list_head *vmemmap_pages)
+ struct list_head *vmemmap_pages,
+ unsigned long flags)
{
int ret;
struct vmemmap_remap_walk walk = {
.remap_pte = vmemmap_remap_pte,
.reuse_addr = reuse,
.vmemmap_pages = vmemmap_pages,
- .flags = 0,
+ .flags = flags,
};
int nid = page_to_nid((struct page *)start);
gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
@@ -628,7 +631,8 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
static int __hugetlb_vmemmap_optimize(const struct hstate *h,
struct page *head,
- struct list_head *vmemmap_pages)
+ struct list_head *vmemmap_pages,
+ unsigned long flags)
{
int ret = 0;
unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
@@ -639,6 +643,18 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
return ret;
static_branch_inc(&hugetlb_optimize_vmemmap_key);
+ /*
+ * Very Subtle
+ * If VMEMMAP_REMAP_NO_TLB_FLUSH is set, TLB flushing is not performed
+ * immediately after remapping. As a result, subsequent accesses
+ * and modifications to struct pages associated with the hugetlb
+ * page could be to the OLD struct pages. Set the vmemmap optimized
+ * flag here so that it is copied to the new head page. This keeps
+ * the old and new struct pages in sync.
+ * If there is an error during optimization, we will immediately FLUSH
+ * the TLB and clear the flag below.
+ */
+ SetHPageVmemmapOptimized(head);
vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
vmemmap_reuse = vmemmap_start;
@@ -650,11 +666,12 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
* mapping the range to vmemmap_pages list so that they can be freed by
* the caller.
*/
- ret = vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse, vmemmap_pages);
- if (ret)
+ ret = vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse,
+ vmemmap_pages, flags);
+ if (ret) {
static_branch_dec(&hugetlb_optimize_vmemmap_key);
- else
- SetHPageVmemmapOptimized(head);
+ ClearHPageVmemmapOptimized(head);
+ }
return ret;
}
@@ -673,7 +690,7 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
{
LIST_HEAD(vmemmap_pages);
- __hugetlb_vmemmap_optimize(h, head, &vmemmap_pages);
+ __hugetlb_vmemmap_optimize(h, head, &vmemmap_pages, 0);
free_vmemmap_page_list(&vmemmap_pages);
}
@@ -708,19 +725,24 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
list_for_each_entry(folio, folio_list, lru) {
int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
- &vmemmap_pages);
+ &vmemmap_pages,
+ VMEMMAP_REMAP_NO_TLB_FLUSH);
/*
* Pages may have been accumulated, thus free what we have
* and try again.
*/
if (ret == -ENOMEM) {
+ flush_tlb_all();
free_vmemmap_page_list(&vmemmap_pages);
INIT_LIST_HEAD(&vmemmap_pages);
- __hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
+ __hugetlb_vmemmap_optimize(h, &folio->page,
+ &vmemmap_pages,
+ VMEMMAP_REMAP_NO_TLB_FLUSH);
}
}
+ flush_tlb_all();
free_vmemmap_page_list(&vmemmap_pages);
}
--
2.41.0
* [PATCH v3 12/12] hugetlb: batch TLB flushes when restoring vmemmap
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (10 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 11/12] hugetlb: batch TLB flushes when freeing vmemmap Mike Kravetz
@ 2023-09-15 22:15 ` Mike Kravetz
2023-09-15 22:22 ` [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:15 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Mike Kravetz
Update the internal hugetlb restore vmemmap code path such that TLB
flushing can be batched. Use the existing mechanism of passing the
VMEMMAP_REMAP_NO_TLB_FLUSH flag to indicate flushing should not be
performed for individual pages. The routine hugetlb_vmemmap_restore_folios
is the only user of this new mechanism, and it will perform a global
flush after all vmemmap is restored.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/hugetlb_vmemmap.c | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 921f2fa7cf1b..0e9074a09afd 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -460,18 +460,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
* @end: end address of the vmemmap virtual address range that we want to
* remap.
* @reuse: reuse address.
+ * @flags: modify behavior for bulk operations
*
* Return: %0 on success, negative error code otherwise.
*/
static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
- unsigned long reuse)
+ unsigned long reuse, unsigned long flags)
{
LIST_HEAD(vmemmap_pages);
struct vmemmap_remap_walk walk = {
.remap_pte = vmemmap_restore_pte,
.reuse_addr = reuse,
.vmemmap_pages = &vmemmap_pages,
- .flags = 0,
+ .flags = flags,
};
/* See the comment in the vmemmap_remap_free(). */
@@ -493,17 +494,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
-/**
- * hugetlb_vmemmap_restore - restore previously optimized (by
- * hugetlb_vmemmap_optimize()) vmemmap pages which
- * will be reallocated and remapped.
- * @h: struct hstate.
- * @head: the head page whose vmemmap pages will be restored.
- *
- * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
- * negative error code otherwise.
- */
-int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
+static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
{
int ret;
unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
@@ -524,7 +515,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
* When a HugeTLB page is freed to the buddy allocator, previously
* discarded vmemmap pages must be allocated and remapping.
*/
- ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
+ ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
if (!ret) {
ClearHPageVmemmapOptimized(head);
static_branch_dec(&hugetlb_optimize_vmemmap_key);
@@ -533,6 +524,21 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
return ret;
}
+/**
+ * hugetlb_vmemmap_restore - restore previously optimized (by
+ * hugetlb_vmemmap_optimize()) vmemmap pages which
+ * will be reallocated and remapped.
+ * @h: struct hstate.
+ * @head: the head page whose vmemmap pages will be restored.
+ *
+ * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
+ * negative error code otherwise.
+ */
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
+{
+ return __hugetlb_vmemmap_restore(h, head, 0);
+}
+
/**
* hugetlb_vmemmap_restore_folios - restore vmemmap for every folio on the list.
* @h: struct hstate.
@@ -557,7 +563,8 @@ int hugetlb_vmemmap_restore_folios(const struct hstate *h,
num_restored = 0;
list_for_each_entry(folio, folio_list, lru) {
if (folio_test_hugetlb_vmemmap_optimized(folio)) {
- t_ret = hugetlb_vmemmap_restore(h, &folio->page);
+ t_ret = __hugetlb_vmemmap_restore(h, &folio->page,
+ VMEMMAP_REMAP_NO_TLB_FLUSH);
if (t_ret)
ret = t_ret;
else
@@ -565,6 +572,8 @@ int hugetlb_vmemmap_restore_folios(const struct hstate *h,
}
}
+ flush_tlb_all();
+
if (restored)
*restored = num_restored;
return ret;
--
2.41.0
* Re: [PATCH v3 00/12] Batch hugetlb vmemmap modification operations
2023-09-15 22:15 [PATCH v3 00/12] Batch hugetlb vmemmap modification operations Mike Kravetz
` (11 preceding siblings ...)
2023-09-15 22:15 ` [PATCH v3 12/12] hugetlb: batch TLB flushes when restoring vmemmap Mike Kravetz
@ 2023-09-15 22:22 ` Mike Kravetz
12 siblings, 0 replies; 14+ messages in thread
From: Mike Kravetz @ 2023-09-15 22:22 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan,
Andrew Morton, Usama Arif
On 09/15/23 15:15, Mike Kravetz wrote:
> The following series attempts to reduce amount of time spent in TLB flushing.
> The idea is to batch the vmemmap modification operations for multiple hugetlb
> pages. Instead of doing one or two TLB flushes for each page, we do two TLB
> flushes for each batch of pages. One flush after splitting pages mapped at
> the PMD level, and another after remapping vmemmap associated with all
> hugetlb pages. Results of such batching are as follows:
>
> Joao Martins (2):
> hugetlb: batch PMD split for bulk vmemmap dedup
> hugetlb: batch TLB flushes when freeing vmemmap
>
> Johannes Weiner (1):
> mm: page_alloc: remove pcppage migratetype caching fix
>
> Matthew Wilcox (Oracle) (3):
> hugetlb: Use a folio in free_hpage_workfn()
> hugetlb: Remove a few calls to page_folio()
> hugetlb: Convert remove_pool_huge_page() to
> remove_pool_hugetlb_folio()
>
> Mike Kravetz (6):
> hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
> hugetlb: restructure pool allocations
> hugetlb: perform vmemmap optimization on a list of pages
> hugetlb: perform vmemmap restoration on a list of pages
> hugetlb: batch freeing of vmemmap pages
> hugetlb: batch TLB flushes when restoring vmemmap
>
> mm/hugetlb.c | 288 ++++++++++++++++++++++++++++++++-----------
> mm/hugetlb_vmemmap.c | 255 ++++++++++++++++++++++++++++++++------
> mm/hugetlb_vmemmap.h | 16 +++
> mm/page_alloc.c | 3 -
> 4 files changed, 452 insertions(+), 110 deletions(-)
Just realized that I should have based this on top of/taken into account
this series as well:
https://lore.kernel.org/linux-mm/20230913105401.519709-5-usama.arif@bytedance.com/
Sorry!
Changes should be minimal, but modifying the same code.
--
Mike Kravetz