From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
shy828301@gmail.com,
"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 2/5] mm/khugepaged: Convert hpage_collapse_scan_pmd() to use folios
Date: Mon, 16 Oct 2023 13:05:07 -0700
Message-ID: <20231016200510.7387-3-vishal.moola@gmail.com>
In-Reply-To: <20231016200510.7387-1-vishal.moola@gmail.com>
Replaces 5 calls to compound_head() and removes 1466 bytes of kernel
text.

Previously, to determine whether a pte was shared, we checked the
mapcount of the exact page mapped by that pte, which gave a precise
count of shared ptes. folio_estimated_sharers() instead uses the
mapcount of the head page, so for tail page ptes the result is only an
estimate: if a tail page's mapcount is greater than its head page's
mapcount, folio_estimated_sharers() underestimates the number of
shared ptes, and vice versa.
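For reference, folio_estimated_sharers() boils down to reading the head
page's mapcount; a rough sketch of the helper as I read it (not part of
this patch):

	static inline int folio_estimated_sharers(struct folio *folio)
	{
		/* Only the head page's mapcount is consulted, hence "estimated". */
		return page_mapcount(folio_page(folio, 0));
	}

The old code instead called page_mapcount() on the precise page mapped
by each pte.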
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
mm/khugepaged.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7a552fe16c92..67aac53b31c8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1245,7 +1245,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
pte_t *pte, *_pte;
int result = SCAN_FAIL, referenced = 0;
int none_or_zero = 0, shared = 0;
- struct page *page = NULL;
+ struct folio *folio = NULL;
unsigned long _address;
spinlock_t *ptl;
int node = NUMA_NO_NODE, unmapped = 0;
@@ -1316,13 +1316,13 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
if (pte_write(pteval))
writable = true;
- page = vm_normal_page(vma, _address, pteval);
- if (unlikely(!page) || unlikely(is_zone_device_page(page))) {
+ folio = vm_normal_folio(vma, _address, pteval);
+ if (unlikely(!folio) || unlikely(folio_is_zone_device(folio))) {
result = SCAN_PAGE_NULL;
goto out_unmap;
}
- if (page_mapcount(page) > 1) {
+ if (folio_estimated_sharers(folio) > 1) {
++shared;
if (cc->is_khugepaged &&
shared > khugepaged_max_ptes_shared) {
@@ -1332,29 +1332,27 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
}
}
- page = compound_head(page);
-
/*
* Record which node the original page is from and save this
* information to cc->node_load[].
* Khugepaged will allocate hugepage from the node has the max
* hit record.
*/
- node = page_to_nid(page);
+ node = folio_nid(folio);
if (hpage_collapse_scan_abort(node, cc)) {
result = SCAN_SCAN_ABORT;
goto out_unmap;
}
cc->node_load[node]++;
- if (!PageLRU(page)) {
+ if (!folio_test_lru(folio)) {
result = SCAN_PAGE_LRU;
goto out_unmap;
}
- if (PageLocked(page)) {
+ if (folio_test_locked(folio)) {
result = SCAN_PAGE_LOCK;
goto out_unmap;
}
- if (!PageAnon(page)) {
+ if (!folio_test_anon(folio)) {
result = SCAN_PAGE_ANON;
goto out_unmap;
}
@@ -1369,7 +1367,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
* has excessive GUP pins (i.e. 512). Anyway the same check
* will be done again later the risk seems low.
*/
- if (!is_refcount_suitable(page)) {
+ if (!is_refcount_suitable(&folio->page)) {
result = SCAN_PAGE_COUNT;
goto out_unmap;
}
@@ -1379,8 +1377,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
* enough young pte to justify collapsing the page
*/
if (cc->is_khugepaged &&
- (pte_young(pteval) || page_is_young(page) ||
- PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+ (pte_young(pteval) || folio_test_young(folio) ||
+ folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
address)))
referenced++;
}
@@ -1402,7 +1400,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
*mmap_locked = false;
}
out:
- trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
+ trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
none_or_zero, result, unmapped);
return result;
}
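
For anyone following the vm_normal_folio() conversion in the first hunk:
the helper is expected to be a thin folio wrapper around
vm_normal_page(), roughly this shape (a sketch, not the authoritative
definition):

	struct folio *vm_normal_folio(struct vm_area_struct *vma,
				      unsigned long addr, pte_t pte)
	{
		struct page *page = vm_normal_page(vma, addr, pte);

		/* Convert to the containing folio; NULL stays NULL. */
		return page ? page_folio(page) : NULL;
	}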
--
2.40.1