* [PATCH v2] mm: mincore: use pte_batch_hint() to batch process large folios
@ 2025-05-08 4:09 Baolin Wang
2025-05-08 7:42 ` Barry Song
0 siblings, 1 reply; 3+ messages in thread
From: Baolin Wang @ 2025-05-08 4:09 UTC (permalink / raw)
To: akpm, david
Cc: 21cnbao, ryan.roberts, dev.jain, ziy, baolin.wang, linux-mm,
linux-kernel
When I tested the mincore() syscall, I observed that it takes longer with
64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
still checks each PTE individually, even when the PTEs are contiguous,
which is not efficient.

Thus we can use pte_batch_hint() to get the number of contiguous present
PTEs in a batch, which improves the performance. I tested the mincore()
syscall with 1G of anonymous memory populated with 64K mTHP, and observed
an obvious performance improvement:

        w/o patch       w/ patch        changes
        6022us          549us           +91%

Moreover, I also tested mincore() with mTHP/THP disabled, and did not
see any obvious regression for base pages.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
- Changed to use pte_batch_hint() to get the batch size, per Ryan.

Note: I observed that the min_t() can introduce a slight performance
regression for base pages, so I added a batch size check so that base
pages skip it, which resolves the regression.
---
mm/mincore.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/mm/mincore.c b/mm/mincore.c
index 832f29f46767..2e6a9123305e 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -21,6 +21,7 @@
 
 #include <linux/uaccess.h>
 #include "swap.h"
+#include "internal.h"
 
 static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
@@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	int step, i;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -118,16 +120,23 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
-	for (; addr != end; ptep++, addr += PAGE_SIZE) {
+	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
 		pte_t pte = ptep_get(ptep);
 
+		step = 1;
 		/* We need to do cache lookup too for pte markers */
 		if (pte_none_mostly(pte))
 			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
 						 vma, vec);
-		else if (pte_present(pte))
-			*vec = 1;
-		else { /* pte is a swap entry */
+		else if (pte_present(pte)) {
+			unsigned int batch = pte_batch_hint(ptep, pte);
+
+			if (batch > 1)
+				step = min_t(unsigned int, batch, nr);
+
+			for (i = 0; i < step; i++)
+				vec[i] = 1;
+		} else { /* pte is a swap entry */
 			swp_entry_t entry = pte_to_swp_entry(pte);
 
 			if (non_swap_entry(entry)) {
@@ -146,7 +155,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 #endif
 			}
 		}
-		vec++;
+		vec += step;
 	}
 	pte_unmap_unlock(ptep - 1, ptl);
 out:
--
2.43.5
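To make the batching idea concrete, here is a minimal userspace model of the
loop structure in the patch above. It is only a sketch: batch_hint() and the
present[] array are illustrative stand-ins for pte_batch_hint() and the PTE
state, not kernel APIs. The point is that a run of present entries is marked
in a single iteration instead of one entry per iteration, and that the step
is clamped to the entries remaining in the range.

/* Minimal userspace model of the batched scan; all names are illustrative. */
#include <stdio.h>

#define NPTES 16

/*
 * Stand-in for pte_batch_hint(): pretend the contiguous-PTE hint covers
 * 4 present entries at a time, and a single entry otherwise.
 */
static int batch_hint(const int *present, int idx)
{
	return present[idx] ? 4 : 1;
}

int main(void)
{
	int present[NPTES] = { 0 };
	unsigned char vec[NPTES] = { 0 };
	int i, step;

	for (i = 8; i < NPTES; i++)	/* second half "mapped" by a large folio */
		present[i] = 1;

	for (i = 0; i < NPTES; i += step) {
		step = 1;
		if (present[i]) {
			int batch = batch_hint(present, i);
			int remaining = NPTES - i;	/* clamp to what is left in the range */
			int j;

			if (batch > 1)
				step = batch < remaining ? batch : remaining;
			for (j = 0; j < step; j++)
				vec[i + j] = 1;	/* mark the whole batch at once */
		}
	}

	for (i = 0; i < NPTES; i++)
		printf("%d", vec[i]);	/* prints 0000000011111111 */
	printf("\n");
	return 0;
}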
* Re: [PATCH v2] mm: mincore: use pte_batch_hint() to batch process large folios
2025-05-08 4:09 [PATCH v2] mm: mincore: use pte_batch_hint() to batch process large folios Baolin Wang
@ 2025-05-08 7:42 ` Barry Song
2025-05-08 7:57 ` Baolin Wang
0 siblings, 1 reply; 3+ messages in thread
From: Barry Song @ 2025-05-08 7:42 UTC (permalink / raw)
To: Baolin Wang
Cc: akpm, david, ryan.roberts, dev.jain, ziy, linux-mm, linux-kernel
On Thu, May 8, 2025 at 4:09 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
> When I tested the mincore() syscall, I observed that it takes longer with
> 64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
> still checks each PTE individually, even when the PTEs are contiguous,
> which is not efficient.
>
> Thus we can use pte_batch_hint() to get the number of contiguous present
> PTEs in a batch, which improves the performance. I tested the mincore()
> syscall with 1G of anonymous memory populated with 64K mTHP, and observed
> an obvious performance improvement:
>
>         w/o patch       w/ patch        changes
>         6022us          549us           +91%
>
> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
> see any obvious regression for base pages.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Changes from v1:
> - Changed to use pte_batch_hint() to get the batch size, per Ryan.
>
> Note: I observed that the min_t() can introduce a slight performance
> regression for base pages, so I added a batch size check so that base
> pages skip it, which resolves the regression.
> ---
> mm/mincore.c | 19 ++++++++++++++-----
> 1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mincore.c b/mm/mincore.c
> index 832f29f46767..2e6a9123305e 100644
> --- a/mm/mincore.c
> +++ b/mm/mincore.c
> @@ -21,6 +21,7 @@
>
> #include <linux/uaccess.h>
> #include "swap.h"
> +#include "internal.h"
>
> static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
> unsigned long end, struct mm_walk *walk)
> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> pte_t *ptep;
> unsigned char *vec = walk->private;
> int nr = (end - addr) >> PAGE_SHIFT;
> + int step, i;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> if (ptl) {
> @@ -118,16 +120,23 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> walk->action = ACTION_AGAIN;
> return 0;
> }
> - for (; addr != end; ptep++, addr += PAGE_SIZE) {
> + for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
> pte_t pte = ptep_get(ptep);
>
> + step = 1;
> /* We need to do cache lookup too for pte markers */
> if (pte_none_mostly(pte))
> __mincore_unmapped_range(addr, addr + PAGE_SIZE,
> vma, vec);
> - else if (pte_present(pte))
> - *vec = 1;
> - else { /* pte is a swap entry */
> + else if (pte_present(pte)) {
> + unsigned int batch = pte_batch_hint(ptep, pte);
> +
> + if (batch > 1)
> + step = min_t(unsigned int, batch, nr);
Not quite sure about this: shouldn't it be (end - addr) / PAGE_SIZE here,
since nr always keeps its initial value? For example, if nr = 50 and we
have already scanned 48 PTEs, only 2 PTEs are left. No?
> +
> + for (i = 0; i < step; i++)
> + vec[i] = 1;
> + } else { /* pte is a swap entry */
> swp_entry_t entry = pte_to_swp_entry(pte);
>
> if (non_swap_entry(entry)) {
> @@ -146,7 +155,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> #endif
> }
> }
> - vec++;
> + vec += step;
> }
> pte_unmap_unlock(ptep - 1, ptl);
> out:
> --
> 2.43.5
>
Thanks
Barry
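To put numbers on the concern above, here is a throwaway userspace sketch,
not kernel code: suppose the range has 6 PTEs, the loop has already scanned
4 of them, and the hint for the current entry is 4. Clamping against the
initial count still gives a step of 4 and runs past the end of the range
(which also steps addr past end, so the addr != end condition no longer
stops the walk), while clamping against the remaining count gives 2.

/* Throwaway sketch of the clamping concern; not kernel code. */
#include <stdio.h>

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

int main(void)
{
	int nr = 6;		/* initial number of PTEs in the range */
	int scanned = 4;	/* PTEs handled by earlier iterations */
	int batch = 4;		/* hint returned for the current entry */
	int remaining = nr - scanned;

	int step_initial = min_int(batch, nr);		/* what the v2 patch computes */
	int step_remaining = min_int(batch, remaining);	/* clamp to what is actually left */

	printf("clamp to initial nr : step=%d, last index written=%d (range ends at %d)\n",
	       step_initial, scanned + step_initial - 1, nr - 1);
	printf("clamp to remaining  : step=%d, last index written=%d\n",
	       step_remaining, scanned + step_remaining - 1);
	return 0;
}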
* Re: [PATCH v2] mm: mincore: use pte_batch_hint() to batch process large folios
2025-05-08 7:42 ` Barry Song
@ 2025-05-08 7:57 ` Baolin Wang
0 siblings, 0 replies; 3+ messages in thread
From: Baolin Wang @ 2025-05-08 7:57 UTC (permalink / raw)
To: Barry Song
Cc: akpm, david, ryan.roberts, dev.jain, ziy, linux-mm, linux-kernel
On 2025/5/8 15:42, Barry Song wrote:
> On Thu, May 8, 2025 at 4:09 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>> When I tested the mincore() syscall, I observed that it takes longer with
>> 64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
>> still checks each PTE individually, even when the PTEs are contiguous,
>> which is not efficient.
>>
>> Thus we can use pte_batch_hint() to get the number of contiguous present
>> PTEs in a batch, which improves the performance. I tested the mincore()
>> syscall with 1G of anonymous memory populated with 64K mTHP, and observed
>> an obvious performance improvement:
>>
>>         w/o patch       w/ patch        changes
>>         6022us          549us           +91%
>>
>> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
>> see any obvious regression for base pages.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> Changes from v1:
>> - Changed to use pte_batch_hint() to get the batch size, per Ryan.
>>
>> Note: I observed that the min_t() can introduce a slight performance
>> regression for base pages, so I added a batch size check so that base
>> pages skip it, which resolves the regression.
>> ---
>> mm/mincore.c | 19 ++++++++++++++-----
>> 1 file changed, 14 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/mincore.c b/mm/mincore.c
>> index 832f29f46767..2e6a9123305e 100644
>> --- a/mm/mincore.c
>> +++ b/mm/mincore.c
>> @@ -21,6 +21,7 @@
>>
>> #include <linux/uaccess.h>
>> #include "swap.h"
>> +#include "internal.h"
>>
>> static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>> unsigned long end, struct mm_walk *walk)
>> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>> pte_t *ptep;
>> unsigned char *vec = walk->private;
>> int nr = (end - addr) >> PAGE_SHIFT;
>> + int step, i;
>>
>> ptl = pmd_trans_huge_lock(pmd, vma);
>> if (ptl) {
>> @@ -118,16 +120,23 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>> walk->action = ACTION_AGAIN;
>> return 0;
>> }
>> - for (; addr != end; ptep++, addr += PAGE_SIZE) {
>> + for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>> pte_t pte = ptep_get(ptep);
>>
>> + step = 1;
>> /* We need to do cache lookup too for pte markers */
>> if (pte_none_mostly(pte))
>> __mincore_unmapped_range(addr, addr + PAGE_SIZE,
>> vma, vec);
>> - else if (pte_present(pte))
>> - *vec = 1;
>> - else { /* pte is a swap entry */
>> + else if (pte_present(pte)) {
>> + unsigned int batch = pte_batch_hint(ptep, pte);
>> +
>> + if (batch > 1)
>> + step = min_t(unsigned int, batch, nr);
>
> Not quite sure about this: shouldn't it be (end - addr) / PAGE_SIZE here,
> since nr always keeps its initial value? For example, if nr = 50 and we
> have already scanned 48 PTEs, only 2 PTEs are left. No?
Ah, you are right. I missed this part when I revised the original
patch[1]. Thanks for pointing this out.
[1] https://lore.kernel.org/all/6a8418ba-dbd1-489f-929b-e31831bea0cf@linux.alibaba.com/