From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
	ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process large folios
Date: Sun, 30 Mar 2025 20:57:19 +0100
Message-ID: <f6f4f4ff-0074-4ba7-b2a5-02727661843c@linux.alibaba.com>
In-Reply-To: <54886038-3707-4ea0-bd84-00a8f4a19a6a@arm.com>



On 2025/3/27 22:08, Ryan Roberts wrote:
> On 25/03/2025 23:38, Baolin Wang wrote:
>> When I tested the mincore() syscall, I observed that it takes longer with
>> 64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
>> still checks each PTE individually, even when the PTEs are contiguous,
>> which is inefficient.
>>
>> Thus we can use folio_pte_batch() to determine how many contiguous PTEs
>> are present and handle them as a batch, which improves performance. I
>> tested the mincore() syscall with 1G of anonymous memory populated with
>> 64K mTHP, and observed an obvious performance improvement:
>>
>> w/o patch		w/ patch		changes
>> 6022us			1115us			+81%
>>
>> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
>> see any obvious regression.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/mincore.c | 27 ++++++++++++++++++++++-----
>>   1 file changed, 22 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/mincore.c b/mm/mincore.c
>> index 832f29f46767..88be180b5550 100644
>> --- a/mm/mincore.c
>> +++ b/mm/mincore.c
>> @@ -21,6 +21,7 @@
>>   
>>   #include <linux/uaccess.h>
>>   #include "swap.h"
>> +#include "internal.h"
>>   
>>   static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>>   			unsigned long end, struct mm_walk *walk)
>> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>   	pte_t *ptep;
>>   	unsigned char *vec = walk->private;
>>   	int nr = (end - addr) >> PAGE_SHIFT;
>> +	int step, i;
>>   
>>   	ptl = pmd_trans_huge_lock(pmd, vma);
>>   	if (ptl) {
>> @@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>   		walk->action = ACTION_AGAIN;
>>   		return 0;
>>   	}
>> -	for (; addr != end; ptep++, addr += PAGE_SIZE) {
>> +	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>>   		pte_t pte = ptep_get(ptep);
>>   
>> +		step = 1;
>>   		/* We need to do cache lookup too for pte markers */
>>   		if (pte_none_mostly(pte))
>>   			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
>>   						 vma, vec);
>> -		else if (pte_present(pte))
>> -			*vec = 1;
>> -		else { /* pte is a swap entry */
>> +		else if (pte_present(pte)) {
>> +			if (pte_batch_hint(ptep, pte) > 1) {
>> +				struct folio *folio = vm_normal_folio(vma, addr, pte);
>> +
>> +				if (folio && folio_test_large(folio)) {
>> +					const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>> +								FPB_IGNORE_SOFT_DIRTY;
>> +					int max_nr = (end - addr) / PAGE_SIZE;
>> +
>> +					step = folio_pte_batch(folio, addr, ptep, pte,
>> +							max_nr, fpb_flags, NULL, NULL, NULL);
>> +				}
>> +			}
> 
> You could simplify to the following, I think, to avoid needing to grab the folio
> and call folio_pte_batch():
> 
> 			else if (pte_present(pte)) {
> 				int max_nr = (end - addr) / PAGE_SIZE;
> 				step = min(pte_batch_hint(ptep, pte), max_nr);
> 			} ...
> 
> I expect the regression you are seeing here is all due to calling ptep_get() for
> every pte in the contpte batch, which will cause 16 memory reads per pte (to
> gather the access/dirty bits). For small folios it's just 1 read per pte.

Right.

> pte_batch_hint() will skip forward in blocks of 16, so you end up with the
> same number of reads per pte as in the small folio case. You don't need all
> the fancy extras that folio_pte_batch() gives you here.

Sounds reasonable. Your suggestion is simpler, but my method can batch
the whole large folio at once (for example, large folios containing more
than 16 contiguous PTEs), rather than 16 PTEs at a time. Anyway, let me
run some performance measurements with your suggestion. Thanks.


Thread overview: 23+ messages
2025-03-26  3:38 [PATCH 0/2] Fix mincore() tmpfs test failure Baolin Wang
2025-03-26  3:38 ` [PATCH 1/2] selftests: mincore: fix tmpfs mincore " Baolin Wang
2025-03-27 14:36   ` Zi Yan
2025-03-30 19:47     ` Baolin Wang
2025-04-01 12:54   ` David Hildenbrand
2025-04-07  3:49     ` Baolin Wang
2025-04-07  7:49       ` David Hildenbrand
2025-04-07  8:35         ` Baolin Wang
2025-03-26  3:38 ` [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process large folios Baolin Wang
2025-03-27 10:49   ` Oscar Salvador
2025-03-27 11:54     ` Baolin Wang
2025-03-27 14:08   ` Ryan Roberts
2025-03-28 13:10     ` Oscar Salvador
2025-03-30 19:57     ` Baolin Wang [this message]
2025-04-01 10:45       ` Ryan Roberts
2025-04-01 13:04         ` David Hildenbrand
2025-04-07  6:33           ` Baolin Wang
2025-04-14 13:46             ` Ryan Roberts
2025-05-07  5:12   ` Dev Jain
2025-05-07  9:48     ` Baolin Wang
2025-05-07  9:54       ` David Hildenbrand
2025-05-07 10:03         ` Baolin Wang
2025-05-07 11:14           ` Ryan Roberts
