linux-mm.kvack.org archive mirror
* [PATCH v3] filemap: optimize folio refcount update in filemap_map_pages
@ 2025-09-04 13:27 Jinjiang Tu
  2025-09-04 18:46 ` David Hildenbrand
  0 siblings, 1 reply; 2+ messages in thread
From: Jinjiang Tu @ 2025-09-04 13:27 UTC (permalink / raw)
  To: willy, akpm, david, linux-mm; +Cc: wangkefeng.wang, tujinjiang

There are two meaningless folio refcount updates for an order-0 folio in
filemap_map_pages(). First, filemap_map_order0_folio() increases the folio
refcount after the folio is mapped to the PTE. Then, filemap_map_pages()
drops the refcount grabbed by next_uptodate_folio(). We can leave the
refcount unchanged in this case.

As Matthew mentioned in [1], it is safe to call folio_unlock() before
calling folio_put() here, because the folio is in the page cache with a
refcount held, and truncation will wait for the unlock.

Optimize filemap_map_folio_range() with the same method too.

With this patch, we get an 8% performance gain for the lmbench testcase
'lat_pagefault -P 1 file' in the order-0 folio case, with a 512M file.

[1]: https://lore.kernel.org/all/aKcU-fzxeW3xT5Wv@casper.infradead.org/

Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---

v3:
 * use folio_ref_dec() and optimize the large folio case too, as
suggested by David Hildenbrand.

v2:
 * Don't move folio_unlock(), suggested by Matthew.

 mm/filemap.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..0c067c811dfa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3639,6 +3639,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			unsigned long addr, unsigned int nr_pages,
 			unsigned long *rss, unsigned short *mmap_miss)
 {
+	unsigned int ref_from_caller = 1;
 	vm_fault_t ret = 0;
 	struct page *page = folio_page(folio, start);
 	unsigned int count = 0;
@@ -3672,7 +3673,8 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		if (count) {
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
-			folio_ref_add(folio, count);
+			folio_ref_add(folio, count - ref_from_caller);
+			ref_from_caller = 0;
 			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
@@ -3687,12 +3689,16 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		*rss += count;
-		folio_ref_add(folio, count);
+		folio_ref_add(folio, count - ref_from_caller);
+		ref_from_caller = 0;
 		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 
 	vmf->pte = old_ptep;
+	if (ref_from_caller)
+		/* Locked folios cannot get truncated. */
+		folio_ref_dec(folio);
 
 	return ret;
 }
@@ -3705,7 +3711,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	struct page *page = &folio->page;
 
 	if (PageHWPoison(page))
-		return ret;
+		goto out;
 
 	/* See comment of filemap_map_folio_range() */
 	if (!folio_test_workingset(folio))
@@ -3717,15 +3723,18 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	 * the fault-around logic.
 	 */
 	if (!pte_none(ptep_get(vmf->pte)))
-		return ret;
+		goto out;
 
 	if (vmf->address == addr)
 		ret = VM_FAULT_NOPAGE;
 
 	set_pte_range(vmf, folio, page, 1, addr);
 	(*rss)++;
-	folio_ref_inc(folio);
+	return ret;
 
+out:
+	/* Locked folios cannot get truncated. */
+	folio_ref_dec(folio);
 	return ret;
 }
 
@@ -3785,7 +3794,6 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 					nr_pages, &rss, &mmap_miss);
 
 		folio_unlock(folio);
-		folio_put(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
-- 
2.43.0




* Re: [PATCH v3] filemap: optimize folio refcount update in filemap_map_pages
  2025-09-04 13:27 [PATCH v3] filemap: optimize folio refcount update in filemap_map_pages Jinjiang Tu
@ 2025-09-04 18:46 ` David Hildenbrand
  0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand @ 2025-09-04 18:46 UTC (permalink / raw)
  To: Jinjiang Tu, willy, akpm, linux-mm; +Cc: wangkefeng.wang

On 04.09.25 15:27, Jinjiang Tu wrote:
> There are two meaningless folio refcount updates for an order-0 folio in
> filemap_map_pages(). First, filemap_map_order0_folio() increases the folio
> refcount after the folio is mapped to the PTE. Then, filemap_map_pages()
> drops the refcount grabbed by next_uptodate_folio(). We can leave the
> refcount unchanged in this case.
> 
> As Matthew mentioned in [1], it is safe to call folio_unlock() before
> calling folio_put() here, because the folio is in the page cache with a
> refcount held, and truncation will wait for the unlock.
> 
> Optimize filemap_map_folio_range() with the same method too.
> 
> With this patch, we get an 8% performance gain for the lmbench testcase
> 'lat_pagefault -P 1 file' in the order-0 folio case, with a 512M file.
> 
> [1]: https://lore.kernel.org/all/aKcU-fzxeW3xT5Wv@casper.infradead.org/
> 
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> ---

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



