From: Chenghao Duan <duanchenghao@kylinos.cn>
To: pasha.tatashin@soleen.com, rppt@kernel.org, pratyush@kernel.org,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Cc: jianghaoran@kylinos.cn, Chenghao Duan <duanchenghao@kylinos.cn>
Subject: [PATCH v2 2/4] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
Date: Mon, 23 Mar 2026 19:07:45 +0800
Message-ID: <20260323110747.193569-3-duanchenghao@kylinos.cn>
In-Reply-To: <20260323110747.193569-1-duanchenghao@kylinos.cn>

Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
to improve performance when restoring large memfds.

Currently, shmem_recalc_inode() is called once per folio during restore,
which amounts to O(n) calls to a relatively expensive accounting
function for n folios. This patch instead accumulates the page count of
the successfully added folios and calls shmem_recalc_inode() once after
the loop completes, reducing the accounting cost to O(1).
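
In outline, the change amounts to the following (a minimal sketch of
the retrieve loop, not the exact upstream code; the folio lookup,
locking, and error handling are elided, and nr_folios stands in for the
actual loop bound, which is not shown in the hunk below):

	/* Before: one accounting call per folio, O(n) in total. */
	for (i = 0; i < nr_folios; i++) {
		/* ... look up, lock, and add the folio ... */
		shmem_recalc_inode(inode, npages, 0);
	}

	/* After: accumulate the added pages, then update once, O(1). */
	long nr_added_pages = 0;

	for (i = 0; i < nr_folios; i++) {
		/* ... look up, lock, and add the folio ... */
		nr_added_pages += npages;
	}
	shmem_recalc_inode(inode, nr_added_pages, 0);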

Additionally, fix the error path to also call shmem_recalc_inode(), so
that the pages added successfully before the error occurred are still
accounted for.
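
A minimal sketch of how the error path ends after this change
(simplified; the cleanup that drops the not-yet-added folios is
elided):

 unlock_folio:
	folio_unlock(folio);
	folio_put(folio);
	/* ... drop the remaining preserved folios ... */

	/* Account for the pages that were added before the failure. */
	shmem_recalc_inode(inode, nr_added_pages, 0);
	return err;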

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 953440994ad2..2a01eaff03c2 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -395,7 +395,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
-	long npages;
+	long npages, nr_added_pages = 0;
 	int err = -EIO;
 	long i;
 
@@ -450,12 +450,14 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, npages, 0);
+		nr_added_pages += npages;
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return 0;
 
 unlock_folio:
@@ -474,6 +476,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return err;
 }
 
-- 
2.25.1



