linux-mm.kvack.org archive mirror
From: Chenghao Duan <duanchenghao@kylinos.cn>
To: pasha.tatashin@soleen.com, rppt@kernel.org, pratyush@kernel.org,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Cc: duanchenghao@kylinos.cn, jianghaoran@kylinos.cn
Subject: [PATCH v1 1/3] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
Date: Thu, 19 Mar 2026 09:28:43 +0800	[thread overview]
Message-ID: <20260319012845.29570-2-duanchenghao@kylinos.cn> (raw)
In-Reply-To: <20260319012845.29570-1-duanchenghao@kylinos.cn>

Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
to improve performance when restoring large memfds.

Currently, shmem_recalc_inode() is called once per folio during restore,
resulting in O(n) calls for n folios. This patch counts the successfully
added folios and calls shmem_recalc_inode() once after the loop
completes, reducing the accounting overhead to a single call.

Additionally, fix the error path to also call shmem_recalc_inode() for
the folios that were successfully added before the error occurred.

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index b8edb9f981d7..5ddd3657d8be 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -397,6 +397,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct folio *folio;
 	int err = -EIO;
 	long i;
+	u64 nr_added = 0;
 
 	for (i = 0; i < nr_folios; i++) {
 		const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
@@ -448,12 +449,15 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, 1, 0);
+		nr_added++;
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 	}
 
+	if (nr_added)
+		shmem_recalc_inode(inode, nr_added, 0);
+
 	return 0;
 
 unlock_folio:
@@ -472,6 +476,9 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			folio_put(folio);
 	}
 
+	if (nr_added)
+		shmem_recalc_inode(inode, nr_added, 0);
+
 	return err;
 }
 
-- 
2.25.1



Thread overview: 13+ messages
2026-03-19  1:28 [PATCH v1 0/3] Modify memfd_luo code Chenghao Duan
2026-03-19  1:28 ` Chenghao Duan [this message]
2026-03-19 15:28   ` [PATCH v1 1/3] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Pasha Tatashin
2026-03-20  9:53     ` Pratyush Yadav
2026-03-20 10:02   ` Pratyush Yadav
2026-03-19  1:28 ` [PATCH v1 2/3] mm/memfd_luo: remove unnecessary memset in zero-size memfd path Chenghao Duan
2026-03-19 16:20   ` Pasha Tatashin
2026-03-20 10:04   ` Pratyush Yadav
2026-03-20 11:37   ` Mike Rapoport
2026-03-19  1:28 ` [PATCH v1 3/3] mm/memfd_luo: use i_size_write() to set inode size during retrieve Chenghao Duan
2026-03-19 16:24   ` Pasha Tatashin
2026-03-20  9:51   ` Pratyush Yadav
2026-03-20 11:35     ` Mike Rapoport
