From: Jinjiang Tu <tujinjiang@huawei.com>
To: <akpm@linux-foundation.org>, <david@redhat.com>, <yangge1116@126.com>
Cc: <linux-mm@kvack.org>, <wangkefeng.wang@huawei.com>,
	<tujinjiang@huawei.com>
Subject: [PATCH] mm/swap: set active flag after adding the folio to activate fbatch
Date: Fri, 11 Apr 2025 16:28:57 +0800
Message-ID: <20250411082857.2426539-1-tujinjiang@huawei.com>

We noticed a 12.3% performance regression in the LibMicro pwrite testcase
due to commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before
adding to LRU batch").

The testcase is executed as follows, where $TFILE is a file on tmpfs:
  pwrite -E -C 200 -L -S -W -N "pwrite_t1k" -s 1k -I 500 -f $TFILE

The testcase writes 1KB (a single page) to the tmpfs file and repeats
this step many times. The flame graph shows that the regression comes
from folio_mark_accessed() and workingset_activation().

folio_mark_accessed() is called on the same page many times. Before the
commit, each call added the page to the activate fbatch. Once the fbatch
was full, it was drained and the page was promoted to the active list;
later folio_mark_accessed() calls then did nothing.

After the commit, the lru flag is cleared when the folio is added to the
activate fbatch. From then on, folio_mark_accessed() never calls
folio_activate() again because the folio has no lru flag, so the fbatch
never fills up and the folio is never marked active. Every later
folio_mark_accessed() call therefore ends up calling
workingset_activation(), leading to the performance regression.
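
For reference, the branch that keeps being taken looks roughly like this
(simplified and annotated excerpt of folio_mark_accessed() in mm/swap.c,
not the exact upstream code):

  } else if (!folio_test_active(folio)) {
  	/*
  	 * PG_lru was cleared when the folio went into the activate
  	 * fbatch, so folio_activate() is skipped here and
  	 * __lru_cache_activate_folio() only searches the lru_add
  	 * fbatch. PG_active is therefore never set, and this branch,
  	 * including workingset_activation(), keeps running on later
  	 * calls.
  	 */
  	if (folio_test_lru(folio))
  		folio_activate(folio);
  	else
  		__lru_cache_activate_folio(folio);
  	folio_clear_referenced(folio);
  	workingset_activation(folio);
  }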

Besides, the repeated workingset_age_nonresident() calls made before the
folio is drained from the activate fbatch inflate lruvec->nonresident_age
to an unreasonable value.
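
Each spurious workingset_activation() call ages the non-resident pages of
the whole lruvec hierarchy, roughly as follows (simplified sketch based on
mm/workingset.c):

  void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
  {
  	/* Every call adds nr_pages all the way up the hierarchy. */
  	do {
  		atomic_long_add(nr_pages, &lruvec->nonresident_age);
  	} while ((lruvec = parent_lruvec(lruvec)));
  }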

To fix it, set the active flag right after the lru flag is cleared when
adding the folio to the activate fbatch.

Fixes: 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding to LRU batch")
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 mm/swap.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 77b2d5997873..f0de837988b4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -175,6 +175,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	folios_put(fbatch);
 }
 
+static void lru_activate(struct lruvec *lruvec, struct folio *folio);
+
 static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 		struct folio *folio, move_fn_t move_fn,
 		bool on_lru, bool disable_irq)
@@ -184,6 +186,14 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 	if (on_lru && !folio_test_clear_lru(folio))
 		return;
 
+	if (move_fn == lru_activate) {
+		if (folio_test_unevictable(folio)) {
+			folio_set_lru(folio);
+			return;
+		}
+		folio_set_active(folio);
+	}
+
 	folio_get(folio);
 
 	if (disable_irq)
@@ -299,12 +309,15 @@ static void lru_activate(struct lruvec *lruvec, struct folio *folio)
 {
 	long nr_pages = folio_nr_pages(folio);
 
-	if (folio_test_active(folio) || folio_test_unevictable(folio))
-		return;
-
+	/*
+	 * The unevictable flag was already checked and the active flag
+	 * was set right after the lru flag was cleared; neither flag
+	 * can change until the lru flag is set again.
+	 */
+	VM_WARN_ON_ONCE(!folio_test_active(folio));
+	VM_WARN_ON_ONCE(folio_test_unevictable(folio));
 
 	lruvec_del_folio(lruvec, folio);
-	folio_set_active(folio);
 	lruvec_add_folio(lruvec, folio);
 	trace_mm_lru_activate(folio);
 
@@ -341,6 +354,11 @@ void folio_activate(struct folio *folio)
 	if (!folio_test_clear_lru(folio))
 		return;
 
+	if (folio_test_unevictable(folio) || folio_test_active(folio)) {
+		folio_set_lru(folio);
+		return;
+	}
+	folio_set_active(folio);
 	lruvec = folio_lruvec_lock_irq(folio);
 	lru_activate(lruvec, folio);
 	unlock_page_lruvec_irq(lruvec);
-- 
2.43.0
