From: Andrew Morton <akpm@linux-foundation.org>
To: linux-mm@kvack.org
Subject: [PATCH] mm/vmscan.c:shrink_folio_list(): save a tabstop
Date: Mon, 15 Dec 2025 11:12:41 -0800
Message-ID: <20251215111241.c346395171e299a21064efc7@linux-foundation.org>
An effort to make shrink_folio_list() less painful.
From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm/vmscan.c:shrink_folio_list(): save a tabstop
Date: Mon Dec 15 11:05:56 AM PST 2025
We have some needlessly deep indentation in this huge function due to

	if (expr1) {
		if (expr2) {
			...
		}
	}

Convert this to

	if (expr1 && expr2) {
		...
	}
Also, reflow that big block comment to fit in 80 cols.
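
For the avoidance of doubt: the conversion is behavior-preserving here only
because the outer "if" body consists solely of the inner "if" (no "else", no
other statements), and because && short-circuits, so expr2 is still evaluated
only after expr1 succeeds.  A standalone userspace sketch, not part of this
patch (check_a/check_b are made-up stand-ins for expr1/expr2), showing that
both forms produce the same evaluation trace:

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-ins for expr1/expr2; the puts() calls make evaluation
	 * order observable.
	 */
	static bool check_a(void) { puts("eval expr1"); return true; }
	static bool check_b(void) { puts("eval expr2"); return false; }

	int main(void)
	{
		/* Nested form: expr2 is evaluated only when expr1 is true */
		if (check_a()) {
			if (check_b())
				puts("body");
		}

		/* Flattened form: && short-circuits, so the trace
		 * ("eval expr1", then "eval expr2") is identical
		 */
		if (check_a() && check_b())
			puts("body");

		return 0;
	}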
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/vmscan.c | 96 +++++++++++++++++++++++++-------------------------
1 file changed, 48 insertions(+), 48 deletions(-)
--- a/mm/vmscan.c~mm-vmscanc-shrink_folio_list-save-a-tabstop
+++ a/mm/vmscan.c
@@ -1276,58 +1276,58 @@ retry:
 		 * Try to allocate it some swap space here.
 		 * Lazyfree folio could be freed directly
 		 */
-		if (folio_test_anon(folio) && folio_test_swapbacked(folio)) {
-			if (!folio_test_swapcache(folio)) {
-				if (!(sc->gfp_mask & __GFP_IO))
-					goto keep_locked;
-				if (folio_maybe_dma_pinned(folio))
-					goto keep_locked;
-				if (folio_test_large(folio)) {
-					/* cannot split folio, skip it */
-					if (folio_expected_ref_count(folio) !=
-					    folio_ref_count(folio) - 1)
-						goto activate_locked;
-					/*
-					 * Split partially mapped folios right away.
-					 * We can free the unmapped pages without IO.
-					 */
-					if (data_race(!list_empty(&folio->_deferred_list) &&
-					    folio_test_partially_mapped(folio)) &&
-					    split_folio_to_list(folio, folio_list))
-						goto activate_locked;
-				}
-				if (folio_alloc_swap(folio)) {
-					int __maybe_unused order = folio_order(folio);
+		if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+		    !folio_test_swapcache(folio)) {
+			if (!(sc->gfp_mask & __GFP_IO))
+				goto keep_locked;
+			if (folio_maybe_dma_pinned(folio))
+				goto keep_locked;
+			if (folio_test_large(folio)) {
+				/* cannot split folio, skip it */
+				if (folio_expected_ref_count(folio) !=
+				    folio_ref_count(folio) - 1)
+					goto activate_locked;
+				/*
+				 * Split partially mapped folios right away.
+				 * We can free the unmapped pages without IO.
+				 */
+				if (data_race(!list_empty(&folio->_deferred_list) &&
+				    folio_test_partially_mapped(folio)) &&
+				    split_folio_to_list(folio, folio_list))
+					goto activate_locked;
+			}
+			if (folio_alloc_swap(folio)) {
+				int __maybe_unused order = folio_order(folio);
 
-					if (!folio_test_large(folio))
-						goto activate_locked_split;
-					/* Fallback to swap normal pages */
-					if (split_folio_to_list(folio, folio_list))
-						goto activate_locked;
+				if (!folio_test_large(folio))
+					goto activate_locked_split;
+				/* Fallback to swap normal pages */
+				if (split_folio_to_list(folio, folio_list))
+					goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-					if (nr_pages >= HPAGE_PMD_NR) {
-						count_memcg_folio_events(folio,
-							THP_SWPOUT_FALLBACK, 1);
-						count_vm_event(THP_SWPOUT_FALLBACK);
-					}
-#endif
-					count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK);
-					if (folio_alloc_swap(folio))
-						goto activate_locked_split;
+				if (nr_pages >= HPAGE_PMD_NR) {
+					count_memcg_folio_events(folio,
+						THP_SWPOUT_FALLBACK, 1);
+					count_vm_event(THP_SWPOUT_FALLBACK);
 				}
-				/*
-				 * Normally the folio will be dirtied in unmap because its
-				 * pte should be dirty. A special case is MADV_FREE page. The
-				 * page's pte could have dirty bit cleared but the folio's
-				 * SwapBacked flag is still set because clearing the dirty bit
-				 * and SwapBacked flag has no lock protected. For such folio,
-				 * unmap will not set dirty bit for it, so folio reclaim will
-				 * not write the folio out. This can cause data corruption when
-				 * the folio is swapped in later. Always setting the dirty flag
-				 * for the folio solves the problem.
-				 */
-				folio_mark_dirty(folio);
+#endif
+				count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK);
+				if (folio_alloc_swap(folio))
+					goto activate_locked_split;
 			}
+			/*
+			 * Normally the folio will be dirtied in unmap because
+			 * its pte should be dirty. A special case is MADV_FREE
+			 * page. The page's pte could have dirty bit cleared but
+			 * the folio's SwapBacked flag is still set because
+			 * clearing the dirty bit and SwapBacked flag has no
+			 * lock protected. For such folio, unmap will not set
+			 * dirty bit for it, so folio reclaim will not write the
+			 * folio out. This can cause data corruption when the
+			 * folio is swapped in later. Always setting the dirty
+			 * flag for the folio solves the problem.
+			 */
+			folio_mark_dirty(folio);
 		}
 
 		/*
_