From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 15/26] vmscan: Remove remaining uses of page in shrink_page_list
Date: Wed, 4 May 2022 19:28:46 +0100
Message-ID: <20220504182857.4013401-16-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
These are all straightforward conversions to the folio API.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/vmscan.c | 115 ++++++++++++++++++++++++++--------------------------
1 file changed, 57 insertions(+), 58 deletions(-)
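
The conversion pattern is mechanical: each page-based call is replaced by
the equivalent folio-based call, and per-page accounting switches from
compound_nr() to folio_nr_pages().  As a reading aid (not part of the
patch), a minimal before/after sketch built only from calls that appear
in the diff below:

	/* Before: fetch the struct page and operate on it. */
	struct page *page = &folio->page;

	if (!trylock_page(page))
		goto keep;
	VM_BUG_ON_PAGE(PageActive(page), page);
	nr_pages = compound_nr(page);

	/* After: operate on the folio directly. */
	if (!folio_trylock(folio))
		goto keep;
	VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
	nr_pages = folio_nr_pages(folio);
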
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 85c9758f6f32..cc9b93c7fa0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1524,7 +1524,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
retry:
while (!list_empty(page_list)) {
struct address_space *mapping;
- struct page *page;
struct folio *folio;
enum page_references references = PAGEREF_RECLAIM;
bool dirty, writeback, may_enter_fs;
@@ -1534,31 +1533,31 @@ static unsigned int shrink_page_list(struct list_head *page_list,
folio = lru_to_folio(page_list);
list_del(&folio->lru);
- page = &folio->page;
- if (!trylock_page(page))
+ if (!folio_trylock(folio))
goto keep;
- VM_BUG_ON_PAGE(PageActive(page), page);
+ VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
- nr_pages = compound_nr(page);
+ nr_pages = folio_nr_pages(folio);
- /* Account the number of base pages even though THP */
+ /* Account the number of base pages */
sc->nr_scanned += nr_pages;
- if (unlikely(!page_evictable(page)))
+ if (unlikely(!folio_evictable(folio)))
goto activate_locked;
if (!sc->may_unmap && folio_mapped(folio))
goto keep_locked;
may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
- (PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
+ (folio_test_swapcache(folio) &&
+ (sc->gfp_mask & __GFP_IO));
/*
* The number of dirty pages determines if a node is marked
* reclaim_congested. kswapd will stall and start writing
- * pages if the tail of the LRU is all dirty unqueued pages.
+ * folios if the tail of the LRU is all dirty unqueued folios.
*/
folio_check_dirty_writeback(folio, &dirty, &writeback);
if (dirty || writeback)
@@ -1568,21 +1567,21 @@ static unsigned int shrink_page_list(struct list_head *page_list,
stat->nr_unqueued_dirty += nr_pages;
/*
- * Treat this page as congested if
- * pages are cycling through the LRU so quickly that the
- * pages marked for immediate reclaim are making it to the
- * end of the LRU a second time.
+ * Treat this folio as congested if folios are cycling
+ * through the LRU so quickly that the folios marked
+ * for immediate reclaim are making it to the end of
+ * the LRU a second time.
*/
- if (writeback && PageReclaim(page))
+ if (writeback && folio_test_reclaim(folio))
stat->nr_congested += nr_pages;
/*
* If a folio at the tail of the LRU is under writeback, there
* are three cases to consider.
*
- * 1) If reclaim is encountering an excessive number of folios
- * under writeback and this folio is both under
- * writeback and has the reclaim flag set then it
+ * 1) If reclaim is encountering an excessive number
+ * of folios under writeback and this folio has both
+ * the writeback and reclaim flags set, then it
* indicates that folios are being queued for I/O but
* are being recycled through the LRU before the I/O
* can complete. Waiting on the folio itself risks an
@@ -1633,16 +1632,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
!folio_test_reclaim(folio) || !may_enter_fs) {
/*
* This is slightly racy -
- * folio_end_writeback() might have just
- * cleared the reclaim flag, then setting
- * reclaim here ends up interpreted as
- * the readahead flag - but that does
- * not matter enough to care. What we
- * do want is for this folio to have
- * the reclaim flag set next time memcg
- * reclaim reaches the tests above, so
- * it will then folio_wait_writeback()
- * to avoid OOM; and it's also appropriate
+ * folio_end_writeback() might have
+ * just cleared the reclaim flag, then
+ * setting the reclaim flag here ends up
+ * interpreted as the readahead flag - but
+ * that does not matter enough to care.
+ * What we do want is for this folio to
+ * have the reclaim flag set next time
+ * memcg reclaim reaches the tests above,
+ * so it will then wait for writeback to
+ * avoid OOM; and it's also appropriate
* in global reclaim.
*/
folio_set_reclaim(folio);
@@ -1670,37 +1669,37 @@ static unsigned int shrink_page_list(struct list_head *page_list,
goto keep_locked;
case PAGEREF_RECLAIM:
case PAGEREF_RECLAIM_CLEAN:
- ; /* try to reclaim the page below */
+ ; /* try to reclaim the folio below */
}
/*
- * Before reclaiming the page, try to relocate
+ * Before reclaiming the folio, try to relocate
* its contents to another node.
*/
if (do_demote_pass &&
- (thp_migration_supported() || !PageTransHuge(page))) {
- list_add(&page->lru, &demote_pages);
- unlock_page(page);
+ (thp_migration_supported() || !folio_test_large(folio))) {
+ list_add(&folio->lru, &demote_pages);
+ folio_unlock(folio);
continue;
}
/*
* Anonymous process memory has backing store?
* Try to allocate it some swap space here.
- * Lazyfree page could be freed directly
+ * Lazyfree folio could be freed directly
*/
- if (PageAnon(page) && PageSwapBacked(page)) {
- if (!PageSwapCache(page)) {
+ if (folio_test_anon(folio) && folio_test_swapbacked(folio)) {
+ if (!folio_test_swapcache(folio)) {
if (!(sc->gfp_mask & __GFP_IO))
goto keep_locked;
if (folio_maybe_dma_pinned(folio))
goto keep_locked;
- if (PageTransHuge(page)) {
- /* cannot split THP, skip it */
+ if (folio_test_large(folio)) {
+ /* cannot split folio, skip it */
if (!can_split_folio(folio, NULL))
goto activate_locked;
/*
- * Split pages without a PMD map right
+ * Split folios without a PMD map right
* away. Chances are some or all of the
* tail pages can be freed without IO.
*/
@@ -1725,20 +1724,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
may_enter_fs = true;
}
- } else if (PageSwapBacked(page) && PageTransHuge(page)) {
- /* Split shmem THP */
+ } else if (folio_test_swapbacked(folio) &&
+ folio_test_large(folio)) {
+ /* Split shmem folio */
if (split_folio_to_list(folio, page_list))
goto keep_locked;
}
/*
- * THP may get split above, need minus tail pages and update
- * nr_pages to avoid accounting tail pages twice.
- *
- * The tail pages that are added into swap cache successfully
- * reach here.
+ * If the folio was split above, the tail pages will make
+ * their own pass through this function and be accounted
+ * then.
*/
- if ((nr_pages > 1) && !PageTransHuge(page)) {
+ if ((nr_pages > 1) && !folio_test_large(folio)) {
sc->nr_scanned -= (nr_pages - 1);
nr_pages = 1;
}
@@ -1898,11 +1896,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
sc->target_mem_cgroup))
goto keep_locked;
- unlock_page(page);
+ folio_unlock(folio);
free_it:
/*
- * THP may get swapped out in a whole, need account
- * all base pages.
+ * Folio may get swapped out as a whole, need to account
+ * all pages in it.
*/
nr_reclaimed += nr_pages;
@@ -1910,10 +1908,10 @@ static unsigned int shrink_page_list(struct list_head *page_list,
* Is there need to periodically free_page_list? It would
* appear not as the counts should be low
*/
- if (unlikely(PageTransHuge(page)))
- destroy_compound_page(page);
+ if (unlikely(folio_test_large(folio)))
+ destroy_compound_page(&folio->page);
else
- list_add(&page->lru, &free_pages);
+ list_add(&folio->lru, &free_pages);
continue;
activate_locked_split:
@@ -1939,18 +1937,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
count_memcg_folio_events(folio, PGACTIVATE, nr_pages);
}
keep_locked:
- unlock_page(page);
+ folio_unlock(folio);
keep:
- list_add(&page->lru, &ret_pages);
- VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
+ list_add(&folio->lru, &ret_pages);
+ VM_BUG_ON_FOLIO(folio_test_lru(folio) ||
+ folio_test_unevictable(folio), folio);
}
/* 'page_list' is always empty here */
- /* Migrate pages selected for demotion */
+ /* Migrate folios selected for demotion */
nr_reclaimed += demote_page_list(&demote_pages, pgdat);
- /* Pages that could not be demoted are still in @demote_pages */
+ /* Folios that could not be demoted are still in @demote_pages */
if (!list_empty(&demote_pages)) {
- /* Pages which failed to demoted go back on @page_list for retry: */
+ /* Folios which weren't demoted go back on @page_list for retry: */
list_splice_init(&demote_pages, page_list);
do_demote_pass = false;
goto retry;
--
2.34.1
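
One substitution in this patch that is not a pure rename is
PageTransHuge(page) becoming folio_test_large(folio).  A folio can never
be a tail page, so the tail-page assertion that PageTransHuge() carries
is unnecessary; asking whether the folio spans more than one page is
enough.  A simplified sketch of the distinction (the my_ definitions are
illustrative, not the kernel's exact code in page-flags.h):

	/* Page world: must assert we were not handed a tail page. */
	static inline bool my_page_trans_huge(struct page *page)
	{
		VM_BUG_ON_PAGE(PageTail(page), page);
		return PageHead(page);	/* head of a compound page? */
	}

	/* Folio world: a folio is a head (or order-0) page by construction. */
	static inline bool my_folio_test_large(struct folio *folio)
	{
		return folio_test_head(folio);
	}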
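
The rewritten comment about split folios ("the tail pages will make their
own pass through this function") is also the key to the accounting hunk
just below it.  A worked example with illustrative numbers, restating the
hunk from the patch:

	/*
	 * Suppose nr_pages was sampled as 16 for a large folio, so
	 * sc->nr_scanned was already bumped by 16.  If the folio was
	 * split above, it is now a single page and its 15 former tail
	 * pages sit on page_list as separate entries, each of which
	 * will be scanned and accounted on its own iteration.  Back
	 * out the 15 tail pages charged up front:
	 */
	if ((nr_pages > 1) && !folio_test_large(folio)) {
		sc->nr_scanned -= (nr_pages - 1);
		nr_pages = 1;
	}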