From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Yu Zhao <yuzhao@google.com>,
"T . J . Alumbaugh" <talumbau@google.com>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH 2/3] mm/lru_gen: lru_gen_look_around simplification
Date: Tue, 13 Jun 2023 17:30:46 +0530
Message-ID: <20230613120047.149573-2-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230613120047.149573-1-aneesh.kumar@linux.ibm.com>
 
Storing the generation details in folio flags requires the
lru_gen_mm_walk structure, in which the nr_pages updates are batched.
A follow-up patch wants to avoid compiling the lru_gen mm-walk code on
architectures that don't support it. Split out the look-around
generation update that works by marking the folio active into a
separate helper, which will be used in that case.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
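Notes:

current_reclaim_state_can_swap() preserves the old open-coded check
(!walk || walk->can_swap): when no mm_walk is available, look-around
keeps considering swap-backed folios.

For illustration only, a sketch of how the split-out helper is expected
to be used once the mm-walk code becomes conditional; the config symbol
below is a placeholder for whatever gate the follow-up introduces, not
something defined by this patch:

#ifndef CONFIG_LRU_GEN_MM_WALK	/* hypothetical config gate */
static void look_around_gen_update(struct folio *folio, int new_gen)
{
	/* No lru_gen_mm_walk to batch into; update the folio directly. */
	__look_around_gen_update(folio, new_gen);
}
#endif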
mm/vmscan.c | 57 +++++++++++++++++++++++++++++++++++------------------
1 file changed, 38 insertions(+), 19 deletions(-)
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index edfe073b475e..f277beba556c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4619,6 +4619,39 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
* rmap/PT walk feedback
******************************************************************************/
 
+static void __look_around_gen_update(struct folio *folio, int new_gen)
+{
+ int old_gen;
+
+ old_gen = folio_lru_gen(folio);
+ if (old_gen < 0)
+ folio_set_referenced(folio);
+ else if (old_gen != new_gen)
+ folio_activate(folio);
+}
+
+static inline bool current_reclaim_state_can_swap(void)
+{
+	if (current->reclaim_state && current->reclaim_state->mm_walk)
+		return current->reclaim_state->mm_walk->can_swap;
+	return true;
+}
+
+static void look_around_gen_update(struct folio *folio, int new_gen)
+{
+ int old_gen;
+ struct lru_gen_mm_walk *walk;
+
+ walk = current->reclaim_state ? current->reclaim_state->mm_walk : NULL;
+ if (walk) {
+ old_gen = folio_update_gen(folio, new_gen);
+ if (old_gen >= 0 && old_gen != new_gen)
+ update_batch_size(walk, folio, old_gen, new_gen);
+ return;
+ }
+	__look_around_gen_update(folio, new_gen);
+}
+
/*
* This function exploits spatial locality when shrink_folio_list() walks the
* rmap. It scans the adjacent PTEs of a young PTE and promotes hot pages. If
@@ -4631,7 +4664,6 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
int i;
unsigned long start;
unsigned long end;
- struct lru_gen_mm_walk *walk;
int young = 0;
pte_t *pte = pvmw->pte;
unsigned long addr = pvmw->address;
@@ -4640,7 +4672,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
struct pglist_data *pgdat = folio_pgdat(folio);
struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
DEFINE_MAX_SEQ(lruvec);
- int old_gen, new_gen = lru_gen_from_seq(max_seq);
+ int new_gen = lru_gen_from_seq(max_seq);
 
lockdep_assert_held(pvmw->ptl);
VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
@@ -4648,9 +4680,6 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
if (spin_is_contended(pvmw->ptl))
return;
 
- /* avoid taking the LRU lock under the PTL when possible */
- walk = current->reclaim_state ? current->reclaim_state->mm_walk : NULL;
-
start = max(addr & PMD_MASK, pvmw->vma->vm_start);
end = min(addr | ~PMD_MASK, pvmw->vma->vm_end - 1) + 1;
 
@@ -4683,7 +4712,9 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
if (!pte_young(pte[i]))
continue;
 
- folio = get_pfn_folio(pfn, memcg, pgdat, !walk || walk->can_swap);
+ folio = get_pfn_folio(pfn, memcg, pgdat,
+ current_reclaim_state_can_swap());
+
if (!folio)
continue;
 
@@ -4697,19 +4728,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
!folio_test_swapcache(folio)))
folio_mark_dirty(folio);
 
- if (walk) {
- old_gen = folio_update_gen(folio, new_gen);
- if (old_gen >= 0 && old_gen != new_gen)
- update_batch_size(walk, folio, old_gen, new_gen);
-
- continue;
- }
-
- old_gen = folio_lru_gen(folio);
- if (old_gen < 0)
- folio_set_referenced(folio);
- else if (old_gen != new_gen)
- folio_activate(folio);
+ look_around_gen_update(folio, new_gen);
}
 
arch_leave_lazy_mmu_mode();
--
2.40.1