From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org,
ying.huang@intel.com, vernhao@tencent.com,
mgorman@techsingularity.net, hughd@google.com,
willy@infradead.org, david@redhat.com, peterz@infradead.org,
luto@kernel.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush
Date: Fri, 10 May 2024 15:52:02 +0900
Message-ID: <20240510065206.76078-9-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
Functionally, no change. This is a preparation for the luf mechanism,
which needs to recognize read-only tlb entries and handle them
differently. The API newly introduced in this patch, fold_ubc(), will
be used by the luf mechanism.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
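For reference, a minimal userspace sketch of the fold semantics. The
struct below is a simplified stand-in, not the kernel's real
tlbflush_unmap_batch, and the plain bitmask stands in for the
arch_tlbbatch_*() helpers:

    #include <stdbool.h>

    /* Simplified stand-in for the kernel's tlbflush_unmap_batch; the
     * real struct wraps arch-specific batch state, not a bitmask. */
    struct tlbflush_unmap_batch {
            unsigned long arch_mask;  /* stand-in for the arch data */
            bool flush_required;
            bool writable;
    };

    /* Same shape as the patch's fold_ubc(): merge src into dst, then
     * reset src so the read-only batch can be refilled independently. */
    static void fold_ubc(struct tlbflush_unmap_batch *dst,
                         struct tlbflush_unmap_batch *src)
    {
            if (!src->flush_required)
                    return;

            /* Fold src to dst. */
            dst->arch_mask |= src->arch_mask;  /* ~arch_tlbbatch_fold() */
            dst->writable = dst->writable || src->writable;
            dst->flush_required = true;

            /* Reset src. */
            src->arch_mask = 0;                /* ~arch_tlbbatch_clear() */
            src->flush_required = false;
            src->writable = false;
    }

Folding only at flush time keeps read-only entries out of tlb_ubc until
a flush is actually required, which is what lets a later patch in the
series defer flushes for read-only mappings.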
include/linux/sched.h | 1 +
mm/internal.h | 4 ++++
mm/rmap.c | 34 ++++++++++++++++++++++++++++++++--
3 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2aa48adad226..0915390b1b5e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
#endif
struct tlbflush_unmap_batch tlb_ubc;
+ struct tlbflush_unmap_batch tlb_ubc_ro;
unsigned short int ugen;
/* Cache last used pipe for splice(): */
diff --git a/mm/internal.h b/mm/internal.h
index 0d4c74e76de6..805f0e6ecab4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq;
void try_to_unmap_flush(void);
void try_to_unmap_flush_dirty(void);
void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
#else
static inline void try_to_unmap_flush(void)
{
@@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void)
static inline void flush_tlb_batched_pending(struct mm_struct *mm)
{
}
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index cf8a99a49aef..328b5e2217e6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
}
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+ struct tlbflush_unmap_batch *src)
+{
+ if (!src->flush_required)
+ return;
+
+ /*
+ * Fold src to dst.
+ */
+ arch_tlbbatch_fold(&dst->arch, &src->arch);
+ dst->writable = dst->writable || src->writable;
+ dst->flush_required = true;
+
+ /*
+ * Reset src.
+ */
+ arch_tlbbatch_clear(&src->arch);
+ src->flush_required = false;
+ src->writable = false;
+}
+
/*
* Flush TLB entries for recently unmapped pages from remote CPUs. It is
* important if a PTE was dirty when it was unmapped that it's flushed
@@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
void try_to_unmap_flush(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+ fold_ubc(tlb_ubc, tlb_ubc_ro);
if (!tlb_ubc->flush_required)
return;
@@ -658,8 +682,9 @@ void try_to_unmap_flush(void)
void try_to_unmap_flush_dirty(void)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
- if (tlb_ubc->writable)
+ if (tlb_ubc->writable || tlb_ubc_ro->writable)
try_to_unmap_flush();
}
@@ -676,13 +701,18 @@ void try_to_unmap_flush_dirty(void)
static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
unsigned long uaddr)
{
- struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+ struct tlbflush_unmap_batch *tlb_ubc;
int batch;
bool writable = pte_dirty(pteval);
if (!pte_accessible(mm, pteval))
return;
+ if (pte_write(pteval))
+ tlb_ubc = &current->tlb_ubc;
+ else
+ tlb_ubc = &current->tlb_ubc_ro;
+
arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
tlb_ubc->flush_required = true;
--
2.17.1
Thread overview: 49+ messages
2024-05-10 6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
2024-05-10 6:51 ` [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
2024-05-10 6:51 ` [PATCH v10 02/12] arm64: tlbflush: " Byungchul Park
2024-05-10 6:51 ` [PATCH v10 03/12] riscv, tlb: " Byungchul Park
2024-05-10 6:51 ` [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-05-10 6:51 ` [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page Byungchul Park
2024-05-10 6:52 ` [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy Byungchul Park
2024-05-10 6:52 ` [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Byungchul Park
2024-05-10 6:52 ` Byungchul Park [this message]
2024-05-10 6:52 ` [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped Byungchul Park
2024-05-10 6:52 ` [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
2024-05-10 6:52 ` [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration Byungchul Park
2024-05-10 6:52 ` [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Byungchul Park
2024-05-11 6:54 ` [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Huang, Ying
2024-05-13 1:41 ` Byungchul Park
2024-05-11 7:15 ` Huang, Ying
2024-05-13 1:44 ` Byungchul Park
2024-05-22 2:16 ` Byungchul Park
2024-05-22 7:38 ` Huang, Ying
2024-05-22 10:27 ` Byungchul Park
2024-05-22 14:15 ` Byungchul Park
2024-05-24 17:16 ` Dave Hansen
2024-05-27 1:57 ` Byungchul Park
2024-05-27 2:43 ` Dave Hansen
2024-05-27 3:46 ` Byungchul Park
2024-05-27 4:19 ` Byungchul Park
2024-05-27 4:25 ` Byungchul Park
2024-05-27 22:58 ` Byungchul Park
2024-05-29 2:16 ` Huang, Ying
2024-05-30 1:02 ` Byungchul Park
2024-05-27 3:10 ` Huang, Ying
2024-05-27 3:56 ` Byungchul Park
2024-05-28 15:14 ` Dave Hansen
2024-05-29 5:00 ` Byungchul Park
2024-05-29 16:41 ` Dave Hansen
2024-05-30 0:50 ` Byungchul Park
2024-05-30 0:59 ` Byungchul Park
2024-05-30 1:11 ` Huang, Ying
2024-05-30 1:33 ` Byungchul Park
2024-05-30 7:18 ` Byungchul Park
2024-05-30 8:24 ` Huang, Ying
2024-05-30 8:41 ` Byungchul Park
2024-05-30 13:50 ` Dave Hansen
2024-05-31 2:06 ` Byungchul Park
2024-05-30 9:33 ` Byungchul Park
2024-05-31 1:45 ` Huang, Ying
2024-05-31 2:20 ` Byungchul Park
2024-05-28 8:41 ` David Hildenbrand
2024-05-29 4:39 ` Byungchul Park