From: Xu Lu <luxu.kernel@bytedance.com>
To: pjw@kernel.org, palmer@dabbelt.com, aou@eecs.berkeley.edu,
alex@ghiti.fr, kees@kernel.org, mingo@redhat.com,
peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, akpm@linux-foundation.org,
david@redhat.com, apatel@ventanamicro.com, guoren@kernel.org
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, Xu Lu <luxu.kernel@bytedance.com>
Subject: [RFC PATCH v2 3/9] riscv: mm: Grab mm_count to avoid mm getting released
Date: Thu, 27 Nov 2025 22:11:11 +0800
Message-ID: <20251127141117.87420-4-luxu.kernel@bytedance.com>
In-Reply-To: <20251127141117.87420-1-luxu.kernel@bytedance.com>
We maintain an array of mm_structs whose ASIDs are active on the current
CPU. To prevent these mm_structs from being released, grab their mm_count
before loading them into the array, and drop their mm_count via a tasklet
when they are evicted from the array (see the sketch after the diffstat below).
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
---
arch/riscv/include/asm/mmu.h | 4 +++
arch/riscv/mm/tlbflush.c | 47 ++++++++++++++++++++++++++++++++++++
2 files changed, 51 insertions(+)
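
For reference, the deferral mechanism added below follows a common lock-free
pattern: evicted entries are pushed onto a per-CPU singly-linked list with a
cmpxchg loop, and a per-CPU tasklet later detaches the whole list with xchg
and drops the references outside the eviction path. A minimal, hypothetical
sketch of that pattern, independent of the RISC-V specifics (struct
deferred_obj, deferred_put_list and obj_put() are placeholders, not part of
this patch):

#include <linux/atomic.h>
#include <linux/interrupt.h>
#include <linux/percpu.h>

/* Placeholder object; the patch instead links mm_context_t via ->next. */
struct deferred_obj {
        struct deferred_obj *next;
};

static DEFINE_PER_CPU(struct deferred_obj *, deferred_put_list);

static void drain_deferred_puts(struct tasklet_struct *t)
{
        /* Atomically detach the whole per-CPU list, then release each entry. */
        struct deferred_obj *obj = xchg(this_cpu_ptr(&deferred_put_list), NULL);

        while (obj) {
                struct deferred_obj *next = obj->next;

                obj_put(obj);   /* placeholder for the real reference drop */
                obj = next;
        }
}

static DEFINE_PER_CPU(struct tasklet_struct, deferred_put_tasklets) = {
        .count = ATOMIC_INIT(0),
        .callback = drain_deferred_puts,
        .use_callback = true,
};

/*
 * Assumes each object is queued at most once between tasklet runs; the
 * patch guards this with context->lazy_tlb_cnt.
 */
static void defer_put(struct deferred_obj *obj)
{
        struct deferred_obj **head = this_cpu_ptr(&deferred_put_list);
        struct deferred_obj *old;

        /* Lock-free push onto the per-CPU list. */
        do {
                old = READ_ONCE(*head);
                obj->next = old;
        } while (cmpxchg(head, old, obj) != old);

        tasklet_schedule(this_cpu_ptr(&deferred_put_tasklets));
}
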
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index cf8e6eac77d52..913fa535b3d19 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -30,6 +30,10 @@ typedef struct {
#ifdef CONFIG_RISCV_ISA_SUPM
u8 pmlen;
#endif
+#ifdef CONFIG_RISCV_LAZY_TLB_FLUSH
+ atomic_t lazy_tlb_cnt;
+ void *next;
+#endif
} mm_context_t;
/* Lock the pointer masking mode because this mm is multithreaded */
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 0b1c21c7aafb8..4b2ce06cbe6bd 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -104,12 +104,57 @@ struct flush_tlb_range_data {
};
#ifdef CONFIG_RISCV_LAZY_TLB_FLUSH
+
DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_info, tlbinfo) = {
.rwlock = __RW_LOCK_UNLOCKED(tlbinfo.rwlock),
.active_mm = NULL,
.next_gen = 1,
.contexts = { { NULL, 0, }, },
};
+
+static DEFINE_PER_CPU(mm_context_t *, mmdrop_victims);
+
+static void mmdrop_lazy_mms(struct tasklet_struct *tasklet)
+{
+ mm_context_t *victim = xchg_relaxed(this_cpu_ptr(&mmdrop_victims), NULL);
+ struct mm_struct *mm = NULL;
+
+ while (victim) {
+ mm = container_of(victim, struct mm_struct, context);
+ while (atomic_dec_return_relaxed(&victim->lazy_tlb_cnt) != 0)
+ mmdrop_lazy_tlb(mm);
+ victim = victim->next;
+ }
+}
+
+static DEFINE_PER_CPU(struct tasklet_struct, mmdrop_tasklets) = {
+ .count = ATOMIC_INIT(0),
+ .callback = mmdrop_lazy_mms,
+ .use_callback = true,
+};
+
+static inline void mmgrab_lazy_mm(struct mm_struct *mm)
+{
+ mmgrab_lazy_tlb(mm);
+ atomic_inc(&mm->context.lazy_tlb_cnt);
+}
+
+static inline void mmdrop_lazy_mm(struct mm_struct *mm)
+{
+ mm_context_t **head, *list, *context = &mm->context;
+
+ if (atomic_inc_return_relaxed(&context->lazy_tlb_cnt) == 1) {
+ head = this_cpu_ptr(&mmdrop_victims);
+
+ do {
+ list = *head;
+ context->next = list;
+ } while (cmpxchg_relaxed(head, list, context) != list);
+
+ tasklet_schedule(this_cpu_ptr(&mmdrop_tasklets));
+ }
+}
+
#endif /* CONFIG_RISCV_LAZY_TLB_FLUSH */
static void __ipi_flush_tlb_range_asid(void *info)
@@ -292,6 +337,7 @@ void local_load_tlb_mm(struct mm_struct *mm)
info->active_mm = mm;
if (contexts[pos].mm != mm) {
+ mmgrab_lazy_mm(mm);
victim = contexts[pos].mm;
contexts[pos].mm = mm;
}
@@ -302,6 +348,7 @@ void local_load_tlb_mm(struct mm_struct *mm)
if (victim) {
cpumask_clear_cpu(raw_smp_processor_id(), mm_cpumask(victim));
local_flush_tlb_all_asid(get_mm_asid(victim));
+ mmdrop_lazy_mm(victim);
}
}
--
2.20.1
Thread overview: 10+ messages
2025-11-27 14:11 [RFC PATCH v2 0/9] riscv: mm: Introduce lazy tlb flush Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 1/9] riscv: Introduce RISCV_LAZY_TLB_FLUSH config Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 2/9] riscv: mm: Apply a threshold to the number of active ASIDs on each CPU Xu Lu
2025-11-27 14:11 ` Xu Lu [this message]
2025-11-27 14:11 ` [RFC PATCH v2 4/9] fork: Add arch override for do_shoot_lazy_tlb() Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 5/9] riscv: mm: Introduce arch_do_shoot_lazy_tlb Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 6/9] riscv: mm: Introduce percpu TLB Flush queue Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 7/9] riscv: mm: Defer the TLB Flush to switch_mm Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 8/9] riscv: mm: Clear mm_cpumask during local_flush_tlb_all_asid() Xu Lu
2025-11-27 14:11 ` [RFC PATCH v2 9/9] riscv: mm: Clear mm_cpumask during local_flush_tlb_all() Xu Lu