From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bo Li <libo.gcs85@bytedance.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org, kees@kernel.org, akpm@linux-foundation.org, david@redhat.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, peterz@infradead.org
Cc: dietmar.eggemann@arm.com, hpa@zytor.com, acme@kernel.org, namhyung@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
kan.liang@linux.intel.com, viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, jannh@google.com, pfalcato@suse.de, riel@surriel.com, harry.yoo@oracle.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, yinhongbo@bytedance.com, dengliang.1214@bytedance.com, xieyongji@bytedance.com, chaiwen.cc@bytedance.com, songmuchun@bytedance.com, yuanzhu@bytedance.com, chengguozhu@bytedance.com, sunjiadong.lff@bytedance.com, Bo Li <libo.gcs85@bytedance.com>
Subject: [RFC v2 13/35] RPAL: add tlb flushing support
Date: Fri, 30 May 2025 17:27:41 +0800
Message-Id:
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a thread flushes the TLB, the address space may be shared: not only
other threads of the current process but also other processes sharing the
address space can access the memory covered by the flush. The CPU set used
for the TLB flush must therefore be the union of the mm_cpumasks of all
processes that share the address space.
This patch extends flush_tlb_info to store the mm_structs of other
processes. When a CPU in the union of the mm_cpumasks is invoked to handle
TLB flushing, it checks whether cpu_tlbstate.loaded_mm matches any of the
mm_structs stored in flush_tlb_info. If a match is found, the CPU performs
a local TLB flush for that mm_struct.

Signed-off-by: Bo Li <libo.gcs85@bytedance.com>
---
 arch/x86/include/asm/tlbflush.h |  10 ++
 arch/x86/mm/tlb.c               | 172 ++++++++++++++++++++++++++++++++
 arch/x86/rpal/internal.h        |   3 -
 include/linux/rpal.h            |  12 +++
 mm/rmap.c                       |   4 +
 5 files changed, 198 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e9b81876ebe4..f57b745af75c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -227,6 +227,11 @@ struct flush_tlb_info {
 	u8			stride_shift;
 	u8			freed_tables;
 	u8			trim_cpumask;
+#ifdef CONFIG_RPAL
+	struct mm_struct	**mm_list;
+	u64			*tlb_gen_list;
+	int			nr_mm;
+#endif
 };

 void flush_tlb_local(void);
@@ -356,6 +361,11 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }

+#ifdef CONFIG_RPAL
+void rpal_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm);
+#endif
+
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 39f80111e6f1..a0fe17b13887 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -1361,6 +1362,169 @@ void flush_tlb_multi(const struct cpumask *cpumask,
 	__flush_tlb_multi(cpumask, info);
 }

+#ifdef CONFIG_RPAL
+static void rpal_flush_tlb_func_remote(void *info)
+{
+	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+	struct flush_tlb_info *f = info;
+	struct flush_tlb_info tf = *f;
+	int i;
+
+	/* As it comes from RPAL path, f->mm cannot be NULL */
+	if (f->mm == loaded_mm) {
+		flush_tlb_func(f);
+		return;
+	}
+
+	for (i = 0; i < f->nr_mm; i++) {
+		/* We always have f->mm_list[i] != NULL */
+		if (f->mm_list[i] == loaded_mm) {
+			tf.mm = f->mm_list[i];
+			tf.new_tlb_gen = f->tlb_gen_list[i];
+			flush_tlb_func(&tf);
+			return;
+		}
+	}
+}
+
+static void rpal_flush_tlb_func_multi(const struct cpumask *cpumask,
+				      const struct flush_tlb_info *info)
+{
+	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
+	if (info->end == TLB_FLUSH_ALL)
+		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
+	else
+		trace_tlb_flush(TLB_REMOTE_SEND_IPI,
+				(info->end - info->start) >> PAGE_SHIFT);
+
+	if (info->freed_tables || mm_in_asid_transition(info->mm))
+		on_each_cpu_mask(cpumask, rpal_flush_tlb_func_remote,
+				 (void *)info, true);
+	else
+		on_each_cpu_cond_mask(should_flush_tlb,
+				      rpal_flush_tlb_func_remote, (void *)info,
+				      1, cpumask);
+}
+
+static void rpal_flush_tlb_func_local(struct mm_struct *mm, int cpu,
+				      struct flush_tlb_info *info,
+				      u64 new_tlb_gen)
+{
+	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+
+	if (loaded_mm == info->mm) {
+		lockdep_assert_irqs_enabled();
+		local_irq_disable();
+		flush_tlb_func(info);
+		local_irq_enable();
+	} else {
+		int i;
+
+		for (i = 0; i < info->nr_mm; i++) {
+			if (info->mm_list[i] == loaded_mm) {
+				lockdep_assert_irqs_enabled();
+				local_irq_disable();
+				info->mm = info->mm_list[i];
+				info->new_tlb_gen = info->tlb_gen_list[i];
+				flush_tlb_func(info);
+				info->mm = mm;
+				info->new_tlb_gen = new_tlb_gen;
+				local_irq_enable();
+			}
+		}
+	}
+}
+
+static void rpal_flush_tlb_mm_range(struct mm_struct *mm, int cpu,
+				    struct flush_tlb_info *info, u64 new_tlb_gen)
+{
+	struct rpal_service *cur = mm->rpal_rs;
+	cpumask_t merged_mask;
+	struct mm_struct *mm_list[MAX_REQUEST_SERVICE];
+	u64 tlb_gen_list[MAX_REQUEST_SERVICE];
+	int nr_mm = 0;
+	int i;
+
+	cpumask_copy(&merged_mask, mm_cpumask(mm));
+	if (cur) {
+		struct rpal_service *tgt;
+		struct mm_struct *tgt_mm;
+
+		rpal_for_each_requested_service(cur, i) {
+			struct rpal_mapped_service *node;
+
+			if (i == cur->id)
+				continue;
+			node = rpal_get_mapped_node(cur, i);
+			if (!rpal_service_mapped(node))
+				continue;
+
+			tgt = rpal_get_service(node->rs);
+			if (!tgt)
+				continue;
+			tgt_mm = tgt->mm;
+			if (!mmget_not_zero(tgt_mm)) {
+				rpal_put_service(tgt);
+				continue;
+			}
+			mm_list[nr_mm] = tgt_mm;
+			tlb_gen_list[nr_mm] = inc_mm_tlb_gen(tgt_mm);
+
+			nr_mm++;
+			cpumask_or(&merged_mask, &merged_mask,
+				   mm_cpumask(tgt_mm));
+			rpal_put_service(tgt);
+		}
+		info->mm_list = mm_list;
+		info->tlb_gen_list = tlb_gen_list;
+		info->nr_mm = nr_mm;
+	}
+
+	if (cpumask_any_but(&merged_mask, cpu) < nr_cpu_ids)
+		rpal_flush_tlb_func_multi(&merged_mask, info);
+	else
+		rpal_flush_tlb_func_local(mm, cpu, info, new_tlb_gen);
+
+	for (i = 0; i < nr_mm; i++)
+		mmput_async(mm_list[i]);
+}
+
+void rpal_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm)
+{
+	struct rpal_service *cur = mm->rpal_rs;
+	struct rpal_service *tgt;
+	struct mm_struct *tgt_mm;
+	int i;
+
+	rpal_for_each_requested_service(cur, i) {
+		struct rpal_mapped_service *node;
+
+		if (i == cur->id)
+			continue;
+
+		node = rpal_get_mapped_node(cur, i);
+		if (!rpal_service_mapped(node))
+			continue;
+
+		tgt = rpal_get_service(node->rs);
+		if (!tgt)
+			continue;
+		tgt_mm = tgt->mm;
+		if (!mmget_not_zero(tgt_mm)) {
+			rpal_put_service(tgt);
+			continue;
+		}
+		inc_mm_tlb_gen(tgt_mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask,
+			   mm_cpumask(tgt_mm));
+		mmu_notifier_arch_invalidate_secondary_tlbs(tgt_mm, 0, -1UL);
+		rpal_put_service(tgt);
+		mmput_async(tgt_mm);
+	}
+}
+#endif
+
 /*
  * See Documentation/arch/x86/tlb.rst for details.  We choose 33
  * because it is large enough to cover the vast majority (at
@@ -1439,6 +1603,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
 				  new_tlb_gen);

+#if IS_ENABLED(CONFIG_RPAL)
+	if (mm->rpal_rs)
+		rpal_flush_tlb_mm_range(mm, cpu, info, new_tlb_gen);
+	else {
+#endif
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
 	 * a local TLB flush is needed. Optimize this use-case by calling
@@ -1456,6 +1625,9 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		flush_tlb_func(info);
 		local_irq_enable();
 	}
+#if IS_ENABLED(CONFIG_RPAL)
+	}
+#endif

 	put_flush_tlb_info();
 	put_cpu();
diff --git a/arch/x86/rpal/internal.h b/arch/x86/rpal/internal.h
index c504b6efff64..cf6d608a994a 100644
--- a/arch/x86/rpal/internal.h
+++ b/arch/x86/rpal/internal.h
@@ -12,9 +12,6 @@
 #include
 #include

-#define RPAL_REQUEST_MAP 0x1
-#define RPAL_REVERSE_MAP 0x2
-
 extern bool rpal_inited;

 /* service.c */
diff --git a/include/linux/rpal.h b/include/linux/rpal.h
index b9622f0235bf..36be1ab6a9f3 100644
--- a/include/linux/rpal.h
+++ b/include/linux/rpal.h
@@ -80,6 +80,11 @@
 /* No more than 15 services can be requested due to limitation of MPK. */
 #define MAX_REQUEST_SERVICE 15

+enum {
+	RPAL_REQUEST_MAP,
+	RPAL_REVERSE_MAP,
+};
+
 extern unsigned long rpal_cap;

 enum rpal_task_flag_bits {
@@ -326,6 +331,13 @@ rpal_get_mapped_node(struct rpal_service *rs, int id)
 	return &rs->service_map[id];
 }

+static inline bool rpal_service_mapped(struct rpal_mapped_service *node)
+{
+	unsigned long type = (1 << RPAL_REQUEST_MAP) | (1 << RPAL_REVERSE_MAP);
+
+	return (node->type & type) == type;
+}
+
 #ifdef CONFIG_RPAL
 static inline struct rpal_service *rpal_current_service(void)
 {
diff --git a/mm/rmap.c b/mm/rmap.c
index 67bb273dfb80..e68384f97ab9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -682,6 +682,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 		return;

 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, start, end);
+#ifdef CONFIG_RPAL
+	if (mm->rpal_rs)
+		rpal_tlbbatch_add_pending(&tlb_ubc->arch, mm);
+#endif
 	tlb_ubc->flush_required = true;

 	/*
-- 
2.20.1