From: Marc Zyngier <maz@kernel.org>
To: Yu Zhao <yuzhao@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Muchun Song <muchun.song@linux.dev>,
	Thomas Gleixner <tglx@linutronix.de>,
	Will Deacon <will@kernel.org>,
	Douglas Anderson <dianders@chromium.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Nanyong Sun <sunnanyong@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 4/6] arm64: broadcast IPIs to pause remote CPUs
Date: Tue, 22 Oct 2024 17:15:03 +0100
Message-ID: <868qug3yig.wl-maz@kernel.org>
In-Reply-To: <20241021042218.746659-5-yuzhao@google.com>

On Mon, 21 Oct 2024 05:22:16 +0100,
Yu Zhao <yuzhao@google.com> wrote:
> 
> Broadcast pseudo-NMI IPIs to pause remote CPUs for a short period of
> time, and then reliably resume them when the local CPU exits critical
> sections that preclude the execution of remote CPUs.
> 
> A typical example of such critical sections is BBM on kernel PTEs.
> HugeTLB Vmemmap Optimization (HVO) on arm64 was disabled by
> commit 060a2c92d1b6 ("arm64: mm: hugetlb: Disable
> HUGETLB_PAGE_OPTIMIZE_VMEMMAP") due to the following reason:
> 
>   This is deemed UNPREDICTABLE by the Arm architecture without a
>   break-before-make sequence (make the PTE invalid, TLBI, write the
>   new valid PTE). However, such sequence is not possible since the
>   vmemmap may be concurrently accessed by the kernel.
> 
> Supporting BBM on kernel PTEs is one of the approaches that can make
> HVO theoretically safe on arm64.

Is the safety only theoretical? I would have expected that we'd use an
approach that is absolutely rock-solid.
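
For context, the break-before-make sequence described in the quoted
commit boils down to something like the sketch below (illustrative
only, using the generic kernel helpers rather than the patch's code):

#include <linux/pgtable.h>
#include <asm/tlbflush.h>

/*
 * Break-before-make on a kernel PTE: invalidate, flush the TLB, then
 * install the new entry. The window in the middle is exactly what the
 * remote CPUs must not observe.
 */
static void bbm_replace_kernel_pte(pte_t *ptep, pte_t new, unsigned long addr)
{
	pte_clear(&init_mm, addr, ptep);			/* break */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);		/* TLBI + DSB */
	set_pte_at(&init_mm, addr, ptep, new);			/* make */
}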

> 
> Note that it is still possible for the paused CPUs to perform
> speculative translations. Such translations would cause spurious
> kernel PFs, which should be properly handled by
> is_spurious_el1_translation_fault().

Speculative translation faults are never reported; that would be a CPU
bug. *Spurious* translation faults can be reported if the CPU doesn't
implement FEAT_ETS2, for example, and that has to do with the ordering
of memory accesses wrt page-table walks for the purpose of translation.
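
For reference, a spurious EL1 translation fault can be told apart from
a real one roughly as follows (simplified sketch; the in-tree
is_spurious_el1_translation_fault() in arch/arm64/mm/fault.c also
filters on the ESR):

#include <linux/irqflags.h>
#include <asm/sysreg.h>

static bool translation_fault_was_spurious(unsigned long addr)
{
	unsigned long flags;
	u64 par;

	/* Redo the stage-1 walk for the faulting address with AT S1E1R. */
	local_irq_save(flags);
	asm volatile("at s1e1r, %0" :: "r" (addr));
	isb();
	par = read_sysreg(par_el1);
	local_irq_restore(flags);

	/* PAR_EL1.F clear: the walk now succeeds, so the fault was spurious. */
	return !(par & SYS_PAR_EL1_F);
}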

> 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  arch/arm64/include/asm/smp.h |  3 ++
>  arch/arm64/kernel/smp.c      | 92 +++++++++++++++++++++++++++++++++---
>  2 files changed, 88 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
> index 2510eec026f7..cffb0cfed961 100644
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -133,6 +133,9 @@ bool cpus_are_stuck_in_kernel(void);
>  extern void crash_smp_send_stop(void);
>  extern bool smp_crash_stop_failed(void);
>  
> +void pause_remote_cpus(void);
> +void resume_remote_cpus(void);
> +
>  #endif /* ifndef __ASSEMBLY__ */
>  
>  #endif /* ifndef __ASM_SMP_H */
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 3b3f6b56e733..68829c6de1b1 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -85,7 +85,12 @@ static int ipi_irq_base __ro_after_init;
>  static int nr_ipi __ro_after_init = NR_IPI;
>  static struct irq_desc *ipi_desc[MAX_IPI] __ro_after_init;
>  
> -static bool crash_stop;
> +enum {
> +	SEND_STOP = BIT(0),
> +	CRASH_STOP = BIT(1),
> +};
> +
> +static unsigned long stop_in_progress;
>  
>  static void ipi_setup(int cpu);
>  
> @@ -917,6 +922,79 @@ static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs
>  #endif
>  }
>  
> +static DEFINE_SPINLOCK(cpu_pause_lock);

PREEMPT_RT will turn this into a sleeping lock. Is it safe to sleep
here, given that you are dealing with kernel mappings?
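
If sleeping is not acceptable here, one option that stays a spinning
lock even on PREEMPT_RT is a raw spinlock; a minimal sketch (the helper
name is made up, and RT latency in the critical section is the cost):

#include <linux/spinlock.h>

/* Never converted into a sleeping lock by PREEMPT_RT. */
static DEFINE_RAW_SPINLOCK(cpu_pause_lock);

static void pause_bookkeeping_locked(void (*fn)(void))
{
	raw_spin_lock(&cpu_pause_lock);
	fn();	/* pause/resume bookkeeping would go here */
	raw_spin_unlock(&cpu_pause_lock);
}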

> +static cpumask_t paused_cpus;
> +static cpumask_t resumed_cpus;
> +
> +static void pause_local_cpu(void)
> +{
> +	int cpu = smp_processor_id();
> +
> +	cpumask_clear_cpu(cpu, &resumed_cpus);
> +	/*
> +	 * Paired with pause_remote_cpus() to confirm that this CPU not only
> +	 * will be paused but also can be reliably resumed.
> +	 */
> +	smp_wmb();
> +	cpumask_set_cpu(cpu, &paused_cpus);
> +	/* paused_cpus must be set before waiting on resumed_cpus. */
> +	barrier();

I'm not sure what this is trying to enforce. Yes, the compiler won't
reorder the set and the test. But your comment seems to indicate that
you also need the CPU to preserve that ordering, and short of a DMB,
the test below could be reordered ahead of the set.
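
Something along these lines would order it on the CPU as well (untested
sketch of the quoted function with a full barrier):

static void pause_local_cpu(void)
{
	int cpu = smp_processor_id();

	cpumask_clear_cpu(cpu, &resumed_cpus);
	/* Make the clear visible before advertising this CPU as paused. */
	smp_wmb();
	cpumask_set_cpu(cpu, &paused_cpus);
	/*
	 * Order the store to paused_cpus before the loads of resumed_cpus
	 * below on the CPU, not just in the compiler: smp_mb() emits a DMB
	 * on arm64, barrier() does not.
	 */
	smp_mb();
	while (!cpumask_test_cpu(cpu, &resumed_cpus))
		cpu_relax();
	/* Pairs with the barrier in the resume path. */
	smp_mb();
	cpumask_clear_cpu(cpu, &paused_cpus);
}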

> +	while (!cpumask_test_cpu(cpu, &resumed_cpus))
> +		cpu_relax();
> +	/* A typical example for sleep and wake-up functions. */

I'm not sure this is "typical",...

> +	smp_mb();
> +	cpumask_clear_cpu(cpu, &paused_cpus);
> +}
> +
> +void pause_remote_cpus(void)
> +{
> +	cpumask_t cpus_to_pause;
> +
> +	lockdep_assert_cpus_held();
> +	lockdep_assert_preemption_disabled();
> +
> +	cpumask_copy(&cpus_to_pause, cpu_online_mask);
> +	cpumask_clear_cpu(smp_processor_id(), &cpus_to_pause);

This bitmap is manipulated outside of your cpu_pause_lock. What
guarantees you can't have two CPUs stepping on each other here?
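
If the intent is that cpu_pause_lock serialises concurrent pausers,
building the mask only once the lock is held would at least make that
explicit; a sketch reusing the patch's names:

void pause_remote_cpus(void)
{
	cpumask_t cpus_to_pause;

	lockdep_assert_cpus_held();
	lockdep_assert_preemption_disabled();

	spin_lock(&cpu_pause_lock);

	/* Snapshot the online mask only after the lock serialises callers. */
	cpumask_copy(&cpus_to_pause, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &cpus_to_pause);

	WARN_ON_ONCE(!cpumask_empty(&paused_cpus));
	smp_cross_call(&cpus_to_pause, IPI_CPU_STOP_NMI);

	while (!cpumask_equal(&cpus_to_pause, &paused_cpus))
		cpu_relax();

	/* ... rest as in the patch ... */
}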

> +
> +	spin_lock(&cpu_pause_lock);
> +
> +	WARN_ON_ONCE(!cpumask_empty(&paused_cpus));
> +
> +	smp_cross_call(&cpus_to_pause, IPI_CPU_STOP_NMI);
> +
> +	while (!cpumask_equal(&cpus_to_pause, &paused_cpus))
> +		cpu_relax();

This can be a lot of bits to compare, especially since you are
explicitly targeting large systems. Why can't this be implemented as
a counter instead?
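
A sketch of what the counter-based variant could look like (names are
made up, not the patch's):

#include <linux/atomic.h>

static atomic_t nr_paused_cpus = ATOMIC_INIT(0);
static bool resume_requested;

/* Remote side: a single atomic increment instead of a cpumask update. */
static void pause_local_cpu_counted(void)
{
	atomic_inc(&nr_paused_cpus);
	/* Order the increment before the polling loop below. */
	smp_mb__after_atomic();

	while (!READ_ONCE(resume_requested))
		cpu_relax();

	atomic_dec(&nr_paused_cpus);
}

/* Pausing side: compare one integer instead of two cpumasks. */
static void wait_for_paused_cpus(int nr_expected)
{
	while (atomic_read(&nr_paused_cpus) != nr_expected)
		cpu_relax();
}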

Overall, this looks like stop_machine() in disguise. Why can't this
use the existing infrastructure?
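
Purely as an illustration of what the existing infrastructure already
provides (hypothetical names, not the patch's code), stop_machine()
parks every other online CPU with interrupts disabled while the
callback runs:

#include <linux/stop_machine.h>
#include <linux/pgtable.h>
#include <asm/tlbflush.h>

struct bbm_args {
	pte_t *ptep;
	pte_t new;
	unsigned long addr;
};

/* Runs with all other online CPUs spinning in the stopper, IRQs off. */
static int bbm_update_pte(void *data)
{
	struct bbm_args *args = data;

	pte_clear(&init_mm, args->addr, args->ptep);
	flush_tlb_kernel_range(args->addr, args->addr + PAGE_SIZE);
	set_pte_at(&init_mm, args->addr, args->ptep, args->new);
	return 0;
}

static void bbm_with_stop_machine(struct bbm_args *args)
{
	/* NULL cpumask: run the callback on any one online CPU. */
	stop_machine(bbm_update_pte, args, NULL);
}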

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

