From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	"Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>,
	Jens Axboe <axboe@kernel.dk>,
	"David S . Miller" <davem@davemloft.net>,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	Peter Zijlstra <peterz@infradead.org>,
	sparclinux@vger.kernel.org
Subject: Re: [PATCH v2 3/4] sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
Date: Mon, 14 Sep 2020 17:00:29 +1000	[thread overview]
Message-ID: <1600066040.vnmz9nxhwt.astroid@bobo.none> (raw)
In-Reply-To: <20200914045219.3736466-4-npiggin@gmail.com>

Excerpts from Nicholas Piggin's message of September 14, 2020 2:52 pm:

[...]

> The basic fix for sparc64 is to remove its mm_cpumask clearing code. The
> optimisation could be effectively restored by sending IPIs to mm_cpumask
> members and having each remove itself from mm_cpumask. This is more
> tricky, so I leave it as an exercise for someone with an SMP sparc64
> system. powerpc has a (currently similarly broken) example.

So this compiles and boots on qemu, but qemu does not support any
SMP sparc64 machines. Attempting some simple hacks didn't get me far
because OpenBIOS doesn't populate an SMP device tree, which blows up
everywhere.

The patch is _relatively_ simple and hopefully shouldn't explode, so
it's probably ready for testing on real SMP hardware, if someone has
a few cycles.

Thanks,
Nick

> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/sparc/kernel/smp_64.c | 65 ++++++++------------------------------
>  1 file changed, 14 insertions(+), 51 deletions(-)
> 
> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
> index e286e2badc8a..e38d8bf454e8 100644
> --- a/arch/sparc/kernel/smp_64.c
> +++ b/arch/sparc/kernel/smp_64.c
> @@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void)
>   * are flush_tlb_*() routines, and these run after flush_cache_*()
>   * which performs the flushw.
>   *
> - * The SMP TLB coherency scheme we use works as follows:
> - *
> - * 1) mm->cpu_vm_mask is a bit mask of which cpus an address
> - *    space has (potentially) executed on, this is the heuristic
> - *    we use to avoid doing cross calls.
> - *
> - *    Also, for flushing from kswapd and also for clones, we
> - *    use cpu_vm_mask as the list of cpus to make run the TLB.
> - *
> - * 2) TLB context numbers are shared globally across all processors
> - *    in the system, this allows us to play several games to avoid
> - *    cross calls.
> - *
> - *    One invariant is that when a cpu switches to a process, and
> - *    that processes tsk->active_mm->cpu_vm_mask does not have the
> - *    current cpu's bit set, that tlb context is flushed locally.
> - *
> - *    If the address space is non-shared (ie. mm->count == 1) we avoid
> - *    cross calls when we want to flush the currently running process's
> - *    tlb state.  This is done by clearing all cpu bits except the current
> - *    processor's in current->mm->cpu_vm_mask and performing the
> - *    flush locally only.  This will force any subsequent cpus which run
> - *    this task to flush the context from the local tlb if the process
> - *    migrates to another cpu (again).
> - *
> - * 3) For shared address spaces (threads) and swapping we bite the
> - *    bullet for most cases and perform the cross call (but only to
> - *    the cpus listed in cpu_vm_mask).
> - *
> - *    The performance gain from "optimizing" away the cross call for threads is
> - *    questionable (in theory the big win for threads is the massive sharing of
> - *    address space state across processors).
> + * mm->cpu_vm_mask is a bit mask of which cpus an address
> + * space has (potentially) executed on, this is the heuristic
> + * we use to limit cross calls.
>   */
>  
>  /* This currently is only used by the hugetlb arch pre-fault
> @@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void)
>  void smp_flush_tlb_mm(struct mm_struct *mm)
>  {
>  	u32 ctx = CTX_HWBITS(mm->context);
> -	int cpu = get_cpu();
>  
> -	if (atomic_read(&mm->mm_users) == 1) {
> -		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
> -		goto local_flush_and_out;
> -	}
> +	get_cpu();
>  
>  	smp_cross_call_masked(&xcall_flush_tlb_mm,
>  			      ctx, 0, 0,
>  			      mm_cpumask(mm));
>  
> -local_flush_and_out:
>  	__flush_tlb_mm(ctx, SECONDARY_CONTEXT);
>  
>  	put_cpu();
> @@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
>  {
>  	u32 ctx = CTX_HWBITS(mm->context);
>  	struct tlb_pending_info info;
> -	int cpu = get_cpu();
> +
> +	get_cpu();
>  
>  	info.ctx = ctx;
>  	info.nr = nr;
>  	info.vaddrs = vaddrs;
>  
> -	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
> -		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
> -	else
> -		smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
> -				       &info, 1);
> +	smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
> +			       &info, 1);
>  
>  	__flush_tlb_pending(ctx, nr, vaddrs);
>  
> @@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
>  void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
>  {
>  	unsigned long context = CTX_HWBITS(mm->context);
> -	int cpu = get_cpu();
>  
> -	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
> -		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
> -	else
> -		smp_cross_call_masked(&xcall_flush_tlb_page,
> -				      context, vaddr, 0,
> -				      mm_cpumask(mm));
> +	get_cpu();
> +
> +	smp_cross_call_masked(&xcall_flush_tlb_page,
> +			      context, vaddr, 0,
> +			      mm_cpumask(mm));
> +
>  	__flush_tlb_page(context, vaddr);
>  
>  	put_cpu();
> -- 
> 2.23.0
> 
> 

