From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nicholas Piggin
To: linux-mm@kvack.org
Cc: Nicholas Piggin, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
	Aneesh Kumar K.V, Andrew Morton, Jens Axboe, Peter Zijlstra,
	David S. Miller
Subject: [PATCH v2 3/4] sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
Date: Mon, 14 Sep 2020 14:52:18 +1000
Message-Id: <20200914045219.3736466-4-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200914045219.3736466-1-npiggin@gmail.com>
References: <20200914045219.3736466-1-npiggin@gmail.com>
MIME-Version: 1.0

The de facto (and apparently uncommented) standard for using an mm had,
thanks to this code in sparc if nothing else, been that you must have a
reference on mm_users *and that reference must have been obtained with
mmget()*, i.e., from a thread with a reference to mm_users that had used
the mm.

The introduction of mmget_not_zero() in commit d2005e3f41d4
("userfaultfd: don't pin the user memory in userfaultfd_file_create()")
allowed mm_count holders to operate on user mappings asynchronously from
the actual threads using the mm, but they were not to load those mappings
into their TLB (i.e., walking vmas and page tables is okay,
kthread_use_mm() is not).

io_uring commit 2b188cc1bb857 ("Add io_uring IO interface") added code
which does a kthread_use_mm() from an mmget_not_zero() refcount.

The problem with this is that code which previously assumed mm ==
current->mm and mm->mm_users == 1 implies the mm will remain
single-threaded, at least until this thread creates another mm_users
reference, has now broken.

arch/sparc/kernel/smp_64.c:

	if (atomic_read(&mm->mm_users) == 1) {
		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
		goto local_flush_and_out;
	}

vs fs/io_uring.c:

	if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
		     !mmget_not_zero(ctx->sqo_mm)))
		return -EFAULT;
	kthread_use_mm(ctx->sqo_mm);

mmget_not_zero() could come in right after the mm_users == 1 test,
followed by kthread_use_mm(), which sets the kthread's CPU in the
mm_cpumask. That update could then be lost if the cpumask_copy() occurs
afterward.

I propose we fix this by allowing mmget_not_zero() to be a first-class
reference, and not have this obscure, undocumented and unchecked
restriction.

The basic fix for sparc64 is to remove its mm_cpumask clearing code.
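To illustrate, the usage pattern this makes legitimate looks roughly
like the following. This is an illustrative sketch only, not code from
this series; the helper name and error value are hypothetical, and the
pattern is simplified from what io_uring's SQPOLL path does:

	#include <linux/sched/mm.h>
	#include <linux/kthread.h>

	/*
	 * Sketch only: a kernel thread adopting a user mm for which it
	 * holds nothing more than an mmget_not_zero() reference.
	 * kthread_use_mm() sets this CPU in mm_cpumask(mm), which is the
	 * update the mm_users == 1 optimisation above can lose.
	 */
	static int use_user_mm(struct mm_struct *mm)
	{
		if (!mmget_not_zero(mm))	/* mm_users may already be zero */
			return -EFAULT;

		kthread_use_mm(mm);

		/* ... operate on the user mappings ... */

		kthread_unuse_mm(mm);
		mmput(mm);
		return 0;
	}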
The optimisation could be effectively restored by sending IPIs to
mm_cpumask members and having them remove themselves from mm_cpumask.
This is trickier, so I leave it as an exercise for someone with a
sparc64 SMP machine. powerpc has a (currently similarly broken) example.

Signed-off-by: Nicholas Piggin
---
 arch/sparc/kernel/smp_64.c | 65 ++++++++------------------------------
 1 file changed, 14 insertions(+), 51 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index e286e2badc8a..e38d8bf454e8 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void)
  * are flush_tlb_*() routines, and these run after flush_cache_*()
  * which performs the flushw.
  *
- * The SMP TLB coherency scheme we use works as follows:
- *
- * 1) mm->cpu_vm_mask is a bit mask of which cpus an address
- *    space has (potentially) executed on, this is the heuristic
- *    we use to avoid doing cross calls.
- *
- *    Also, for flushing from kswapd and also for clones, we
- *    use cpu_vm_mask as the list of cpus to make run the TLB.
- *
- * 2) TLB context numbers are shared globally across all processors
- *    in the system, this allows us to play several games to avoid
- *    cross calls.
- *
- *    One invariant is that when a cpu switches to a process, and
- *    that processes tsk->active_mm->cpu_vm_mask does not have the
- *    current cpu's bit set, that tlb context is flushed locally.
- *
- *    If the address space is non-shared (ie. mm->count == 1) we avoid
- *    cross calls when we want to flush the currently running process's
- *    tlb state.  This is done by clearing all cpu bits except the current
- *    processor's in current->mm->cpu_vm_mask and performing the
- *    flush locally only.  This will force any subsequent cpus which run
- *    this task to flush the context from the local tlb if the process
- *    migrates to another cpu (again).
- *
- * 3) For shared address spaces (threads) and swapping we bite the
- *    bullet for most cases and perform the cross call (but only to
- *    the cpus listed in cpu_vm_mask).
- *
- *    The performance gain from "optimizing" away the cross call for threads is
- *    questionable (in theory the big win for threads is the massive sharing of
- *    address space state across processors).
+ * mm->cpu_vm_mask is a bit mask of which cpus an address
+ * space has (potentially) executed on, this is the heuristic
+ * we use to limit cross calls.
  */
 
 /* This currently is only used by the hugetlb arch pre-fault
@@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void)
 void smp_flush_tlb_mm(struct mm_struct *mm)
 {
 	u32 ctx = CTX_HWBITS(mm->context);
-	int cpu = get_cpu();
 
-	if (atomic_read(&mm->mm_users) == 1) {
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-		goto local_flush_and_out;
-	}
+	get_cpu();
 
 	smp_cross_call_masked(&xcall_flush_tlb_mm, ctx, 0, 0, mm_cpumask(mm));
 
-local_flush_and_out:
 	__flush_tlb_mm(ctx, SECONDARY_CONTEXT);
 
 	put_cpu();
@@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 {
 	u32 ctx = CTX_HWBITS(mm->context);
 	struct tlb_pending_info info;
-	int cpu = get_cpu();
+
+	get_cpu();
 
 	info.ctx = ctx;
 	info.nr = nr;
 	info.vaddrs = vaddrs;
 
-	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-	else
-		smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
-				       &info, 1);
+	smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
+			       &info, 1);
 
 	__flush_tlb_pending(ctx, nr, vaddrs);
 
@@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
 {
 	unsigned long context = CTX_HWBITS(mm->context);
-	int cpu = get_cpu();
 
-	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-	else
-		smp_cross_call_masked(&xcall_flush_tlb_page,
-				      context, vaddr, 0,
-				      mm_cpumask(mm));
+	get_cpu();
+
+	smp_cross_call_masked(&xcall_flush_tlb_page,
+			      context, vaddr, 0,
+			      mm_cpumask(mm));
+
 	__flush_tlb_page(context, vaddr);
 
 	put_cpu();
-- 
2.23.0