Date: Tue, 15 Sep 2020 12:48:02 +1000
From: Nicholas Piggin
Subject: Re: [PATCH v2 1/4] mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
To: peterz@infradead.org
Cc: Andrew Morton, "Aneesh Kumar K. V", Jens Axboe, Dave Hansen, "David S. Miller", linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, Andy Lutomirski, sparclinux@vger.kernel.org
References: <20200914045219.3736466-1-npiggin@gmail.com> <20200914045219.3736466-2-npiggin@gmail.com> <20200914105617.GP1362448@hirez.programming.kicks-ass.net>
In-Reply-To: <20200914105617.GP1362448@hirez.programming.kicks-ass.net>
Message-Id: <1600137586.nypnz3sbcl.astroid@bobo.none>

Excerpts from peterz@infradead.org's message of September 14, 2020 8:56 pm:
> On Mon, Sep 14, 2020 at 02:52:16PM +1000, Nicholas Piggin wrote:
>> Reading and modifying current->mm and current->active_mm and switching
>> mm should be done with irqs off, to prevent races seeing an intermediate
>> state.
>>
>> This is similar to commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB
>> invalidate"). At exec-time when the new mm is activated, the old one
>> should usually be single-threaded and no longer used, unless something
>> else is holding an mm_users reference (which may be possible).
>>
>> Absent other mm_users, there is also a race with preemption and lazy tlb
>> switching. Consider the kernel_execve case where the current thread is
>> using a lazy tlb active mm:
>>
>>   call_usermodehelper()
>>     kernel_execve()
>>       old_mm = current->mm;
>>       active_mm = current->active_mm;
>>       *** preempt *** -------------------->  schedule()
>>                                                prev->active_mm = NULL;
>>                                                mmdrop(prev active_mm);
>>                                              ...
>>                       <--------------------  schedule()
>>       current->mm = mm;
>>       current->active_mm = mm;
>>       if (!old_mm)
>>           mmdrop(active_mm);
>>
>> If we switch back to the kernel thread from a different mm, there is a
>> double free of the old active_mm, and a missing free of the new one.
>>
>> Closing this race only requires interrupts to be disabled while ->mm
>> and ->active_mm are being switched, but the TLB problem requires also
>> holding interrupts off over activate_mm. Unfortunately not all archs
>> can do that yet, e.g., arm defers the switch if irqs are disabled and
>> expects finish_arch_post_lock_switch() to be called to complete the
>> flush; um takes a blocking lock in activate_mm().
>>
>> So as a first step, disable interrupts across the mm/active_mm updates
>> to close the lazy tlb preempt race, and provide an arch option to
>> extend that to activate_mm which allows architectures doing IPI based
>> TLB shootdowns to close the second race.
>>
>> This is a bit ugly, but in the interest of fixing the bug and backporting
>> before all architectures are converted this is a compromise.
>>
>> Signed-off-by: Nicholas Piggin
>
> Acked-by: Peter Zijlstra (Intel)
>
> I'm thinking we want this selected on x86 as well. Andy?

Thanks for the ack. The plan was to take it through the powerpc tree,
but if you'd want x86 to select it, maybe a topic branch? Although
Michael will be away during the next merge window so I don't want to
get too fancy. Would you mind doing it in a follow-up merge after
powerpc, being that it's (I think) a small change?
I do think all archs should be selecting this, and we want to remove the
divergent code paths from here as soon as possible. I was planning to
send patches for the N+1 window at least for all the easy archs. But the
sooner the better really, we obviously want to share code coverage with
x86 :)

Thanks,
Nick