From: Jann Horn
Date: Mon, 11 Jul 2022 17:04:02 +0200
Subject: Re: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas
To: Peter Zijlstra
Cc: Linus Torvalds, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Dave Airlie, Daniel Vetter, Andrew Morton,
    Guo Ren, David Miller
References: <20220708071802.751003711@infradead.org>
    <20220708071834.149930530@infradead.org>
On Sat, Jul 9, 2022 at 10:38 AM Peter Zijlstra wrote:
> On Fri, Jul 08, 2022 at 04:04:38PM +0200, Jann Horn wrote:
> > On Fri, Jul 8, 2022 at 9:19 AM Peter Zijlstra wrote:
> > > @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
> > >
> > >  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> > >  {
> > > -        if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> > > +        if (tlb->fullmm)
> > >                  return;
> >
> > Is this correct, or would there still be a race between MM teardown
> > (which sets ->fullmm, see exit_mmap()->tlb_gather_mmu_fullmm()) and
> > unmap_mapping_range()? My understanding is that ->fullmm only
> > guarantees a flush at tlb_finish_mmu(), but here we're trying to
> > ensure a flush before unlink_file_vma().
>
> fullmm is when the last user of the mm goes away, there should not be

(FWIW, there also seems to be an error path in write_ldt ->
free_ldt_pgtables -> tlb_gather_mmu_fullmm where ->fullmm can be set
for a TLB shootdown in a live process, but that's irrelevant for this
patch.)

> any races on the address space at that time. Also see the comment with
> tlb_gather_mmu_fullmm() and its users.

Ah, right, aside from the LDT weirdness, fullmm is only used in
exit_mmap, and at that point there can be no more parallel access to
the MM except for remote memory reaping (which is synchronized against
using mmap_write_lock()) and rmap walks...
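(For context, a stripped-down sketch of the exit_mmap() shape I mean --
simplified and version-dependent, not verbatim kernel code:

void exit_mmap(struct mm_struct *mm)
{
        struct mmu_gather tlb;
        struct vm_area_struct *vma = mm->mmap;

        /*
         * Last user of the mm is gone; the write lock keeps remote
         * reapers out while the address space is torn down.
         */
        mmap_write_lock(mm);

        tlb_gather_mmu_fullmm(&tlb, mm);        /* sets tlb->fullmm */
        unmap_vmas(&tlb, vma, 0, -1);           /* unlink + zap every VMA */
        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
        tlb_finish_mmu(&tlb);                   /* one flush at the very end */

        mmap_write_unlock(mm);
}

so deferring the flush all the way to tlb_finish_mmu() is fine there,
unlike in the munmap() path.)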
> Subject: mmu_gather: Force TLB-flush VM_PFNMAP|VM_MIXEDMAP vmas
> From: Peter Zijlstra
> Date: Thu Jul 7 11:51:16 CEST 2022
>
> Jann reported a race between munmap() and unmap_mapping_range(), where
> unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
> VMA; however munmap() will not yet have invalidated the TLBs.
>
> Therefore unmap_mapping_range() will complete while there are still
> (stale) TLB entries for the specified range.
>
> Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
>
> Reported-by: Jann Horn
> Signed-off-by: Peter Zijlstra (Intel)

Looks good to me.
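For reference, the force-flush described above could plausibly land in
the generic mmu_gather helpers along these lines. This is a rough
sketch only, assuming mmu_gather grows a vma_pfn bit recording
VM_PFNMAP|VM_MIXEDMAP; it is not necessarily the exact patch:

static inline void tlb_update_vma_flags(struct mmu_gather *tlb,
                                        struct vm_area_struct *vma)
{
        /* Remember whether this VMA maps raw PFNs / mixed pages. */
        tlb->vma_pfn  = !!(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP));
        tlb->vma_huge = is_vm_hugetlb_page(vma);
        tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
}

static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
        if (tlb->fullmm)
                return;

        /*
         * PFNMAP/MIXEDMAP ranges must have their TLB entries gone before
         * unlink_file_vma(), so their flush cannot be deferred even when
         * CONFIG_MMU_GATHER_MERGE_VMAS batches flushes across VMAs.
         */
        if (IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS) && !tlb->vma_pfn)
                return;

        /*
         * Flush and reset the range at the VMA boundary; this also keeps
         * the tracked range from growing across unrelated VMAs.
         */
        tlb_flush_mmu_tlbonly(tlb);
}

With something of that shape, munmap() of a pfnmap VMA gets its TLB
flush before unlink_file_vma() drops the rmap link, closing the window
against a concurrent unmap_mapping_range().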