From: Jann Horn <jannh@google.com>
Date: Fri, 8 Jul 2022 16:04:38 +0200
Subject: Re: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas
To: Peter Zijlstra
Cc: Linus Torvalds, Will Deacon, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Dave Airlie, Daniel Vetter, Andrew Morton, Guo Ren, David Miller
In-Reply-To: <20220708071834.149930530@infradead.org>
References: <20220708071802.751003711@infradead.org> <20220708071834.149930530@infradead.org>

On Fri, Jul 8, 2022 at 9:19 AM Peter Zijlstra wrote:
> Jann reported a race between munmap() and unmap_mapping_range(), where
> unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
> VMA; however munmap() will not yet have invalidated the TLBs.
>
> Therefore unmap_mapping_range() will complete while there are still
> (stale) TLB entries for the specified range.
>
> Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  include/asm-generic/tlb.h |   33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)
>
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -303,6 +303,7 @@ struct mmu_gather {
>  	 */
>  	unsigned int vma_exec : 1;
>  	unsigned int vma_huge : 1;
> +	unsigned int vma_pfn  : 1;
>
>  	unsigned int batch_count;
>
> @@ -373,7 +374,6 @@ tlb_update_vma_flags(struct mmu_gather *
>  #else /* CONFIG_MMU_GATHER_NO_RANGE */
>
>  #ifndef tlb_flush
> -
>  /*
>   * When an architecture does not provide its own tlb_flush() implementation
>   * but does have a reasonably efficient flush_vma_range() implementation
> @@ -393,6 +393,9 @@ static inline void tlb_flush(struct mmu_
>  		flush_tlb_range(&vma, tlb->start, tlb->end);
>  	}
>  }
> +#endif
> +
> +#endif /* CONFIG_MMU_GATHER_NO_RANGE */
>
>  static inline void
>  tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
> @@ -410,17 +413,9 @@ tlb_update_vma_flags(struct mmu_gather *
>  	 */
>  	tlb->vma_huge = is_vm_hugetlb_page(vma);
>  	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
> +	tlb->vma_pfn  = !!(vma->vm_flags & VM_PFNMAP);

We should probably handle VM_MIXEDMAP the same way as VM_PFNMAP here,
I think? Conceptually I think the same issue can happen with
device-owned pages that aren't managed by the kernel's page allocator,
and for those, VM_MIXEDMAP is the same as VM_PFNMAP.
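Something like this is what I mean, i.e. also treat VM_MIXEDMAP
mappings as "fragile" (untested one-liner, keeping the "vma_pfn" name
just for illustration):

	tlb->vma_pfn  = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));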
> }
>
> -#else
> -
> -static inline void
> -tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
> -
> -#endif
> -
> -#endif /* CONFIG_MMU_GATHER_NO_RANGE */
> -
>  static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
>  {
>  	/*
> @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
>
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
> -	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> +	if (tlb->fullmm)
>  		return;

Is this correct, or would there still be a race between MM teardown
(which sets ->fullmm, see exit_mmap()->tlb_gather_mmu_fullmm()) and
unmap_mapping_range()? My understanding is that ->fullmm only
guarantees a flush at tlb_finish_mmu(), but here we're trying to
ensure a flush before unlink_file_vma(). (I've sketched the ordering
I'm worried about at the bottom of this mail.)

>  	/*
> -	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> -	 * the ranges growing with the unused space between consecutive VMAs,
> -	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
> -	 * this.
> +	 * VM_PFNMAP is more fragile because the core mm will not track the
> +	 * page mapcount -- there might not be page-frames for these PFNs
> +	 * after all. Force flush TLBs for such ranges to avoid munmap() vs
> +	 * unmap_mapping_range() races.

Maybe add: "We do *not* guarantee that after munmap() has passed
through tlb_end_vma(), there are no more stale TLB entries for this
VMA; there could be a parallel PTE-zapping operation that has zapped
PTEs before we looked at them but hasn't done the corresponding TLB
flush yet. However, such a parallel zap can't be done through the
mm_struct (we've unlinked the VMA), so it would have to be done under
the ->i_mmap_rwsem in read mode, which we synchronize against in
unlink_file_vma()."

I'm not convinced it's particularly nice to do a flush in
tlb_end_vma() when we can't make guarantees about the TLB state wrt
parallel invalidations, and when we only really care about having a
flush between unmap_vmas() and free_pgtables(), but I guess it works?

>  	 */
> -	tlb_flush_mmu_tlbonly(tlb);
> +	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> +		/*
> +		 * Do a TLB flush and reset the range at VMA boundaries; this
> +		 * avoids the ranges growing with the unused space between
> +		 * consecutive VMAs.
> +		 */
> +		tlb_flush_mmu_tlbonly(tlb);
> +	}
> }
>
> /*
>
>
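Re the ->fullmm question above: this is roughly the interleaving I'm
worried about, assuming I'm reading the exit_mmap() path correctly
(just a sketch, not verified against this series):

	CPU 0 (exit_mmap)               CPU 1 (unmap_mapping_range)
	=================               ===========================
	tlb_gather_mmu_fullmm()
	unmap_vmas()
	  [PTEs cleared, TLB flush
	   deferred because ->fullmm]
	free_pgtables()
	  unlink_file_vma()
	                                takes the i_mmap lock, finds no
	                                PTEs (or no VMA) -> no-op,
	                                returns; caller may now reuse
	                                the underlying PFNs
	[CPU 0's TLB may still hold
	 stale entries for those PFNs]
	tlb_finish_mmu()
	  [flush only happens here]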