Date: Fri, 8 Jul 2022 14:36:06 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: Jann Horn, Linus Torvalds, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Dave Airlie, Daniel Vetter, Andrew Morton,
	Guo Ren, David Miller
Subject: Re: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas
Message-ID: <20220708133605.GE5989@willie-the-truck>
References: <20220708071802.751003711@infradead.org>
 <20220708071834.149930530@infradead.org>
In-Reply-To: <20220708071834.149930530@infradead.org>

On Fri, Jul 08, 2022 at 09:18:06AM +0200, Peter Zijlstra wrote:
> Jann reported a race between munmap() and unmap_mapping_range(), where
> unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
> VMA; however munmap() will not yet have invalidated the TLBs.
> 
> Therefore unmap_mapping_range() will complete while there are still
> (stale) TLB entries for the specified range.
> 
> Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
> 
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  include/asm-generic/tlb.h | 33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)
> 
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -303,6 +303,7 @@ struct mmu_gather {
>  	 */
>  	unsigned int	vma_exec : 1;
>  	unsigned int	vma_huge : 1;
> +	unsigned int	vma_pfn : 1;
>  
>  	unsigned int	batch_count;
>  
> @@ -373,7 +374,6 @@ tlb_update_vma_flags(struct mmu_gather *
>  #else /* CONFIG_MMU_GATHER_NO_RANGE */
>  
>  #ifndef tlb_flush
> -
>  /*
>   * When an architecture does not provide its own tlb_flush() implementation
>   * but does have a reasonably efficient flush_vma_range() implementation
> @@ -393,6 +393,9 @@ static inline void tlb_flush(struct mmu_
>  		flush_tlb_range(&vma, tlb->start, tlb->end);
>  	}
>  }
> +#endif
> +
> +#endif /* CONFIG_MMU_GATHER_NO_RANGE */
>  
>  static inline void
>  tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
> @@ -410,17 +413,9 @@ tlb_update_vma_flags(struct mmu_gather *
>  	 */
>  	tlb->vma_huge = is_vm_hugetlb_page(vma);
>  	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
> +	tlb->vma_pfn = !!(vma->vm_flags & VM_PFNMAP);
>  }
>  
> -#else
> -
> -static inline void
> -tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
> -
> -#endif
> -
> -#endif /* CONFIG_MMU_GATHER_NO_RANGE */
> -
>  static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
>  {
>  	/*
> @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
>  
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
> -	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> +	if (tlb->fullmm)
>  		return;
>  
>  	/*
> -	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> -	 * the ranges growing with the unused space between consecutive VMAs,
> -	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
> -	 * this.
> +	 * VM_PFNMAP is more fragile because the core mm will not track the
> +	 * page mapcount -- there might not be page-frames for these PFNs after
> +	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> +	 * unmap_mapping_range() races.
>  	 */
> -	tlb_flush_mmu_tlbonly(tlb);
> +	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> +		/*
> +		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> +		 * the ranges growing with the unused space between consecutive VMAs.
> +		 */
> +		tlb_flush_mmu_tlbonly(tlb);
> +	}

We already have the vma here, so I'm not sure how much the new 'vma_pfn'
field really buys us over checking the 'vm_flags', but perhaps that's
cleanup for another day.

Acked-by: Will Deacon

Will
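
As a minimal, untested sketch of the alternative mentioned above (all
identifiers are taken from the quoted patch; this illustrates the review
comment, not the code that was merged), tlb_end_vma() could test the vma's
flags directly instead of caching a vma_pfn bit in the mmu_gather, since the
vma is still in scope at this point:

static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;

	/*
	 * Check VM_PFNMAP on the vma itself rather than via a cached
	 * mmu_gather bit; such ranges must be flushed before the unmap
	 * completes so that a concurrent unmap_mapping_range() cannot
	 * return while stale TLB entries remain.
	 */
	if ((vma->vm_flags & VM_PFNMAP) ||
	    !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
		/* Flush and reset the gathered range at the VMA boundary. */
		tlb_flush_mmu_tlbonly(tlb);
	}
}

This would keep the same forced-flush behaviour for VM_PFNMAP ranges while
dropping the extra mmu_gather field; whether that is actually an improvement
is exactly the cleanup question raised above.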