From: Roman Gushchin <roman.gushchin@linux.dev>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Roman Gushchin, Jann Horn,
	Peter Zijlstra, Will Deacon, "Aneesh Kumar K.V", Nick Piggin,
	Hugh Dickins, linux-arch@vger.kernel.org
Subject: [PATCH v3] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
Date: Thu, 23 Jan 2025 16:43:58 +0000
Message-ID: <20250123164358.2384447-1-roman.gushchin@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas") added
a forced tlb flush to tlb_end_vma(), which is required to avoid a race
between munmap() and unmap_mapping_range(). However, it also added some
overhead to other paths where tlb_end_vma() is used but vmas are not
removed, e.g. madvise(MADV_DONTNEED).

Fix this by moving the tlb flush out of tlb_end_vma() and into
free_pgtables(), somewhat similar to the stable version of the original
commit, e.g. stable commit 895428ee124a ("mm: Force TLB flush for PFNMAP
mappings before unlink_file_vma()").

Note that if tlb->fullmm is set, no flush is required, as the whole mm
is about to be destroyed.

---
v3:
  - added initialization of vma_pfn in __tlb_reset_range() (by Hugh D.)
v2:
  - moved vma_pfn flag handling into tlb.h (by Peter Z.)
  - added comments (by Peter Z.)
  - fixed the vma_pfn flag setting (by Hugh D.)

Suggested-by: Jann Horn
Signed-off-by: Roman Gushchin
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Hugh Dickins
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/asm-generic/tlb.h | 49 ++++++++++++++++++++++++++++-----------
 mm/memory.c               |  7 ++++++
 2 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 709830274b75..cdc95b69b91d 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -380,8 +380,16 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 	tlb->cleared_pmds = 0;
 	tlb->cleared_puds = 0;
 	tlb->cleared_p4ds = 0;
+
+	/*
+	 * vma_pfn can only be set in tlb_start_vma(), so let's
+	 * initialize it here. Also a tlb flush issued by
+	 * tlb_flush_mmu_pfnmap() will cancel the vma_pfn state,
+	 * so that unnecessary subsequent flushes are avoided.
+	 */
+	tlb->vma_pfn = 0;
 	/*
-	 * Do not reset mmu_gather::vma_* fields here, we do not
+	 * Do not reset other mmu_gather::vma_* fields here, we do not
 	 * call into tlb_start_vma() again to set them if there is an
 	 * intermediate flush.
 	 */
@@ -449,7 +457,14 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
 	 */
 	tlb->vma_huge = is_vm_hugetlb_page(vma);
 	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
-	tlb->vma_pfn  = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
+
+	/*
+	 * vma_pfn is checked and cleared by tlb_flush_mmu_pfnmap()
+	 * for a set of vma's, so it should be set if at least one vma
+	 * has VM_PFNMAP or VM_MIXEDMAP flags set.
+	 */
+	if (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))
+		tlb->vma_pfn = 1;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -466,6 +481,20 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 	__tlb_reset_range(tlb);
 }
 
+static inline void tlb_flush_mmu_pfnmap(struct mmu_gather *tlb)
+{
+	/*
+	 * VM_PFNMAP and VM_MIXEDMAP maps are fragile because the core mm
+	 * doesn't track the page mapcount -- there might not be page-frames
+	 * for these PFNs after all. Force flush TLBs for such ranges to avoid
+	 * munmap() vs unmap_mapping_range() races.
+	 * Ensure we have no stale TLB entries by the time this mapping is
+	 * removed from the rmap.
+	 */
+	if (unlikely(!tlb->fullmm && tlb->vma_pfn))
+		tlb_flush_mmu_tlbonly(tlb);
+}
+
 static inline void tlb_remove_page_size(struct mmu_gather *tlb,
 					struct page *page, int page_size)
 {
@@ -549,22 +578,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	if (tlb->fullmm)
+	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
 		return;
 
 	/*
-	 * VM_PFNMAP is more fragile because the core mm will not track the
-	 * page mapcount -- there might not be page-frames for these PFNs after
-	 * all. Force flush TLBs for such ranges to avoid munmap() vs
-	 * unmap_mapping_range() races.
+	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+	 * the ranges growing with the unused space between consecutive VMAs.
 	 */
-	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
-		/*
-		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
-		 * the ranges growing with the unused space between consecutive VMAs.
-		 */
-		tlb_flush_mmu_tlbonly(tlb);
-	}
+	tlb_flush_mmu_tlbonly(tlb);
 }
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 398c031be9ba..c2a9effb2e32 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
 {
 	struct unlink_vma_file_batch vb;
 
+	/*
+	 * VM_PFNMAP and VM_MIXEDMAP maps require a special handling here:
+	 * force flush TLBs for such ranges to avoid munmap() vs
+	 * unmap_mapping_range() races.
+	 */
+	tlb_flush_mmu_pfnmap(tlb);
+
 	do {
 		unsigned long addr = vma->vm_start;
 		struct vm_area_struct *next;
-- 
2.48.1.262.g85cc9f2d1e-goog
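
An illustrative userspace sketch, not part of the patch: it exercises the
teardown path this change affects, from a process that maps device memory
and then unmaps it. The device node /dev/pfnmap_demo is hypothetical and
stands in for any driver that backs mmap() with remap_pfn_range(), i.e. a
VM_PFNMAP vma. With this patch, the forced TLB flush for such a mapping is
done in free_pgtables() via tlb_flush_mmu_pfnmap() during munmap(), while
zap-only paths that keep the vma in place no longer pay it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;

	/* Hypothetical device whose ->mmap() uses remap_pfn_range(). */
	int fd = open("/dev/pfnmap_demo", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* ... use the PFNMAP mapping ... */

	/*
	 * munmap() removes the vma: unmap_vmas() zaps the range, then
	 * free_pgtables() issues the forced TLB flush (via
	 * tlb_flush_mmu_pfnmap()) before the vma is unlinked from the rmap.
	 */
	munmap(p, len);
	close(fd);
	return 0;
}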