Date: Wed, 22 Jan 2025 02:34:45 -0800 (PST)
From: Hugh Dickins
To: Roman Gushchin
cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
    Jann Horn, Peter Zijlstra, Will Deacon, "Aneesh Kumar K.V",
    Nick Piggin, Hugh Dickins, linux-arch@vger.kernel.org
Subject: Re: [PATCH] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
In-Reply-To: <20250121200929.188542-1-roman.gushchin@linux.dev>
References: <20250121200929.188542-1-roman.gushchin@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 21 Jan 2025, Roman Gushchin wrote:

> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced tlbflush to tlb_vma_end(),

Yes, I think that was a poor way of fixing the bug in question.

> which is required to avoid a
> race between munmap() and unmap_mapping_range(). However it added some
> overhead to other paths where tlb_vma_end() is used, but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).

Right.

>
> Fix this by moving the tlb flush out of tlb_end_vma() into
> free_pgtables(), somewhat similar to the stable version of the
> original commit: e.g. stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").

Something like this patch will be a good improvement: but not this
version of the patch.

Because the mmu_gather may be gathering across many vmas, but
tlb_start_vma(), well, its "tlb_update_vma_flags()", says

	tlb->vma_pfn = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));

so a following vma may reset vma_pfn too soon: more care is needed
(a small standalone model appended below, after the quoted patch,
illustrates this).

But probably vma_pfn should be reset to 0 somewhere, to avoid an
extra TLB flush in free_pgtables() when it has already been done.

Perhaps vma_pfn should follow the same pattern of initialization,
setting and clearing as cleared_ptes etc, instead of following
vma_huge and vma_exec. Perhaps, but it is something different, and
I've not yet checked enough to be sure: tlb.h is still a maze too
twisty for me.

Hugh (after power outage interrupted reply)

>
> Note, that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
>
> Suggested-by: Jann Horn
> Signed-off-by: Roman Gushchin
> Cc: Peter Zijlstra
> Cc: Will Deacon
> Cc: "Aneesh Kumar K.V"
> Cc: Andrew Morton
> Cc: Nick Piggin
> Cc: Hugh Dickins
> Cc: linux-arch@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/asm-generic/tlb.h | 16 ++++------------
>  mm/memory.c               |  7 +++++++
>  2 files changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 709830274b75..411daa96f57a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -549,22 +549,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
> -	if (tlb->fullmm)
> +	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
>  		return;
>
>  	/*
> -	 * VM_PFNMAP is more fragile because the core mm will not track the
> -	 * page mapcount -- there might not be page-frames for these PFNs after
> -	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> -	 * unmap_mapping_range() races.
> +	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> +	 * the ranges growing with the unused space between consecutive VMAs.
>  	 */
> -	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> -		/*
> -		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> -		 * the ranges growing with the unused space between consecutive VMAs.
> -		 */
> -		tlb_flush_mmu_tlbonly(tlb);
> -	}
> +	tlb_flush_mmu_tlbonly(tlb);
>  }
>
>  /*
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..2071415f68dd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
>  {
>  	struct unlink_vma_file_batch vb;
>
> +	/*
> +	 * Ensure we have no stale TLB entries by the time this mapping is
> +	 * removed from the rmap.
> +	 */
> +	if (tlb->vma_pfn && !tlb->fullmm)
> +		tlb_flush_mmu(tlb);
> +
>  	do {
>  		unsigned long addr = vma->vm_start;
>  		struct vm_area_struct *next;
> --
> 2.48.0.rc2.279.g1de40edade-goog
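
P.S. To make the vma_pfn concern above concrete, here is a small
standalone user-space model. It is only an illustration, not kernel
code: the toy_gather/toy_vma types, the start_vma_*() helpers and the
flag values are made up for this sketch; only the vma_pfn assignment
and the free_pgtables()-time check mirror lines quoted from the patch.
With a PFNMAP vma followed by an ordinary vma, the assignment variant
loses the flag and skips the flush, while the accumulate-then-clear
variant (roughly the cleared_ptes-style pattern mentioned above)
flushes exactly once.

/*
 * Standalone model, not kernel code: a toy mmu_gather with only the
 * fields relevant to this discussion.  Flag values are arbitrary.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOY_VM_PFNMAP	0x1
#define TOY_VM_MIXEDMAP	0x2

struct toy_vma {
	unsigned long vm_flags;
};

struct toy_gather {
	bool fullmm;
	bool vma_pfn;		/* "a PFNMAP/MIXEDMAP vma was gathered" */
	unsigned int flushes;	/* TLB flushes issued in this model */
};

/* Per-vma assignment, as in the tlb_update_vma_flags() line quoted above. */
static void start_vma_assign(struct toy_gather *tlb, struct toy_vma *vma)
{
	tlb->vma_pfn = !!(vma->vm_flags & (TOY_VM_PFNMAP | TOY_VM_MIXEDMAP));
}

/* One possible alternative: accumulate instead of overwrite. */
static void start_vma_accumulate(struct toy_gather *tlb, struct toy_vma *vma)
{
	if (vma->vm_flags & (TOY_VM_PFNMAP | TOY_VM_MIXEDMAP))
		tlb->vma_pfn = true;
}

/*
 * Model of the free_pgtables()-time check from the patch, plus the
 * "reset vma_pfn after flushing" idea from the reply above.
 */
static void free_pgtables_flush(struct toy_gather *tlb)
{
	if (tlb->vma_pfn && !tlb->fullmm) {
		tlb->flushes++;
		tlb->vma_pfn = false;
	}
}

int main(void)
{
	struct toy_vma pfnmap = { .vm_flags = TOY_VM_PFNMAP };
	struct toy_vma normal = { .vm_flags = 0 };
	struct toy_gather a = { 0 }, b = { 0 };

	/* Gather a PFNMAP vma followed by an ordinary vma. */
	start_vma_assign(&a, &pfnmap);
	start_vma_assign(&a, &normal);		/* clobbers vma_pfn */
	free_pgtables_flush(&a);		/* no flush: the flag was lost */

	start_vma_accumulate(&b, &pfnmap);
	start_vma_accumulate(&b, &normal);	/* flag survives */
	free_pgtables_flush(&b);		/* flushes once, then clears */

	printf("assignment model: %u flush(es); accumulate model: %u flush(es)\n",
	       a.flushes, b.flushes);
	return 0;
}

Build with any C compiler, e.g. "cc toy_vma_pfn.c && ./a.out" (the file
name is just a placeholder); it prints 0 flushes for the assignment
model and 1 for the accumulating one.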