Date: Wed, 7 Feb 2024 12:24:26 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Will Deacon
Cc: Matthew Wilcox, Nanyong Sun, Catalin Marinas, mike.kravetz@oracle.com,
	muchun.song@linux.dev, akpm@linux-foundation.org,
	anshuman.khandual@arm.com, wangkefeng.wang@huawei.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
References: <20240113094436.2506396-1-sunnanyong@huawei.com>
	<20240207111252.GA22167@willie-the-truck>
	<20240207121125.GA22234@willie-the-truck>
In-Reply-To: <20240207121125.GA22234@willie-the-truck>

On Wed, Feb 07, 2024 at 12:11:25PM +0000, Will Deacon wrote:
> On Wed, Feb 07, 2024 at 11:21:17AM +0000, Matthew Wilcox wrote:
> > On Wed, Feb 07, 2024 at 11:12:52AM +0000, Will Deacon wrote:
> > > On Sat, Jan 27, 2024 at 01:04:15PM +0800, Nanyong Sun wrote:
> > > > On 2024/1/26 2:06, Catalin Marinas wrote:
> > > > > On Sat, Jan 13, 2024 at 05:44:33PM +0800, Nanyong Sun wrote:
> > > > > > HVO was previously disabled on arm64 [1] due to the lack of the
> > > > > > necessary BBM (break-before-make) logic when changing page tables.
> > > > > > This set of patches fixes that by adding the necessary BBM sequence
> > > > > > when changing page tables, and by supporting vmemmap page fault
> > > > > > handling to fix up kernel address translation faults when the
> > > > > > vmemmap is concurrently accessed.
> > > > > I'm not keen on this approach. I'm not even sure it's safe. In the
> > > > > second patch, you take the init_mm.page_table_lock on the fault path,
> > > > > but are we sure it is unlocked when the fault is taken?
> > > > I think this situation is impossible. In the implementation of the
> > > > second patch, while the page table is being modified (the window in
> > > > which a page fault may occur), vmemmap_update_pte() already holds the
> > > > init_mm.page_table_lock, and only unlocks it once the page table
> > > > update is done. Another thread cannot hold the init_mm.page_table_lock
> > > > and also trigger a page fault at the same time. If I have missed
> > > > anything, please correct me. Thank you.
> > > It still strikes me as incredibly fragile to handle the fault, and
> > > trying to reason about all the users of 'struct page' is impossible.
> > > For example, can the fault happen from irq context?
> > The pte lock cannot be taken in irq context (which I think is what
> > you're asking?). While it is not possible to reason about all users of
> > struct page, we are somewhat relieved of that work by noting that this is
> > only for hugetlbfs, so we don't need to reason about slab, page tables,
> > netmem or zsmalloc.
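
For anyone following along, my reading of the scheme in patch 2 is roughly
the following (a sketch from the thread's description, not the actual code;
the signature and the TLB invalidation granularity are assumed):

	static void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
	{
		spin_lock(&init_mm.page_table_lock);

		/*
		 * Break: tear down the old translation. From here, any
		 * concurrent access to this part of the vmemmap faults,
		 * and the fault handler is expected to fix it up...
		 */
		pte_clear(&init_mm, addr, ptep);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

		/* ...until the new translation is installed (make). */
		set_pte(ptep, pte);

		spin_unlock(&init_mm.page_table_lock);
	}

The point being that the lock is held across the whole break/make window,
which is what makes any fault taken with the lock already held problematic.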
> My concern is that an interrupt handler tries to access a 'struct page'
> which faults due to another core splitting a pmd mapping for the vmemmap.
> In this case, I think we'll end up trying to resolve the fault from irq
> context, which will try to take the spinlock.

I think that (as per my comments on patch 2) a similar deadlock can happen
on RT even if the vmemmap is only accessed in regular process context, and
at minimum this needs better commentary and/or lockdep assertions (a sketch
of the interleaving I have in mind is at the end of this mail).

I'd also prefer that we drop this for now.

> Avoiding the fault would make this considerably more robust, and the
> architecture has introduced features to avoid break-before-make in some
> circumstances (see FEAT_BBM and its levels), so having this optimisation
> conditional on that would seem to be a better approach in my opinion.

FWIW, that's my position too.

Mark.
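
P.S. For concreteness, the recursive acquisition I'm worried about looks
something like this (single CPU; vmemmap_update_pte() as sketched earlier,
and vmemmap_fault_fixup() is a made-up name for patch 2's fixup path):

	vmemmap_update_pte()
	  spin_lock(&init_mm.page_table_lock);
	  pte_clear(&init_mm, addr, ptep);	/* vmemmap translation now gone */
	  <interrupt>
	    irq_handler()
	      ...accesses any 'struct page' in the affected range...
	      => kernel translation fault on the vmemmap
	         vmemmap_fault_fixup()
	           spin_lock(&init_mm.page_table_lock);	/* already held: deadlock */

At minimum, a lockdep_assert_not_held(&init_mm.page_table_lock) at the entry
of the fixup path would make this class of problem visible at runtime, and
on RT (where spinlock_t is a sleeping lock) the constraints are stricter
still.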
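
P.P.S. Gating the optimisation on the architectural feature could be plumbed
in along these lines (sketch only: the cpucap name below is made up for
illustration, and detecting/handling FEAT_BBM level 2 would need its own
patches):

	static bool vmemmap_update_needs_no_bbm(void)
	{
		/*
		 * With FEAT_BBM level 2, certain page table updates (e.g.
		 * changing block size) no longer require the intermediate
		 * invalid entry, so the fault window -- and the fault
		 * handler -- can be avoided entirely.
		 */
		return cpus_have_final_cap(ARM64_HAS_BBM_LVL2);	/* hypothetical cap */
	}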