Date: Sat, 22 Feb 2025 13:17:47 +0000
From: Catalin Marinas
To: Ryan Roberts
Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
 Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)",
 Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
References: <20250217140809.1702789-1-ryan.roberts@arm.com>
 <20250217140809.1702789-8-ryan.roberts@arm.com>
In-Reply-To: <20250217140809.1702789-8-ryan.roberts@arm.com>

On Mon, Feb 17, 2025 at 02:07:59PM +0000, Ryan Roberts wrote:
> __set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are
> used to write entries into pgtables. And they issue barriers (currently
> dsb and isb) to ensure that the written values are observed by the table
> walker prior to any program-order-future memory access to the mapped
> location.
>
> Over the years some of these functions have received optimizations: In
> particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
> table modifications") made it so that the barriers were only emitted for
> valid kernel mappings in set_pte() (now __set_pte_complete()). And
> commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
> in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
> barriers for valid mappings.

The assumption probably was that set_pmd/pud() are called a lot less
often than set_pte() as they cover larger address ranges (whether table
or leaf).

> set_p4d()/set_pgd() continue to emit the barriers unconditionally.

We probably missed them; they should have been handled the same way as
set_pmd().

> This is all very confusing to the casual observer; surely the rules
> should be invariant to the level? Let's change this so that every level
> consistently emits the barriers only when setting valid, non-user
> entries (both table and leaf).

Also see commit d0b7a302d58a ("Revert "arm64: Remove unnecessary ISBs
from set_{pte,pmd,pud}"") for why we added the ISBs back to the pmd/pud
accessors, and its last paragraph for why we are ok with the spurious
faults for PTEs.
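For background, after those two commits the pmd-level accessor ended up
with roughly this shape (simplified and from memory, not an exact quote
of the tree; the swapper pgdir special case is omitted):

static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
	WRITE_ONCE(*pmdp, pmd);

	if (pmd_valid(pmd)) {
		/*
		 * DSB: make the new entry visible to the table walker
		 * before any program-order-later memory access.
		 * ISB: make it visible to already-fetched instructions
		 * (see commit d0b7a302d58a above).
		 */
		dsb(ishst);
		isb();
	}
}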
For user mappings, the translation fault is routed through the usual
path that can handle mapping new entries, so I think we are fine. But
it's worth double-checking Will's comment (unless he only referred to
kernel table entries).

> It seems obvious that if it is ok to elide barriers for all but valid
> kernel mappings at pte level, it must also be ok to do this for leaf
> entries at other levels: If setting an entry to invalid, a TLB
> maintenance operation must surely follow to synchronise the TLB and
> this contains the required barriers.

Setting to invalid is fine indeed, handled by the TLB flushing code,
hence the pmd_valid() checks.

> If setting a valid user mapping, the previous mapping must have been
> invalid and there must have been a TLB maintenance operation (complete
> with barriers) to honour break-before-make.

That's not entirely true for change_protection(), for example, or the
fork() path, where we make the entries read-only from writeable without
BBM. We could improve these cases as well; I haven't looked in detail.
ptep_modify_prot_commit() via change_pte_range() can defer the barriers
to tlb_end_vma(). Something similar on the copy_present_ptes() path.

> So the worst that can happen is we take an extra fault (which will
> imply the DSB + ISB) and conclude that there is nothing to do. These
> are the arguments for doing this optimization at pte level and they
> also apply to leaf mappings at other levels.

It's worth clarifying Will's comment in the commit I mentioned above.

> For table entries, the same arguments hold: If unsetting a table
> entry, a TLB maintenance operation is required and this will emit the
> required barriers. If setting a table entry, the previous value must
> have been invalid and the table walker must already be able to observe
> that. Additionally the contents of the pgtable being pointed to in the
> newly set entry must be visible before the entry is written and this
> is enforced via smp_wmb() (dmb) in the pgtable allocation functions
> and in __split_huge_pmd_locked(). But this last part could never have
> been enforced by the barriers in set_pXd() because they occur after
> updating the entry. So ultimately, the worst that can happen by
> eliding these barriers for user table entries is an extra fault.
>
> I observe roughly the same number of page faults (107M) with and
> without this change when compiling the kernel on Apple M2.

That's microarch specific and highly dependent on timing, so you may
never see a difference.

> +static inline bool pmd_valid_not_user(pmd_t pmd)
> +{
> +	/*
> +	 * User-space table entries always have (PXN && !UXN). All other
> +	 * combinations indicate it's a table entry for kernel space.
> +	 * Valid-not-user leaf entries follow the same rules as
> +	 * pte_valid_not_user().
> +	 */
> +	if (pmd_table(pmd))
> +		return !((pmd_val(pmd) & (PMD_TABLE_PXN | PMD_TABLE_UXN)) == PMD_TABLE_PXN);
> +	return pte_valid_not_user(pmd_pte(pmd));
> +}

With the 128-bit pte format I think we lost the PXNTable/UXNTable bits,
though we have software bits if we need them. I just wonder whether it's
worth the hassle of skipping some barriers for user non-leaf entries.
Did you see any improvement in practice?

-- 
Catalin
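For reference, the helper quoted above is presumably intended to gate
the barriers in the accessors along these lines (a sketch of the intent
based on the quoted hunk, not the actual patch):

static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
	WRITE_ONCE(*pmdp, pmd);

	/*
	 * Invalid entries are synchronised by the TLB maintenance that
	 * follows them; valid user entries risk at most a spurious,
	 * handled translation fault. Only valid kernel entries (table
	 * or leaf) pay for the DSB + ISB.
	 */
	if (pmd_valid_not_user(pmd)) {
		dsb(ishst);
		isb();
	}
}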