From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 15 Apr 2024 10:28:51 +0100
Subject: Re: [RFC PATCH v1 0/4] Reduce cost of ptep_get_lockless on arm64
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
 Andrew Morton, Muchun Song
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <70a36403-aefd-4311-b612-84e602465689@redhat.com>
On 12/04/2024 21:16, David Hildenbrand wrote:
>>
>> Yes agreed - 2 types; "lockless walkers that later recheck under PTL" and
>> "lockless walkers that never take the PTL".
>>
>> Detail: the part about disabling interrupts and TLB flush syncing is
>> arch-specific. That's not how arm64 does it (the hw broadcasts the TLBIs).
>> But you make that clear further down.
>
> Yes, but disabling interrupts is also required for RCU-freeing of page
> tables such that they can be walked safely. The TLB flush IPI is
> arch-specific and is indeed there to sync against PTE invalidation (before
> generic GUP-fast).
>
[...]
>
>>>>
>>>> Could it be this easy? My head is hurting...
>>>
>>> I think what has to happen is:
>>>
>>> (1) ptep_get_lockless() must return the same value as ptep_get() as long
>>> as there are no races. No removal/addition of access/dirty bits etc.
>>
>> Today's arm64 ptep_get() guarantees this.
>>
>>>
>>> (2) Lockless page table walkers that later verify under the PTL can
>>> handle serious "garbage PTEs". This is our page fault handler.
>>
>> This isn't really a property of ptep_get_lockless(); it's a statement
>> about a class of users. I agree with the statement.
>
> Yes. That's a requirement for the user of ptep_get_lockless(), such as page
> fault handlers. Well, mostly "not GUP".
>
>>
>>>
>>> (3) Lockless page table walkers that cannot verify under the PTL cannot
>>> handle arbitrary garbage PTEs. This is GUP-fast. Two options:
>>>
>>> (3a) ptep_get_lockless() can atomically read the PTE: We re-check later
>>> if the atomically-read PTE is still unchanged (without the PTL). No IPI
>>> for TLB flushes required. This is the common case. HW might concurrently
>>> set access/dirty bits, so we can race with that. But we don't read
>>> garbage.
>>
>> Today's arm64 ptep_get() cannot guarantee that the access/dirty bits are
>> consistent for contpte ptes. That's the bit that complicates the current
>> ptep_get_lockless() implementation.
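
(New aside, to make "the bit that complicates" concrete for the wider cc:
contpte_ptep_get_lockless() has to gather access/dirty from every PTE in the
contpte block, and retry if the block changes underneath it. From memory it
is shaped roughly like the below - a paraphrase, so don't hold me to the
details:)

pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
{
	pgprot_t orig_prot, prot;
	pte_t orig_pte, pte;
	unsigned long pfn;
	pte_t *ptep;
	int i;

retry:
	orig_pte = __ptep_get(orig_ptep);
	if (!pte_valid_cont(orig_pte))
		return orig_pte;

	orig_prot = pte_pgprot(pte_mkold(pte_mkclean(orig_pte)));
	ptep = contpte_align_down(orig_ptep);
	pfn = pte_pfn(orig_pte) - (orig_ptep - ptep);

	for (i = 0; i < CONT_PTES; i++, ptep++, pfn++) {
		pte = __ptep_get(ptep);
		prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));

		/* The block mutated under us; start over. */
		if (!pte_valid_cont(pte) || pte_pfn(pte) != pfn ||
		    pgprot_val(prot) != pgprot_val(orig_prot))
			goto retry;

		/*
		 * Fold access/dirty from all CONT_PTES entries into the
		 * result; this is the part that can't be done atomically.
		 */
		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);
		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}

	return orig_pte;
}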
>>
>> But the point I was trying to make is that GUP-fast does not actually care
>> about *all* the fields being consistent (e.g. access/dirty). So we could
>> spec ptep_get_lockless() to say that "all fields in the returned pte are
>> guaranteed to be self-consistent except for access and dirty information,
>> which may be inconsistent if a racing modification occurred".
>
> We *might* have KVM in the future want to check that a PTE is dirty, such
> that we can only allow dirty PTEs to be writable in a secondary MMU. That's
> not there yet, but it is one thing I was discussing on the list recently.
> Buried in:
>
> https://lkml.kernel.org/r/20240320005024.3216282-1-seanjc@google.com
>
> We wouldn't care about racing modifications, as long as MMU notifiers will
> properly notify us when the PTE would lose its dirty bits.
>
> But getting false-positive dirty bits would be problematic.
>
>>
>> This could mean that the access/dirty state *does* change for a given page
>> while GUP-fast is walking it, but GUP-fast *doesn't* detect that change. I
>> *think* that failing to detect this is benign.
>
> I mean, HW could just set the dirty/access bit immediately after the check.
> So if HW concurrently sets the bit and we don't observe that change when we
> recheck, I think that would be perfectly fine.

Yes indeed; that's my point - GUP-fast doesn't care about access/dirty (or
soft-dirty or uffd-wp). But if you don't want to change the
ptep_get_lockless() spec to explicitly allow this (because you have the KVM
use case where false-positive dirty is problematic), then I think we are
stuck with ptep_get_lockless() as implemented for arm64 today.

>
>>
>> Aside: GUP-fast currently rechecks the pte originally obtained with
>> ptep_get_lockless(), using ptep_get(). Is that correct? ptep_get() must
>> conform to (1), so either it returns the same pte, or it returns a
>> different pte, or garbage. But that garbage could just happen to be the
>> same as the originally obtained pte. So in that case, it would have a
>> false match. I think this needs to be changed to ptep_get_lockless()?
>
> I *think* it's fine, because the case where it would make a difference
> (x86-PAE) still requires the TLB flush IPI to sync against PTE changes, and
> that check would likely be wrong in one way or the other. So for x86-PAE,
> that check is just moot either way.
>
> That's my theory, at least.
>
> (but this "let's fake-read atomically although we don't, but let's act like
> we could in some specific circumstances" is really hard to reason about)
>
> I was wondering a while ago if we are missing a memory barrier before the
> check, but I think the one from obtaining the page reference gets the job
> done (at least that's what I remember).
>
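
P.S. For anyone who wants to see the recheck we keep referring to, it lives
in gup_pte_range() in mm/gup.c. From memory it is roughly the below (heavily
trimmed, so take the details with a pinch of salt):

	pte_t pte = ptep_get_lockless(ptep);
	...
	/* Speculatively pin the page before validating the PTE. */
	folio = try_grab_folio(page, 1, flags);
	if (!folio)
		goto pte_unmap;

	/*
	 * Recheck: if the PTE changed under us (e.g. zapped and the page
	 * reused), drop the pin and bail. Note it compares the full pte
	 * value, so a racing HW access/dirty update also (benignly) fails
	 * the check. The barrier implied by grabbing the folio reference
	 * is what orders the pin against this re-read.
	 */
	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}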