From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yu Zhao <yuzhao@google.com>
Date: Mon, 6 Jan 2025 23:07:36 -0700
Subject: Re: [PATCH v2 0/6] mm/arm64: re-enable HVO
To: Will Deacon
Cc: Andrew Morton, Catalin Marinas, Marc Zyngier, Muchun Song,
 Thomas Gleixner, Douglas Anderson, Mark Rutland, Nanyong Sun,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
In-Reply-To: <20241128142028.GA3506@willie-the-truck>
References: <20241107202033.2721681-1-yuzhao@google.com>
 <20241125152203.GA954@willie-the-truck>
 <20241128142028.GA3506@willie-the-truck>
Content-Type: text/plain; charset="UTF-8"

On Thu, Nov 28, 2024 at 7:20 AM Will Deacon wrote:
>
> On Mon, Nov 25, 2024 at 03:22:47PM -0700, Yu Zhao wrote:
> > On Mon, Nov 25, 2024 at 8:22 AM Will Deacon wrote:
> > > On Thu, Nov 07, 2024 at 01:20:27PM -0700, Yu Zhao wrote:
> > > > HVO was disabled by commit 060a2c92d1b6 ("arm64: mm: hugetlb: Disable
> > > > HUGETLB_PAGE_OPTIMIZE_VMEMMAP") due to the following reason:
> > > >
> > > >   This is deemed UNPREDICTABLE by the Arm architecture without a
> > > >   break-before-make sequence (make the PTE invalid, TLBI, write the
> > > >   new valid PTE). However, such sequence is not possible since the
> > > >   vmemmap may be concurrently accessed by the kernel.
> > > >
> > > > This series presents one of the previously discussed approaches to
> > > > re-enable HugeTLB Vmemmap Optimization (HVO) on arm64.
> > >
> > > Before jumping into the new mechanisms here, I'd really like to
> > > understand how the current code is intended to work in the relatively
> > > simple case where the vmemmap is page-mapped to start with (i.e. when
> > > we don't need to worry about block-splitting).
> > >
> > > In that case, who are the concurrent users of the vmemmap that we need
> > > to worry about?
> >
> > Any speculative PFN walkers who either only read `struct page[]` or
> > attempt to increment page->_refcount if it's not zero.
> >
> > > Is it solely speculative references via
> > > page_ref_add_unless() or are there others?
> >
> > page_ref_add_unless() needs to be successful before writes can follow;
> > speculative reads are always allowed.
> >
> > > Looking at page_ref_add_unless(), what serialises that against
> > > __hugetlb_vmemmap_restore_folio()? I see there's a synchronize_rcu()
> > > call in the latter, but what prevents an RCU reader coming in
> > > immediately after that?
> >
> > In page_ref_add_unless(), the condition `!page_is_fake_head(page) &&
> > page_ref_count(page)` returns false before a PTE becomes RO.
> >
> > For HVO, i.e., a PTE being switched from RW to RO, page_ref_count() is
> > frozen (remains zero), followed by synchronize_rcu(). After the
> > switch, page_is_fake_head() is true, and it appears before
> > page_ref_count() is unfrozen (becomes non-zero), so the condition
> > remains false.
> >
> > For de-HVO, i.e., a PTE being switched from RO to RW, page_ref_count()
> > again is frozen, followed by synchronize_rcu(). Only this time
> > page_is_fake_head() is false after the switch, and again it appears
> > before page_ref_count() is unfrozen. To answer your question, readers
> > coming in immediately after that won't be able to see a non-zero
> > page_ref_count() before they see page_is_fake_head() being false. IOW,
> > regarding whether it is RW, the condition can be a false negative but
> > never a false positive.
>
> Thanks, but I'm still not seeing how this works. When you say "appears
> before", I don't see any memory barriers in page_ref_add_unless() that
> enforce that e.g. the refcount and the flags are checked in order and

Right, there is a missing barrier in page_ref_add_unless(), and the
order of those two checks, i.e., page_is_fake_head() and then
page_ref_count(), is wrong. I posted a fix here [1].

[1] https://lore.kernel.org/20250107043505.351925-1-yuzhao@google.com/
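To make the ordering constraint concrete, here is a rough reader-side
sketch (illustrative only, not the actual code in [1]; the barrier
choice and its exact placement are assumptions on my part):

  static inline bool page_ref_add_unless(struct page *page, int nr, int u)
  {
          bool ret = false;

          rcu_read_lock();
          /*
           * Read the refcount first, then the fake-head signature. The
           * read barrier pairs with the remapping side, which publishes
           * the fake-head signature before unfreezing the refcount.
           */
          if (page_ref_count(page) != u) {
                  smp_rmb();
                  /* back off from (fake) heads of remapped, RO vmemmap */
                  if (!page_is_fake_head(page))
                          ret = atomic_add_unless(&page->_refcount, nr, u);
          }
          rcu_read_unlock();

          return ret;
  }

With that ordering, a walker that observes an unfrozen (non-zero)
refcount after HVO is guaranteed to also observe page_is_fake_head()
being true, so it backs off instead of writing to the read-only vmemmap.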
> I can't see how the synchronize_rcu() helps either as it's called really
> early (I think that's just there for the static key).

That fix makes sure no speculative PFN walkers will try to modify
page->_refcount during the transition from the counter being frozen to
modifiable. synchronize_rcu() makes sure something similar won't happen
during the transition from the counter being modifiable to frozen.

> If page_is_fake_head() is reliable, then I'm thinking we could use that
> to steer page_ref_add_unless() away from the tail pages during the
> remapping operations and it would be fine to use a break-before-make
> sequence.

The struct page pointer passed into page_is_fake_head() would become
inaccessible during BBM, so it would just crash there. That's why I
think we either have to handle kernel page faults or pause other CPUs.

(page_is_fake_head() works by detecting whether it's accessing the
original struct page or a remapped, read-only one; the latter has a
signature for it to tell.)
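For context, the signature check is roughly what the kernel's
page_fixed_fake_head() helper does; a simplified sketch follows, and
the details may differ from the exact upstream code:

  static __always_inline const struct page *
  page_fixed_fake_head(const struct page *page)
  {
          if (!hugetlb_optimize_vmemmap_enabled())
                  return page;

          /*
           * These loads go through the vmemmap mapping of @page itself,
           * which is why they fault if the covering entry is invalid,
           * e.g. during a break-before-make window.
           */
          if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
              test_bit(PG_head, &page->flags)) {
                  unsigned long head = READ_ONCE(page[1].compound_head);

                  /* remapped, read-only vmemmap carries this signature */
                  if (likely(head & 1))
                          return (const struct page *)(head - 1);
          }

          return page;
  }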