From: Yu Zhao <yuzhao@google.com>
To: Will Deacon <will@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	 Marc Zyngier <maz@kernel.org>,
	Muchun Song <muchun.song@linux.dev>,
	 Thomas Gleixner <tglx@linutronix.de>,
	Douglas Anderson <dianders@chromium.org>,
	 Mark Rutland <mark.rutland@arm.com>,
	Nanyong Sun <sunnanyong@huawei.com>,
	 linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,  linux-mm@kvack.org
Subject: Re: [PATCH v2 0/6] mm/arm64: re-enable HVO
Date: Mon, 25 Nov 2024 15:22:47 -0700
Message-ID: <CAOUHufYUMYcf=uF7=2zj-PsGXePCDdsRHJGa8t-e-k9VUvYyQQ@mail.gmail.com>
In-Reply-To: <20241125152203.GA954@willie-the-truck>

On Mon, Nov 25, 2024 at 8:22 AM Will Deacon <will@kernel.org> wrote:
>
> Hi Yu Zhao,
>
> On Thu, Nov 07, 2024 at 01:20:27PM -0700, Yu Zhao wrote:
> > HVO was disabled by commit 060a2c92d1b6 ("arm64: mm: hugetlb: Disable
> > HUGETLB_PAGE_OPTIMIZE_VMEMMAP") due to the following reason:
> >
> >   This is deemed UNPREDICTABLE by the Arm architecture without a
> >   break-before-make sequence (make the PTE invalid, TLBI, write the
> >   new valid PTE). However, such sequence is not possible since the
> >   vmemmap may be concurrently accessed by the kernel.
> >
> > This series presents one of the previously discussed approaches to
> > re-enable HugeTLB Vmemmap Optimization (HVO) on arm64.
>
> Before jumping into the new mechanisms here, I'd really like to
> understand how the current code is intended to work in the relatively
> simple case where the vmemmap is page-mapped to start with (i.e. when we
> don't need to worry about block-splitting).
>
> In that case, who are the concurrent users of the vmemmap that we need
> to worry about?

Any speculative PFN walkers, which either only read `struct page[]` or
attempt to increment page->_refcount if it is not zero.
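
For example, the common pattern looks roughly like this (a sketch;
start_pfn, end_pfn and examine_page() are made-up placeholders, and
get_page_unless_zero() is a wrapper around page_ref_add_unless()):

  unsigned long pfn;

  for (pfn = start_pfn; pfn < end_pfn; pfn++) {
          struct page *page = pfn_to_page(pfn);

          /* Speculative: the page may be freed or remapped at any time. */
          if (!get_page_unless_zero(page))
                  continue;       /* _refcount was zero; skip this PFN */

          /* A reference is now held, so writing to the page is safe. */
          examine_page(page);     /* hypothetical per-page work */
          put_page(page);
  }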

> Is it solely speculative references via
> page_ref_add_unless() or are there others?

page_ref_add_unless() must succeed before writes can follow;
speculative reads are always allowed.

> Looking at page_ref_add_unless(), what serialises that against
> __hugetlb_vmemmap_restore_folio()? I see there's a synchronize_rcu()
> call in the latter, but what prevents an RCU reader coming in
> immediately after that?

In page_ref_add_unless(), the condition `!page_is_fake_head(page) &&
page_ref_count(page)` evaluates to false before a PTE becomes RO.
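
For reference, roughly what that check looks like (a simplified
sketch, with the tracepoint hook omitted):

  static inline bool page_ref_add_unless(struct page *page, int nr, int u)
  {
          bool ret = false;

          rcu_read_lock();
          /* avoid writing to the vmemmap while it is being remapped */
          if (!page_is_fake_head(page) && page_ref_count(page) != u)
                  ret = atomic_add_unless(&page->_refcount, nr, u);
          rcu_read_unlock();

          return ret;
  }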

For HVO, i.e., a PTE being switched from RW to RO, page_ref_count() is
frozen (remains zero), followed by synchronize_rcu(). After the
switch, page_is_fake_head() is true, and that becomes visible before
page_ref_count() is unfrozen (becomes non-zero), so the condition
remains false.

For de-HVO, i.e., a PTE being switched from RO to RW, page_ref_count()
again is frozen, followed by synchronize_rcu(). Only this time
page_is_fake_head() is false after the switch, and again that becomes
visible before page_ref_count() is unfrozen. To answer your question:
readers coming in immediately after that cannot see a non-zero
page_ref_count() before they see page_is_fake_head() being false. IOW,
regarding whether the mapping is RW, the condition can yield a false
negative but never a false positive.
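
In pseudocode, the writer-side ordering is the same in both
directions (freeze_page_refs(), remap_vmemmap() and
unfreeze_page_refs() are made-up names for the steps described above,
not the actual helpers):

  freeze_page_refs(head);    /* page_ref_count() pinned at zero */
  synchronize_rcu();         /* wait out readers that saw the old state */
  remap_vmemmap(head);       /* flips page_is_fake_head() and RW<->RO */
  unfreeze_page_refs(head);  /* page_ref_count() becomes non-zero */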

> Even if we resolve the BBM issues, we still need to get the
> synchronisation right so that we don't e.g. attempt a cmpxchg() to a
> read-only mapping, as the CAS instruction requires write permission on
> arm64 even if the comparison ultimately fails.

Correct. This applies to x86 as well, i.e., CAS on RO memory crashes
the kernel even if the comparison would fail.
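
The same can be demonstrated from userspace (a minimal sketch; on
arm64 this assumes LSE CAS, e.g. built with -march=armv8.1-a, since
with LL/SC a failing comparison skips the store-exclusive):

  #include <stdint.h>
  #include <sys/mman.h>

  int main(void)
  {
          /* A read-only anonymous page, initially zero-filled. */
          uint64_t *p = mmap(NULL, 4096, PROT_READ,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          uint64_t expected = 1;  /* *p is 0, so the comparison fails */

          /* SIGSEGVs anyway: CAS needs write permission up front. */
          __atomic_compare_exchange_n(p, &expected, 2, 0,
                                      __ATOMIC_RELAXED, __ATOMIC_RELAXED);
          return 0;
  }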

> So please help me to understand the basics of HVO before we get bogged
> down by the block-splitting on arm64.

Gladly. Please let me know if anything from the core MM side is unclear.



Thread overview: 11+ messages
2024-11-07 20:20 Yu Zhao
2024-11-07 20:20 ` [PATCH v2 1/6] mm/hugetlb_vmemmap: batch-update PTEs Yu Zhao
2024-11-07 20:20 ` [PATCH v2 2/6] mm/hugetlb_vmemmap: add arch-independent helpers Yu Zhao
2024-11-07 20:20 ` [PATCH v2 3/6] irqchip/gic-v3: support SGI broadcast Yu Zhao
2024-11-07 20:20 ` [PATCH v2 4/6] arm64: broadcast IPIs to pause remote CPUs Yu Zhao
2024-11-07 20:20 ` [PATCH v2 5/6] arm64: pause remote CPUs to update vmemmap Yu Zhao
2024-11-07 20:20 ` [PATCH v2 6/6] arm64: select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP Yu Zhao
2024-11-25 15:22 ` [PATCH v2 0/6] mm/arm64: re-enable HVO Will Deacon
2024-11-25 22:22   ` Yu Zhao [this message]
2024-11-28 14:20     ` Will Deacon
2025-01-07  6:07       ` Yu Zhao
