From: Anshuman Khandual <anshuman.khandual@arm.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
WANG Xuerui <kernel@xen0n.name>,
Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
Helge Deller <deller@gmx.de>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
Naveen N Rao <naveen@kernel.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
"David S. Miller" <davem@davemloft.net>,
Andreas Larsson <andreas@gaisler.com>,
Arnd Bergmann <arnd@arndb.de>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
Christoph Hellwig <hch@infradead.org>,
David Hildenbrand <david@redhat.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Mark Rutland <mark.rutland@arm.com>, Dev Jain <dev.jain@arm.com>,
Kevin Brodsky <kevin.brodsky@arm.com>,
Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 2/4] arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
Date: Thu, 20 Feb 2025 12:07:35 +0530
Message-ID: <50f48574-241d-42d8-b811-3e422c41e21a@arm.com>
In-Reply-To: <5477d161-12e7-4475-a6e9-ff3921989673@arm.com>
On 2/19/25 14:28, Ryan Roberts wrote:
> On 19/02/2025 08:45, Anshuman Khandual wrote:
>>
>>
>> On 2/17/25 19:34, Ryan Roberts wrote:
>>> arm64 supports multiple huge_pte sizes. Some of the sizes are covered by
>>> a single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some
>>> are covered by multiple ptes at a particular level (CONT_PTE_SIZE,
>>> CONT_PMD_SIZE). So the function has to figure out the size from the
>>> huge_pte pointer. This was previously done by walking the pgtable to
>>> determine the level and by using the PTE_CONT bit to determine the
>>> number of ptes at the level.
>>>
>>> But the PTE_CONT bit is only valid when the pte is present. For
>>> non-present pte values (e.g. markers, migration entries), the previous
>>> implementation was therefore erroniously determining the size. There is
typo - s/erroniously/erroneously
>>> at least one known caller in core-mm, move_huge_pte(), which may call
>>> huge_ptep_get_and_clear() for a non-present pte. So we must be robust to
>>> this case. Additionally the "regular" ptep_get_and_clear() is robust to
>>> being called for non-present ptes so it makes sense to follow the
>>> behaviour.
>>>
>>> Fix this by using the new sz parameter which is now provided to the
>>> function. Additionally when clearing each pte in a contig range, don't
>>> gather the access and dirty bits if the pte is not present.
>>>
>>> An alternative approach that would not require API changes would be to
>>> store the PTE_CONT bit in a spare bit in the swap entry pte for the
>>> non-present case. But it felt cleaner to follow other APIs' lead and
>>> just pass in the size.
>>>
>>> As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap
>>> entry offset field (layout of non-present pte). Since hugetlb is never
>>> swapped to disk, this field will only be populated for markers, which
>>> always set this bit to 0 and hwpoison swap entries, which set the offset
>>> field to a PFN; So it would only ever be 1 for a 52-bit PVA system where
>>> memory in that high half was poisoned (I think!). So in practice, this
>>> bit would almost always be zero for non-present ptes and we would only
>>> clear the first entry if it was actually a contiguous block. That's
>>> probably a less severe symptom than if it was always interpretted as 1
typo - s/interpretted/interpreted
>>> and cleared out potentially-present neighboring PTEs.
>>>
>>> Cc: stable@vger.kernel.org
>>> Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> ---
>>> arch/arm64/mm/hugetlbpage.c | 40 ++++++++++++++++---------------------
>>> 1 file changed, 17 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>>> index 06db4649af91..614b2feddba2 100644
>>> --- a/arch/arm64/mm/hugetlbpage.c
>>> +++ b/arch/arm64/mm/hugetlbpage.c
>>> @@ -163,24 +163,23 @@ static pte_t get_clear_contig(struct mm_struct *mm,
>>> unsigned long pgsize,
>>> unsigned long ncontig)
>>> {
>>> - pte_t orig_pte = __ptep_get(ptep);
>>> - unsigned long i;
>>> -
>>> - for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
>>> - pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
>>> -
>>> - /*
>>> - * If HW_AFDBM is enabled, then the HW could turn on
>>> - * the dirty or accessed bit for any page in the set,
>>> - * so check them all.
>>> - */
>>> - if (pte_dirty(pte))
>>> - orig_pte = pte_mkdirty(orig_pte);
>>> -
>>> - if (pte_young(pte))
>>> - orig_pte = pte_mkyoung(orig_pte);
>>> + pte_t pte, tmp_pte;
>>> + bool present;
>>> +
>>> + pte = __ptep_get_and_clear(mm, addr, ptep);
>>> + present = pte_present(pte);
>>
>> pte_present() may not be evaluated for standard huge pages at [PMD|PUD]_SIZE
>> e.g when ncontig = 1 in the argument.
>
> Sorry I'm not quite sure what you're suggesting here? Are you proposing that
> pte_present() should be moved into the loop so that we only actually call it
> when we are going to consume it? I'm happy to do that if it's the preference,
Right, pte_present() is only required for contig huge pages, not for
normal huge pages.
> but I thought it was neater to hoist it out of the loop.
Agreed, but where possible the cost of pte_present() should be avoided for
normal huge pages, where it is not required.
>
>>
>>> + while (--ncontig) {
>>
>> Should this be converted into a for loop instead just to be in sync with other
>> similar iterators in this file.
>>
>> 	for (i = 1; i < ncontig; i++, addr += pgsize, ptep++) {
>> 		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
>> 		if (present) {
>> 			if (pte_dirty(tmp_pte))
>> 				pte = pte_mkdirty(pte);
>> 			if (pte_young(tmp_pte))
>> 				pte = pte_mkyoung(pte);
>> 		}
>> 	}
>
> I think the way you have written this it's incorrect. Let's say we have 16 ptes
> in the block. We want to iterate over the last 15 of them (we have already read
> pte 0). But you're iterating over the first 15 because you don't increment addr
> and ptep until after you've been around the loop the first time. So we would
> need to explicitly increment those 2 before entering the loop. But that is only
> neccessary if ncontig > 1. Personally I think my approach is neater...
Thinking about this again - would a pte_present() check on each entry being
cleared, gated on (ncontig > 1), in the existing loop before transferring
the dirty and accessed bits, also work as intended with less code churn?
static pte_t get_clear_contig(struct mm_struct *mm,
			      unsigned long addr,
			      pte_t *ptep,
			      unsigned long pgsize,
			      unsigned long ncontig)
{
	pte_t orig_pte = __ptep_get(ptep);
	unsigned long i;

	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
		pte_t pte = __ptep_get_and_clear(mm, addr, ptep);

		if (ncontig > 1 && !pte_present(pte))
			continue;

		/*
		 * If HW_AFDBM is enabled, then the HW could turn on
		 * the dirty or accessed bit for any page in the set,
		 * so check them all.
		 */
		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);

		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}

	return orig_pte;
}
* Normal huge pages
- enters the for loop just once
- clears the single entry
- always transfers dirty and access bits
- pte_present() does not matter as ncontig = 1
* Contig huge pages
- enters the for loop ncontig times - for each sub page
- clears all sub page entries
- transfers dirty and access bits only when pte_present()
- pte_present() is relevant as ncontig > 1
>
>>
>>> + ptep++;
>>> + addr += pgsize;
>>> + tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
>>> + if (present) {
>>> + if (pte_dirty(tmp_pte))
>>> + pte = pte_mkdirty(pte);
>>> + if (pte_young(tmp_pte))
>>> + pte = pte_mkyoung(pte);
>>> + }
>>> }
>>> - return orig_pte;
>>> + return pte;
>>> }
>>>
>>> static pte_t get_clear_contig_flush(struct mm_struct *mm,
>>> @@ -401,13 +400,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>>> {
>>> int ncontig;
>>> size_t pgsize;
>>> - pte_t orig_pte = __ptep_get(ptep);
>>> -
>>> - if (!pte_cont(orig_pte))
>>> - return __ptep_get_and_clear(mm, addr, ptep);
>>> -
>>> - ncontig = find_num_contig(mm, addr, ptep, &pgsize);
>>>
>>> + ncontig = num_contig_ptes(sz, &pgsize);
>>> return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
>>> }
>>>
>