From: Anshuman Khandual <anshuman.khandual@arm.com>
Date: Thu, 20 Feb 2025 12:07:35 +0530
Message-ID: <50f48574-241d-42d8-b811-3e422c41e21a@arm.com>
Subject: Re: [PATCH v2 2/4] arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
To: Ryan Roberts, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
 Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
 Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, Christophe Leroy,
 Naveen N Rao, Paul Walmsley, Palmer Dabbelt, Albert Ou, Heiko Carstens,
 Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
 Gerald Schaefer, "David S. Miller", Andreas Larsson, Arnd Bergmann,
 Muchun Song, Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
 David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland, Dev Jain,
 Kevin Brodsky, Alexandre Ghiti
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org
In-Reply-To: <5477d161-12e7-4475-a6e9-ff3921989673@arm.com>
References: <20250217140419.1702389-1-ryan.roberts@arm.com>
 <20250217140419.1702389-3-ryan.roberts@arm.com>
 <5477d161-12e7-4475-a6e9-ff3921989673@arm.com>
On 2/19/25 14:28, Ryan Roberts wrote:
> On 19/02/2025 08:45, Anshuman Khandual wrote:
>>
>>
>> On 2/17/25 19:34, Ryan Roberts wrote:
>>> arm64 supports multiple huge_pte sizes. Some of the sizes are covered by
>>> a single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some
>>> are covered by multiple ptes at a particular level (CONT_PTE_SIZE,
>>> CONT_PMD_SIZE). So the function has to figure out the size from the
>>> huge_pte pointer. This was previously done by walking the pgtable to
>>> determine the level and by using the PTE_CONT bit to determine the
>>> number of ptes at the level.
>>>
>>> But the PTE_CONT bit is only valid when the pte is present. For
>>> non-present pte values (e.g. markers, migration entries), the previous
>>> implementation was therefore erroniously determining the size. There is

typo - s/erroniously/erroneously
         ^^^^^^

>>> at least one known caller in core-mm, move_huge_pte(), which may call
>>> huge_ptep_get_and_clear() for a non-present pte. So we must be robust to
>>> this case. Additionally the "regular" ptep_get_and_clear() is robust to
>>> being called for non-present ptes so it makes sense to follow the
>>> behaviour.
>>>
>>> Fix this by using the new sz parameter which is now provided to the
>>> function. Additionally when clearing each pte in a contig range, don't
>>> gather the access and dirty bits if the pte is not present.
>>>
>>> An alternative approach that would not require API changes would be to
>>> store the PTE_CONT bit in a spare bit in the swap entry pte for the
>>> non-present case. But it felt cleaner to follow other APIs' lead and
>>> just pass in the size.
>>>
>>> As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap
>>> entry offset field (layout of non-present pte). Since hugetlb is never
>>> swapped to disk, this field will only be populated for markers, which
>>> always set this bit to 0 and hwpoison swap entries, which set the offset
>>> field to a PFN; So it would only ever be 1 for a 52-bit PVA system where
>>> memory in that high half was poisoned (I think!). So in practice, this
>>> bit would almost always be zero for non-present ptes and we would only
>>> clear the first entry if it was actually a contiguous block. That's
>>> probably a less severe symptom than if it was always interpretted as 1

typo - s/interpretted/interpreted
         ^^^^^^

>>> and cleared out potentially-present neighboring PTEs.
>>>
>>> Cc: stable@vger.kernel.org
>>> Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
>>> Signed-off-by: Ryan Roberts
>>> ---
>>>  arch/arm64/mm/hugetlbpage.c | 40 ++++++++++++++++---------------------
>>>  1 file changed, 17 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>>> index 06db4649af91..614b2feddba2 100644
>>> --- a/arch/arm64/mm/hugetlbpage.c
>>> +++ b/arch/arm64/mm/hugetlbpage.c
>>> @@ -163,24 +163,23 @@ static pte_t get_clear_contig(struct mm_struct *mm,
>>>  				      unsigned long pgsize,
>>>  				      unsigned long ncontig)
>>>  {
>>> -	pte_t orig_pte = __ptep_get(ptep);
>>> -	unsigned long i;
>>> -
>>> -	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
>>> -		pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
>>> -
>>> -		/*
>>> -		 * If HW_AFDBM is enabled, then the HW could turn on
>>> -		 * the dirty or accessed bit for any page in the set,
>>> -		 * so check them all.
>>> -		 */
>>> -		if (pte_dirty(pte))
>>> -			orig_pte = pte_mkdirty(orig_pte);
>>> -
>>> -		if (pte_young(pte))
>>> -			orig_pte = pte_mkyoung(orig_pte);
>>> +	pte_t pte, tmp_pte;
>>> +	bool present;
>>> +
>>> +	pte = __ptep_get_and_clear(mm, addr, ptep);
>>> +	present = pte_present(pte);
>>
>> pte_present() may not be evaluated for standard huge pages at [PMD|PUD]_SIZE
>> e.g. when ncontig = 1 in the argument.
>
> Sorry I'm not quite sure what you're suggesting here? Are you proposing that
> pte_present() should be moved into the loop so that we only actually call it
> when we are going to consume it? I'm happy to do that if it's the preference,

Right, pte_present() is only required for the cont huge pages but not for
the normal huge pages.

> but I thought it was neater to hoist it out of the loop.

Agreed, but where possible the pte_present() cost should be avoided for the
normal huge pages where it is not required.

>
>>
>>> +	while (--ncontig) {
>>
>> Should this be converted into a for loop instead, just to be in sync with
>> other similar iterators in this file?
>>
>> for (i = 1; i < ncontig; i++, addr += pgsize, ptep++)
>> {
>> 	tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
>> 	if (present) {
>> 		if (pte_dirty(tmp_pte))
>> 			pte = pte_mkdirty(pte);
>> 		if (pte_young(tmp_pte))
>> 			pte = pte_mkyoung(pte);
>> 	}
>> }
>
> I think the way you have written this it's incorrect. Let's say we have 16 ptes
> in the block. We want to iterate over the last 15 of them (we have already read
> pte 0). But you're iterating over the first 15 because you don't increment addr
> and ptep until after you've been around the loop the first time. So we would
> need to explicitly increment those 2 before entering the loop. But that is only
> necessary if ncontig > 1. Personally I think my approach is neater...

Thinking about this again.
Just wondering - would not a pte_present() check on each entry being cleared,
combined with (ncontig > 1), in this existing loop before transferring over
the dirty and accessed bits also work as intended, with less code churn?

static pte_t get_clear_contig(struct mm_struct *mm,
			      unsigned long addr,
			      pte_t *ptep,
			      unsigned long pgsize,
			      unsigned long ncontig)
{
	pte_t orig_pte = __ptep_get(ptep);
	unsigned long i;

	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
		pte_t pte = __ptep_get_and_clear(mm, addr, ptep);

		if (ncontig > 1 && !pte_present(pte))
			continue;

		/*
		 * If HW_AFDBM is enabled, then the HW could turn on
		 * the dirty or accessed bit for any page in the set,
		 * so check them all.
		 */
		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);

		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}
	return orig_pte;
}

* Normal huge pages
  - enters the for loop just once
  - clears the single entry
  - always transfers dirty and access bits
  - pte_present() does not matter as ncontig = 1

* Contig huge pages
  - enters the for loop ncontig times, once for each sub page
  - clears all sub page entries
  - transfers dirty and access bits only when pte_present()
  - pte_present() is relevant as ncontig > 1

>
>>
>>> +		ptep++;
>>> +		addr += pgsize;
>>> +		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
>>> +		if (present) {
>>> +			if (pte_dirty(tmp_pte))
>>> +				pte = pte_mkdirty(pte);
>>> +			if (pte_young(tmp_pte))
>>> +				pte = pte_mkyoung(pte);
>>> +		}
>>>  	}
>>> -	return orig_pte;
>>> +	return pte;
>>>  }
>>>
>>>  static pte_t get_clear_contig_flush(struct mm_struct *mm,
>>> @@ -401,13 +400,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>>>  {
>>>  	int ncontig;
>>>  	size_t pgsize;
>>> -	pte_t orig_pte = __ptep_get(ptep);
>>> -
>>> -	if (!pte_cont(orig_pte))
>>> -		return __ptep_get_and_clear(mm, addr, ptep);
>>> -
>>> -	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
>>>
>>> +	ncontig = num_contig_ptes(sz, &pgsize);
>>>  	return get_clear_contig(mm, addr, ptep, pgsize,
>>> 				ncontig);
>>>  }
>>>
>