Subject: Re: [PATCH v3] mm: Fix race between __split_huge_pmd_locked() and GUP-fast
From: Anshuman Khandual <anshuman.khandual@arm.com>
Date: Thu, 2 May 2024 08:33:45 +0530
To: Ryan Roberts, Andrew Morton, Catalin Marinas, Will Deacon, Mark Rutland,
 Zi Yan, "Aneesh Kumar K.V", Jonathan Corbet, Nicholas Piggin,
 Christophe Leroy, "Naveen N. Rao", Christian Borntraeger, Sven Schnelle,
 "David S. Miller", Andreas Larsson, Dave Hansen, Andy Lutomirski,
 Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org
References: <20240501143310.1381675-1-ryan.roberts@arm.com>
In-Reply-To: <20240501143310.1381675-1-ryan.roberts@arm.com>

On 5/1/24 20:03, Ryan Roberts wrote:
> __split_huge_pmd_locked() can be called for a present THP, devmap or
> (non-present) migration entry. It calls pmdp_invalidate()
> unconditionally on the pmdp and only determines if it is present or not
> based on the returned old pmd. This is a problem for the migration entry
> case because pmd_mkinvalid(), called by pmdp_invalidate(), must only be
> called for a present pmd.
>
> On arm64 at least, pmd_mkinvalid() will mark the pmd such that any
> future call to pmd_present() will return true. And therefore any
> lockless pgtable walker could see the migration entry pmd in this state
> and start interpreting the fields as if it were present, leading to
> BadThings (TM). GUP-fast appears to be one such lockless pgtable walker.
>
> x86 does not suffer the above problem, but instead pmd_mkinvalid() will
> corrupt the offset field of the swap entry within the swap pte. See link
> below for discussion of that problem.
>
> Fix all of this by only calling pmdp_invalidate() for a present pmd. And
> for good measure let's add a warning to all implementations of
> pmdp_invalidate[_ad](). I've manually reviewed all other
> pmdp_invalidate[_ad]() call sites and believe all others to be
> conformant.
>
> This is a theoretical bug found during code review. I don't have any
> test case to trigger it in practice.
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/all/0dd7827a-6334-439a-8fd0-43c98e6af22b@arm.com/
> Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>
> Right, v3; this goes back to the original approach in v1 to fix core-mm
> rather than push the fix into arm64, since we discovered that x86 can't
> handle pmd_mkinvalid() being called for non-present pmds either.

This is a better approach indeed.

> I'm pulling in more arch maintainers because this version adds some
> warnings in arch code to help spot incorrect usage.
>
> Although Catalin had already accepted v2 (fixing arm64) [2] into
> for-next/fixes, he's agreed to either remove or revert it.
>
> Changes since v1 [1]
> ====================
>
> - Improve pmd_mkinvalid() docs to make it clear it can only be called
>   for a present pmd (per JohnH, Zi Yan)
> - Added warnings to arch overrides of pmdp_invalidate[_ad]() (per Zi Yan)
> - Moved comment next to new location of pmdp_invalidate() (per Zi Yan)
>
> [1] https://lore.kernel.org/linux-mm/20240425170704.3379492-1-ryan.roberts@arm.com/
> [2] https://lore.kernel.org/all/20240430133138.732088-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
>
>  Documentation/mm/arch_pgtable_helpers.rst |  6 ++-
>  arch/powerpc/mm/book3s64/pgtable.c        |  1 +
>  arch/s390/include/asm/pgtable.h           |  4 +-
>  arch/sparc/mm/tlb.c                       |  1 +
>  arch/x86/mm/pgtable.c                     |  2 +
>  mm/huge_memory.c                          | 49 ++++++++++++-----------
>  mm/pgtable-generic.c                      |  2 +
>  7 files changed, 39 insertions(+), 26 deletions(-)
>
> diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
> index 2466d3363af7..ad50ca6f495e 100644
> --- a/Documentation/mm/arch_pgtable_helpers.rst
> +++ b/Documentation/mm/arch_pgtable_helpers.rst
> @@ -140,7 +140,8 @@ PMD Page Table Helpers
>  +---------------------------+--------------------------------------------------+
>  | pmd_swp_clear_soft_dirty  | Clears a soft dirty swapped PMD                  |
>  +---------------------------+--------------------------------------------------+
> -| pmd_mkinvalid             | Invalidates a mapped PMD [1]                     |
> +| pmd_mkinvalid             | Invalidates a present PMD; do not call for       |
> +|                           | non-present PMD [1]                              |
>  +---------------------------+--------------------------------------------------+
>  | pmd_set_huge              | Creates a PMD huge mapping                       |
>  +---------------------------+--------------------------------------------------+
> @@ -196,7 +197,8 @@ PUD Page Table Helpers
>  +---------------------------+--------------------------------------------------+
>  | pud_mkdevmap              | Creates a ZONE_DEVICE mapped PUD                 |
>  +---------------------------+--------------------------------------------------+
> -| pud_mkinvalid             | Invalidates a mapped PUD [1]                     |
> +| pud_mkinvalid             | Invalidates a present PUD; do not call for       |
> +|                           | non-present PUD [1]                              |
>  +---------------------------+--------------------------------------------------+
>  | pud_set_huge              | Creates a PUD huge mapping                       |
>  +---------------------------+--------------------------------------------------+

LGTM, but I guess this will conflict with your other patch for
mm/debug_vm_pgtable.c if you choose to update the pud_mkinvalid()
description for pmd_leaf():
https://lore.kernel.org/all/20240501144439.1389048-1-ryan.roberts@arm.com/

> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 83823db3488b..2975ea0841ba 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -170,6 +170,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  {
>  	unsigned long old_pmd;
>  
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
>  	old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID);
>  	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>  	return __pmd(old_pmd);
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 60950e7a25f5..480bea44559d 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1768,8 +1768,10 @@ static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
>  static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
>  				    unsigned long addr, pmd_t *pmdp)
>  {
> -	pmd_t pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
> +	pmd_t pmd;
>  
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> +	pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
>  	return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
>  }
>  
> diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
> index b44d79d778c7..ef69127d7e5e 100644
> --- a/arch/sparc/mm/tlb.c
> +++ b/arch/sparc/mm/tlb.c
> @@ -249,6 +249,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  {
>  	pmd_t old, entry;
>  
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
>  	entry = __pmd(pmd_val(*pmdp) & ~_PAGE_VALID);
>  	old = pmdp_establish(vma, address, pmdp, entry);
>  	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
> index d007591b8059..103cbccf1d7d 100644
> --- a/arch/x86/mm/pgtable.c
> +++ b/arch/x86/mm/pgtable.c
> @@ -631,6 +631,8 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
>  pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
>  			 pmd_t *pmdp)
>  {
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> +
>  	/*
>  	 * No flush is necessary. Once an invalid PTE is established, the PTE's
>  	 * access and dirty bits cannot be updated.
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 89f58c7603b2..dd1fc105f70b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2493,32 +2493,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>  	}
>  
> -	/*
> -	 * Up to this point the pmd is present and huge and userland has the
> -	 * whole access to the hugepage during the split (which happens in
> -	 * place). If we overwrite the pmd with the not-huge version pointing
> -	 * to the pte here (which of course we could if all CPUs were bug
> -	 * free), userland could trigger a small page size TLB miss on the
> -	 * small sized TLB while the hugepage TLB entry is still established in
> -	 * the huge TLB. Some CPU doesn't like that.
> -	 * See http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf, Erratum
> -	 * 383 on page 105. Intel should be safe but is also warns that it's
> -	 * only safe if the permission and cache attributes of the two entries
> -	 * loaded in the two TLB is identical (which should be the case here).
> -	 * But it is generally safer to never allow small and huge TLB entries
> -	 * for the same virtual address to be loaded simultaneously. So instead
> -	 * of doing "pmd_populate(); flush_pmd_tlb_range();" we first mark the
> -	 * current pmd notpresent (atomically because here the pmd_trans_huge
> -	 * must remain set at all times on the pmd until the split is complete
> -	 * for this pmd), then we flush the SMP TLB and finally we write the
> -	 * non-huge version of the pmd entry with pmd_populate.
> -	 */
> -	old_pmd = pmdp_invalidate(vma, haddr, pmd);
> -
> -	pmd_migration = is_pmd_migration_entry(old_pmd);
> +	pmd_migration = is_pmd_migration_entry(*pmd);
>  	if (unlikely(pmd_migration)) {
>  		swp_entry_t entry;
>  
> +		old_pmd = *pmd;
>  		entry = pmd_to_swp_entry(old_pmd);
>  		page = pfn_swap_entry_to_page(entry);
>  		write = is_writable_migration_entry(entry);
> @@ -2529,6 +2508,30 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>  	} else {
> +		/*
> +		 * Up to this point the pmd is present and huge and userland has
> +		 * the whole access to the hugepage during the split (which
> +		 * happens in place). If we overwrite the pmd with the not-huge
> +		 * version pointing to the pte here (which of course we could if
> +		 * all CPUs were bug free), userland could trigger a small page
> +		 * size TLB miss on the small sized TLB while the hugepage TLB
> +		 * entry is still established in the huge TLB. Some CPU doesn't
> +		 * like that. See
> +		 * http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf, Erratum
> +		 * 383 on page 105. Intel should be safe but is also warns that
> +		 * it's only safe if the permission and cache attributes of the
> +		 * two entries loaded in the two TLB is identical (which should
> +		 * be the case here). But it is generally safer to never allow
> +		 * small and huge TLB entries for the same virtual address to be
> +		 * loaded simultaneously. So instead of doing "pmd_populate();
> +		 * flush_pmd_tlb_range();" we first mark the current pmd
> +		 * notpresent (atomically because here the pmd_trans_huge must
> +		 * remain set at all times on the pmd until the split is
> +		 * complete for this pmd), then we flush the SMP TLB and finally
> +		 * we write the non-huge version of the pmd entry with
> +		 * pmd_populate.
> +		 */
> +		old_pmd = pmdp_invalidate(vma, haddr, pmd);
>  		page = pmd_page(old_pmd);
>  		folio = page_folio(page);
>  		if (pmd_dirty(old_pmd)) {
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 4fcd959dcc4d..a78a4adf711a 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -198,6 +198,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>  pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  		      pmd_t *pmdp)
>  {
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
>  	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp));
>  	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>  	return old;
> @@ -208,6 +209,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
>  			 pmd_t *pmdp)
>  {
> +	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
>  	return pmdp_invalidate(vma, address, pmdp);
>  }
>  #endif

Rest LGTM, but let's wait for this to run on multiple platforms.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
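
P.S. For anyone following along or backporting to stable, my condensed
reading of the fixed control flow, as a sketch (the function name
split_flow_sketch and the elided locals are mine, not the patch's; see the
real __split_huge_pmd_locked() for the complete logic):

#include <linux/mm.h>
#include <linux/swapops.h>

/* Illustrative condensation of the fixed flow; not the full function. */
static void split_flow_sketch(struct vm_area_struct *vma, pmd_t *pmd,
			      unsigned long haddr)
{
	pmd_t old_pmd;
	bool pmd_migration = is_pmd_migration_entry(*pmd);

	if (unlikely(pmd_migration)) {
		/*
		 * Non-present migration entry: must NOT go through
		 * pmdp_invalidate()/pmd_mkinvalid(); read it directly.
		 */
		old_pmd = *pmd;
	} else {
		/*
		 * Present pmd: invalidate first (atomically), flush the
		 * TLB, and only then write the non-huge version, so no
		 * CPU ever sees huge and small TLB entries at once.
		 */
		old_pmd = pmdp_invalidate(vma, haddr, pmd);
	}
	/* ... decode old_pmd and pmd_populate() the pte table ... */
}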