From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 22 Sep 2023 17:14:04 +0100
From: Will Deacon
To: Ryan Roberts
Cc: Catalin Marinas, "James E.J. Bottomley", Helge Deller, Nicholas Piggin,
	Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Gerald Schaefer,
	"David S. Miller", Arnd Bergmann, Mike Kravetz, Muchun Song,
	SeongJae Park, Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
	Lorenzo Stoakes, Anshuman Khandual, Peter Xu, Axel Rasmussen,
	Qi Zheng, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 2/2] arm64: hugetlb: Fix set_huge_pte_at() to work with all swap entries
Message-ID: <20230922161404.GA23332@willie-the-truck>
References: <20230922115804.2043771-1-ryan.roberts@arm.com>
	<20230922115804.2043771-3-ryan.roberts@arm.com>
In-Reply-To: <20230922115804.2043771-3-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Fri, Sep 22, 2023 at 12:58:04PM +0100, Ryan Roberts wrote:
> When called with a swap entry that does not embed a PFN (e.g.
> PTE_MARKER_POISONED or PTE_MARKER_UFFD_WP), the previous implementation
> of set_huge_pte_at() would either cause a BUG() to fire (if
> CONFIG_DEBUG_VM is enabled) or cause a dereference of an invalid address
> and subsequent panic.
>
> arm64's huge pte implementation supports multiple huge page sizes, some
> of which are implemented in the page table with multiple contiguous
> entries. So set_huge_pte_at() needs to work out how big the logical pte
> is, so that it can also work out how many physical ptes (or pmds) need
> to be written. It previously did this by grabbing the folio out of the
> pte and querying its size.
>
> However, there are cases when the pte being set is actually a swap
> entry. But this also used to work fine, because for huge ptes, we only
> ever saw migration entries and hwpoison entries. And both of these types
> of swap entries have a PFN embedded, so the code would grab that and
> everything still worked out.
>
> But over time, more calls to set_huge_pte_at() have been added that set
> swap entry types that do not embed a PFN. And this causes the code to go
> bang. The triggering case is the uffd poison test, commit
> 99aa77215ad0 ("selftests/mm: add uffd unit test for UFFDIO_POISON"),
> which causes a PTE_MARKER_POISONED swap entry to be set, courtesy of
> commit 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for
> hugetlbfs") - added in v6.5-rc7. Although review shows that there are
> other call sites that set PTE_MARKER_UFFD_WP (which also has no PFN),
> these don't trigger on arm64 because arm64 doesn't support UFFD WP.
>
> Arguably, the root cause is really due to commit 18f3962953e4 ("mm:
> hugetlb: kill set_huge_swap_pte_at()"), which aimed to simplify the
> interface to the core code by removing set_huge_swap_pte_at() (which
> took a page size parameter) and replacing it with calls to
> set_huge_pte_at() where the size was inferred from the folio, as
> described above. While that commit didn't break anything at the time, it
> did break the interface because it couldn't handle swap entries without
> PFNs. And since then new callers have come along which rely on this
> working.
> But given the brokenness is only observable after commit
> 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"),
> that one gets the Fixes tag.
>
> Now that we have modified the set_huge_pte_at() interface to pass the
> huge page size in the previous patch, we can trivially fix this issue.
>
> Signed-off-by: Ryan Roberts
> Fixes: 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs")
> Cc: # 6.5+
> ---
>  arch/arm64/mm/hugetlbpage.c | 17 +++--------------
>  1 file changed, 3 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index a7f8c8db3425..13fd592228b1 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -241,13 +241,6 @@ static void clear_flush(struct mm_struct *mm,
>  	flush_tlb_range(&vma, saddr, addr);
>  }
>  
> -static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
> -{
> -	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
> -
> -	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
> -}
> -
>  void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  		     pte_t *ptep, pte_t pte, unsigned long sz)
>  {
> @@ -257,13 +250,10 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  	unsigned long pfn, dpfn;
>  	pgprot_t hugeprot;
>  
> -	if (!pte_present(pte)) {
> -		struct folio *folio;
> -
> -		folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
> -		ncontig = num_contig_ptes(folio_size(folio), &pgsize);
> +	ncontig = num_contig_ptes(sz, &pgsize);
>  
> -		for (i = 0; i < ncontig; i++, ptep++)
> +	if (!pte_present(pte)) {
> +		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
>  			set_pte_at(mm, addr, ptep, pte);

Our set_pte_at() doesn't use 'addr' for anything and the old code didn't
even bother to increment it here! I'm fine adding that, but it feels
unrelated to the issue which this patch is actually fixing. Either way:

Acked-by: Will Deacon

Will