Message-ID: <850479be-000a-45a7-9669-491d4200a988@arm.com>
Date: Thu, 23 Jan 2025 14:38:46 +0000
Subject: Re: [PATCH v1 1/2] mm: Clear uffd-wp PTE/PMD state on mremap()
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Muchun Song, "Liam R. Howlett", Lorenzo Stoakes,
 Vlastimil Babka, Jann Horn, Shuah Khan, Peter Xu, David Hildenbrand,
 Mikołaj Lenczewski, Mark Rutland
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Shuah Khan , Peter Xu , David Hildenbrand , =?UTF-8?Q?Miko=C5=82aj_Lenczewski?= , Mark Rutland Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, stable@vger.kernel.org References: <20250107144755.1871363-1-ryan.roberts@arm.com> <20250107144755.1871363-2-ryan.roberts@arm.com> From: Ryan Roberts In-Reply-To: <20250107144755.1871363-2-ryan.roberts@arm.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Rspamd-Queue-Id: D6C17160016 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: eojiwrypquk5n44wdbr9csdmpscioy8e X-HE-Tag: 1737643130-962852 X-HE-Meta: U2FsdGVkX1+5TipYHNhMc56Jcx9TlCP/dpH/39sQG/gwz9jJsFbfcf/W0AoBvTTaSHGWsXAUHNcaslFWxTsD1+Q1olOCDmD3fyZ1HBOYa5uVidW9RCaMKDtbsGXB4XMD+meEGP4eZXkee3PkR2DNCaF6K0AYU0cS3u+6uHb3sfNJAj3cDx2VUF7ew+0glfgmU3F+reXUUNdmeg50tKP6euKcmtnoPZKstLNxv6UdPQig6ybmn3sGTRMMEPMZROJRpe4xNgV4FaB7i4XDilYioPqBN+H4LxgQQfIYByB26T2EB5Hz8uu/7FhxYKhdZ90DLGCEGAoZCLPHEcT00BO2Zx7nBzjNVf2Fn45TBXUenyf41PE70/SCjwL8meclRsIOta67Y54q34axE+oTanJZXtYJaVyjQnoPjMy2b/12NakfoH9MehJH4aztEMUE6uL3fCxg3J87z9zUhHri2TaFC4+AevL36C6grERvIiZe9McwS+4MDSVDEmtbQIwIhl37FGu9c/ZdDCYbgTCBtd+QFeYNW8RD8w5pU1fga4+QfBh7WFMv6VSUDdmFg4fuHFC0t8Xom8VJ6X7hEugvlNwoW6tSNExBAxUDdh4dyN1/u2KPp2Lnk6JzTFLcwV4t+yKxbmBtrw0evW9GvcQWczOgs90GEjMlxKp13PyZOR7eaJ6+KtzCCspB+DFvGy5tk1tYuRKos+LwNhY6wfCKQ+g/JiRkGXNlgbhDIkEsOJAK3DVKE9vOGoBxUFfK07ULrWFrzQ2wLP/sByYHavRM/gNsAZg9Mb3dY/vXPLhJQ6/myb4xM08ZOFjsDoTKye9tjkJoBCWfo32zaUdRRm8B+HG74JOTzTUdFbH5rLeXZR4B8PNjUN+XoQ/x7LAr8QoIRXUzAfMKSasHf3fooG8IVdgg6gtidKMiosJRP+/7c+EuscjJJeSFQvK3TSIC87Jd5AqST91RuPbNHfNkJEAFhBr 2jsZa60h vJ9D0UCDABbKFD8v+t5gGGZJmGhmuFsfbgaAerI1b5YFlHdv2IQeEO20elh1UsGz/OGggn8WCt8rADuYG7igNIWLA0UxmRuG6HD7kTkFY2WO/vJAalMlmRNAfJ6n5huqbX7IsPzpcgdJ7bM58rYwfJg35LoNfz9eqMIxQJ5To+BlU+SlDBgVI7SWdxg67u4OOL1We1pFTS4PyRQDPRI5gWg9ifCm0pFY1WX0dfYutJ6JTgQxtud91aLmUDp/MuoPXs9qU X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: I think there might be a bug in this after all... On 07/01/2025 14:47, Ryan Roberts wrote: > When mremap()ing a memory region previously registered with userfaultfd > as write-protected but without UFFD_FEATURE_EVENT_REMAP, an > inconsistency in flag clearing leads to a mismatch between the vma flags > (which have uffd-wp cleared) and the pte/pmd flags (which do not have > uffd-wp cleared). This mismatch causes a subsequent mprotect(PROT_WRITE) > to trigger a warning in page_table_check_pte_flags() due to setting the > pte to writable while uffd-wp is still set. > > Fix this by always explicitly clearing the uffd-wp pte/pmd flags on any > such mremap() so that the values are consistent with the existing > clearing of VM_UFFD_WP. Be careful to clear the logical flag regardless > of its physical form; a PTE bit, a swap PTE bit, or a PTE marker. Cover > PTE, huge PMD and hugetlb paths. 
>
> Co-developed-by: Mikołaj Lenczewski
> Signed-off-by: Mikołaj Lenczewski
> Signed-off-by: Ryan Roberts
> Closes: https://lore.kernel.org/linux-mm/810b44a8-d2ae-4107-b665-5a42eae2d948@arm.com/
> Fixes: 63b2d4174c4a ("userfaultfd: wp: add the writeprotect API to userfaultfd ioctl")
> Cc: stable@vger.kernel.org
> ---
>  include/linux/userfaultfd_k.h | 12 ++++++++++++
>  mm/huge_memory.c              | 12 ++++++++++++
>  mm/hugetlb.c                  | 14 +++++++++++++-
>  mm/mremap.c                   | 32 +++++++++++++++++++++++++++++++-
>  4 files changed, 68 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index cb40f1a1d081..75342022d144 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -247,6 +247,13 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
>  		vma_is_shmem(vma);
>  }
>
> +static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
> +{
> +	struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
> +
> +	return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0;
> +}
> +
>  extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
>  extern void dup_userfaultfd_complete(struct list_head *);
>  void dup_userfaultfd_fail(struct list_head *);
> @@ -402,6 +409,11 @@ static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
>  	return false;
>  }
>
> +static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
> +{
> +	return false;
> +}
> +
>  #endif /* CONFIG_USERFAULTFD */
>
>  static inline bool userfaultfd_wp_use_markers(struct vm_area_struct *vma)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index c89aed1510f1..2654a9548749 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2212,6 +2212,16 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
>  	return pmd;
>  }
>
> +static pmd_t clear_uffd_wp_pmd(pmd_t pmd)
> +{
> +	if (pmd_present(pmd))
> +		pmd = pmd_clear_uffd_wp(pmd);
> +	else if (is_swap_pmd(pmd))
> +		pmd = pmd_swp_clear_uffd_wp(pmd);
> +
> +	return pmd;
> +}
> +
>  bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>  		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
>  {
> @@ -2250,6 +2260,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>  			pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
>  	}
>  	pmd = move_soft_dirty_pmd(pmd);
> +	if (vma_has_uffd_without_event_remap(vma))
> +		pmd = clear_uffd_wp_pmd(pmd);
>  	set_pmd_at(mm, new_addr, new_pmd, pmd);
>  	if (force_flush)
>  		flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 354eec6f7e84..cdbc55d5384f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5454,6 +5454,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
>  			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte,
>  			  unsigned long sz)
>  {
> +	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
>  	struct hstate *h = hstate_vma(vma);
>  	struct mm_struct *mm = vma->vm_mm;
>  	spinlock_t *src_ptl, *dst_ptl;
> @@ -5470,7 +5471,18 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
>  		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
>
>  	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
> -	set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
> +
> +	if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
> +		huge_pte_clear(mm, new_addr, dst_pte, sz);

This checks whether the source huge_pte is a uffd-wp marker and, if so,
clears the destination. But the destination could previously have held
an arbitrary valid mapping, I guess? And huge_pte_clear() does not call
page_table_check_pte_clear(), so any such previous mapping would not
have its page_table_check reference count decremented. I think the
huge_pte_clear() call should be replaced with:

	huge_ptep_get_and_clear(mm, new_addr, dst_pte);

since there is no huge_ptep_clear(). The tests I wrote always mremap()
into PROT_NONE space, so they never hit this condition.

If people agree this is a bug, I'll send out a fix.
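To be concrete, I think the marker branch would end up looking
something like the below (untested sketch; I'm assuming
huge_ptep_get_and_clear() is the right primitive here precisely
because, unlike huge_pte_clear(), it feeds the old entry through the
page_table_check accounting):

	if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
		/*
		 * Clear dst with get-and-clear so that a previously
		 * present entry is seen by page_table_check;
		 * huge_pte_clear() skips that accounting.
		 */
		huge_ptep_get_and_clear(mm, new_addr, dst_pte);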
Thanks,
Ryan

> +	else {
> +		if (need_clear_uffd_wp) {
> +			if (pte_present(pte))
> +				pte = huge_pte_clear_uffd_wp(pte);
> +			else if (is_swap_pte(pte))
> +				pte = pte_swp_clear_uffd_wp(pte);
> +		}
> +		set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
> +	}
>
>  	if (src_ptl != dst_ptl)
>  		spin_unlock(src_ptl);
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 60473413836b..cff7f552f909 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -138,6 +138,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
>  		struct vm_area_struct *new_vma, pmd_t *new_pmd,
>  		unsigned long new_addr, bool need_rmap_locks)
>  {
> +	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
>  	struct mm_struct *mm = vma->vm_mm;
>  	pte_t *old_pte, *new_pte, pte;
>  	pmd_t dummy_pmdval;
> @@ -216,7 +217,18 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
>  				force_flush = true;
>  			pte = move_pte(pte, old_addr, new_addr);
>  			pte = move_soft_dirty_pte(pte);
> -			set_pte_at(mm, new_addr, new_pte, pte);
> +
> +			if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
> +				pte_clear(mm, new_addr, new_pte);
> +			else {
> +				if (need_clear_uffd_wp) {
> +					if (pte_present(pte))
> +						pte = pte_clear_uffd_wp(pte);
> +					else if (is_swap_pte(pte))
> +						pte = pte_swp_clear_uffd_wp(pte);
> +				}
> +				set_pte_at(mm, new_addr, new_pte, pte);
> +			}
>  		}
>
>  	arch_leave_lazy_mmu_mode();
> @@ -278,6 +290,15 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>  	if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
>  		return false;
>
> +	/* If this pmd belongs to a uffd vma with remap events disabled, we need
> +	 * to ensure that the uffd-wp state is cleared from all pgtables. This
> +	 * means recursing into lower page tables in move_page_tables(), and we
> +	 * can reuse the existing code if we simply treat the entry as "not
> +	 * moved".
> +	 */
> +	if (vma_has_uffd_without_event_remap(vma))
> +		return false;
> +
>  	/*
>  	 * We don't have to worry about the ordering of src and dst
>  	 * ptlocks because exclusive mmap_lock prevents deadlock.
> @@ -333,6 +354,15 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
>  	if (WARN_ON_ONCE(!pud_none(*new_pud)))
>  		return false;
>
> +	/* If this pud belongs to a uffd vma with remap events disabled, we need
> +	 * to ensure that the uffd-wp state is cleared from all pgtables. This
> +	 * means recursing into lower page tables in move_page_tables(), and we
> +	 * can reuse the existing code if we simply treat the entry as "not
> +	 * moved".
> +	 */
> +	if (vma_has_uffd_without_event_remap(vma))
> +		return false;
> +
>  	/*
>  	 * We don't have to worry about the ordering of src and dst
>  	 * ptlocks because exclusive mmap_lock prevents deadlock.
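For anyone who wants to poke at the original problem, the failing
sequence from the commit message boils down to roughly the program
below. This is a rough sketch rather than the actual selftest: error
handling and the UFFDIO_API feature-handshake checks are elided,
variable names are mine, and it needs a kernel with uffd-wp support
and page_table_check active (CONFIG_PAGE_TABLE_CHECK=y, plus
page_table_check=on on the command line unless
CONFIG_PAGE_TABLE_CHECK_ENFORCED) for the warning to fire.

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <linux/userfaultfd.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = getpagesize();
		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

		/* Note: UFFD_FEATURE_EVENT_REMAP deliberately not requested. */
		struct uffdio_api api = {
			.api = UFFD_API,
			.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
		};
		ioctl(uffd, UFFDIO_API, &api);

		char *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		memset(src, 1, len);	/* populate a real pte */

		struct uffdio_register reg = {
			.range = { .start = (unsigned long)src, .len = len },
			.mode = UFFDIO_REGISTER_MODE_WP,
		};
		ioctl(uffd, UFFDIO_REGISTER, &reg);

		struct uffdio_writeprotect wp = {
			.range = { .start = (unsigned long)src, .len = len },
			.mode = UFFDIO_WRITEPROTECT_MODE_WP,
		};
		ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);	/* pte now uffd-wp */

		/*
		 * Move into reserved PROT_NONE space: VM_UFFD_WP is dropped
		 * from the vma, but the uffd-wp bit travels with the pte.
		 */
		char *dst = mmap(NULL, len, PROT_NONE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		mremap(src, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst);

		/*
		 * Making the pte writable while uffd-wp is still set trips
		 * the page_table_check warning.
		 */
		mprotect(dst, len, PROT_READ | PROT_WRITE);

		return 0;
	}

Note the mremap() into reserved PROT_NONE space, which is also what
the selftests do; mremap()ing onto previously populated PTEs is the
separate question raised above about the hugetlb marker path.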