From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2 07/14] mm: khugepaged: collapse_pte_mapped_thp() use pte_offset_map_rw_nolock()
From: Muchun Song <muchun.song@linux.dev>
Date: Thu, 5 Sep 2024 14:32:48 +0800
To: Qi Zheng
Cc: David Hildenbrand, Hugh Dickins, Matthew Wilcox,
 "Vlastimil Babka (SUSE)", Andrew Morton, Mike Rapoport, Vishal Moola,
 Peter Xu, Ryan Roberts, christophe.leroy2@cs-soprasteria.com, LKML,
 Linux Memory Management List, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org
Message-Id: <05955456-8743-448A-B7A4-BC45FABEA628@linux.dev>
References: <24be821f-a95f-47f1-879a-c392a79072cc@linux.dev>

> On Aug 30, 2024, at 14:54, Qi Zheng wrote:
>
> On 2024/8/29 16:10, Muchun Song wrote:
>> On 2024/8/22 15:13, Qi Zheng wrote:
>>> In collapse_pte_mapped_thp(), we may modify the pte and pmd entry after
>>> acquiring the ptl, so convert it to using pte_offset_map_rw_nolock(). At
>>> this time, the write lock of mmap_lock is not held, and the pte_same()
>>> check is not performed after the PTL is held. So we should get pgt_pmd
>>> and do the pmd_same() check after the ptl is held.
>>>
>>> For the case where the ptl is released first and then the pml is acquired,
>>> the PTE page may have been freed, so we must do the pmd_same() check before
>>> reacquiring the ptl.
>>>
>>> Signed-off-by: Qi Zheng
>>> ---
>>>  mm/khugepaged.c | 16 +++++++++++++++-
>>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 53bfa7f4b7f82..15d3f7f3c65f2 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -1604,7 +1604,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>  	if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
>>>  		pml = pmd_lock(mm, pmd);
>>> -	start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
>>> +	start_pte = pte_offset_map_rw_nolock(mm, pmd, haddr, &pgt_pmd, &ptl);
>>>  	if (!start_pte)		/* mmap_lock + page lock should prevent this */
>>>  		goto abort;
>>>  	if (!pml)
>>> @@ -1612,6 +1612,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>  	else if (ptl != pml)
>>>  		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>>> +	if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
>>> +		goto abort;
>>> +
>>>  	/* step 2: clear page table and adjust rmap */
>>>  	for (i = 0, addr = haddr, pte = start_pte;
>>>  	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
>>> @@ -1657,6 +1660,16 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>  	/* step 4: remove empty page table */
>>>  	if (!pml) {
>>>  		pml = pmd_lock(mm, pmd);
>>> +		/*
>>> +		 * We called pte_unmap() and released the ptl before acquiring
>>> +		 * the pml, which means we left the RCU critical section, so the
>>> +		 * PTE page may have been freed. We must therefore do the
>>> +		 * pmd_same() check before reacquiring the ptl.
>>> +		 */
>>> +		if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd)))) {
>>> +			spin_unlock(pml);
>>> +			goto pmd_change;
>> It seems we forgot to flush the TLB, since we have cleared some pte entries?
>
> See the comment above the ptep_clear():
>
>   /*
>    * Must clear entry, or a racing truncate may re-remove it.
>    * TLB flush can be left until pmdp_collapse_flush() does it.
>    * PTE dirty? Shmem page is already dirty; file is read-only.
>    */
>
> The TLB flush was handed over to pmdp_collapse_flush(). If a
> concurrent thread frees the PTE page at this time, the TLB will
> also be flushed after pmd_clear().

But you skipped pmdp_collapse_flush().

>
>>> +		}
>>>  		if (ptl != pml)
>>>  			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>>>  	}
>>> @@ -1688,6 +1701,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>  	pte_unmap_unlock(start_pte, ptl);
>>>  	if (pml && pml != ptl)
>>>  		spin_unlock(pml);
>>> +pmd_change:
>>>  	if (notified)
>>>  		mmu_notifier_invalidate_range_end(&range);
>>>  drop_folio: