From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Tue, 3 May 2022 10:19:46 +0800
Subject: Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
To: Gerald Schaefer
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com, catalin.marinas@arm.com, will@kernel.org, tsbogend@alpha.franken.de, James.Bottomley@HansenPartnership.com, deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net, arnd@arndb.de, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Message-ID: <48a05075-a323-e7f1-9e99-6c0d106eb2cb@linux.alibaba.com>
References: <20220429220214.4cfc5539@thinkpad> <20220502160232.589a6111@thinkpad>
In-Reply-To: <20220502160232.589a6111@thinkpad>
On 5/2/2022 10:02 PM, Gerald Schaefer wrote:
> On Sat, 30 Apr 2022 11:22:33 +0800
> Baolin Wang wrote:
>
>>
>>
>> On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
>>> On Fri, 29 Apr 2022 16:14:43 +0800
>>> Baolin Wang wrote:
>>>
>>>> Some architectures (like ARM64) can support CONT-PTE/PMD size
>>>> hugetlb, which means they support not only PMD/PUD size hugetlb
>>>> (2M and 1G), but also CONT-PTE/PMD size (64K and 32M) when a 4K
>>>> page size is specified.
>>>>
>>>> When unmapping a hugetlb page, we get the relevant page table
>>>> entry by huge_pte_offset() only once to nuke it. This is correct
>>>> for PMD or PUD size hugetlb, since they always contain only one
>>>> pmd entry or pud entry in the page table.
>>>>
>>>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>>>> since they can contain several contiguous pte or pmd entries with
>>>> the same page table attributes, so we will nuke only one pte or pmd
>>>> entry for such a CONT-PTE/PMD size hugetlb page.
>>>>
>>>> Currently we only use try_to_unmap() to unmap a poisoned hugetlb
>>>> page, which means we will unmap only one pte entry for a CONT-PTE
>>>> or CONT-PMD size poisoned hugetlb page, and the other subpages of
>>>> that poisoned hugetlb page remain accessible, which may cause
>>>> serious issues.
>>>>
>>>> So change to use huge_ptep_clear_flush() to nuke the hugetlb page
>>>> table entries to fix this issue, since it already handles CONT-PTE
>>>> and CONT-PMD size hugetlb.
>>>>
>>>> Note we've already used set_huge_swap_pte_at() to set a poisoned
>>>> swap entry for a poisoned hugetlb page.
>>>>
>>>> Signed-off-by: Baolin Wang
>>>> ---
>>>>  mm/rmap.c | 34 +++++++++++++++++-----------------
>>>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 7cf2408..1e168d7 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>  				break;
>>>>  			}
>>>>  		}
>>>> +		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>>>
>>> Unlike in your patch 2/3, I do not see that this (huge) pteval would later
>>> be used again with set_huge_pte_at() instead of set_pte_at(). Not sure if
>>> this (huge) pteval could end up at a set_pte_at() later, but if yes, then
>>> this would be broken on s390, and you'd need to use set_huge_pte_at()
>>> instead of set_pte_at() like in your patch 2/3.
>>
>> IIUC, as I said in the commit message, we only unmap a poisoned
>> hugetlb page by try_to_unmap(), and the poisoned hugetlb page will be
>> remapped with a poisoned entry by set_huge_swap_pte_at() in
>> try_to_unmap_one(). So I think there is no need to change set_pte_at()
>> to set_huge_pte_at() for the other cases, since a hugetlb page will
>> not hit those cases.
>>
>> 	if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
>> 		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
>> 		if (folio_test_hugetlb(folio)) {
>> 			hugetlb_count_sub(folio_nr_pages(folio), mm);
>> 			set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
>> 					     vma_mmu_pagesize(vma));
>> 		} else {
>> 			dec_mm_counter(mm, mm_counter(&folio->page));
>> 			set_pte_at(mm, address, pvmw.pte, pteval);
>> 		}
>>
>> 	}
>
> OK, but wouldn't the pteval be overwritten here with
> pteval = swp_entry_to_pte(make_hwpoison_entry(subpage))?
> IOW, what sense does it make to save the returned pteval from
> huge_ptep_clear_flush(), when it is never being used anywhere?
Please see the earlier code: we use the original pte value to check
whether the entry was uffd-wp armed, and whether we need to mark the
folio dirty, even though hugetlbfs uses noop_dirty_folio().

	pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);

	/* Set the dirty flag on the folio now the pte is gone. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);
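
To make the flow easier to follow, here is a condensed sketch of the
relevant part of try_to_unmap_one() as discussed in this thread. It is
a paraphrase for illustration, not the exact upstream code: the
non-hugetlb ptep_clear_flush() branch and the exact ordering of the
surrounding statements are assumed from context.

	if (folio_test_hugetlb(folio)) {
		/*
		 * Nuke the whole CONT-PTE/PMD range and return the
		 * original (huge) pte value; huge_ptep_clear_flush()
		 * already handles contiguous entries.
		 */
		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
	} else {
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}

	/*
	 * The saved pteval is consumed here: preserve uffd-wp tracking
	 * and transfer the hardware dirty bit to the folio.
	 */
	pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);

	if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
		/* Remap the poisoned page with a hwpoison swap entry. */
		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
		if (folio_test_hugetlb(folio)) {
			hugetlb_count_sub(folio_nr_pages(folio), mm);
			set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
					     vma_mmu_pagesize(vma));
		} else {
			dec_mm_counter(mm, mm_counter(&folio->page));
			set_pte_at(mm, address, pvmw.pte, pteval);
		}
	}

So the pteval returned by huge_ptep_clear_flush() is only overwritten by
the hwpoison swap entry after it has already been used for the uffd-wp
and dirty checks above.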