Subject: Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Gerald Schaefer
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com,
 catalin.marinas@arm.com, will@kernel.org, tsbogend@alpha.franken.de,
 James.Bottomley@HansenPartnership.com, deller@gmx.de, mpe@ellerman.id.au,
 benh@kernel.crashing.org, paulus@samba.org, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
 davem@davemloft.net, arnd@arndb.de, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org
Date: Sat, 30 Apr 2022 11:22:33 +0800
In-Reply-To: <20220429220214.4cfc5539@thinkpad>
References: <20220429220214.4cfc5539@thinkpad>

On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
> On Fri, 29 Apr 2022 16:14:43 +0800
> Baolin Wang wrote:
>
>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>> hugetlb, which means it can support not only PMD/PUD size hugetlb
>> (2M and 1G), but also CONT-PTE/PMD size (64K and 32M) if a 4K page
>> size is specified.
>>
>> When unmapping a hugetlb page, we will get the relevant page table
>> entry by huge_pte_offset() only once to nuke it. This is correct
>> for PMD or PUD size hugetlb, since they always contain only one
>> pmd entry or pud entry in the page table.
>>
>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>> since such pages span several contiguous pte or pmd entries with
>> the same page table attributes, so we will nuke only one pte or pmd
>> entry of a CONT-PTE/PMD size hugetlb page.
>>
>> And now we only use try_to_unmap() to unmap a poisoned hugetlb page,
>> which means we will unmap only one pte entry of a CONT-PTE or
>> CONT-PMD size poisoned hugetlb page, so the other subpages of the
>> poisoned hugetlb page remain accessible, which can cause serious
>> issues.
>>
>> So we should switch to huge_ptep_clear_flush(), which already
>> handles CONT-PTE and CONT-PMD size hugetlb, to nuke the hugetlb
>> page table entries and fix this issue.
>>
>> Note we've already used set_huge_swap_pte_at() to set a poisoned
>> swap entry for a poisoned hugetlb page.
>>
>> Signed-off-by: Baolin Wang
>> ---
>>  mm/rmap.c | 34 +++++++++++++++++-----------------
>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 7cf2408..1e168d7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  					break;
>>  				}
>>  			}
>> +			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>
> Unlike in your patch 2/3, I do not see that this (huge) pteval would later
> be used again with set_huge_pte_at() instead of set_pte_at(). Not sure if
> this (huge) pteval could end up at a set_pte_at() later, but if yes, then
> this would be broken on s390, and you'd need to use set_huge_pte_at()
> instead of set_pte_at() like in your patch 2/3.

IIUC, as I said in the commit message, we only unmap a poisoned hugetlb
page via try_to_unmap(), and the poisoned hugetlb page will be remapped
with a poisoned entry by set_huge_swap_pte_at() in try_to_unmap_one().
So I don't think we need to change set_pte_at() to set_huge_pte_at() for
the other cases, since a hugetlb page will not hit those cases.

		if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
			if (folio_test_hugetlb(folio)) {
				hugetlb_count_sub(folio_nr_pages(folio), mm);
				set_huge_swap_pte_at(mm, address, pvmw.pte,
						     pteval, vma_mmu_pagesize(vma));
			} else {
				dec_mm_counter(mm, mm_counter(&folio->page));
				set_pte_at(mm, address, pvmw.pte, pteval);
			}
		}

>
> Please note that huge_ptep_get functions do not return valid PTEs on s390,
> and such PTEs must never be set directly with set_pte_at(), but only with
> set_huge_pte_at().
>
> Background is that, for hugetlb pages, we are of course not really dealing
> with PTEs at this level, but rather PMDs or PUDs, depending on hugetlb size.
> On s390, the layout is quite different for PTEs and PMDs / PUDs, and
> unfortunately the hugetlb code is not properly reflecting this by using
> PMD or PUD types, like the THP code does.
>
> So, as work-around, on s390, the huge_ptep_xxx functions will return
> only fake PTEs, which must be converted again to a proper PMD or PUD,
> before writing them to the page table, which is what happens in
> set_huge_pte_at(), but not in set_pte_at().

Thanks for your explanation. As I said above, I think we've already
handled the hugetlb case with set_huge_swap_pte_at() in
try_to_unmap_one().
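
For readers following the thread, below is a minimal sketch of the flow
being discussed. It is not the actual mm/rmap.c code: the function name
and reduced argument list are invented for illustration, the TTU flag
checks and other bookkeeping are omitted, and it assumes the 5.18-era
helpers quoted above (including a huge_ptep_clear_flush() that returns
the original entry, as proposed in this series).

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/swapops.h>

/*
 * Sketch only: condensed view of the hugetlb hwpoison unmap path
 * discussed in this thread.  Not the real try_to_unmap_one().
 */
static void sketch_unmap_poisoned_hugetlb(struct mm_struct *mm,
					  struct vm_area_struct *vma,
					  unsigned long address, pte_t *ptep,
					  struct folio *folio,
					  struct page *subpage)
{
	pte_t pteval;

	/*
	 * huge_ptep_clear_flush() nukes *all* contiguous entries backing
	 * a CONT-PTE/PMD hugetlb page; a plain ptep_clear_flush() would
	 * clear only one of them and leave the other subpages mapped.
	 */
	pteval = huge_ptep_clear_flush(vma, address, ptep);

	/* The returned (huge) entry can still be consulted, e.g. for dirty state. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);

	if (PageHWPoison(subpage)) {
		/* Remap the poisoned page with a hwpoison swap entry. */
		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
		hugetlb_count_sub(folio_nr_pages(folio), mm);

		/*
		 * The huge variant must be used here: on s390 the hugetlb
		 * "pte" is a fake PTE describing a PMD/PUD, and only
		 * set_huge_pte_at()/set_huge_swap_pte_at() convert it back
		 * to the proper PMD/PUD format before storing it.  A bare
		 * set_pte_at(mm, address, ptep, pteval) would be broken.
		 */
		set_huge_swap_pte_at(mm, address, ptep, pteval,
				     vma_mmu_pagesize(vma));
	}
}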