Date: Fri, 24 Sep 2021 12:26:21 +0300
From: "Kirill A. Shutemov"
To: Yang Shi
Cc: HORIGUCHI NAOYA(堀口 直也), Hugh Dickins, "Kirill A. Shutemov",
	Matthew Wilcox, Peter Xu, Oscar Salvador, Andrew Morton, Linux MM,
	Linux FS-devel Mailing List, Linux Kernel Mailing List
Subject: Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
Message-ID: <20210924092621.kbg4byfidfzgjk3g@box>
References: <20210923032830.314328-1-shy828301@gmail.com>
	<20210923032830.314328-2-shy828301@gmail.com>
	<20210923143901.mdc6rejuh7hmr5vh@box.shutemov.name>

On Thu, Sep 23, 2021 at 01:39:49PM -0700, Yang Shi wrote:
> On Thu, Sep 23, 2021 at 10:15 AM Yang Shi wrote:
> >
> > On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov wrote:
> > >
> > > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > > When handling shmem page fault the THP with corrupted subpage could be PMD
> > > > mapped if certain conditions are satisfied. But kernel is supposed to
> > > > send SIGBUS when trying to map hwpoisoned page.
> > > >
> > > > There are two paths which may do PMD map: fault around and regular fault.
> > > >
> > > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > > the thing was even worse in fault around path. The THP could be PMD mapped as
> > > > long as the VMA fits regardless what subpage is accessed and corrupted. After
> > > > this commit as long as head page is not corrupted the THP could be PMD mapped.
> > > >
> > > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> > >
> > > s/regulat/regular/
> > >
> > > > page is not accessed and the VMA fits.
> > > >
> > > > This loophole could be fixed by iterating every subpage to check if any
> > > > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> > > >
> > > > So introduce a new page flag called HasHWPoisoned on the first tail page. It
> > > > indicates the THP has hwpoisoned subpage(s). It is set if any subpage of THP
> > > > is found hwpoisoned by memory failure and cleared when the THP is freed or
> > > > split.
> > > >
> > > > Cc:
> > > > Suggested-by: Kirill A. Shutemov
> > > > Signed-off-by: Yang Shi
> > > > ---
> > >
> > > ...
> > >
> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index dae481293b5d..740b7afe159a 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >  	}
> > > >
> > > >  	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > > -		vm_fault_t ret = do_set_pmd(vmf, page);
> > > > -		if (!ret) {
> > > > -			/* The page is mapped successfully, reference consumed. */
> > > > -			unlock_page(page);
> > > > -			return true;
> > > > -		}
> > > > +		vm_fault_t ret = do_set_pmd(vmf, page);
> > > > +		if (ret == VM_FAULT_FALLBACK)
> > > > +			goto out;
> > >
> > > Hm.. What? I don't get it. Who will establish page table in the pmd then?
> >
> > Aha, yeah. It should jump to the below PMD populate section. Will fix
> > it in the next version.
> >
> > >
> > > > +		if (!ret) {
> > > > +			/* The page is mapped successfully, reference consumed. */
> > > > +			unlock_page(page);
> > > > +			return true;
> > > > +		}
> > > >  	}
> > > >
> > > >  	if (pmd_none(*vmf->pmd)) {
> > > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >  		return true;
> > > >  	}
> > > >
> > > > +out:
> > > >  	return false;
> > > >  }
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 5e9ef0fc261e..0574b1613714 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > >  	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > >  	lruvec = lock_page_lruvec(head);
> > > >
> > > > +	ClearPageHasHWPoisoned(head);
> > > > +
> > >
> > > Do we serialize the new flag with lock_page() or what? I mean what
> > > prevents the flag being set again after this point, but before
> > > ClearPageCompound()?
> >
> > No, not in this patch. But I think we could use refcount. THP split
> > would freeze refcount and the split is guaranteed to succeed after
> > that point, so refcount can be checked in memory failure. The
> > SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
> > when get_page_unless_zero() bumps the refcount successfully. If the
> > refcount is zero it means the THP is under split or being freed, we
> > don't care about these two cases.
>
> Setting the flag in __get_hwpoison_page() would make this patch depend
> on patch #3. However, this patch probably will be backported to older
> versions. To ease the backport, I'd like to have the refcount check in
> the same place where THP is checked. So, something like "if
> (PageTransHuge(hpage) && page_count(hpage) != 0)".
>
> Then the call to set the flag could be moved to __get_hwpoison_page()
> in the following patch (after patch #3). Does this sound good to you?

Could you show the code? I'm not sure I follow.

page_count(hpage) check looks racy to me. What if split happens just
after the check?

-- 
Kirill A. Shutemov
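For illustration, a minimal sketch of the check being discussed above. The
placement (somewhere in memory_failure() where the THP case is already
handled, with hpage as the compound head) and the surrounding context are
assumptions drawn from the thread, not code from the actual patch:

	/* hpage is assumed to be compound_head(p), as elsewhere in memory failure */
	if (PageTransHuge(hpage) && page_count(hpage) != 0) {
		/*
		 * The window being asked about: page_ref_freeze() in the
		 * split path can succeed right after the page_count() check
		 * above; __split_huge_page() then runs
		 * ClearPageHasHWPoisoned() and finishes the split, so the
		 * SetPageHasHWPoisoned() below lands too late, on a page
		 * that is no longer a THP head.
		 */
		SetPageHasHWPoisoned(hpage);
	}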