From: Yang Shi <shy828301@gmail.com>
To: naoya.horiguchi@nec.com, hughd@google.com, kirill.shutemov@linux.intel.com, willy@infradead.org, peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 4/6] mm: hwpoison: refactor refcount check handling
Date: Thu, 14 Oct 2021 12:16:13 -0700
Message-Id: <20211014191615.6674-5-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20211014191615.6674-1-shy828301@gmail.com>
References: <20211014191615.6674-1-shy828301@gmail.com>

Memory failure reports failure if the page still has an extra pinned
refcount, beyond the one taken by hwpoison itself, after the handler is
done.  The check is not actually necessary for all handlers, so move it
into the specific handlers.  This makes the following patch, which keeps
shmem pages in the page cache, easier.  Some cases may have an expected
extra pin, for example when the page is dirty and in the swap cache.

Suggested-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 93 +++++++++++++++++++++++++++++++--------------
 1 file changed, 64 insertions(+), 29 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 2809d12f16af..cdf8ccd0865f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -806,12 +806,44 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
 	return ret;
 }
 
+struct page_state {
+	unsigned long mask;
+	unsigned long res;
+	enum mf_action_page_type type;
+
+	/* Callback ->action() has to unlock the relevant page inside it. */
+	int (*action)(struct page_state *ps, struct page *p);
+};
+
+/*
+ * Return true if page is still referenced by others, otherwise return
+ * false.
+ *
+ * The extra_pins is true when one extra refcount is expected.
+ */
+static bool has_extra_refcount(struct page_state *ps, struct page *p,
+			       bool extra_pins)
+{
+	int count = page_count(p) - 1;
+
+	if (extra_pins)
+		count -= 1;
+
+	if (count > 0) {
+		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
+		       page_to_pfn(p), action_page_types[ps->type], count);
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Error hit kernel page.
  * Do nothing, try to be lucky and not touch this instead. For a few cases we
  * could be more sophisticated.
  */
-static int me_kernel(struct page *p, unsigned long pfn)
+static int me_kernel(struct page_state *ps, struct page *p)
 {
 	unlock_page(p);
 	return MF_IGNORED;
@@ -820,9 +852,9 @@ static int me_kernel(struct page *p, unsigned long pfn)
 /*
  * Page in unknown state. Do nothing.
  */
-static int me_unknown(struct page *p, unsigned long pfn)
+static int me_unknown(struct page_state *ps, struct page *p)
 {
-	pr_err("Memory failure: %#lx: Unknown page state\n", pfn);
+	pr_err("Memory failure: %#lx: Unknown page state\n", page_to_pfn(p));
 	unlock_page(p);
 	return MF_FAILED;
 }
@@ -830,7 +862,7 @@ static int me_unknown(struct page *p, unsigned long pfn)
 /*
  * Clean (or cleaned) page cache page.
  */
-static int me_pagecache_clean(struct page *p, unsigned long pfn)
+static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
@@ -867,9 +899,13 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	 *
 	 * Open: to take i_rwsem or not for this? Right now we don't.
 	 */
-	ret = truncate_error_page(p, pfn, mapping);
+	ret = truncate_error_page(p, page_to_pfn(p), mapping);
 out:
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -878,7 +914,7 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
  * Issues: when the error hit a hole page the error is not properly
  * propagated.
  */
-static int me_pagecache_dirty(struct page *p, unsigned long pfn)
+static int me_pagecache_dirty(struct page_state *ps, struct page *p)
 {
 	struct address_space *mapping = page_mapping(p);
 
@@ -922,7 +958,7 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
 		mapping_set_error(mapping, -EIO);
 	}
 
-	return me_pagecache_clean(p, pfn);
+	return me_pagecache_clean(ps, p);
 }
 
 /*
@@ -944,9 +980,10 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
  * Clean swap cache pages can be directly isolated. A later page fault will
  * bring in the known good data from disk.
  */
-static int me_swapcache_dirty(struct page *p, unsigned long pfn)
+static int me_swapcache_dirty(struct page_state *ps, struct page *p)
 {
 	int ret;
+	bool extra_pins = false;
 
 	ClearPageDirty(p);
 	/* Trigger EIO in shmem: */
@@ -954,10 +991,17 @@ static int me_swapcache_dirty(struct page *p, unsigned long pfn)
 
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
 	unlock_page(p);
+
+	if (ret == MF_DELAYED)
+		extra_pins = true;
+
+	if (has_extra_refcount(ps, p, extra_pins))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
-static int me_swapcache_clean(struct page *p, unsigned long pfn)
+static int me_swapcache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 
@@ -965,6 +1009,10 @@ static int me_swapcache_clean(struct page *p, unsigned long pfn)
 
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_RECOVERED;
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -974,7 +1022,7 @@ static int me_swapcache_clean(struct page *p, unsigned long pfn)
  * - Error on hugepage is contained in hugepage unit (not in raw page unit.)
  *   To narrow down kill region to one page, we need to break up pmd.
  */
-static int me_huge_page(struct page *p, unsigned long pfn)
+static int me_huge_page(struct page_state *ps, struct page *p)
 {
 	int res;
 	struct page *hpage = compound_head(p);
@@ -985,7 +1033,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 
 	mapping = page_mapping(hpage);
 	if (mapping) {
-		res = truncate_error_page(hpage, pfn, mapping);
+		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
 		unlock_page(hpage);
 	} else {
 		res = MF_FAILED;
@@ -1003,6 +1051,9 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 		}
 	}
 
+	if (has_extra_refcount(ps, p, false))
+		res = MF_FAILED;
+
 	return res;
 }
 
@@ -1028,14 +1079,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 #define slab		(1UL << PG_slab)
 #define reserved	(1UL << PG_reserved)
 
-static struct page_state {
-	unsigned long mask;
-	unsigned long res;
-	enum mf_action_page_type type;
-
-	/* Callback ->action() has to unlock the relevant page inside it. */
-	int (*action)(struct page *p, unsigned long pfn);
-} error_states[] = {
+static struct page_state error_states[] = {
 	{ reserved,	reserved,	MF_MSG_KERNEL,	me_kernel },
 	/*
 	 * free pages are specially detected outside this table:
@@ -1095,19 +1139,10 @@ static int page_action(struct page_state *ps, struct page *p,
 			unsigned long pfn)
 {
 	int result;
-	int count;
 
 	/* page p should be unlocked after returning from ps->action(). */
-	result = ps->action(p, pfn);
+	result = ps->action(ps, p);
 
-	count = page_count(p) - 1;
-	if (ps->action == me_swapcache_dirty && result == MF_DELAYED)
-		count--;
-	if (count > 0) {
-		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
-		       pfn, action_page_types[ps->type], count);
-		result = MF_FAILED;
-	}
 	action_result(pfn, ps->type, result);
 
 	/* Could do more checks here if page looks ok */
-- 
2.26.2
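
[Editorial aside, not part of the patch] For readers who want to trace the
refcount accounting outside the kernel tree, below is a minimal,
self-contained userspace C sketch of the refactored check.  Only the
has_extra_refcount() arithmetic and the page_state/->action() shape mirror
the patch; struct page, its refcount field, the MF_* enum here, and the
main() driver are invented stand-ins for page_count() and the real
handlers, not kernel API.

/*
 * Illustrative userspace model only -- struct page, the refcount field,
 * and main() are stand-ins; only the has_extra_refcount() accounting
 * mirrors the patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct page {
	int refcount;			/* stand-in for page_count(p) */
};

struct page_state {
	const char *type;		/* stand-in for action_page_types[ps->type] */
	int (*action)(struct page_state *ps, struct page *p);
};

enum { MF_RECOVERED, MF_DELAYED, MF_FAILED };

/*
 * One reference is always held by the hwpoison handler itself; when
 * extra_pins is true, one more reference is expected and tolerated.
 */
static bool has_extra_refcount(struct page_state *ps, struct page *p,
			       bool extra_pins)
{
	int count = p->refcount - 1;	/* drop the hwpoison reference */

	if (extra_pins)
		count -= 1;		/* e.g. dirty page still in swap cache */

	if (count > 0) {
		fprintf(stderr, "%s still referenced by %d users\n",
			ps->type, count);
		return true;
	}
	return false;
}

/* Model of me_swapcache_dirty(): MF_DELAYED implies one expected pin. */
static int me_swapcache_dirty(struct page_state *ps, struct page *p)
{
	int ret = MF_DELAYED;		/* pretend isolation from LRU failed */

	if (has_extra_refcount(ps, p, ret == MF_DELAYED))
		ret = MF_FAILED;
	return ret;
}

int main(void)
{
	/* hwpoison reference plus the expected swap cache reference */
	struct page p = { .refcount = 2 };
	struct page_state ps = { .type = "dirty swapcache",
				 .action = me_swapcache_dirty };

	/* Prints nothing: the one extra pin is expected for MF_DELAYED. */
	return ps.action(&ps, &p) == MF_FAILED;
}

With refcount = 3 the same driver reports "dirty swapcache still
referenced by 1 users" and fails, which is the per-handler version of the
check that the old page_action() could only apply globally.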