From: zhenwei pi <pizhenwei@bytedance.com>
To: akpm@linux-foundation.org, naoya.horiguchi@nec.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, zhenwei pi, Wu Fengguang
Subject: [PATCH 1/4] mm/memory-failure.c: move clear_hwpoisoned_pages
Date: Fri, 29 Apr 2022 22:22:03 +0800
Message-Id: <20220429142206.294714-2-pizhenwei@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220429142206.294714-1-pizhenwei@bytedance.com>
References: <20220429142206.294714-1-pizhenwei@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
clear_hwpoisoned_pages() clears the HWPoison flag and decreases the
number of poisoned pages; this is really part of memory-failure
handling. Move the function from sparse.c to memory-failure.c, so that
no CONFIG_MEMORY_FAILURE block remains in sparse.c.

Cc: Wu Fengguang
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
---
 mm/internal.h       | 11 +++++++++++
 mm/memory-failure.c | 21 +++++++++++++++++++++
 mm/sparse.c         | 27 ---------------------------
 3 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index cf16280ce132..e8add8df4e0f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -758,4 +758,15 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
 
 DECLARE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
+/*
+ * mm/memory-failure.c
+ */
+#ifdef CONFIG_MEMORY_FAILURE
+void clear_hwpoisoned_pages(struct page *memmap, int nr_pages);
+#else
+static inline void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
+{
+}
+#endif
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 27760c19bad7..46d9fb612dcc 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2401,3 +2401,24 @@ int soft_offline_page(unsigned long pfn, int flags)
 
 	return ret;
 }
+
+void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
+{
+	int i;
+
+	/*
+	 * A further optimization is to have per section refcounted
+	 * num_poisoned_pages.  But that would need more space per memmap, so
+	 * for now just do a quick global check to speed up this routine in the
+	 * absence of bad pages.
+	 */
+	if (atomic_long_read(&num_poisoned_pages) == 0)
+		return;
+
+	for (i = 0; i < nr_pages; i++) {
+		if (PageHWPoison(&memmap[i])) {
+			num_poisoned_pages_dec();
+			ClearPageHWPoison(&memmap[i]);
+		}
+	}
+}
diff --git a/mm/sparse.c b/mm/sparse.c
index 952f06d8f373..e983c38fac8f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -916,33 +916,6 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
 	return 0;
 }
 
-#ifdef CONFIG_MEMORY_FAILURE
-static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
-{
-	int i;
-
-	/*
-	 * A further optimization is to have per section refcounted
-	 * num_poisoned_pages.  But that would need more space per memmap, so
-	 * for now just do a quick global check to speed up this routine in the
-	 * absence of bad pages.
-	 */
-	if (atomic_long_read(&num_poisoned_pages) == 0)
-		return;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (PageHWPoison(&memmap[i])) {
-			num_poisoned_pages_dec();
-			ClearPageHWPoison(&memmap[i]);
-		}
-	}
-}
-#else
-static inline void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
-{
-}
-#endif
-
 void sparse_remove_section(struct mem_section *ms, unsigned long pfn,
 		unsigned long nr_pages, unsigned long map_offset,
 		struct vmem_altmap *altmap)
-- 
2.20.1