From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 7/8] mm/memory-failure: Convert hwpoison_user_mappings to take a folio
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
CC: Naoya Horiguchi, Andrew Morton
References: <20240229212036.2160900-1-willy@infradead.org> <20240229212036.2160900-8-willy@infradead.org>
From: Miaohe Lin <linmiaohe@huawei.com>
Message-ID: <6cde5fc3-0614-e716-6402-6377a0c28d5b@huawei.com>
Date: Mon, 11 Mar 2024 19:44:23 +0800
In-Reply-To: <20240229212036.2160900-8-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2024/3/1 5:20, Matthew Wilcox (Oracle) wrote:
> Pass the folio from the callers, and use it throughout instead of hpage.
> Saves dozens of calls to compound_head().
> ---
>  mm/memory-failure.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 74e87a0a792c..56bc83372e30 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1559,24 +1559,24 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
>   * Do all that is necessary to remove user space mappings. Unmap
>   * the pages and send SIGBUS to the processes if the data was dirty.
>   */
> -static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
> -		int flags, struct page *hpage)
> +static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
> +		unsigned long pfn, int flags)

hwpoison_user_mappings() is called with folio refcnt held, so I think it
should be safe to use folio directly.

>  {
> -	struct folio *folio = page_folio(hpage);
>  	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
>  	struct address_space *mapping;
>  	LIST_HEAD(tokill);
>  	bool unmap_success;
>  	int forcekill;
> -	bool mlocked = PageMlocked(hpage);
> +	bool mlocked = folio_test_mlocked(folio);
>
>  	/*
>  	 * Here we are interested only in user-mapped pages, so skip any
>  	 * other types of pages.
>  	 */
> -	if (PageReserved(p) || PageSlab(p) || PageTable(p) || PageOffline(p))
> +	if (folio_test_reserved(folio) || folio_test_slab(folio) ||
> +	    folio_test_pgtable(folio) || folio_test_offline(folio))
>  		return true;
> -	if (!(PageLRU(hpage) || PageHuge(p)))
> +	if (!(folio_test_lru(folio) || folio_test_hugetlb(folio)))
>  		return true;
>
>  	/*
> @@ -1586,7 +1586,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>  	if (!page_mapped(p))
>  		return true;
>
> -	if (PageSwapCache(p)) {
> +	if (folio_test_swapcache(folio)) {
>  		pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
>  		ttu &= ~TTU_HWPOISON;
>  	}
> @@ -1597,11 +1597,11 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>  	 * XXX: the dirty test could be racy: set_page_dirty() may not always
>  	 * be called inside page lock (it's recommended but not enforced).
>  	 */
> -	mapping = page_mapping(hpage);
> -	if (!(flags & MF_MUST_KILL) && !PageDirty(hpage) && mapping &&
> +	mapping = folio_mapping(folio);
> +	if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&
>  	    mapping_can_writeback(mapping)) {
> -		if (page_mkclean(hpage)) {
> -			SetPageDirty(hpage);
> +		if (folio_mkclean(folio)) {
> +			folio_set_dirty(folio);
>  		} else {
>  			ttu &= ~TTU_HWPOISON;
>  			pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
> @@ -1616,7 +1616,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>  	 */
>  	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
>
> -	if (PageHuge(hpage) && !PageAnon(hpage)) {
> +	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
>  		/*
>  		 * For hugetlb pages in shared mappings, try_to_unmap
>  		 * could potentially call huge_pmd_unshare. Because of
> @@ -1656,7 +1656,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>  	 * use a more force-full uncatchable kill to prevent
>  	 * any accesses to the poisoned memory.
>  	 */
> -	forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL) ||
> +	forcekill = folio_test_dirty(folio) || (flags & MF_MUST_KILL) ||
>  		    !unmap_success;
>  	kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
>
> @@ -2100,7 +2100,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
>
>  	page_flags = folio->flags;
>
> -	if (!hwpoison_user_mappings(p, pfn, flags, &folio->page)) {
> +	if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
>  		folio_unlock(folio);
>  		return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
>  	}
> @@ -2367,7 +2367,7 @@ int memory_failure(unsigned long pfn, int flags)
>  	 * Now take care of user space mappings.
>  	 * Abort on fail: __filemap_remove_folio() assumes unmapped page.
>  	 */
> -	if (!hwpoison_user_mappings(p, pfn, flags, p)) {
> +	if (!hwpoison_user_mappings(folio, p, pfn, flags)) {

folio should always be equivalent to p in the normal 4K page case.

>  		res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
>  		goto unlock_page;
>  	}
>

This patch looks good to me. Thanks.

Acked-by: Miaohe Lin <linmiaohe@huawei.com>