From: Jiaqi Yan <jiaqiyan@google.com>
Date: Fri, 16 Jun 2023 19:18:41 -0700
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
To: Mike Kravetz
Cc: Naoya Horiguchi, HORIGUCHI NAOYA(堀口 直也), songmuchun@bytedance.com,
 shy828301@gmail.com, linmiaohe@huawei.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, duenwen@google.com,
 axelrasmussen@google.com, jthoughton@google.com
In-Reply-To: <20230616233447.GB7371@monkey>
References: <20230517235314.GB10757@monkey> <20230519224214.GB3581@monkey>
 <20230522044557.GA845371@hori.linux.bs1.fc.nec.co.jp>
 <20230523024305.GA920098@hori.linux.bs1.fc.nec.co.jp>
 <20230612041901.GA3083591@ik1-406-35019.vs.sakura.ne.jp>
 <20230616233447.GB7371@monkey>

On Fri, Jun 16, 2023 at 4:35 PM Mike Kravetz wrote:
>
> On 06/16/23 14:19, Jiaqi Yan wrote:
> > On Sun, Jun 11, 2023 at 9:19 PM Naoya Horiguchi wrote:
> > >
> > > On Fri, Jun 09, 2023 at 10:48:47PM -0700, Jiaqi Yan wrote:
> > > > On Thu, May 25, 2023 at 5:28 PM Jiaqi Yan wrote:
> > > > >
> > > > > On Mon, May 22, 2023 at 7:43 PM HORIGUCHI NAOYA(堀口 直也) wrote:
> > > > > >
> > > > > > On Mon, May 22, 2023 at 11:22:49AM -0700, Jiaqi Yan wrote:
> > > > > > > On Sun, May 21, 2023 at 9:50 PM HORIGUCHI NAOYA(堀口 直也) wrote:
> > > > > > > >
> > > > > > > > On Fri, May 19, 2023 at 03:42:14PM -0700, Mike Kravetz wrote:
> > > > > > > > > On 05/19/23 13:54, Jiaqi Yan wrote:
> > > > > > > > > > On Wed, May 17, 2023 at 4:53 PM Mike Kravetz wrote:
> > > > > > > > > > >
> > > > > > > > > > > On 05/17/23 16:09, Jiaqi Yan wrote:
> > > > > > > > > > > > Adds the functionality to search a subpage's corresponding raw_hwp_page
> > > > > > > > > > > > in hugetlb page's HWPOISON list. This functionality can also tell if a
> > > > > > > > > > > > subpage is a raw HWPOISON page.
> > > > > > > > > > > >
> > > > > > > > > > > > Exports this functionality to be immediately used in the read operation
> > > > > > > > > > > > for hugetlbfs.
> > > > > > > > > > > >
> > > > > > > > > > > > Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
> > > > > > > > > > > > ---
> > > > > > > > > > > >  include/linux/mm.h  | 23 +++++++++++++++++++++++
> > > > > > > > > > > >  mm/memory-failure.c | 26 ++++++++++++++++----------
> > > > > > > > > > > >  2 files changed, 39 insertions(+), 10 deletions(-)
> > > > > > > > > > > >
> > > > > > > > > > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > > > > > > > > > index 27ce77080c79..f191a4119719 100644
> > > > > > > > > > > > --- a/include/linux/mm.h
> > > > > > > > > > > > +++ b/include/linux/mm.h
> > > > > > > > > > >
> > > > > > > > > > > Any reason why you decided to add the following to linux/mm.h instead of
> > > > > > > > > > > linux/hugetlb.h?  Since it is hugetlb specific I would have thought
> > > > > > > > > > > hugetlb.h was more appropriate.
> > > > > > > > > > >
> > > > > > > > > > > > @@ -3683,6 +3683,29 @@ enum mf_action_page_type {
> > > > > > > > > > > >   */
> > > > > > > > > > > >  extern const struct attribute_group memory_failure_attr_group;
> > > > > > > > > > > >
> > > > > > > > > > > > +#ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > > > > +/*
> > > > > > > > > > > > + * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > > > > + * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > > > > + */
> > > > > > > > > > > > +struct raw_hwp_page {
> > > > > > > > > > > > +	struct llist_node node;
> > > > > > > > > > > > +	struct page *page;
> > > > > > > > > > > > +};
> > > > > > > > > > > > +
> > > > > > > > > > > > +static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > > > > +{
> > > > > > > > > > > > +	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > > > > +}
> > > > > > > > > > > > +
> > > > > > > > > > > > +/*
> > > > > > > > > > > > + * Given @subpage, a raw page in a hugepage, find its location in @folio's
> > > > > > > > > > > > + * _hugetlb_hwpoison list. Return NULL if @subpage is not in the list.
> > > > > > > > > > > > + */
> > > > > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > > > > +					struct page *subpage);
> > > > > > > > > > > > +#endif
> > > > > > > > > > > > +
> > > > > > > > > > > >  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> > > > > > > > > > > >  extern void clear_huge_page(struct page *page,
> > > > > > > > > > > >  			    unsigned long addr_hint,
> > > > > > > > > > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > > > > > > > > > index 5b663eca1f29..c49e6c2d1f07 100644
> > > > > > > > > > > > --- a/mm/memory-failure.c
> > > > > > > > > > > > +++ b/mm/memory-failure.c
> > > > > > > > > > > > @@ -1818,18 +1818,24 @@ EXPORT_SYMBOL_GPL(mf_dax_kill_procs);
> > > > > > > > > > > >  #endif /* CONFIG_FS_DAX */
> > > > > > > > > > > >
> > > > > > > > > > > >  #ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > > > > -/*
> > > > > > > > > > > > - * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > > > > - * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > > > > - */
> > > > > > > > > > > > -struct raw_hwp_page {
> > > > > > > > > > > > -	struct llist_node node;
> > > > > > > > > > > > -	struct page *page;
> > > > > > > > > > > > -};
> > > > > > > > > > > >
> > > > > > > > > > > > -static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > > > > +					struct page *subpage)
> > > > > > > > > > > >  {
> > > > > > > > > > > > -	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > > > > +	struct llist_node *t, *tnode;
> > > > > > > > > > > > +	struct llist_head *raw_hwp_head = raw_hwp_list_head(folio);
> > > > > > > > > > > > +	struct raw_hwp_page *hwp_page = NULL;
> > > > > > > > > > > > +	struct raw_hwp_page *p;
> > > > > > > > > > > > +
> > > > > > > > > > > > +	llist_for_each_safe(tnode, t, raw_hwp_head->first) {
> > > > > > > > > > >
> > > > > > > > > > > IIUC, in rare error cases a hugetlb page can be poisoned WITHOUT a
> > > > > > > > > > > raw_hwp_list.  This is indicated by the hugetlb page specific flag
> > > > > > > > > > > RawHwpUnreliable or folio_test_hugetlb_raw_hwp_unreliable().
> > > > > > > > > > >
> > > > > > > > > > > Looks like this routine does not consider that case.  Seems like it should
> > > > > > > > > > > always return the passed subpage if folio_test_hugetlb_raw_hwp_unreliable()
> > > > > > > > > > > is true?
> > > > > > > > > >
> > > > > > > > > > Thanks for catching this. I wonder should this routine consider
> > > > > > > > > > RawHwpUnreliable or should the caller do.
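For illustration only (a sketch, not part of the patch under review): if the
check does end up inside the routine and it returns a bool, I imagine it would
look roughly like the below. The name __is_raw_hwp_subpage is made up for this
sketch; it only reuses struct raw_hwp_page, raw_hwp_list_head(), and
folio_test_hugetlb_raw_hwp_unreliable() from the discussion here.

/*
 * Sketch only (hypothetical, untested): a bool-returning variant that
 * folds the RawHwpUnreliable check into the lookup itself, so callers
 * never keep a pointer into a list whose entries may later be kfree()d.
 */
static bool __is_raw_hwp_subpage(struct folio *folio, struct page *subpage)
{
	struct llist_node *t, *tnode;
	struct llist_head *raw_hwp_head = raw_hwp_list_head(folio);
	struct raw_hwp_page *p;

	/*
	 * When the raw error list is unreliable, any subpage of the
	 * hugepage has to be assumed poisoned.
	 */
	if (folio_test_hugetlb_raw_hwp_unreliable(folio))
		return true;

	llist_for_each_safe(tnode, t, raw_hwp_head->first) {
		p = container_of(tnode, struct raw_hwp_page, node);
		if (subpage == p->page)
			return true;
	}

	return false;
}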
> > > > > > > > > >
> > > > > > > > > > find_raw_hwp_page now returns raw_hwp_page* in the llist entry to
> > > > > > > > > > caller (valid one at the moment), but once RawHwpUnreliable is set,
> > > > > > > > > > all the raw_hwp_page in the llist will be kfree(), and the returned
> > > > > > > > > > value becomes dangling pointer to caller (if the caller holds that
> > > > > > > > > > pointer long enough). Maybe returning a bool would be safer to the
> > > > > > > > > > caller? If the routine returns bool, then checking RawHwpUnreliable
> > > > > > > > > > can definitely be within the routine.
> > > > > > > > >
> > > > > > > > > I think the check for RawHwpUnreliable should be within this routine.
> > > > > > > > > Looking closer at the code, I do not see any way to synchronize this.
> > > > > > > > > It looks like manipulation in the memory-failure code would be
> > > > > > > > > synchronized via the mf_mutex.  However, I do not see how traversal and
> > > > > > > > > freeing of the raw_hwp_list called from __update_and_free_hugetlb_folio
> > > > > > > > > are synchronized against memory-failure code modifying the list.
> > > > > > > > >
> > > > > > > > > Naoya, can you provide some thoughts?
> >
> > Hi Mike,
> >
> > Now looking at this again, I think concurrent adding and deleting are
> > fine with each other and with themselves, because raw_hwp_list is a
> > lock-less llist.
>
> Correct.
>
> > As for synchronizing traversal with adding and deleting, I wonder if it
> > is a good idea to make __update_and_free_hugetlb_folio hold hugetlb_lock
> > before it calls folio_clear_hugetlb_hwpoison (which traverses + deletes
> > the raw_hwp_list)? In hugetlb, get_huge_page_for_hwpoison already takes
> > hugetlb_lock; it seems to me __update_and_free_hugetlb_folio is missing
> > the lock.
>
> I do not think the lock is needed.  However, while looking more closely
> at this I think I discovered another issue.
> This is VERY subtle.
> Perhaps Naoya can help verify if my reasoning below is correct.
>
> In __update_and_free_hugetlb_folio we are not operating on a hugetlb page.
> Why is this?
> Before calling update_and_free_hugetlb_folio we call remove_hugetlb_folio.
> The purpose of remove_hugetlb_folio is to remove the huge page from the
> list AND the compound page destructor indicating this is a hugetlb page is
> changed.  This is all done while holding the hugetlb lock.  So, the test
> for folio_test_hugetlb(folio) is false.
>
> We have technically a compound non-hugetlb page with a non-null raw_hwp_list.
>
> Important note: at this time we have not reallocated vmemmap pages if
> hugetlb page was vmemmap optimized.  That is done later in
> __update_and_free_hugetlb_folio.
>
> The 'good news' is that after this point get_huge_page_for_hwpoison will
> not recognize this as a hugetlb page, so nothing will be added to the
> list.  There is no need to worry about entries being added to the list
> during traversal.
>
> The 'bad news' is that if we get a memory error at this time we will
> treat it as a memory error on a regular compound page.  So,
> TestSetPageHWPoison(p) in memory_failure() may try to write a read only
> struct page. :(  At least I think this is an issue.

Would it help if dissolve_free_huge_page doesn't unlock hugetlb_lock
until update_and_free_hugetlb_folio is done, or basically until
dissolve_free_huge_page is done?
TestSetPageHWPoison in memory_failure is called after
try_memory_failure_hugetlb, and folio_test_hugetlb is tested within
__get_huge_page_for_hwpoison, which is wrapped by the hugetlb_lock. So by
the time dissolve_free_huge_page returns, the subpages have already gone
through hugetlb_vmemmap_restore and __destroy_compound_gigantic_folio and
become non-compound raw pages (folios). Now folio_test_hugetlb(p)=false
will be correct for memory_failure, and it can recover p as a dissolved
non-hugetlb page.
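And on the patch 1 side, if serializing the list walk with the free path does
turn out to be necessary, I imagine the read side could simply take
hugetlb_lock around the walk, roughly like the sketch below. Again the names
are hypothetical (building on the __is_raw_hwp_subpage sketch above), and this
assumes hugetlb_lock is the same lock __get_huge_page_for_hwpoison already
takes.

/*
 * Rough sketch, not from the series: take hugetlb_lock around the list
 * walk so it cannot race with folio_clear_hugetlb_hwpoison() freeing
 * the raw_hwp entries on the free/dissolve path.
 */
bool is_raw_hwp_subpage(struct folio *folio, struct page *subpage)
{
	bool ret;

	spin_lock_irq(&hugetlb_lock);
	ret = __is_raw_hwp_subpage(folio, subpage);
	spin_unlock_irq(&hugetlb_lock);

	return ret;
}

That would keep the lookup consistent with how __get_huge_page_for_hwpoison
already tests folio_test_hugetlb under hugetlb_lock.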