References: <20230517160948.811355-1-jiaqiyan@google.com> <20230517160948.811355-2-jiaqiyan@google.com> <20230517235314.GB10757@monkey> <20230519224214.GB3581@monkey> <20230522044557.GA845371@hori.linux.bs1.fc.nec.co.jp> <20230523024305.GA920098@hori.linux.bs1.fc.nec.co.jp>
From: Jiaqi Yan <jiaqiyan@google.com>
Date: Fri, 9 Jun 2023 22:48:47 -0700
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
To: HORIGUCHI NAOYA(堀口　直也)
Cc: Mike Kravetz, songmuchun@bytedance.com, shy828301@gmail.com, linmiaohe@huawei.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duenwen@google.com, axelrasmussen@google.com, jthoughton@google.com
On Thu, May 25, 2023 at 5:28 PM Jiaqi Yan wrote:
>
> On Mon, May 22, 2023 at 7:43 PM HORIGUCHI NAOYA(堀口　直也)
> wrote:
> >
> > On Mon, May 22, 2023 at 11:22:49AM -0700, Jiaqi Yan wrote:
> > > On Sun, May 21, 2023 at 9:50 PM HORIGUCHI NAOYA(堀口　直也)
> > > wrote:
> > > >
> > > > On Fri, May 19, 2023 at 03:42:14PM -0700, Mike Kravetz wrote:
> > > > > On 05/19/23 13:54, Jiaqi Yan wrote:
> > > > > > On Wed, May 17, 2023 at 4:53 PM Mike Kravetz wrote:
> > > > > > >
> > > > > > > On 05/17/23 16:09, Jiaqi Yan wrote:
> > > > > > > > Adds the functionality to search a subpage's corresponding raw_hwp_page
> > > > > > > > in hugetlb page's HWPOISON list. This functionality can also tell if a
> > > > > > > > subpage is a raw HWPOISON page.
> > > > > > > >
> > > > > > > > Exports this functionality to be immediately used in the read operation
> > > > > > > > for hugetlbfs.
> > > > > > > >
> > > > > > > > Signed-off-by: Jiaqi Yan
> > > > > > > > ---
> > > > > > > >  include/linux/mm.h  | 23 +++++++++++++++++++++++
> > > > > > > >  mm/memory-failure.c | 26 ++++++++++++++++----------
> > > > > > > >  2 files changed, 39 insertions(+), 10 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > > > > > index 27ce77080c79..f191a4119719 100644
> > > > > > > > --- a/include/linux/mm.h
> > > > > > > > +++ b/include/linux/mm.h
> > > > > > >
> > > > > > > Any reason why you decided to add the following to linux/mm.h instead of
> > > > > > > linux/hugetlb.h?  Since it is hugetlb specific I would have thought
> > > > > > > hugetlb.h was more appropriate.
> > > > > > >
> > > > > > > > @@ -3683,6 +3683,29 @@ enum mf_action_page_type {
> > > > > > > >   */
> > > > > > > >  extern const struct attribute_group memory_failure_attr_group;
> > > > > > > >
> > > > > > > > +#ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > +/*
> > > > > > > > + * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > + * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > + */
> > > > > > > > +struct raw_hwp_page {
> > > > > > > > +	struct llist_node node;
> > > > > > > > +	struct page *page;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > +{
> > > > > > > > +	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Given @subpage, a raw page in a hugepage, find its location in @folio's
> > > > > > > > + * _hugetlb_hwpoison list. Return NULL if @subpage is not in the list.
> > > > > > > > + */
> > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > +				       struct page *subpage);
> > > > > > > > +#endif
> > > > > > > > +
> > > > > > > >  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> > > > > > > >  extern void clear_huge_page(struct page *page,
> > > > > > > >  			    unsigned long addr_hint,
> > > > > > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > > > > > index 5b663eca1f29..c49e6c2d1f07 100644
> > > > > > > > --- a/mm/memory-failure.c
> > > > > > > > +++ b/mm/memory-failure.c
> > > > > > > > @@ -1818,18 +1818,24 @@ EXPORT_SYMBOL_GPL(mf_dax_kill_procs);
> > > > > > > >  #endif /* CONFIG_FS_DAX */
> > > > > > > >
> > > > > > > >  #ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > -/*
> > > > > > > > - * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > - * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > - */
> > > > > > > > -struct raw_hwp_page {
> > > > > > > > -	struct llist_node node;
> > > > > > > > -	struct page *page;
> > > > > > > > -};
> > > > > > > >
> > > > > > > > -static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > +				       struct page *subpage)
> > > > > > > >  {
> > > > > > > > -	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > +	struct llist_node *t, *tnode;
> > > > > > > > +	struct llist_head *raw_hwp_head = raw_hwp_list_head(folio);
> > > > > > > > +	struct raw_hwp_page *hwp_page = NULL;
> > > > > > > > +	struct raw_hwp_page *p;
> > > > > > > > +
> > > > > > > > +	llist_for_each_safe(tnode, t, raw_hwp_head->first) {
> > > > > > >
> > > > > > > IIUC, in rare error cases a hugetlb page can be poisoned WITHOUT a
> > > > > > > raw_hwp_list.  This is indicated by the hugetlb page specific flag
> > > > > > > RawHwpUnreliable or folio_test_hugetlb_raw_hwp_unreliable().
> > > > > > >
> > > > > > > Looks like this routine does not consider that case.  Seems like it should
> > > > > > > always return the passed subpage if folio_test_hugetlb_raw_hwp_unreliable()
> > > > > > > is true?
> > > > > >
> > > > > > Thanks for catching this. I wonder should this routine consider
> > > > > > RawHwpUnreliable or should the caller do.
> > > > > >
> > > > > > find_raw_hwp_page now returns raw_hwp_page* in the llist entry to
> > > > > > caller (valid one at the moment), but once RawHwpUnreliable is set,
> > > > > > all the raw_hwp_page in the llist will be kfree(), and the returned
> > > > > > value becomes a dangling pointer to the caller (if the caller holds it
> > > > > > long enough). Maybe returning a bool would be safer to the
> > > > > > caller? If the routine returns bool, then checking RawHwpUnreliable
> > > > > > can definitely be within the routine.
> > > > >
> > > > > I think the check for RawHwpUnreliable should be within this routine.
> > > > > Looking closer at the code, I do not see any way to synchronize this.
> > > > > It looks like manipulation in the memory-failure code would be
> > > > > synchronized via the mf_mutex.  However, I do not see how traversal and
> > > > > freeing of the raw_hwp_list called from __update_and_free_hugetlb_folio
> > > > > is synchronized against memory-failure code modifying the list.
> > > > >
> > > > > Naoya, can you provide some thoughts?
> > > >
> > > > Thanks for elaborating the issue. I think that making find_raw_hwp_page() and
> > > > folio_clear_hugetlb_hwpoison() do their work within mf_mutex can be one solution.
> > > > try_memory_failure_hugetlb(), one of the callers of folio_clear_hugetlb_hwpoison(),
> > > > already calls it within mf_mutex, so some wrapper might be needed to implement
> > > > the calling path from __update_and_free_hugetlb_folio() to take mf_mutex.
> > > >
> > > > It might be a concern that mf_mutex is a big lock to cover the overall hwpoison
> > > > subsystem, but I think that the impact is not so big if the changed code paths
> > > > take mf_mutex only after checking a hwpoisoned hugepage.  Maybe using folio_lock
> > > > to synchronize accesses to the raw_hwp_list could be possible, but currently
> > > > __get_huge_page_for_hwpoison() calls folio_set_hugetlb_hwpoison() without
> > > > folio_lock, so this approach needs an update on the locking rules and it sounds
> > > > more error-prone to me.
> > >
> > > Thanks Naoya, since memory_failure is the main user of raw_hwp_list, I
> > > agree mf_mutex could help to sync its two del_all operations (one from
> > > try_memory_failure_hugetlb and one from
> > > __update_and_free_hugetlb_folio).
> > >
> > > I still want to ask a perhaps stupid question, somewhat related to how
> > > to implement find_raw_hwp_page() correctly.
> > > It seems
> > > llist_for_each_safe should only be used to traverse list entries
> > > already *deleted* via llist_del_all. But the llist_for_each_safe calls
> > > in memory-failure today are used *directly* on the raw_hwp_list. This
> > > is quite different from other users of llist_for_each_safe (for
> > > example, kernel/bpf/memalloc.c).
> >
> > Oh, I didn't notice that when writing the original code. (I just chose
> > llist_for_each_safe because I just wanted struct llist_node as a singly
> > linked list.)
>
> And maybe because you can avoid doing INIT_LIST_HEAD (which seems
> doable in folio_set_hugetlb_hwpoison if the hugepage is hwpoison-ed for
> the first time)?
>
> > > Why is it correct? I guess mostly
> > > because they are sync-ed under mf_mutex (except the missing coverage
> > > on __update_and_free_hugetlb_folio)?
> >
> > Yes, and there seems to be no good reason to use the macro llist_for_each_safe
> > here.  I think it's OK to switch to a simpler one like list_for_each, which
> > is supposed to be called directly.  To do this, struct raw_hwp_page
> > needs to have @node typed as struct list_head instead of struct llist_node.

Hi Naoya, a maybe-stupid question on list vs llist:

_hugetlb_hwpoison in struct folio is a void *. struct list_head is
composed of two pointers (prev and next), so a folio simply can't hold
a list_head in the _hugetlb_hwpoison field, right? llist_head, on the
other hand, contains only a single pointer to the first llist_node. I
wonder if this is one of the reasons you picked llist instead of list
in the first place.

The reason I ask is that while testing my refactor draft, I constantly
saw the refcount of the 3rd subpage in the folio getting corrupted. I
am not sure of the exact cause, but it feels related to the above.
>
> I will start to work on a separate patch to switch to list_head, and
> make sure operations from __update_and_free_hugetlb_folio and
> memory_failure are serialized (hopefully without introducing new locks,
> just using mf_mutex).
>
> > Thanks,
> > Naoya Horiguchi