From: Jiaqi Yan <jiaqiyan@google.com>
Date: Fri, 16 Jun 2023 14:19:03 -0700
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
To: Naoya Horiguchi, Mike Kravetz
Cc: HORIGUCHI NAOYA(堀口 直也), songmuchun@bytedance.com, shy828301@gmail.com,
	linmiaohe@huawei.com, akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, duenwen@google.com,
	axelrasmussen@google.com, jthoughton@google.com
References: <20230517160948.811355-1-jiaqiyan@google.com>
	<20230517160948.811355-2-jiaqiyan@google.com>
	<20230517235314.GB10757@monkey> <20230519224214.GB3581@monkey>
	<20230522044557.GA845371@hori.linux.bs1.fc.nec.co.jp>
	<20230523024305.GA920098@hori.linux.bs1.fc.nec.co.jp>
	<20230612041901.GA3083591@ik1-406-35019.vs.sakura.ne.jp>
In-Reply-To: <20230612041901.GA3083591@ik1-406-35019.vs.sakura.ne.jp>
On Sun, Jun 11, 2023 at 9:19 PM Naoya Horiguchi wrote:
>
> On Fri, Jun 09, 2023 at 10:48:47PM -0700, Jiaqi Yan wrote:
> > On Thu, May 25, 2023 at 5:28 PM Jiaqi Yan wrote:
> > >
> > > On Mon, May 22, 2023 at 7:43 PM HORIGUCHI NAOYA(堀口 直也)
> > > wrote:
> > > >
> > > > On Mon, May 22, 2023 at 11:22:49AM -0700, Jiaqi Yan wrote:
> > > > > On Sun, May 21, 2023 at 9:50 PM HORIGUCHI NAOYA(堀口 直也)
> > > > > wrote:
> > > > > >
> > > > > > On Fri, May 19, 2023 at 03:42:14PM -0700, Mike Kravetz wrote:
> > > > > > > On 05/19/23 13:54, Jiaqi Yan wrote:
> > > > > > > > On Wed, May 17, 2023 at 4:53 PM Mike Kravetz wrote:
> > > > > > > > >
> > > > > > > > > On 05/17/23 16:09, Jiaqi Yan wrote:
> > > > > > > > > > Adds the functionality to search a subpage's corresponding raw_hwp_page
> > > > > > > > > > in hugetlb page's HWPOISON list. This functionality can also tell if a
> > > > > > > > > > subpage is a raw HWPOISON page.
> > > > > > > > > >
> > > > > > > > > > Exports this functionality to be immediately used in the read operation
> > > > > > > > > > for hugetlbfs.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
> > > > > > > > > > ---
> > > > > > > > > >  include/linux/mm.h  | 23 +++++++++++++++++++++++
> > > > > > > > > >  mm/memory-failure.c | 26 ++++++++++++++++----------
> > > > > > > > > >  2 files changed, 39 insertions(+), 10 deletions(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > > > > > > > index 27ce77080c79..f191a4119719 100644
> > > > > > > > > > --- a/include/linux/mm.h
> > > > > > > > > > +++ b/include/linux/mm.h
> > > > > > > > >
> > > > > > > > > Any reason why you decided to add the following to linux/mm.h instead of
> > > > > > > > > linux/hugetlb.h? Since it is hugetlb specific I would have thought
> > > > > > > > > hugetlb.h was more appropriate.
> > > > > > > > >
> > > > > > > > > > @@ -3683,6 +3683,29 @@ enum mf_action_page_type {
> > > > > > > > > >   */
> > > > > > > > > >  extern const struct attribute_group memory_failure_attr_group;
> > > > > > > > > >
> > > > > > > > > > +#ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > > +/*
> > > > > > > > > > + * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > > + * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > > + */
> > > > > > > > > > +struct raw_hwp_page {
> > > > > > > > > > +	struct llist_node node;
> > > > > > > > > > +	struct page *page;
> > > > > > > > > > +};
> > > > > > > > > > +
> > > > > > > > > > +static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > > +{
> > > > > > > > > > +	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +/*
> > > > > > > > > > + * Given @subpage, a raw page in a hugepage, find its location in @folio's
> > > > > > > > > > + * _hugetlb_hwpoison list. Return NULL if @subpage is not in the list.
> > > > > > > > > > + */
> > > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > > +				       struct page *subpage);
> > > > > > > > > > +#endif
> > > > > > > > > > +
> > > > > > > > > >  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> > > > > > > > > >  extern void clear_huge_page(struct page *page,
> > > > > > > > > >  			    unsigned long addr_hint,
> > > > > > > > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > > > > > > > index 5b663eca1f29..c49e6c2d1f07 100644
> > > > > > > > > > --- a/mm/memory-failure.c
> > > > > > > > > > +++ b/mm/memory-failure.c
> > > > > > > > > > @@ -1818,18 +1818,24 @@ EXPORT_SYMBOL_GPL(mf_dax_kill_procs);
> > > > > > > > > >  #endif /* CONFIG_FS_DAX */
> > > > > > > > > >
> > > > > > > > > >  #ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > > -/*
> > > > > > > > > > - * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > > - * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > > - */
> > > > > > > > > > -struct raw_hwp_page {
> > > > > > > > > > -	struct llist_node node;
> > > > > > > > > > -	struct page *page;
> > > > > > > > > > -};
> > > > > > > > > > -
> > > > > > > > > > -static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > > +				       struct page *subpage)
> > > > > > > > > >  {
> > > > > > > > > > -	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > > +	struct llist_node *t, *tnode;
> > > > > > > > > > +	struct llist_head *raw_hwp_head = raw_hwp_list_head(folio);
> > > > > > > > > > +	struct raw_hwp_page *hwp_page = NULL;
> > > > > > > > > > +	struct raw_hwp_page *p;
> > > > > > > > > > +
> > > > > > > > > > +	llist_for_each_safe(tnode, t, raw_hwp_head->first) {
> > > > > > > > >
> > > > > > > > > IIUC, in rare error cases a hugetlb page can be poisoned WITHOUT a
> > > > > > > > > raw_hwp_list. This is indicated by the hugetlb page specific flag
> > > > > > > > > RawHwpUnreliable or folio_test_hugetlb_raw_hwp_unreliable().
> > > > > > > > >
> > > > > > > > > Looks like this routine does not consider that case. Seems like it should
> > > > > > > > > always return the passed subpage if folio_test_hugetlb_raw_hwp_unreliable()
> > > > > > > > > is true?
> > > > > > > >
> > > > > > > > Thanks for catching this. I wonder whether this routine should consider
> > > > > > > > RawHwpUnreliable or whether the caller should.
> > > > > > > >
> > > > > > > > find_raw_hwp_page now returns a raw_hwp_page* in the llist entry to
> > > > > > > > the caller (valid at the moment), but once RawHwpUnreliable is set,
> > > > > > > > all the raw_hwp_page entries in the llist will be kfree()-d, and the
> > > > > > > > returned value becomes a dangling pointer to the caller (if the caller
> > > > > > > > holds that pointer long enough). Maybe returning a bool would be safer
> > > > > > > > to the caller? If the routine returns bool, then checking
> > > > > > > > RawHwpUnreliable can definitely be within the routine.
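(To make the bool idea concrete: an untested sketch of the
is_raw_hwp_subpage I mention below for v2. It assumes the caller
serializes against the paths that free the raw_hwp_list, which is
exactly the locking question discussed next.)

static bool is_raw_hwp_subpage(struct folio *folio, struct page *subpage)
{
	struct llist_head *raw_hwp_head;
	struct raw_hwp_page *p;

	if (!folio_test_hwpoison(folio))
		return false;

	/*
	 * When RawHwpUnreliable is set, kernel has lost track of which
	 * raw pages are poisoned, so conservatively report every
	 * subpage as poisoned.
	 */
	if (folio_test_hugetlb_raw_hwp_unreliable(folio))
		return true;

	raw_hwp_head = raw_hwp_list_head(folio);
	llist_for_each_entry(p, raw_hwp_head->first, node) {
		if (p->page == subpage)
			return true;
	}

	return false;
}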
> > > > > > >
> > > > > > > I think the check for RawHwpUnreliable should be within this routine.
> > > > > > > Looking closer at the code, I do not see any way to synchronize this.
> > > > > > > It looks like manipulation in the memory-failure code would be
> > > > > > > synchronized via the mf_mutex. However, I do not see how traversal and
> > > > > > > freeing of the raw_hwp_list called from __update_and_free_hugetlb_folio
> > > > > > > is synchronized against memory-failure code modifying the list.
> > > > > > >
> > > > > > > Naoya, can you provide some thoughts?

Hi Mike,

Now looking at this again, I think concurrent adding and deleting are
fine with each other and with themselves, because raw_hwp_list is a
lock-less llist.

As for synchronizing traversal with adding and deleting, I wonder if it
is a good idea to make __update_and_free_hugetlb_folio hold hugetlb_lock
before it calls folio_clear_hugetlb_hwpoison (which traverses + deletes
the raw_hwp_list). In hugetlb, get_huge_page_for_hwpoison already takes
hugetlb_lock; it seems to me __update_and_free_hugetlb_folio is missing
the lock.
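Concretely, I mean something like the following (sketch only, not a
tested patch; the context lines in mm/hugetlb.c are from memory, and it
assumes hugetlb_lock's usual spin_lock_irq discipline):

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 	/*
 	 * Move PageHWPoison flag from head page to the raw error pages,
 	 * which makes any healthy subpages reusable.
 	 */
-	if (unlikely(folio_test_hwpoison(folio)))
+	if (unlikely(folio_test_hwpoison(folio))) {
+		spin_lock_irq(&hugetlb_lock);
 		folio_clear_hugetlb_hwpoison(folio);
+		spin_unlock_irq(&hugetlb_lock);
+	}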
> > > > > >
> > > > > > Thanks for elaborating the issue. I think that making find_raw_hwp_page() and
> > > > > > folio_clear_hugetlb_hwpoison() do their work within mf_mutex can be one solution.
> > > > > > try_memory_failure_hugetlb(), one of the callers of folio_clear_hugetlb_hwpoison(),
> > > > > > already calls it within mf_mutex, so some wrapper might be needed to implement
> > > > > > the calling path from __update_and_free_hugetlb_folio() to take mf_mutex.
> > > > > >
> > > > > > It might be a concern that mf_mutex is a big lock covering the overall hwpoison
> > > > > > subsystem, but I think that the impact is not so big if the changed code paths
> > > > > > take mf_mutex only after checking a hwpoisoned hugepage. Maybe using folio_lock
> > > > > > to synchronize accesses to the raw_hwp_list could be possible, but currently
> > > > > > __get_huge_page_for_hwpoison() calls folio_set_hugetlb_hwpoison() without
> > > > > > folio_lock, so this approach needs an update of the locking rule and it
> > > > > > sounds more error-prone to me.
> > > > >
> > > > > Thanks Naoya, since memory_failure is the main user of raw_hwp_list, I
> > > > > agree mf_mutex could help to sync its two del_all operations (one from
> > > > > try_memory_failure_hugetlb and one from
> > > > > __update_and_free_hugetlb_folio).
> > > > >
> > > > > I still want to ask a perhaps stupid question, somewhat related to how
> > > > > to implement find_raw_hwp_page() correctly. It seems llist_for_each_safe
> > > > > should only be used to traverse entries after they have already been
> > > > > *deleted* via llist_del_all. But the llist_for_each_safe calls in
> > > > > memory_failure today are used *directly* on the raw_hwp_list. This
> > > > > is quite different from other users of llist_for_each_safe (for
> > > > > example, kernel/bpf/memalloc.c).
> > > >
> > > > Oh, I didn't notice that when writing the original code. (I just chose
> > > > llist_for_each_safe because I just wanted struct llist_node as a singly
> > > > linked list.)
> > >
> > > And maybe because you can avoid doing INIT_LIST_HEAD (which seems
> > > doable in folio_set_hugetlb_hwpoison if the hugepage is hwpoison-ed for
> > > the first time)?
> > >
> > > > > Why is it correct? I guess mostly
> > > > > because they are sync-ed under mf_mutex (except the missing coverage
> > > > > on __update_and_free_hugetlb_folio)?
> > > >
> > > > Yes, and there seems to be no good reason to use the macro llist_for_each_safe
> > > > here. I think it's OK to switch to a simpler one like list_for_each, which
> > > > is supposed to be called directly. To do this, struct raw_hwp_page needs
> > > > to have @node typed as struct list_head instead of struct llist_node.
> >
> > Hi Naoya, a maybe-stupid question on list vs llist: _hugetlb_hwpoison
> > in folio is a void *. struct list_head is composed of two pointers
> > (prev and next), so a folio just can't hold a list_head in the
> > _hugetlb_hwpoison field, right? llist_head on the other hand only
> > contains one pointer, to the first llist_node. I wonder if this is one
> > of the reasons you picked llist instead of list in the first place.
>
> Yes, that's one reason to use llist_head, and another (minor) reason is that
> we don't need a doubly-linked list here.

Hi Naoya,

Even with hugetlb_lock, I think we should still fix __folio_free_raw_hwp:
call llist_del_all first, then traverse the detached list and free the
raw_hwp_page entries (rough sketch at the end of this mail). Then
folio_clear_hugetlb_hwpoison from both memory_failure and hugetlb will
be safe, given that llist_del_all on a llist is safe against itself.

In my v2, I tried both (1. taking hugetlb_lock in
__update_and_free_hugetlb_folio, 2. calling llist_del_all first in
__folio_free_raw_hwp). I also changed find_raw_hwp_page to
is_raw_hwp_subpage (it only returns bool, and takes hugetlb_lock while
traversing the raw_hwp_list). So far I didn't run into any problems with
my selftest.

> Thanks,
> Naoya Horiguchi

> > The reason I ask is that while I was testing my refactor draft, I
> > constantly saw the refcount of the 3rd subpage in the folio get
> > corrupted. I am not sure about the exact reason, but it feels to me
> > related to the above.
> >
> > > I will start to work on a separate patch to switch to list_head, and
> > > make sure operations from __update_and_free_hugetlb_folio and
> > > memory_failure are serialized (hopefully without introducing new
> > > locks, just using mf_mutex).
> > >
> > > > Thanks,
> > > > Naoya Horiguchi
> > > >
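P.S. A rough sketch of the __folio_free_raw_hwp fix mentioned above:
detach the list before walking it. Apart from the llist_del_all
placement, this is meant to mirror the current memory-failure.c code,
if I'm reading it right:

static unsigned long __folio_free_raw_hwp(struct folio *folio, bool move_flag)
{
	struct llist_node *t, *tnode, *head;
	unsigned long count = 0;

	/*
	 * Detach the whole list first: llist_del_all is atomic, so two
	 * racing callers each get a private (possibly empty) list to
	 * walk and free, instead of both walking the shared one.
	 */
	head = llist_del_all(raw_hwp_list_head(folio));
	llist_for_each_safe(tnode, t, head) {
		struct raw_hwp_page *p =
			container_of(tnode, struct raw_hwp_page, node);

		if (move_flag)
			SetPageHWPoison(p->page);
		else
			num_poisoned_pages_sub(page_to_pfn(p->page), 1);
		kfree(p);
		count++;
	}

	return count;
}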