Date: Mon, 12 Jun 2023 13:19:01 +0900
From: Naoya Horiguchi <naoya.horiguchi@linux.dev>
To: Jiaqi Yan
Cc: HORIGUCHI NAOYA(堀口 直也), Mike Kravetz, songmuchun@bytedance.com,
 shy828301@gmail.com, linmiaohe@huawei.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, duenwen@google.com,
 axelrasmussen@google.com, jthoughton@google.com
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
Message-ID: <20230612041901.GA3083591@ik1-406-35019.vs.sakura.ne.jp>
References: <20230517160948.811355-1-jiaqiyan@google.com>
 <20230517160948.811355-2-jiaqiyan@google.com>
 <20230517235314.GB10757@monkey>
 <20230519224214.GB3581@monkey>
 <20230522044557.GA845371@hori.linux.bs1.fc.nec.co.jp>
 <20230523024305.GA920098@hori.linux.bs1.fc.nec.co.jp>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Fri, Jun 09, 2023 at 10:48:47PM -0700, Jiaqi Yan wrote:
> On Thu, May 25, 2023 at 5:28 PM Jiaqi Yan wrote:
> >
> > On Mon, May 22, 2023 at 7:43 PM HORIGUCHI NAOYA(堀口 直也)
> > wrote:
> > >
> > > On Mon, May 22, 2023 at 11:22:49AM -0700, Jiaqi Yan wrote:
> > > > On Sun, May 21, 2023 at 9:50 PM HORIGUCHI NAOYA(堀口 直也)
> > > > wrote:
> > > > >
> > > > > On Fri, May 19, 2023 at 03:42:14PM -0700, Mike Kravetz wrote:
> > > > > > On 05/19/23 13:54, Jiaqi Yan wrote:
> > > > > > > On Wed, May 17, 2023 at 4:53 PM Mike Kravetz wrote:
> > > > > > > >
> > > > > > > > On 05/17/23 16:09, Jiaqi Yan wrote:
> > > > > > > > > Adds the functionality to search a subpage's corresponding
raw_hwp_page
> > > > > > > > > in hugetlb page's HWPOISON list. This functionality can also tell if a
> > > > > > > > > subpage is a raw HWPOISON page.
> > > > > > > > >
> > > > > > > > > Exports this functionality to be immediately used in the read operation
> > > > > > > > > for hugetlbfs.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Jiaqi Yan
> > > > > > > > > ---
> > > > > > > > >  include/linux/mm.h  | 23 +++++++++++++++++++++++
> > > > > > > > >  mm/memory-failure.c | 26 ++++++++++++++++----------
> > > > > > > > >  2 files changed, 39 insertions(+), 10 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > > > > > > index 27ce77080c79..f191a4119719 100644
> > > > > > > > > --- a/include/linux/mm.h
> > > > > > > > > +++ b/include/linux/mm.h
> > > > > > > > Any reason why you decided to add the following to linux/mm.h instead of
> > > > > > > > linux/hugetlb.h? Since it is hugetlb specific I would have thought
> > > > > > > > hugetlb.h was more appropriate.
> > > > > > > > > @@ -3683,6 +3683,29 @@ enum mf_action_page_type {
> > > > > > > > >   */
> > > > > > > > >  extern const struct attribute_group memory_failure_attr_group;
> > > > > > > > >
> > > > > > > > > +#ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > +/*
> > > > > > > > > + * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > + * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > + */
> > > > > > > > > +struct raw_hwp_page {
> > > > > > > > > +	struct llist_node node;
> > > > > > > > > +	struct page *page;
> > > > > > > > > +};
> > > > > > > > > +
> > > > > > > > > +static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > +{
> > > > > > > > > +	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Given @subpage, a raw page in a hugepage, find its location in @folio's
> > > > > > > > > + * _hugetlb_hwpoison list. Return NULL if @subpage is not in the list.
> > > > > > > > > + */
> > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > +				       struct page *subpage);
> > > > > > > > > +#endif
> > > > > > > > > +
> > > > > > > > >  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> > > > > > > > >  extern void clear_huge_page(struct page *page,
> > > > > > > > >  			    unsigned long addr_hint,
> > > > > > > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > > > > > > index 5b663eca1f29..c49e6c2d1f07 100644
> > > > > > > > > --- a/mm/memory-failure.c
> > > > > > > > > +++ b/mm/memory-failure.c
> > > > > > > > > @@ -1818,18 +1818,24 @@ EXPORT_SYMBOL_GPL(mf_dax_kill_procs);
> > > > > > > > >  #endif /* CONFIG_FS_DAX */
> > > > > > > > >
> > > > > > > > >  #ifdef CONFIG_HUGETLB_PAGE
> > > > > > > > > -/*
> > > > > > > > > - * Struct raw_hwp_page represents information about "raw error page",
> > > > > > > > > - * constructing singly linked list from ->_hugetlb_hwpoison field of folio.
> > > > > > > > > - */
> > > > > > > > > -struct raw_hwp_page {
> > > > > > > > > -	struct llist_node node;
> > > > > > > > > -	struct page *page;
> > > > > > > > > -};
> > > > > > > > >
> > > > > > > > > -static inline struct llist_head *raw_hwp_list_head(struct folio *folio)
> > > > > > > > > +struct raw_hwp_page *find_raw_hwp_page(struct folio *folio,
> > > > > > > > > +				       struct page *subpage)
> > > > > > > > >  {
> > > > > > > > > -	return (struct llist_head *)&folio->_hugetlb_hwpoison;
> > > > > > > > > +	struct llist_node *t, *tnode;
> > > > > > > > > +	struct llist_head *raw_hwp_head = raw_hwp_list_head(folio);
> > > > > > > > > +	struct raw_hwp_page *hwp_page = NULL;
> > > > > > > > > +	struct raw_hwp_page *p;
> > > > > > > > > +
> > > > > > > > > +	llist_for_each_safe(tnode, t, raw_hwp_head->first) {
> > > > > > > > IIUC, in rare error cases a hugetlb page can be poisoned WITHOUT a
> > > > > > > > raw_hwp_list. This is indicated by the hugetlb page specific flag
> > > > > > > > RawHwpUnreliable or folio_test_hugetlb_raw_hwp_unreliable().
> > > > > > > >
> > > > > > > > Looks like this routine does not consider that case. Seems like it should
> > > > > > > > always return the passed subpage if folio_test_hugetlb_raw_hwp_unreliable()
> > > > > > > > is true?
> > > > > > > Thanks for catching this. I wonder whether this routine should consider
> > > > > > > RawHwpUnreliable or the caller should.
> > > > > > >
> > > > > > > find_raw_hwp_page now returns a raw_hwp_page* in the llist entry to the
> > > > > > > caller (valid at the moment), but once RawHwpUnreliable is set,
> > > > > > > all the raw_hwp_page entries in the llist will be kfree()d, and the returned
> > > > > > > value becomes a dangling pointer for the caller (if the caller holds
> > > > > > > that pointer long enough). Maybe returning a bool would be safer for the
> > > > > > > caller?
> > > > > > > If the routine returns bool, then checking RawHwpUnreliable
> > > > > > > can definitely be within the routine.
> > > > > >
> > > > > > I think the check for RawHwpUnreliable should be within this routine.
> > > > > > Looking closer at the code, I do not see any way to synchronize this.
> > > > > > It looks like manipulation in the memory-failure code would be
> > > > > > synchronized via the mf_mutex. However, I do not see how traversal and
> > > > > > freeing of the raw_hwp_list called from __update_and_free_hugetlb_folio
> > > > > > is synchronized against memory-failure code modifying the list.
> > > > > >
> > > > > > Naoya, can you provide some thoughts?
> > > > > Thanks for elaborating on the issue. I think that making find_raw_hwp_page() and
> > > > > folio_clear_hugetlb_hwpoison() do their work within mf_mutex can be one solution.
> > > > > try_memory_failure_hugetlb(), one of the callers of folio_clear_hugetlb_hwpoison(),
> > > > > already calls it within mf_mutex, so some wrapper might be needed so that the
> > > > > calling path from __update_and_free_hugetlb_folio() takes mf_mutex.
> > > > >
> > > > > It might be a concern that mf_mutex is a big lock covering the overall hwpoison
> > > > > subsystem, but I think that the impact is not so big if the changed code paths
> > > > > take mf_mutex only after finding a hwpoisoned hugepage. Maybe using folio_lock
> > > > > to synchronize accesses to the raw_hwp_list could be possible, but currently
> > > > > __get_huge_page_for_hwpoison() calls folio_set_hugetlb_hwpoison() without
> > > > > folio_lock, so this approach needs an update of the locking rules and it sounds
> > > > > more error-prone to me.
> > > >
> > > > Thanks Naoya, since memory_failure is the main user of raw_hwp_list, I
> > > > agree mf_mutex could help to sync its two del_all operations (one from
> > > > try_memory_failure_hugetlb and one from
> > > > __update_and_free_hugetlb_folio).
> > > >
> > > > I still want to ask a perhaps stupid question, somewhat related to how
> > > > to implement find_raw_hwp_page() correctly. It seems
> > > > llist_for_each_safe should only be used to traverse list entries
> > > > already *deleted* via llist_del_all. But the llist_for_each_safe calls
> > > > in memory_failure today are used *directly* on the raw_hwp_list. This
> > > > is quite different from other users of llist_for_each_safe (for
> > > > example, kernel/bpf/memalloc.c).
> > > Oh, I didn't notice that when writing the original code. (I just chose
> > > llist_for_each_safe because I just wanted struct llist_node as a singly
> > > linked list.)
> > And maybe because you can avoid doing INIT_LIST_HEAD (which seems
> > doable in folio_set_hugetlb_hwpoison if the hugepage is hwpoison-ed for
> > the first time)?
> >
> > > > Why is it correct? I guess mostly
> > > > because they are sync-ed under mf_mutex (except the missing coverage
> > > > on __update_and_free_hugetlb_folio)?
> > > Yes, and there seems to be no good reason to use the macro llist_for_each_safe
> > > here. I think it's OK to switch to a simpler one like list_for_each, which
> > > is supposed to be called directly. To do this, struct raw_hwp_page would
> > > need to have @node typed struct list_head instead of struct llist_node.
> Hi Naoya, a maybe-stupid question on list vs llist: _hugetlb_hwpoison
> in folio is a void*. struct list_head is composed of two pointers
> (prev and next), so the folio just can't hold a list_head in the
> _hugetlb_hwpoison field, right? llist_head on the other hand only
> contains one pointer to the first llist_node. I wonder if this is one of
> the reasons you picked llist instead of list in the first place.

Yes, that's one reason to use llist_head, and another (minor) reason is that
we don't need a doubly-linked list here.
Thanks,
Naoya Horiguchi

>
> The reason I ask is that while I was testing my refactor draft, I
> constantly saw the refcount of the 3rd subpage in the folio get
> corrupted. I am not sure about the exact reason, but it feels to me
> related to the above.
>
> > I will start to work on a separate patch to switch to list_head, and
> > make sure operations from __update_and_free_hugetlb_folio and
> > memory_failure are serialized (hopefully without introducing new locks,
> > just using mf_mutex).
> >
> > > Thanks,
> > > Naoya Horiguchi
> >