From: Jiaqi Yan
Date: Fri, 23 Jun 2023 09:40:09 -0700
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
To: Mike Kravetz
Cc: Naoya Horiguchi, HORIGUCHI NAOYA(堀口 直也), songmuchun@bytedance.com,
 shy828301@gmail.com, linmiaohe@huawei.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, duenwen@google.com,
 axelrasmussen@google.com, jthoughton@google.com
In-Reply-To: <20230623041924.GA44732@monkey>
References: <20230612041901.GA3083591@ik1-406-35019.vs.sakura.ne.jp>
 <20230616233447.GB7371@monkey> <20230617225927.GA3540@monkey>
 <20230619082330.GA1612447@ik1-406-35019.vs.sakura.ne.jp>
 <20230620180533.GA3567@monkey> <20230620223909.GB3567@monkey>
 <20230623041924.GA44732@monkey>

On Thu, Jun 22, 2023 at 9:19 PM Mike Kravetz wrote:
>
> On 06/22/23 17:45, Jiaqi Yan wrote:
> > On Tue, Jun 20, 2023 at 3:39 PM Mike Kravetz wrote:
> > >
> > > On 06/20/23 11:05, Mike Kravetz wrote:
> > > > On 06/19/23 17:23, Naoya Horiguchi wrote:
> > > > >
> > > > > Considering this issue as one specific to memory error handling, checking
> > > > > HPG_vmemmap_optimized in __get_huge_page_for_hwpoison() might be helpful to
> > > > > detect the race. Then, an idea like the below diff (not tested) can make
> > > > > try_memory_failure_hugetlb() retry (with retaking hugetlb_lock) to wait
> > > > > for the allocation of vmemmap pages to complete.
> > > > >
> > > > > @@ -1938,8 +1938,11 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > > > >         int ret = 2;    /* fallback to normal page handling */
> > > > >         bool count_increased = false;
> > > > >
> > > > > -       if (!folio_test_hugetlb(folio))
> > > > > +       if (!folio_test_hugetlb(folio)) {
> > > > > +               if (folio_test_hugetlb_vmemmap_optimized(folio))
> > > > > +                       ret = -EBUSY;
> > > >
> > > > The hugetlb specific page flags (HPG_vmemmap_optimized here) reside in
> > > > the folio->private field.
> > > >
> > > > In the case where the folio is a non-hugetlb folio, the folio->private field
> > > > could be any arbitrary value. As such, the test for vmemmap_optimized may
> > > > return a false positive. We could end up retrying for an arbitrarily
> > > > long time.
> > > >
> > > > I am looking at how to restructure the code which removes and frees
> > > > hugetlb pages so that folio_test_hugetlb() would remain true until
> > > > vmemmap pages are allocated. The easiest way to do this would introduce
> > > > another hugetlb lock/unlock cycle in the page freeing path. This would
> > > > undo some of the speedups in the series:
> > > > https://lore.kernel.org/all/20210409205254.242291-4-mike.kravetz@oracle.com/T/#m34321fbcbdf8bb35dfe083b05d445e90ecc1efab
> > > >
> > >
> > > Perhaps something like this? Minimal testing.
> >
> > Thanks for putting up a fix, Mike!
> >
> > > From e709fb4da0b6249973f9bf0540c9da0e4c585fe2 Mon Sep 17 00:00:00 2001
> > > From: Mike Kravetz
> > > Date: Tue, 20 Jun 2023 14:48:39 -0700
> > > Subject: [PATCH] hugetlb: Do not clear hugetlb dtor until allocating vmemmap
> > >
> > > Freeing a hugetlb page and releasing base pages back to the underlying
> > > allocator such as buddy or cma is performed in two steps:
> > > - remove_hugetlb_folio() is called to remove the folio from hugetlb
> > >   lists, get a ref on the page and remove hugetlb destructor. This
> > >   all must be done under the hugetlb lock. After this call, the page
> > >   can be treated as a normal compound page or a collection of base
> > >   size pages.
> > > - update_and_free_hugetlb_folio() is called to allocate vmemmap if
> > >   needed and the free routine of the underlying allocator is called
> > >   on the resulting page. We can not hold the hugetlb lock here.
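
(Schematically, the two steps quoted above amount to something like the
sequence below. This is only a sketch assembled from the function names in
this thread, with placeholder argument names and error handling trimmed; it
is not the literal mm/hugetlb.c code.)

        spin_lock_irq(&hugetlb_lock);
        remove_hugetlb_folio(h, folio, adjust_surplus);  /* step 1: unlink from hugetlb lists,
                                                            destructor handling, under the lock */
        spin_unlock_irq(&hugetlb_lock);

        /* window: the lock is dropped and the folio no longer looks like a hugetlb page */

        update_and_free_hugetlb_folio(h, folio, atomic); /* step 2: allocate vmemmap if needed,
                                                            then hand the page back to buddy/cma */
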
> > >
> > > One issue with this scheme is that a memory error could occur between
> > > these two steps. In this case, the memory error handling code treats
> > > the old hugetlb page as a normal compound page or collection of base
> > > pages. It will then try to SetPageHWPoison(page) on the page with an
> > > error. If the page with error is a tail page without vmemmap, a write
> > > error will occur when trying to set the flag.
> > >
> > > Address this issue by modifying remove_hugetlb_folio() and
> > > update_and_free_hugetlb_folio() such that the hugetlb destructor is not
> > > cleared until after allocating vmemmap. Since clearing the destructor
> > > requires holding the hugetlb lock, the clearing is done in
> > > remove_hugetlb_folio() if the vmemmap is present. This saves a
> > > lock/unlock cycle. Otherwise, the destructor is cleared in
> > > update_and_free_hugetlb_folio() after allocating vmemmap.
> > >
> > > Note that this will leave hugetlb pages in a state where they are marked
> > > free (by hugetlb specific page flag) and have a ref count. This is not
> > > a normal state. The only code that would notice is the memory error
> > > code, and it is set up to retry in such a case.
> > >
> > > A subsequent patch will create a routine to do bulk processing of
> > > vmemmap allocation. This will eliminate a lock/unlock cycle for each
> > > hugetlb page in the case where we are freeing a bunch of pages.
> > >
> > > Fixes: ???
> > > Signed-off-by: Mike Kravetz
> > > ---
> > >  mm/hugetlb.c | 75 +++++++++++++++++++++++++++++++++++-----------------
> > >  1 file changed, 51 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index d76574425da3..f7f64470aee0 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -1579,9 +1579,37 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
> > >                                                 unsigned int order) { }
> > >  #endif
> > >
> > > +static inline void __clear_hugetlb_destructor(struct hstate *h,
> > > +                                               struct folio *folio)
> > > +{
> > > +       lockdep_assert_held(&hugetlb_lock);
> > > +
> > > +       /*
> > > +        * Very subtle
> > > +        *
> > > +        * For non-gigantic pages set the destructor to the normal compound
> > > +        * page dtor. This is needed in case someone takes an additional
> > > +        * temporary ref to the page, and freeing is delayed until they drop
> > > +        * their reference.
> > > +        *
> > > +        * For gigantic pages set the destructor to the null dtor. This
> > > +        * destructor will never be called. Before freeing the gigantic
> > > +        * page destroy_compound_gigantic_folio will turn the folio into a
> > > +        * simple group of pages. After this the destructor does not
> > > +        * apply.
> > > +        *
> > > +        */
> > > +       if (hstate_is_gigantic(h))
> > > +               folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> > > +       else
> > > +               folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> > > +}
> > > +
> > >  /*
> > > - * Remove hugetlb folio from lists, and update dtor so that the folio appears
> > > - * as just a compound page.
> > > + * Remove hugetlb folio from lists.
> > > + * If vmemmap exists for the folio, update dtor so that the folio appears
> > > + * as just a compound page. Otherwise, wait until after allocating vmemmap
> > > + * to update dtor.
> > >   *
> > >   * A reference is held on the folio, except in the case of demote.
> > >   *
> > > @@ -1612,31 +1640,19 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> > >         }
> > >
> > >         /*
> > > -        * Very subtle
> > > -        *
> > > -        * For non-gigantic pages set the destructor to the normal compound
> > > -        * page dtor. This is needed in case someone takes an additional
> > > -        * temporary ref to the page, and freeing is delayed until they drop
> > > -        * their reference.
> > > -        *
> > > -        * For gigantic pages set the destructor to the null dtor. This
> > > -        * destructor will never be called. Before freeing the gigantic
> > > -        * page destroy_compound_gigantic_folio will turn the folio into a
> > > -        * simple group of pages. After this the destructor does not
> > > -        * apply.
> > > -        *
> > > -        * This handles the case where more than one ref is held when and
> > > -        * after update_and_free_hugetlb_folio is called.
> > > -        *
> > > -        * In the case of demote we do not ref count the page as it will soon
> > > -        * be turned into a page of smaller size.
> > > +        * We can only clear the hugetlb destructor after allocating vmemmap
> > > +        * pages. Otherwise, someone (memory error handling) may try to write
> > > +        * to tail struct pages.
> > > +        */
> > > +       if (!folio_test_hugetlb_vmemmap_optimized(folio))
> > > +               __clear_hugetlb_destructor(h, folio);
> > > +
> > > +       /*
> > > +        * In the case of demote we do not ref count the page as it will soon
> > > +        * be turned into a page of smaller size.
> > >          */
> > >         if (!demote)
> > >                 folio_ref_unfreeze(folio, 1);
> > > -       if (hstate_is_gigantic(h))
> > > -               folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> > > -       else
> > > -               folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> > >
> > >         h->nr_huge_pages--;
> > >         h->nr_huge_pages_node[nid]--;
> > > @@ -1705,6 +1721,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > >  {
> > >         int i;
> > >         struct page *subpage;
> > > +       bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio);
> >
> > Can this test on vmemmap_optimized still tell us if we should
> > __clear_hugetlb_destructor? From my reading:
> > 1. If a hugetlb folio is still vmemmap optimized in
> > __remove_hugetlb_folio, __remove_hugetlb_folio won't
> > __clear_hugetlb_destructor.
> > 2. Then hugetlb_vmemmap_restore in dissolve_free_huge_page will clear
> > HPG_vmemmap_optimized if it succeeds.
> > 3. Now when dissolve_free_huge_page gets into
> > __update_and_free_hugetlb_folio, we will see clear_dtor to be false
> > and __clear_hugetlb_destructor won't be called.
>
> Good catch! That is indeed a problem with this patch.

Glad that I could help.

> >
> > Or maybe I misunderstood, and what you really want to do is never
> > __clear_hugetlb_destructor so that folio_test_hugetlb is always true?
>
> No, that was a bug with this patch.
>
> We could ALWAYS wait until __update_and_free_hugetlb_folio to clear the
> hugetlb destructor. However, we have to take hugetlb lock to clear it.
> If the page does not have vmemmap optimized, then we can clear the
> destructor earlier in __remove_hugetlb_folio and avoid the lock/unlock
> cycle. In the past, we have had complaints about the time required to
> allocate and free a large quantity of hugetlb pages. Most of that time
> is spent in the low level allocators. However, I do not want to add
> something like an extra lock/unlock cycle unless absolutely necessary.
>
> I'll try to think of a cleaner and more foolproof way to address this.
>
> IIUC, this is an existing issue.
> Your patch series does not depend on this being fixed.

Thanks Mike, I was about to send out V2 today.

> --
> Mike Kravetz
>
> > >
> > >         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > >                 return;
> > > @@ -1735,6 +1752,16 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > >         if (unlikely(folio_test_hwpoison(folio)))
> > >                 folio_clear_hugetlb_hwpoison(folio);
> > >
> > > +       /*
> > > +        * If vmemmap pages were allocated above, then we need to clear the
> > > +        * hugetlb destructor under the hugetlb lock.
> > > +        */
> > > +       if (clear_dtor) {
> > > +               spin_lock_irq(&hugetlb_lock);
> > > +               __clear_hugetlb_destructor(h, folio);
> > > +               spin_unlock_irq(&hugetlb_lock);
> > > +       }
> > > +
> > >         for (i = 0; i < pages_per_huge_page(h); i++) {
> > >                 subpage = folio_page(folio, i);
> > >                 subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> > > --
> > > 2.41.0
> > >
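
(To spell out the dissolve_free_huge_page() ordering from my question above
in code form -- again only a rough sketch tied to the hunks quoted in this
thread, not actual kernel code:)

        /* 1. The folio still has HPG_vmemmap_optimized set, so the patched
         *    __remove_hugetlb_folio() skips __clear_hugetlb_destructor().   */

        /* 2. hugetlb_vmemmap_restore() succeeds and clears
         *    HPG_vmemmap_optimized.                                         */

        /* 3. __update_and_free_hugetlb_folio() then samples the flag too late: */
        bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio); /* now false */

        /* ... so the deferred __clear_hugetlb_destructor() call never happens. */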