From: Jiaqi Yan <jiaqiyan@google.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Naoya Horiguchi" <naoya.horiguchi@linux.dev>,
"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
"songmuchun@bytedance.com" <songmuchun@bytedance.com>,
"shy828301@gmail.com" <shy828301@gmail.com>,
"linmiaohe@huawei.com" <linmiaohe@huawei.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"duenwen@google.com" <duenwen@google.com>,
"axelrasmussen@google.com" <axelrasmussen@google.com>,
"jthoughton@google.com" <jthoughton@google.com>
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
Date: Thu, 22 Jun 2023 17:45:32 -0700
Message-ID: <CACw3F53iPiLrJt4pyaX2aaZ5BVg9tj8x_k6-v7=9Xn1nrh=UCw@mail.gmail.com>
In-Reply-To: <20230620223909.GB3567@monkey>

On Tue, Jun 20, 2023 at 3:39 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 06/20/23 11:05, Mike Kravetz wrote:
> > On 06/19/23 17:23, Naoya Horiguchi wrote:
> > >
> > > Considering this issue as one specific to memory error handling, checking
> > > HPG_vmemmap_optimized in __get_huge_page_for_hwpoison() might be helpful to
> > > detect the race. Then, an idea like the below diff (not tested) can make
> > > try_memory_failure_hugetlb() retry (retaking hugetlb_lock) to wait
> > > for the allocation of vmemmap pages to complete.
> > >
> > > @@ -1938,8 +1938,11 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > > int ret = 2; /* fallback to normal page handling */
> > > bool count_increased = false;
> > >
> > > - if (!folio_test_hugetlb(folio))
> > > + if (!folio_test_hugetlb(folio)) {
> > > + if (folio_test_hugetlb_vmemmap_optimized(folio))
> > > + ret = -EBUSY;
> >
> > The hugetlb specific page flags (HPG_vmemmap_optimized here) reside in
> > the folio->private field.
> >
> > In the case where the folio is a non-hugetlb folio, the folio->private field
> > could be any arbitrary value. As such, the test for vmemmap_optimized may
> > return a false positive. We could end up retrying for an arbitrarily
> > long time.
> >
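Right. For context, if I'm reading include/linux/hugetlb.h correctly, the
hugetlb-specific flag helpers generated by TESTHPAGEFLAG() are just bit
tests on folio->private. A simplified sketch (not the literal macro
expansion):

	/*
	 * Hugetlb page flags live in folio->private, so this test is
	 * only meaningful while the folio is still a hugetlb folio.
	 */
	static __always_inline
	bool folio_test_hugetlb_vmemmap_optimized(struct folio *folio)
	{
		void *private = &folio->private;

		return test_bit(HPG_vmemmap_optimized, private);
	}

Once the page is back in buddy, ->private holds unrelated data (buddy
uses it for the order, for example), so the bit test can come back true
by accident, i.e. the false positive you describe.
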
> > I am looking at how to restructure the code which removes and frees
> > hugetlb pages so that folio_test_hugetlb() would remain true until
> > vmemmap pages are allocated. The easiest way to do this would be to introduce
> > another hugetlb lock/unlock cycle in the page freeing path. This would
> > undo some of the speedups in the series:
> > https://lore.kernel.org/all/20210409205254.242291-4-mike.kravetz@oracle.com/T/#m34321fbcbdf8bb35dfe083b05d445e90ecc1efab
> >
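One more piece of context on why the dtor is the thing to preserve: as
far as I can tell, folio_test_hugetlb()/PageHuge() literally checks the
compound destructor, roughly:

	/* Simplified from PageHuge() in mm/hugetlb.c */
	int PageHuge(struct page *page)
	{
		if (!PageCompound(page))
			return 0;
		page = compound_head(page);
		return page[1].compound_dtor == HUGETLB_PAGE_DTOR;
	}

So deferring the dtor clear, as the patch below does, is exactly what
keeps folio_test_hugetlb() true across the window.
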
>
> Perhaps something like this? Minimal testing.
Thanks for putting up a fix, Mike!
>
> From e709fb4da0b6249973f9bf0540c9da0e4c585fe2 Mon Sep 17 00:00:00 2001
> From: Mike Kravetz <mike.kravetz@oracle.com>
> Date: Tue, 20 Jun 2023 14:48:39 -0700
> Subject: [PATCH] hugetlb: Do not clear hugetlb dtor until allocating vmemmap
>
> Freeing a hugetlb page and releasing base pages back to the underlying
> allocator such as buddy or cma is performed in two steps:
> - remove_hugetlb_folio() is called to remove the folio from hugetlb
> lists, get a ref on the page, and remove the hugetlb destructor. This
> all must be done under the hugetlb lock. After this call, the page
> can be treated as a normal compound page or a collection of base
> size pages.
> - update_and_free_hugetlb_folio() is called to allocate vmemmap if
> needed and the free routine of the underlying allocator is called
> on the resulting page. We cannot hold the hugetlb lock here.
>
> One issue with this scheme is that a memory error could occur between
> these two steps. In this case, the memory error handling code treats
> the old hugetlb page as a normal compound page or collection of base
> pages. It will then try to SetPageHWPoison(page) on the page with an
> error. If the page with error is a tail page without vmemmap, a write
> error will occur when trying to set the flag.
>
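Restating the window as a call sequence helps me; simplified from my
reading of the freeing paths (argument values illustrative):

	spin_lock_irq(&hugetlb_lock);
	remove_hugetlb_folio(h, folio, false);	/* step 1: under the lock */
	spin_unlock_irq(&hugetlb_lock);

	/*
	 * A memory error landing here sees what looks like a normal
	 * compound page, but with vmemmap optimized its tail struct
	 * pages are still read-only, so SetPageHWPoison() on a tail
	 * page faults.
	 */
	update_and_free_hugetlb_folio(h, folio, false);	/* step 2: no lock */
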
> Address this issue by modifying remove_hugetlb_folio() and
> update_and_free_hugetlb_folio() such that the hugetlb destructor is not
> cleared until after allocating vmemmap. Since clearing the destructor
> requires holding the hugetlb lock, the clearing is done in
> remove_hugetlb_folio() if the vmemmap is present. This saves a
> lock/unlock cycle. Otherwise, the destructor is cleared in
> update_and_free_hugetlb_folio() after allocating vmemmap.
>
> Note that this will leave hugetlb pages in a state where they are marked
> free (by hugetlb specific page flag) and have a ref count. This is not
> a normal state. The only code that would notice is the memory error
> code, and it is set up to retry in such a case.
>
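For completeness, the retry being relied on here, as far as I can tell,
is the MF_NO_RETRY handling in try_memory_failure_hugetlb() in
mm/memory-failure.c, roughly (error paths trimmed):

	retry:
		res = get_huge_page_for_hwpoison(pfn, flags, &migratable_cleared);
		if (res == -EBUSY) {
			if (!(flags & MF_NO_RETRY)) {
				flags |= MF_NO_RETRY;
				goto retry;	/* one more pass, retaking hugetlb_lock */
			}
			return action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
		}

Note it retries exactly once, so the transient state only has to be
short-lived.
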
> A subsequent patch will create a routine to do bulk processing of
> vmemmap allocation. This will eliminate a lock/unlock cycle for each
> hugetlb page in the case where we are freeing a bunch of pages.
>
> Fixes: ???
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
> mm/hugetlb.c | 75 +++++++++++++++++++++++++++++++++++-----------------
> 1 file changed, 51 insertions(+), 24 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d76574425da3..f7f64470aee0 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1579,9 +1579,37 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
> unsigned int order) { }
> #endif
>
> +static inline void __clear_hugetlb_destructor(struct hstate *h,
> + struct folio *folio)
> +{
> + lockdep_assert_held(&hugetlb_lock);
> +
> + /*
> + * Very subtle
> + *
> + * For non-gigantic pages set the destructor to the normal compound
> + * page dtor. This is needed in case someone takes an additional
> + * temporary ref to the page, and freeing is delayed until they drop
> + * their reference.
> + *
> + * For gigantic pages set the destructor to the null dtor. This
> + * destructor will never be called. Before freeing the gigantic
> + * page destroy_compound_gigantic_folio will turn the folio into a
> + * simple group of pages. After this the destructor does not
> + * apply.
> + *
> + */
> + if (hstate_is_gigantic(h))
> + folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> + else
> + folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> +}
> +
> /*
> - * Remove hugetlb folio from lists, and update dtor so that the folio appears
> - * as just a compound page.
> + * Remove hugetlb folio from lists.
> + * If vmemmap exists for the folio, update dtor so that the folio appears
> + * as just a compound page. Otherwise, wait until after allocating vmemmap
> + * to update dtor.
> *
> * A reference is held on the folio, except in the case of demote.
> *
> @@ -1612,31 +1640,19 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> }
>
> /*
> - * Very subtle
> - *
> - * For non-gigantic pages set the destructor to the normal compound
> - * page dtor. This is needed in case someone takes an additional
> - * temporary ref to the page, and freeing is delayed until they drop
> - * their reference.
> - *
> - * For gigantic pages set the destructor to the null dtor. This
> - * destructor will never be called. Before freeing the gigantic
> - * page destroy_compound_gigantic_folio will turn the folio into a
> - * simple group of pages. After this the destructor does not
> - * apply.
> - *
> - * This handles the case where more than one ref is held when and
> - * after update_and_free_hugetlb_folio is called.
> - *
> - * In the case of demote we do not ref count the page as it will soon
> - * be turned into a page of smaller size.
> + * We can only clear the hugetlb destructor after allocating vmemmap
> + * pages. Otherwise, someone (memory error handling) may try to write
> + * to tail struct pages.
> + */
> + if (!folio_test_hugetlb_vmemmap_optimized(folio))
> + __clear_hugetlb_destructor(h, folio);
> +
> + /*
> + * In the case of demote we do not ref count the page as it will soon
> + * be turned into a page of smaller size.
> */
> if (!demote)
> folio_ref_unfreeze(folio, 1);
> - if (hstate_is_gigantic(h))
> - folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> - else
> - folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
>
> h->nr_huge_pages--;
> h->nr_huge_pages_node[nid]--;
> @@ -1705,6 +1721,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> {
> int i;
> struct page *subpage;
> + bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio);
Can this test of HPG_vmemmap_optimized still tell us whether we should
call __clear_hugetlb_destructor()? From my reading:
1. If a hugetlb folio is still vmemmap optimized in
__remove_hugetlb_folio, __remove_hugetlb_folio won't
__clear_hugetlb_destructor.
2. Then hugetlb_vmemmap_restore in dissolve_free_huge_page will clear
HPG_vmemmap_optimized if it succeeds.
3. Now when dissolve_free_huge_page gets into
__update_and_free_hugetlb_folio, we will see clear_dtor as false,
so __clear_hugetlb_destructor won't be called.
Or maybe I misunderstood, and what you really want is to never call
__clear_hugetlb_destructor, so that folio_test_hugetlb always stays true?
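To make the ordering concrete, here is the sequence I have in mind for
the dissolve path with your patch applied (my reading, details elided):

	dissolve_free_huge_page(page)
	  remove_hugetlb_folio(h, folio, false)
	    /* HPG_vmemmap_optimized still set => dtor NOT cleared here */
	  hugetlb_vmemmap_restore(h, &folio->page)
	    /* on success, clears HPG_vmemmap_optimized */
	  update_and_free_hugetlb_folio(h, folio, false)
	    __update_and_free_hugetlb_folio(h, folio)
	      clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio);
	      /* now false => __clear_hugetlb_destructor() never runs */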
>
> if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> return;
> @@ -1735,6 +1752,16 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> if (unlikely(folio_test_hwpoison(folio)))
> folio_clear_hugetlb_hwpoison(folio);
>
> + /*
> + * If vmemmap pages were allocated above, then we need to clear the
> + * hugetlb destructor under the hugetlb lock.
> + */
> + if (clear_dtor) {
> + spin_lock_irq(&hugetlb_lock);
> + __clear_hugetlb_destructor(h, folio);
> + spin_unlock_irq(&hugetlb_lock);
> + }
> +
> for (i = 0; i < pages_per_huge_page(h); i++) {
> subpage = folio_page(folio, i);
> subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> --
> 2.41.0
>