linux-mm.kvack.org archive mirror
From: James Houghton <jthoughton@google.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 Jiaqi Yan <jiaqiyan@google.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	 Muchun Song <songmuchun@bytedance.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	 Axel Rasmussen <axelrasmussen@google.com>,
	Michal Hocko <mhocko@suse.com>,
	 Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH v2 2/2] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
Date: Wed, 19 Jul 2023 17:02:26 -0700	[thread overview]
Message-ID: <CADrL8HU9QbtU=Rs7jCVgOw-ykv1DTQukBiZwqVi16dVdcadG0A@mail.gmail.com> (raw)
In-Reply-To: <20230718164646.GA10413@monkey>

On Tue, Jul 18, 2023 at 9:47 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 07/18/23 09:31, James Houghton wrote:
> > On Mon, Jul 17, 2023 at 5:50 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> > > +        * destructor of all pages on list.
> > > +        */
> > > +       if (clear_dtor) {
> > > +               spin_lock_irq(&hugetlb_lock);
> > > +               list_for_each_entry(page, list, lru)
> > > +                       __clear_hugetlb_destructor(h, page_folio(page));
> > > +               spin_unlock_irq(&hugetlb_lock);
> > >         }
> >
> > I'm not too familiar with this code, but the above block seems weird
> > to me. If we successfully allocated the vmemmap for *any* folio, we
> > clear the hugetlb destructor for all the folios? I feel like we should
> > only be clearing the hugetlb destructor for all folios if the vmemmap
> > allocation succeeded for *all* folios. If the code is functionally
> > correct as is, I'm a little bit confused why we need `clear_dtor`; it
> > seems like this function doesn't really need it. (I could have some
> > huge misunderstanding here.)
> >
>
> Yes, it is a bit strange.
>
> I was thinking this has to also handle the case where hugetlb vmemmap
> optimization is off system wide.  In that case, clear_dtor would never
> be set and there is no sense in ever walking the list; calling
> __clear_hugetlb_destructor() would be a NOOP in that case.  Think
> of the case where there are TBs of hugetlb pages.
>
> That is one of the reasons I made __clear_hugetlb_destructor() check
> for the need to modify the destructor.  The other reason is in the
> dissolve_free_huge_page() code path where we allocate vmemmap.  I
> suppose there could be an explicit call to __clear_hugetlb_destructor()
> in dissolve_free_huge_page.  But, I thought it might be better if
> we just handled both cases here.
>
> My thinking is that the clear_dtor boolean would tell us if vmemmap was
> restored for ANY hugetlb page.  I am aware that just because vmemmap was
> allocated for one page, does not mean that it was allocated for others.
> However, in the common case where hugetlb vmemmap optimization is on
> system wide, we would have allocated vmemmap for all pages on the list
> and would need to clear the destructor for them all.
>
> So, clear_dtor is really just an optimization for the
> hugetlb_free_vmemmap=off case.  Perhaps that is just overthinking and
> not a useful micro-optimization.

OK, I think I understand; the micro-optimization seems fine to
add. But I think there's still a bug here:

If we have two vmemmap-optimized hugetlb pages and restoring the page
structs for one of them fails, that page will end up with the
incorrect dtor (add_hugetlb_folio will set it properly, but then we
clear it afterwards because clear_dtor was set).

What do you think?



Thread overview: 13+ messages
2023-07-18  0:49 [PATCH v2 0/2] Fix hugetlb free path race with memory errors Mike Kravetz
2023-07-18  0:49 ` [PATCH v2 1/2] hugetlb: Do not clear hugetlb dtor until allocating vmemmap Mike Kravetz
2023-07-18 16:14   ` James Houghton
2023-07-19  2:34   ` Muchun Song
2023-07-20  1:34   ` Jiaqi Yan
2023-07-26  8:48   ` Naoya Horiguchi
2023-07-18  0:49 ` [PATCH v2 2/2] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles Mike Kravetz
2023-07-18 16:31   ` James Houghton
2023-07-18 16:46     ` Mike Kravetz
2023-07-20  0:02       ` James Houghton [this message]
2023-07-20  0:18         ` Mike Kravetz
2023-07-20  0:50           ` James Houghton
2023-07-19  3:35   ` Muchun Song
