linux-mm.kvack.org archive mirror
From: Ackerley Tng <ackerleytng@google.com>
To: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: mawupeng1@huawei.com, akpm@linux-foundation.org,
	mike.kravetz@oracle.com,  david@redhat.com,
	muchun.song@linux.dev, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [RFC PATCH] mm: hugetlb: Fix incorrect fallback for subpool
Date: Wed, 11 Jun 2025 08:55:30 -0700
Message-ID: <diqz7c1iw9vx.fsf@ackerleytng-ctop.c.googlers.com>
In-Reply-To: <20250331212343.66780-1-joshua.hahnjy@gmail.com> (message from Joshua Hahn on Mon, 31 Mar 2025 14:23:41 -0700)

Joshua Hahn <joshua.hahnjy@gmail.com> writes:

> On Tue, 25 Mar 2025 14:16:34 +0800 Wupeng Ma <mawupeng1@huawei.com> wrote:
>
>> During our testing with hugetlb subpool enabled, we observe that
>> hstate->resv_huge_pages may underflow into negative values. Root cause
>> analysis reveals a race condition in subpool reservation fallback handling
>> as follows:
>> 
>> hugetlb_reserve_pages()
>>     /* Attempt subpool reservation */
>>     gbl_reserve = hugepage_subpool_get_pages(spool, chg);
>> 
>>     /* Global reservation may fail after subpool allocation */
>>     if (hugetlb_acct_memory(h, gbl_reserve) < 0)
>>         goto out_put_pages;
>> 
>> out_put_pages:
>>     /* This incorrectly restores reservation to subpool */
>>     hugepage_subpool_put_pages(spool, chg);
>> 
>> When hugetlb_acct_memory() fails after subpool allocation, the current
>> implementation over-commits subpool reservations by returning the full
>> 'chg' value instead of the actual allocated 'gbl_reserve' amount. This
>> discrepancy propagates to global reservations during subsequent releases,
>> eventually causing resv_huge_pages underflow.
>> 
>> This problem can be triggered easily with the following steps:
>> 1. reserve hugepages for hugetlb allocation
>> 2. mount hugetlbfs with min_size to enable the hugetlb subpool
>> 3. alloc hugepages with two tasks (make sure the second will fail due to
>>    an insufficient amount of hugepages)
>> 4. wait for a few seconds and repeat step 3, which will make
>>    hstate->resv_huge_pages go below zero.
>> 
>> To fix this problem, return the correct amount of pages to the subpool
>> during the fallback after hugepage_subpool_get_pages is called.
>> 
>> Fixes: 1c5ecae3a93f ("hugetlbfs: add minimum size accounting to subpools")
>> Signed-off-by: Wupeng Ma <mawupeng1@huawei.com>
>
> Hi Wupeng,
> Thank you for the fix! This is a problem that we've also seen happen in
> our fleet at Meta. I was able to recreate the issue that you mentioned -- to
> explicitly lay down the steps I used:
>
> 1. echo 1 > /proc/sys/vm/nr_hugepages
> 2. mkdir /mnt/hugetlb-pool
> 3. mount -t hugetlbfs -o min_size=2M none /mnt/hugetlb-pool
> 4. (./get_hugepage &) && (./get_hugepage &)
>     # get_hugepage just opens a file in /mnt/hugetlb-pool and mmaps 2M into it.

Hi Joshua,

Would you be able to share the source for ./get_hugepage? I'm trying to
reproduce this too.

Does ./get_hugepage just mmap and then spin in an infinite loop?
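
For what it's worth, here is the rough sketch I put together from your
description while trying to reproduce this. It is entirely my guess at
what ./get_hugepage does (the per-PID filename, giving each instance its
own file so the two reservations aren't shared, and the short sleep are
all my assumptions):

/* get_hugepage.c -- open a file on the hugetlbfs mount and mmap 2M of it. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN (2UL << 20)		/* one 2M hugepage */

int main(void)
{
	char path[64];
	int fd;
	void *addr;

	/* One file per instance so two tasks don't share a reservation. */
	snprintf(path, sizeof(path), "/mnt/hugetlb-pool/page-%d",
		 (int)getpid());

	fd = open(path, O_CREAT | O_RDWR, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* The hugepage reservation is taken here, at mmap() time. */
	addr = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	sleep(1);	/* hold the mapping briefly, then exit */

	munmap(addr, LEN);
	close(fd);
	unlink(path);
	return 0;
}

If yours spins forever instead of sleeping, or maps one shared file from
both tasks, I'd be interested to know, since that changes how the
reservations are accounted.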

Do you have to somehow limit allocation of surplus HugeTLB pages from
the buddy allocator?

Thanks!

> 5. sleep 3
> 6. (./get_hugepage &) && (./get_hugepage &)
> 7. cat /proc/meminfo | grep HugePages_Rsvd
>
> ... and (7) shows that HugePages_Rsvd has indeed underflowed to U64_MAX!
>
> I've also verified that applying your fix and then re-running the reproducer
> shows no underflow.
>
> Reviewed-by: Joshua Hahn <joshua.hahnjy@gmail.com>
> Tested-by: Joshua Hahn <joshua.hahnjy@gmail.com>
>
> Sent using hkml (https://github.com/sjp38/hackermail)


Thread overview: 6+ messages
2025-03-25  6:16 Wupeng Ma
2025-03-31 21:23 ` Joshua Hahn
2025-06-11 15:55   ` Ackerley Tng [this message]
2025-06-12  0:54     ` Joshua Hahn
2025-04-09 13:58 ` Joshua Hahn
2025-04-10  0:51   ` mawupeng
