linux-mm.kvack.org archive mirror
From: Joshua Hahn <joshua.hahnjy@gmail.com>
To: Ackerley Tng <ackerleytng@google.com>
Cc: mawupeng1@huawei.com, akpm@linux-foundation.org,
	mike.kravetz@oracle.com, david@redhat.com, muchun.song@linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com
Subject: Re: [RFC PATCH] mm: hugetlb: Fix incorrect fallback for subpool
Date: Wed, 11 Jun 2025 17:54:41 -0700	[thread overview]
Message-ID: <20250612005448.571615-1-joshua.hahnjy@gmail.com> (raw)
In-Reply-To: <diqz7c1iw9vx.fsf@ackerleytng-ctop.c.googlers.com>

On Wed, 11 Jun 2025 08:55:30 -0700 Ackerley Tng <ackerleytng@google.com> wrote:

> Joshua Hahn <joshua.hahnjy@gmail.com> writes:
> 
> > On Tue, 25 Mar 2025 14:16:34 +0800 Wupeng Ma <mawupeng1@huawei.com> wrote:
> >
> >> During our testing with hugetlb subpool enabled, we observed that
> >> hstate->resv_huge_pages may underflow into negative values. Root cause
> >> analysis reveals a race condition in the subpool reservation fallback
> >> handling as follows:
> >> 
> >> hugetlb_reserve_pages()
> >>     /* Attempt subpool reservation */
> >>     gbl_reserve = hugepage_subpool_get_pages(spool, chg);
> >> 
> >>     /* Global reservation may fail after subpool allocation */
> >>     if (hugetlb_acct_memory(h, gbl_reserve) < 0)
> >>         goto out_put_pages;
> >> 
> >> out_put_pages:
> >>     /* This incorrectly restores reservation to subpool */
> >>     hugepage_subpool_put_pages(spool, chg);
> >> 
> >> When hugetlb_acct_memory() fails after subpool allocation, the current
> >> implementation over-commits subpool reservations by returning the full
> >> 'chg' value instead of the actual allocated 'gbl_reserve' amount. This
> >> discrepancy propagates to global reservations during subsequent releases,
> >> eventually causing resv_huge_pages underflow.
> >> 
> >> This problem can be triggered easily with the following steps:
> >> 1. reserve hugepages for hugetlb allocation
> >> 2. mount hugetlbfs with min_size to enable the hugetlb subpool
> >> 3. alloc hugepages with two tasks (make sure the second one fails due
> >>    to an insufficient amount of hugepages)
> >> 4. wait for a few seconds and repeat step 3, which will make
> >>    hstate->resv_huge_pages go below zero.
> >> 
> >> To fix this problem, return the correct amount of pages to the subpool
> >> during the fallback after hugepage_subpool_get_pages is called.
> >> 
> >> Fixes: 1c5ecae3a93f ("hugetlbfs: add minimum size accounting to subpools")
> >> Signed-off-by: Wupeng Ma <mawupeng1@huawei.com>
> >
> > Hi Wupeng,
> > Thank you for the fix! This is a problem that we've also seen happen in
> > our fleet at Meta. I was able to recreate the issue that you mentioned -- to
> > explicitly lay down the steps I used:
> >
> > 1. echo 1 > /proc/sys/vm/nr_hugepages
> > 2. mkdir /mnt/hugetlb-pool
> > 3. mount -t hugetlbfs -o min_size=2M none /mnt/hugetlb-pool
> > 4. (./get_hugepage &) && (./get_hugepage &)
> >     # get_hugepage just opens a file in /mnt/hugetlb-pool and mmaps 2M into it.
> 
> Hi Joshua,
> 
> Would you be able to share the source for ./get_hugepage? I'm trying to
> reproduce this too.
> 
> Does ./get_hugepage just mmap and then spin in an infinite loop?
> 
> Do you have to somehow limit allocation of surplus HugeTLB pages from
> the buddy allocator?
> 
> Thanks!

Hi Ackerley,

The program I used for get_hugepage is very simple :-) No need to even spin
infinitely! I just open a file descriptor, ftruncate it to 2M, and mmap
into it. For good measure I set addr[0] = '.', sleep for 1 second, and then
munmap the area afterwards.

Here is a simplified version of the program (no error handling):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/hugetlb-pool/hugetlb_file", O_RDWR | O_CREAT, 0666);
	char *addr;

	ftruncate(fd, 2 * 1024 * 1024);
	addr = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	addr[0] = '.';	/* touch the mapping to fault in a hugepage */
	sleep(1);
	munmap(addr, 2 * 1024 * 1024);
	close(fd);
	return 0;
}

Hope this helps! Please let me know if it doesn't work, I would be happy
to investigate this with you. Have a great day!
Joshua

Sent using hkml (https://github.com/sjp38/hackermail)


Thread overview: 6+ messages
2025-03-25  6:16 Wupeng Ma
2025-03-31 21:23 ` Joshua Hahn
2025-06-11 15:55   ` Ackerley Tng
2025-06-12  0:54     ` Joshua Hahn [this message]
2025-04-09 13:58 ` Joshua Hahn
2025-04-10  0:51   ` mawupeng
