From: "Li Zhe" <lizhe.67@bytedance.com>
To: <fvdl@google.com>
Cc: <akpm@linux-foundation.org>, <david@kernel.org>,
	 <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	 <lizhe.67@bytedance.com>, <muchun.song@linux.dev>,
	<osalvador@suse.de>
Subject: Re: [PATCH 4/8] mm/hugetlb: introduce per-node sysfs interface "zeroable_hugepages"
Date: Tue, 30 Dec 2025 10:41:16 +0800
Message-ID: <20251230024118.5263-1-lizhe.67@bytedance.com>
In-Reply-To: <CAPTztWYMhh3+V=-jXaMz5muTsj8fBX29umgJcsW7JfHA2LouuA@mail.gmail.com>

On Mon, 29 Dec 2025 10:57:23 -0800, fvdl@google.com wrote:
> 
> On Mon, Dec 29, 2025 at 4:26 AM Li Zhe <lizhe.67@bytedance.com> wrote:
> >
> > On Fri, 26 Dec 2025 10:51:01 -0800, fvdl@google.com wrote:
> >
> > > > +static ssize_t zeroable_hugepages_show(struct kobject *kobj,
> > > > +                                       struct kobj_attribute *attr, char *buf)
> > > > +{
> > > > +       struct hstate *h;
> > > > +       unsigned long free_huge_pages_zero;
> > > > +       int nid;
> > > > +
> > > > +       h = kobj_to_hstate(kobj, &nid);
> > > > +       if (WARN_ON(nid == NUMA_NO_NODE))
> > > > +               return -EPERM;
> > > > +
> > > > +       free_huge_pages_zero = h->free_huge_pages_node[nid] -
> > > > +                              h->free_huge_pages_zero_node[nid];
> > > > +
> > > > +       return sprintf(buf, "%lu\n", free_huge_pages_zero);
> > > > +}
> > > > +
> > > > +static inline bool zero_should_abort(struct hstate *h, int nid)
> > > > +{
> > > > +       return (h->free_huge_pages_zero_node[nid] ==
> > > > +               h->free_huge_pages_node[nid]) ||
> > > > +               list_empty(&h->hugepage_freelists[nid]);
> > > > +}
> > > > +
> > > > +static void zero_free_hugepages_nid(struct hstate *h,
> > > > +                                  int nid, unsigned int nr_zero)
> > > > +{
> > > > +       struct list_head *freelist = &h->hugepage_freelists[nid];
> > > > +       unsigned int nr_zerod = 0;
> > > > +       struct folio *folio;
> > > > +
> > > > +       if (zero_should_abort(h, nid))
> > > > +               return;
> > > > +
> > > > +       spin_lock_irq(&hugetlb_lock);
> > > > +
> > > > +       while (nr_zerod < nr_zero) {
> > > > +
> > > > +               if (zero_should_abort(h, nid) || fatal_signal_pending(current))
> > > > +                       break;
> > > > +
> > > > +               freelist = freelist->prev;
> > > > +               if (unlikely(list_is_head(freelist, &h->hugepage_freelists[nid])))
> > > > +                       break;
> > > > +               folio = list_entry(freelist, struct folio, lru);
> > > > +
> > > > +               if (folio_test_hugetlb_zeroed(folio) ||
> > > > +                   folio_test_hugetlb_zeroing(folio))
> > > > +                       continue;
> > > > +
> > > > +               folio_set_hugetlb_zeroing(folio);
> > > > +
> > > > +               /*
> > > > +                * Incrementing this here is a bit of a fib, since
> > > > +                * the page hasn't been cleared yet (it will be done
> > > > +                * immediately after dropping the lock below). But
> > > > +                * it keeps the count consistent with the overall
> > > > +                * free count in case the page gets taken off the
> > > > +                * freelist while we're working on it.
> > > > +                */
> > > > +               h->free_huge_pages_zero_node[nid]++;
> > > > +               spin_unlock_irq(&hugetlb_lock);
> > > > +
> > > > +               /*
> > > > +                * HWPoison pages may show up on the freelist.
> > > > +                * Don't try to zero it out, but do set the flag
> > > > +                * and counts, so that we don't consider it again.
> > > > +                */
> > > > +               if (!folio_test_hwpoison(folio))
> > > > +                       folio_zero_user(folio, 0);
> > > > +
> > > > +               cond_resched();
> > > > +
> > > > +               spin_lock_irq(&hugetlb_lock);
> > > > +               folio_set_hugetlb_zeroed(folio);
> > > > +               folio_clear_hugetlb_zeroing(folio);
> > > > +
> > > > +               /*
> > > > +                * If the page is still on the free list, move
> > > > +                * it to the head.
> > > > +                */
> > > > +               if (folio_test_hugetlb_freed(folio))
> > > > +                       list_move(&folio->lru, &h->hugepage_freelists[nid]);
> > > > +
> > > > +               /*
> > > > +                * If someone was waiting for the zero to
> > > > +                * finish, wake them up.
> > > > +                */
> > > > +               if (waitqueue_active(&h->dqzero_wait[nid]))
> > > > +                       wake_up(&h->dqzero_wait[nid]);
> > > > +               nr_zerod++;
> > > > +               freelist = &h->hugepage_freelists[nid];
> > > > +       }
> > > > +       spin_unlock_irq(&hugetlb_lock);
> > > > +}
> > >
> > > Nit: s/nr_zerod/nr_zeroed/
> >
> > Thank you for the reminder. I will address this issue in v2.
> >
> > > Feels like the list logic can be cleaned up a bit here. Since the
> > > zeroed folios are at the head of the list, and the dirty ones at the
> > > tail, and you start walking from the tail, you don't need to check if
> > > you circled back to the head - just stop if you encounter a prezeroed
> > > folio. If you encounter a prezeroed folio while walking from the tail,
> > > that means that all other folios from that one to the head will also
> > > be prezeroed already.
> >
> > Thank you for the thoughtful suggestion. Your reasoning holds in most
> > situations, but under heavy concurrency a corner case can still
> > appear. Imagine two processes zeroing huge pages simultaneously:
> > process A enters zero_free_hugepages_nid(), finishes zeroing one huge
> > page, and marks that folio in the list as prezeroed. If process B
> > enters the same function a moment later and exits as soon as it
> > encounters a prezeroed folio, the intended parallel zeroing silently
> > degrades to single-threaded operation.
> 
> Hm, setting the prezeroed bit and moving the folio to the front of the
> free list happens while holding hugetlb_lock. In other words, if you
> encounter a folio with the prezeroed bit set while holding
> hugetlb_lock, it will always be in a contiguous stretch of prezeroed
> folios at the head of the free list.
> 
> Since the check for 'is this already prezeroed' is done while holding
> hugetlb_lock, you know for sure that the folio is part of a list of
> prezeroed folios at the head, and you can stop, right?

Sorry for the confusion earlier. You're right, this does make
zero_free_hugepages_nid() simpler. I'll update it in v2.
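
For reference, here is the rough shape I have in mind for v2 (an
untested sketch based on the code quoted above, so please treat it as
illustration only). The wrap-around check stays, but only to cover the
case where every remaining dirty folio is currently marked as being
zeroed by another thread:

static void zero_free_hugepages_nid(struct hstate *h,
				    int nid, unsigned int nr_zero)
{
	struct list_head *pos = &h->hugepage_freelists[nid];
	unsigned int nr_zeroed = 0;
	struct folio *folio;

	spin_lock_irq(&hugetlb_lock);
	while (nr_zeroed < nr_zero) {
		if (fatal_signal_pending(current))
			break;

		pos = pos->prev;
		/* All remaining folios are being zeroed elsewhere. */
		if (unlikely(list_is_head(pos, &h->hugepage_freelists[nid])))
			break;
		folio = list_entry(pos, struct folio, lru);

		/*
		 * Zeroed folios are moved to the head while holding
		 * hugetlb_lock, so they form a contiguous stretch
		 * there. Meeting one while walking from the tail
		 * means everything ahead is already zeroed.
		 */
		if (folio_test_hugetlb_zeroed(folio))
			break;
		/* In flight on another CPU; step past it. */
		if (folio_test_hugetlb_zeroing(folio))
			continue;

		folio_set_hugetlb_zeroing(folio);
		/* Counted early to stay consistent with the free count. */
		h->free_huge_pages_zero_node[nid]++;
		spin_unlock_irq(&hugetlb_lock);

		/* HWPoison folios are flagged and counted, not touched. */
		if (!folio_test_hwpoison(folio))
			folio_zero_user(folio, 0);
		cond_resched();

		spin_lock_irq(&hugetlb_lock);
		folio_set_hugetlb_zeroed(folio);
		folio_clear_hugetlb_zeroing(folio);
		if (folio_test_hugetlb_freed(folio))
			list_move(&folio->lru, &h->hugepage_freelists[nid]);
		if (waitqueue_active(&h->dqzero_wait[nid]))
			wake_up(&h->dqzero_wait[nid]);
		nr_zeroed++;
		/* Restart from the tail; pos may be stale after unlock. */
		pos = &h->hugepage_freelists[nid];
	}
	spin_unlock_irq(&hugetlb_lock);
}

With the early break on a zeroed folio, the count-equality test in
zero_should_abort() also becomes unnecessary, so I dropped the helper
here.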

Thanks,
Zhe


Thread overview: 22+ messages
2025-12-25  8:20 [PATCH 0/8] Introduce a huge-page pre-zeroing mechanism 李喆
2025-12-25  8:20 ` [PATCH 1/8] mm/hugetlb: add pre-zeroed framework 李喆
2025-12-26  9:24   ` Raghavendra K T
2025-12-26  9:48     ` Li Zhe
2025-12-25  8:20 ` [PATCH 2/8] mm/hugetlb: convert to prep_account_new_hugetlb_folio() 李喆
2025-12-25  8:20 ` [PATCH 3/8] mm/hugetlb: move the huge folio to the end of the list during enqueue 李喆
2025-12-25  8:20 ` [PATCH 4/8] mm/hugetlb: introduce per-node sysfs interface "zeroable_hugepages" 李喆
2025-12-26 18:51   ` Frank van der Linden
2025-12-29 12:25     ` Li Zhe
2025-12-29 18:57       ` Frank van der Linden
2025-12-30  2:41         ` Li Zhe [this message]
2025-12-25  8:20 ` [PATCH 5/8] mm/hugetlb: simplify function hugetlb_sysfs_add_hstate() 李喆
2025-12-25  8:20 ` [PATCH 6/8] mm/hugetlb: relocate the per-hstate struct kobject pointer 李喆
2025-12-25  8:20 ` [PATCH 7/8] mm/hugetlb: add epoll support for interface "zeroable_hugepages" 李喆
2025-12-25  8:20 ` [PATCH 8/8] mm/hugetlb: limit event generation frequency of function do_zero_free_notify() 李喆
2025-12-26 18:32 ` [PATCH 0/8] Introduce a huge-page pre-zeroing mechanism Frank van der Linden
2025-12-26 21:42   ` Frank van der Linden
2025-12-29 12:28     ` Li Zhe
2025-12-27  7:21 ` Mateusz Guzik
2025-12-29 12:31   ` Li Zhe
2025-12-28 21:44 ` Andrew Morton
2025-12-29 12:34   ` Li Zhe
