From: Matthew Wilcox <willy@infradead.org>
To: Zi Yan <ziy@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
Yang Shi <shy828301@gmail.com>,
David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Yosry Ahmed <yosryahmed@google.com>,
James Houghton <jthoughton@google.com>,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
Miaohe Lin <linmiaohe@huawei.com>,
lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
Peter Xu <peterx@redhat.com>, Michal Hocko <mhocko@suse.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Jiaqi Yan <jiaqiyan@google.com>
Subject: Re: [LSF/MM/BPF TOPIC] HGM for hugetlbfs
Date: Fri, 9 Jun 2023 20:57:01 +0100
Message-ID: <ZIOEDTUBrBg6tepk@casper.infradead.org>
In-Reply-To: <6B42EC7F-7EB6-45E0-AF4D-F4F0FA7A012E@nvidia.com>

On Thu, Jun 08, 2023 at 09:57:34PM -0400, Zi Yan wrote:
> On the hugetlbfs backend, PMD sharing, MAP_PRIVATE, and reduced struct page
> storage all look like features core mm might want. Merging these features back
> into core mm might be a good first step.
>
> I thought about replacing the hugetlbfs backend with THP (with my 1GB THP support),
> but found that not all THP features are necessary for hugetlbfs users or
> compatible with existing hugetlbfs. For example, hugetlbfs does not need
> transparent page split, since the user just wants that big page size. And page
> split might not get along with the reduced struct page storage feature.

But with HGM, we actually do want to split the page because part of it
has hit a hwpoison event. What these customers don't need is support
for misaligned mappings or partial mappings. If they map a 1GB page,
they do it 1GB aligned and in multiples of 1GB. And they tell us in
advance that's what they're doing.
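
To make that concrete, the access pattern looks roughly like the sketch
below (userspace C; the 2GB size is only an example, MAP_HUGE_1GB needs
1GB pages reserved in the hugetlb pool, and this is just an illustration
of the aligned, whole-multiple usage, not anyone's actual code):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << MAP_HUGE_SHIFT)
#endif

#define GB	(1024UL * 1024 * 1024)

int main(void)
{
	/* Example size only: two 1GB pages, anonymous hugetlb mapping. */
	size_t len = 2 * GB;

	/*
	 * 1GB pages must already be reserved in the hugetlb pool
	 * (e.g. hugepagesz=1G hugepages=2 on the kernel command line).
	 * The kernel returns a 1GB-aligned address and the length is a
	 * whole multiple of 1GB -- no misaligned or partial mappings.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped %zu bytes at %p\n", len, p);
	munmap(p, len);
	return 0;
}
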
> In sum, I think we might not need all THP features (page table entry split
> and huge page split) to replace hugetlbfs; we might just need to enable
> core mm to handle any size folio, with hugetlb pages being just folios that
> can go as large as 1GB. As a result, hugetlb pages can take advantage of
> all core mm features, like hwpoison.

Yes, this is more or less in line with my work. And yet there are still
problems to solve:
- mapcount (discussed elsewhere in the thread)
- page cache index scaling (Sid is working on this)
- page table sharing (mshare)
- reserved memory

> > I seem to remember Zi trying to use CMA for 1G THP allocations. However, I
> > am not sure if using CMA would be sufficient. IIUC, allocating from CMA could
> > still require page migrations to put together a 1G contiguous area. In a pool
> > as used by hugetlb, 1G pages are pre-allocated and sitting in the pool. The
> > downside of such a pool is that the memory can not be used for other purposes
> > and sits 'idle' if not allocated.
>
> Yes, I tried that. One big issue is that at free time a 1GB THP needs to be freed
> back to a CMA pool instead of the buddy allocator, but a THP can be split, and
> after a split it is really hard to tell whether a page came from a CMA pool or not.
>
> hugetlb pages do not support page split yet, so the issue might not be
> relevant. But if a THP cannot be split freely, is it still a THP? So it comes
> back to my question: do we really want 1GB THP, or do we just want core mm to
> handle folios of any size?

We definitely want the core MM to be able to handle folios of arbitrary
size. There are a pile of places still to fix, eg if you map a
misaligned 1GB page, you can see N PTEs followed by 511 PMDs followed by
512-N PTEs. There are a lot of places that assume pmd_page() returns
both a head page and the precise page, and those will need to be fixed.
There's a reason I limit page cache to PMD_ORDER today.
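
To spell out the arithmetic with 4KiB base pages: N + 511 * 512 + (512 - N)
= 262144 PTEs = 1GB, so the misaligned layout adds up; it's just not a shape
today's code expects. And the pmd_page() assumption bites roughly like this
(a kernel-side sketch, not a patch; the function name is made up, while
pmd_page(), page_folio(), folio_page_idx() and folio_size() are the existing
helpers):

#include <linux/mm.h>

/*
 * Sketch only: how a PMD walker would need to distinguish the precise
 * page from the folio head once folios can be larger than PMD_SIZE.
 */
static void inspect_pmd_mapping(pmd_t pmd)
{
	struct page *page = pmd_page(pmd);	/* precise page at this PMD */
	struct folio *folio = page_folio(page);	/* head may differ from page */

	/*
	 * Today, with page cache folios capped at PMD_ORDER, a leaf PMD
	 * always points at the head page.  With a 1GB folio mapped by 512
	 * PMDs, only the first PMD points at the head; the other 511 point
	 * 2MB, 4MB, ... into the folio.
	 */
	if (page != &folio->page)
		pr_debug("PMD maps offset %lu pages into a %zu-byte folio\n",
			 folio_page_idx(folio, page), folio_size(folio));
}
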
> > Hate to even bring this up, but there are complaints today about the 'allocation
> > time' of 1GB pages from the hugetlb pool. This 'allocation time' is actually
> > the time it takes to clear/zero 1G of memory. The only reason I mention it is
> > that using something like CMA to allocate 1G pages (at fault time) may add
> > unacceptable latency.
>
> One solution I had in mind is that you could zero these 1GB pages at free
> time in a worker thread, so that you do not pay the penalty at page allocation
> time. But it would not work if the allocation comes right after a page is
> freed.

It rather goes against the principle that the user should pay the cost.
If we got the zeroing for free, that'd be one thing, but it feels like
we're robbing Peter (of CPU time) to pay Paul.
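
For reference, the deferred zeroing Zi describes would look roughly like
the sketch below (workqueue-based, with made-up names; it only moves the
clear_highpage() loop off the allocation path, the CPU time is still spent
somewhere, and it does nothing for an allocation that immediately follows
the free):

#include <linux/workqueue.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <linux/mm.h>

/* Hypothetical deferred zeroing of a freed gigantic folio. */
struct zero_work {
	struct work_struct work;
	struct folio *folio;
};

static void zero_folio_worker(struct work_struct *work)
{
	struct zero_work *zw = container_of(work, struct zero_work, work);
	unsigned long i, nr = folio_nr_pages(zw->folio);

	/* Clear page by page so a 1GB folio can reschedule in between. */
	for (i = 0; i < nr; i++) {
		clear_highpage(folio_page(zw->folio, i));
		cond_resched();
	}

	/* Here the folio would be marked pre-zeroed and returned to the pool. */
	kfree(zw);
}

/*
 * At free time: kmalloc a struct zero_work, INIT_WORK(&zw->work,
 * zero_folio_worker) and queue_work() it instead of zeroing inline.
 */
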
> In the end, let me ask this again: do we want 1GB THP to replace hugetlb,
> or do we want to enable core mm to handle any size folio and turn a 1GB
> hugetlb page into a 1GB folio?

I don't see this as an either-or. The core MM needs to be enhanced to
handle arbitrary sized folios, but the hugetlbfs interface needs to be
kept around for ever. What we need from a maintainability point of view
is removing how special hugetlbfs is.