From: Muchun Song <muchun.song@linux.dev>
To: Matthew Wilcox <willy@infradead.org>
Cc: James Houghton <jthoughton@google.com>,
	Peter Xu <peterx@redhat.com>,
	lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] Hugetlb Unifications
Date: Fri, 1 Mar 2024 14:51:09 +0800	[thread overview]
Message-ID: <44708637-2258-4AA0-97C1-77BC7EDEEE63@linux.dev> (raw)
In-Reply-To: <ZeFZrg85598czNN6@casper.infradead.org>



> On Mar 1, 2024, at 12:29, Matthew Wilcox <willy@infradead.org> wrote:
> 
> On Thu, Feb 29, 2024 at 05:37:23PM -0800, James Houghton wrote:
>> - It has HVO (which can hopefully be dropped in a memdesc world)
> 
> I've spent a bit of time thinking about this.  I'll keep this x86-64
> specific just to have concrete numbers.
> 
> Currently a 2MB htlb page without HVO occupies 64 * 512 = 32kB.  With HVO,
> it's reduced to 8kB.  A 1GB htlb page occupies 64 * 256k = 16MB, with HVO,
> it's still 8kB (right?)

That was correct in the past. The first version of HVO needed 2 pages (8kB)
of vmemmap per huge page; now it needs only one page (4kB), regardless of
the huge page size (2MB or 1GB).
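
To make the arithmetic above easy to check, here is a minimal user-space
sketch (it assumes x86-64: 4kB base pages and a 64-byte struct page; it is
only an illustration, not kernel code):

/* Back-of-envelope vmemmap accounting for the numbers above.
 * Assumes x86-64: 4kB base pages, 64-byte struct page. User-space
 * illustration only, not kernel code.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE   4096UL
#define STRUCT_PAGE_SIZE 64UL	/* sizeof(struct page) on x86-64 */

static void report(const char *name, unsigned long hpage_size)
{
	unsigned long nr_base = hpage_size / BASE_PAGE_SIZE;
	unsigned long vmemmap = nr_base * STRUCT_PAGE_SIZE;
	/* Current HVO keeps a single vmemmap page per hugetlb page. */
	unsigned long hvo = 1 * BASE_PAGE_SIZE;

	printf("%s: vmemmap %lu kB, with HVO %lu kB\n",
	       name, vmemmap / 1024, hvo / 1024);
}

int main(void)
{
	report("2MB", 2UL << 20);	/* 32 kB    -> 4 kB */
	report("1GB", 1UL << 30);	/* 16384 kB -> 4 kB */
	return 0;
}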

> 
> In a memdesc world, a 2MB page without HVO consumes 8 * 512 = 4kB.
> There's no room for savings here.  But a 1GB page takes 8 * 256k = 2MB.
> There's still almost 2MB of savings to be had here, so I suspect some
> people will still want it.

Agreed. With 2MB pages there are no savings from HVO, but it saves a lot
for 1GB huge pages.
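
The same arithmetic for the memdesc case, again only as a sketch (the
8-byte per-page figure is taken from your mail above; user-space
illustration only):

/* Memdesc-world sketch: 8 bytes of memmap per base page instead of a
 * 64-byte struct page. The 8-byte figure comes from the mail quoted
 * above; user-space illustration only.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE	4096UL
#define MEMDESC_PTR	8UL

int main(void)
{
	unsigned long sizes[] = { 2UL << 20, 1UL << 30 };
	const char *names[]  = { "2MB", "1GB" };

	for (int i = 0; i < 2; i++) {
		unsigned long nr_base  = sizes[i] / BASE_PAGE_SIZE;
		unsigned long memmap   = nr_base * MEMDESC_PTR;
		/* HVO cannot go below one base page of memmap. */
		unsigned long with_hvo = BASE_PAGE_SIZE;

		printf("%s: memmap %lu kB, HVO would save %lu kB\n",
		       names[i], memmap / 1024,
		       (memmap - with_hvo) / 1024);
	}
	return 0;
}

For 2MB pages the memmap is already a single 4kB page, so HVO has nothing
to strip; for 1GB pages it would still save about 2MB per huge page.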

> 
> Hopefully Yu Zhao's zone proposal lets us enable HVO for THP.  At least
> 1GB ones.

Hopefully we will see it.

Thanks.

> 
> I do have a proposal to turn the memmap into a much more dynamic data structure
> where we'd go from a fixed 8 bytes per page to around 16 bytes per
> allocation.  But it depends on memdescs working first, and we haven't
> demonstrated that yet, so it's not worth talking about.  It's much more
> complicated than 8 bytes per page, so it may not be worth doing.




Thread overview: 13+ messages
2024-02-22  8:50 Peter Xu
2024-02-22 20:36 ` Frank van der Linden
2024-02-22 22:21   ` Matthew Wilcox
2024-02-22 22:16 ` Pasha Tatashin
2024-02-22 22:31   ` Matthew Wilcox
2024-02-22 22:58     ` Pasha Tatashin
2024-03-01  1:37 ` James Houghton
2024-03-01  3:11   ` Peter Xu
2024-03-06 23:24     ` James Houghton
2024-03-07 10:06       ` Peter Xu
2024-03-01  4:29   ` Matthew Wilcox
2024-03-01  6:51     ` Muchun Song [this message]
2024-03-01 16:44       ` David Hildenbrand
