* [LSF/MM/BPF] Whither Highmem?
@ 2023-05-07 23:20 Matthew Wilcox
  2023-05-07 23:43 ` Kirill A. Shutemov
  0 siblings, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2023-05-07 23:20 UTC (permalink / raw)
  To: lsf-pc, linux-mm; +Cc: linux-fsdevel


I see there's a couple of spots on the schedule open, so here's something
fun we could talk about.

Highmem was originally introduced to support PAE36 (up to 64GB) on x86
in the late 90s.  It's since been used to support a similar extension
on ARM (maybe other 32-bit architectures?).

Things have changed a bit since then.  There aren't a lot of systems
left which have more than 4GB of memory _and_ are incapable of running a
64-bit kernel.  We certainly don't see 64GB systems; maybe 8GB systems,
but 64-bit registers are cheap and if you're willing to solder 8GB of
RAM to the board, you're probably willing to splurge on a 64-bit ALU.

The objection might be raised that kmap_local() is cheap, and it is.
But when we have multi-page folios, particularly for filesystem metadata,
it gets awkward to use kmap because it only supports individual pages;
you can't kmap an entire folio.
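
Concretely, the page-at-a-time dance looks something like the sketch
below; zero_folio_pagewise() is invented for illustration, not a real
kernel helper:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Invented example: with highmem in play, a multi-page folio can only
 * be mapped one PAGE_SIZE piece at a time, so even zeroing it means a
 * loop of map/touch/unmap steps. */
static void zero_folio_pagewise(struct folio *folio)
{
	long i;

	for (i = 0; i < folio_nr_pages(folio); i++) {
		void *addr = kmap_local_page(folio_page(folio, i));

		memset(addr, 0, PAGE_SIZE);
		kunmap_local(addr);
	}
}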

So I'd like to explore removing support for keeping filesystem metadata
and page tables in highmem.  Anon memory and file memory should probably
remain supported in highmem.

Interested?



* Re: [LSF/MM/BPF] Whither Highmem?
  2023-05-07 23:20 [LSF/MM/BPF] Whither Highmem? Matthew Wilcox
@ 2023-05-07 23:43 ` Kirill A. Shutemov
  2023-05-08  0:23   ` Matthew Wilcox
  0 siblings, 1 reply; 4+ messages in thread
From: Kirill A. Shutemov @ 2023-05-07 23:43 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: lsf-pc, linux-mm, linux-fsdevel

On Mon, May 08, 2023 at 12:20:42AM +0100, Matthew Wilcox wrote:
> 
> I see there's a couple of spots on the schedule open, so here's something
> fun we could talk about.
> 
> Highmem was originally introduced to support PAE36 (up to 64GB) on x86
> in the late 90s.  It's since been used to support a similar extension
> on ARM (maybe other 32-bit architectures?).
> 
> Things have changed a bit since then.  There aren't a lot of systems
> left which have more than 4GB of memory _and_ are incapable of running a
> 64-bit kernel.

The actual limit is lower. With a 3G/1G userspace/kernel split you have
somewhere around 700MB of virtual address space left for the direct mapping.
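
Rough arithmetic with the usual x86-32 defaults (numbers from memory,
so approximate):

  4096MB  32-bit virtual address space
- 3072MB  userspace (3G/1G split)
--------
  1024MB  kernel virtual address space
-  128MB  vmalloc/ioremap area (VMALLOC_RESERVE)
--------
   896MB  classic lowmem ceiling; fixmap, pkmap and other
          reservations eat further into it, down toward 700MB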

But I would like to get rid of highmem too. Not sure how realistic it is.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov



* Re: [LSF/MM/BPF] Whither Highmem?
  2023-05-07 23:43 ` Kirill A. Shutemov
@ 2023-05-08  0:23   ` Matthew Wilcox
  2023-05-08  0:59     ` Kirill A. Shutemov
  0 siblings, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2023-05-08  0:23 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: lsf-pc, linux-mm, linux-fsdevel

On Mon, May 08, 2023 at 02:43:30AM +0300, Kirill A. Shutemov wrote:
> On Mon, May 08, 2023 at 12:20:42AM +0100, Matthew Wilcox wrote:
> > 
> > I see there's a couple of spots on the schedule open, so here's something
> > fun we could talk about.
> > 
> > Highmem was originally introduced to support PAE36 (up to 64GB) on x86
> > in the late 90s.  It's since been used to support a similar extension
> > on ARM (maybe other 32-bit architectures?).
> > 
> > Things have changed a bit since then.  There aren't a lot of systems
> > left which have more than 4GB of memory _and_ are incapable of running a
> > 64-bit kernel.
> 
> The actual limit is lower. With a 3G/1G userspace/kernel split you have
> somewhere around 700MB of virtual address space left for the direct mapping.
> 
> But I would like to get rid of highmem too. Not sure how realistic it is.

Right, I was using 4GB because on x86, we have two config options that
enable highmem, CONFIG_HIGHMEM4G and CONFIG_HIGHMEM64G.  If we get rid
of the latter, it could be a nice win?

Also, the more highmem we have, the more kinds of things we need to put in
highmem.  Say we have a 3:1 ratio of highmem to lowmem.  On my 16GB laptop,
I have 5GB of Cached and 8.5GB of Anon.  That's 13.5GB of anon+cache against
2.5GB of everything else, a ratio of 5.4:1; assuming a 4GB laptop looks
similar, storing _just_ anon & cached pages in highmem would be more than
enough to fill it.

(fwiw, PageTables is 125MB)

Maybe there's a workload that needs, e.g., page tables or fs metadata to
be stored in highmem.  Other than pathological attempts to map one page
per 2MB region, I don't think those exist.

Something I forgot to say is that I do not think we'll see highmem being
needed on 64-bit systems.  We already have CPUs with 128-bit registers,
and have since the Pentium 3.  128-bit ALUs are missing, but as long as
we're very firm with CPU vendors that this is the kind of nonsense up
with which we shall not put, I think we can get 128-bit normal registers
at the same time that they change the page tables to support more than
57 bits of physical memory.



* Re: [LSF/MM/BPF] Whither Highmem?
  2023-05-08  0:23   ` Matthew Wilcox
@ 2023-05-08  0:59     ` Kirill A. Shutemov
  0 siblings, 0 replies; 4+ messages in thread
From: Kirill A. Shutemov @ 2023-05-08  0:59 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: lsf-pc, linux-mm, linux-fsdevel

On Mon, May 08, 2023 at 01:23:24AM +0100, Matthew Wilcox wrote:
> On Mon, May 08, 2023 at 02:43:30AM +0300, Kirill A. Shutemov wrote:
> > On Mon, May 08, 2023 at 12:20:42AM +0100, Matthew Wilcox wrote:
> > > 
> > > I see there's a couple of spots on the schedule open, so here's something
> > > fun we could talk about.
> > > 
> > > Highmem was originally introduced to support PAE36 (up to 64GB) on x86
> > > in the late 90s.  It's since been used to support a similar extension
> > > on ARM (maybe other 32-bit architectures?).
> > > 
> > > Things have changed a bit since then.  There aren't a lot of systems
> > > left which have more than 4GB of memory _and_ are incapable of running a
> > > 64-bit kernel.
> > 
> > The actual limit is lower. With a 3G/1G userspace/kernel split you have
> > somewhere around 700MB of virtual address space left for the direct mapping.
> > 
> > But I would like to get rid of highmem too. Not sure how realistic it is.
> 
> Right, I was using 4GB because on x86, we have two config options that
> enable highmem, CONFIG_HIGHMEM4G and CONFIG_HIGHMEM64G.  If we get rid
> of the latter, it could be a nice win?

Not really. CONFIG_HIGHMEM64G is basically a synonym for PAE, which has
more goodies beyond a wider phys_addr_t, like NX bit support. PAE and
HIGHMEM4G are mutually exclusive (but NOHIGHMEM is fine).
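
For reference, that corner of arch/x86/Kconfig is shaped roughly like
this (abbreviated from memory; check the tree for the exact
dependencies):

choice
	prompt "High Memory Support"
	default HIGHMEM4G

config NOHIGHMEM
	bool "off"

config HIGHMEM4G
	bool "4GB"

config HIGHMEM64G
	bool "64GB"
	depends on !M486 && ...		# only CPUs that have PAE
	select X86_PAE

endchoice

config X86_PAE
	bool "PAE (Physical Address Extension) Support"
	depends on X86_32 && !HIGHMEM4G

HIGHMEM64G selecting X86_PAE, and X86_PAE depending on !HIGHMEM4G, is
where the mutual exclusion above comes from.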

> Also, the more highmem we have, the more kinds of things we need to put in
> highmem.  Say we have a 3:1 ratio of highmem to lowmem.  On my 16GB laptop,
> I have 5GB of Cached and 8.5GB of Anon.  That's 13.5GB of anon+cache against
> 2.5GB of everything else, a ratio of 5.4:1; assuming a 4GB laptop looks
> similar, storing _just_ anon & cached pages in highmem would be more than
> enough to fill it.
> 
> (fwiw, PageTables is 125MB)
> 
> Maybe there's a workload that needs, e.g., page tables or fs metadata to
> be stored in highmem.  Other than pathological attempts to map one page
> per 2MB region, I don't think those exist.
> 
> Something I forgot to say is that I do not think we'll see highmem being
> needed on 64-bit systems.

I hope not. CPU designers must know by now to provide a virtual address
space at least two bits wider than the physical address space (one bit
for the kernel/user split, and one more so the direct map doesn't swallow
the whole kernel half).

> We already have CPUs with 128-bit registers,
> and have since the Pentium 3.  128-bit ALUs are missing, but as long as
> we're very firm with CPU vendors that this is the kind of nonsense up
> with which we shall not put, I think we can get 128-bit normal registers
> at the same time that they change the page tables to support more than
> 57 bits of physical memory.

The current architectural limit on x86 physical addresses is 52 bits.

I think we would need 128-bit PTEs before we run out of physical address
space (or get 128-bit GPRs). We have no bits left for new features in the
PTE and have already started to eat into the PFN bits.
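
For the curious, the 64-bit x86 PTE is laid out roughly like this
(from memory; check the SDM for the authoritative bit positions):

bit  63     NX/XD (no-execute)
bits 62:59  protection key
bits 58:52  available to software
bits 51:12  page frame number -- the 52-bit PA limit lands exactly here
bits 11:9   available to software
bits 8:0    G, PAT, D, A, PCD, PWT, U/S, R/W, P

Widening the PFN field means reclaiming the software bits, which are
already spoken for.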

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


