From: Matthew Wilcox <willy@infradead.org>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [LSF/MM/BPF] Whither Highmem?
Date: Mon, 8 May 2023 01:23:24 +0100 [thread overview]
Message-ID: <ZFhA/CGgfo71jPtK@casper.infradead.org> (raw)
In-Reply-To: <20230507234330.cnzbumof2hdl4ci6@box.shutemov.name>
On Mon, May 08, 2023 at 02:43:30AM +0300, Kirill A. Shutemov wrote:
> On Mon, May 08, 2023 at 12:20:42AM +0100, Matthew Wilcox wrote:
> >
> > I see there's a couple of spots on the schedule open, so here's something
> > fun we could talk about.
> >
> > Highmem was originally introduced to support PAE (36-bit physical
> > addressing, up to 64GB) on x86 in the late 90s. It's since been used
> > to support a similar extension on ARM (and maybe other 32-bit
> > architectures?).
> >
> > Things have changed a bit since then. There aren't a lot of systems
> > left which have more than 4GB of memory _and_ are incapable of running a
> > 64-bit kernel.
>
> The actual limit is lower: with a 3G/1G userspace/kernel split you
> have only about 700MB of virtual address space left for the direct
> mapping.
>
> But I would like to get rid of highmem too. I'm not sure how realistic
> that is.
Right, I was using 4GB because on x86, we have two config options that
enable highmem, CONFIG_HIGHMEM4G and CONFIG_HIGHMEM64G. If we get rid
of the latter, it could be a nice win?
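The ~700MB figure Kirill quotes follows from rough split arithmetic; here's a minimal sketch, assuming the default x86 VMALLOC_RESERVE of 128MB (the exact number varies with fixmap, pkmap and config, which is why the practical figure is lower than the 896MB upper bound):

```python
# Rough x86-32 lowmem arithmetic under a 3G/1G userspace/kernel split.
# VMALLOC_RESERVE of 128MB is the x86 default; fixmap/pkmap and other
# fixed mappings shave off more, landing closer to Kirill's ~700MB.
MB = 1

kernel_va = 1024 * MB        # 1GB of kernel virtual address space
vmalloc_reserve = 128 * MB   # default VMALLOC_RESERVE on x86

direct_map = kernel_va - vmalloc_reserve
print(direct_map)            # 896MB upper bound for the direct mapping
```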
Also, the more highmem we have, the more kinds of things we need to be
able to put in highmem. Say we have a 3:1 ratio of highmem to lowmem.
On my 16GB laptop, I have 5GB of Cached and 8.5GB of Anon. That's
13.5GB, leaving 2.5GB for everything else; assuming the proportions
would be similar on a 4GB laptop, that's a 5.4:1 ratio, so storing
_just_ anon & cached pages in highmem would be more than enough.
(fwiw, PageTables is 125MB)
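The 5.4:1 figure is just the scaled proportion from the laptop numbers above; a quick sketch of the arithmetic:

```python
# Reproduce the 5.4:1 estimate: scale the 16GB laptop's Cached/Anon
# usage down proportionally to a hypothetical 4GB machine.
total = 16.0                  # GB of RAM in the example laptop
cached, anon = 5.0, 8.5       # from /proc/meminfo on that laptop

high_candidates = cached + anon          # 13.5GB that could live in highmem
low_resident = total - high_candidates   # 2.5GB that must stay in lowmem

ratio = high_candidates / low_resident
print(round(ratio, 1))        # 5.4 -- more than the 3:1 split requires
```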
Maybe there's a workload that needs, e.g., page tables or fs metadata
to be stored in highmem. Other than pathological attempts to map one
page per 2MB, I don't think such workloads exist.
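To see why one page per 2MB is the pathological case: with PAE, one 4KB page-table page holds 512 8-byte PTEs and so covers 2MB of virtual address space. A sketch of that arithmetic:

```python
# Why "one page per 2MB" is pathological for page-table placement:
# touching a single page in every 2MB region burns one whole
# page-table page per resident data page.
PAGE = 4096
PTE_SIZE = 8                          # PAE / 64-bit PTE

ptes_per_page = PAGE // PTE_SIZE      # 512 entries per page-table page
covered = ptes_per_page * PAGE        # virtual span of one page-table page

print(covered)                        # 2097152 bytes = 2MB
# Worst case: page-table memory equals data memory (1:1 overhead),
# versus the usual ~0.2% when mappings are dense.
```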
Something I forgot to say is that I do not think we'll see highmem being
needed on 64-bit systems. We already have CPUs with 128-bit registers,
and have since the Pentium III. 128-bit integer ALUs are missing, but
as long as we're very firm with CPU vendors that this is the kind of
nonsense up with which we shall not put, I think we can get 128-bit
general-purpose registers at the same time that they change the page
tables to support more than 57 bits of physical memory.