From: Ryan Roberts <ryan.roberts@arm.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Linux-MM <linux-mm@kvack.org>,
Catalin Marinas <Catalin.Marinas@arm.com>,
Mark Rutland <mark.rutland@arm.com>,
Ruben Ayrapetyan <Ruben.Ayrapetyan@arm.com>
Subject: Folios for anonymous memory
Date: Wed, 15 Feb 2023 12:38:13 +0000
Message-ID: <4c991dcb-c5bb-86bb-5a29-05df24429607@arm.com>

Hi Matthew, all,

I’ve recently been looking into some potential performance improvements, and I
think that folios could help make these improvements a reality. I’m hoping that
you can answer some questions to help figure out whether this makes sense.

First, a quick summary of my benchmarking: I’ve been running a kernel
compilation test as well as the Speedometer browser performance benchmark
(among others), while trying to better understand the impact of page size on
both HW and SW. To do this, I’ve hacked the arm64 arch code to separate the HW
page size (4K) from the kernel page size (16K). Then I ran 3 kernels
(baseline-4k, baseline-16k, and my hacked-up hybrid-16k-4k) - all based on v6.1
- with the aim of determining the speedup due solely to SW overhead reduction
(baseline-4k -> hybrid-16k-4k) and the speedup due to HW overhead reduction
(hybrid-16k-4k -> baseline-16k); both are expressed relative to baseline-4k so
that they sum to the total speedup (baseline-4k -> baseline-16k).

Results as follows:

Kernel compilation:
  Speedup due to SW overhead reduction: 6.5%
  Speedup due to HW overhead reduction: 5.0%
  Total speedup: 11.5%

Speedometer 2.0:
  Speedup due to SW overhead reduction: 5.3%
  Speedup due to HW overhead reduction: 5.1%
  Total speedup: 10.4%

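To make the decomposition concrete, here is a small sketch in Python. The
runtimes are hypothetical (chosen only so the percentages reproduce the kernel
compilation numbers above); the point is just how the SW and HW components are
defined so that they sum to the total.

```python
# Sketch of the speedup decomposition. All runtimes (seconds) are
# hypothetical, picked to reproduce the kernel-compilation percentages.

def speedup_pct(base, test):
    """Percentage runtime reduction going from `base` to `test`."""
    return (base - test) / base * 100.0

baseline_4k   = 100.0  # hypothetical runtime: 4K HW pages, 4K SW pages
hybrid_16k_4k = 93.5   # hypothetical runtime: 4K HW pages, 16K SW pages
baseline_16k  = 88.5   # hypothetical runtime: 16K HW pages, 16K SW pages

# Both components are expressed relative to baseline-4k so they sum
# to the total speedup.
sw_speedup = speedup_pct(baseline_4k, hybrid_16k_4k)               # 6.5%
hw_speedup = (hybrid_16k_4k - baseline_16k) / baseline_4k * 100.0  # 5.0%
total      = speedup_pct(baseline_4k, baseline_16k)                # 11.5%

assert abs(sw_speedup + hw_speedup - total) < 1e-9
```
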
Digging into the reasons for the SW-side speedup, it boils down to less
book-keeping: 4x fewer page faults, and 4x fewer pages to manage
locks/refcounts/… for, which leads to faster abort and syscall handling. I
think these phenomena are well understood in the folio context, although for
these workloads the memory is primarily anonymous.

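For illustration, the 4x figure falls straight out of the page-size ratio; a
sketch with made-up numbers (the 64 MiB touched-memory figure is purely
hypothetical):

```python
# Each 16K software page covers four 4K hardware pages, so faulting in
# the same span of anonymous memory takes a quarter of the faults (and
# a quarter of the per-page lock/refcount objects to manage).

HW_PAGE_4K  = 4 * 1024
SW_PAGE_16K = 16 * 1024

touched = 64 * 1024 * 1024  # hypothetical: 64 MiB of anonymous memory

faults_4k  = touched // HW_PAGE_4K   # faults with a 4K kernel page size
faults_16k = touched // SW_PAGE_16K  # faults with a 16K kernel page size

assert faults_4k == 4 * faults_16k
```
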
I’d like to figure out how to realise some of these benefits in a kernel that
still maintains a 4K page user ABI. From reading over old threads and LWN
articles, and watching Matthew’s talk at OSS last summer, it sounds like this
is exactly what folios are intended to solve?

So a few questions:

- I’ve seen folios for anon memory listed as future work; what’s the current
status? Is anyone looking at this? It’s something that I would be interested in
taking a look at if not (although don’t take that as an actual commitment
yet!).

- My understanding is that, as of v6.0 at least, XFS was the only FS supporting
large folios? Has that picture changed? Is there any likelihood of seeing ext4
and f2fs support anytime soon?

- Matthew mentioned in the talk that he had data showing memory fragmentation
becoming less of an issue as more users were allocating large folios. Is that
data or the experimental approach public?

Thanks,
Ryan
Thread overview: 3+ messages

2023-02-15 12:38 Ryan Roberts [this message]
2023-02-15 15:13 ` Matthew Wilcox
2023-02-15 16:51   ` Ryan Roberts