From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Cc: linux-cxl@vger.kernel.org, Byungchul Park <byungchul@sk.com>,
Honggyu Kim <honggyu.kim@sk.com>
Subject: [LSF/MM/BPF TOPIC] Restricting or migrating unmovable kernel allocations from slow tier
Date: Sat, 1 Feb 2025 22:29:23 +0900
Message-ID: <Z54hUTXRsw0LYQ8b@localhost.localdomain>
Hi,
Byungchul and I would like to suggest a topic about the performance impact of
kernel allocations on CXL memory.
With CXL-enabled servers and memory devices under active development,
CXL-capable hardware is expected to keep emerging in the coming years.
The Linux kernel supports hot-plugging CXL memory via dax/kmem functionality.
Depending on the hot-plug policy, the hot-plugged memory either allows
unmovable kernel allocations (ZONE_NORMAL) or is restricted to movable
allocations (ZONE_MOVABLE).
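For reference, both controls already exist today: a boot-time default
online policy, and per-block onlining via sysfs (the memory block number
N below is just a placeholder):

  # Boot-time default for newly hot-plugged memory:
  memhp_default_state=online           (may become ZONE_NORMAL)
  memhp_default_state=online_movable   (restricted to ZONE_MOVABLE)

  # Onlining a single memory block from userspace:
  echo online_movable > /sys/devices/system/memory/memoryN/state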
Recently, Byungchul and I observed a measurable performance degradation
with memhp_default_state=online compared to memhp_default_state=online_movable
on a server with a 1:2 DRAM-to-CXL memory capacity ratio, running the
llama.cpp workload with the default mempolicy.
The workload performs LLM inference and pressures the memory subsystem
due to its large working set size.
Obviously, allowing kernel allocations from CXL memory degrades performance,
because kernel memory such as page tables, kernel stacks, and slab allocations
is accessed frequently and may reside in physical memory with significantly
higher access latency.
However, as far as I can tell, there are at least two reasons why we need to
support ZONE_NORMAL for CXL memory (please add more if there are any):
1. When hot-plugging a huge amount of CXL memory, the struct page array
might not fit into DRAM (see the memmap-size estimate below)
-> This could be relaxed with memmap_on_memory
2. To hot-unplug CXL memory, pages in CXL memory must be migrated to DRAM,
which means some portion of CXL memory must sometimes be ZONE_NORMAL.
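For scale on reason 1: with 4KiB base pages and a 64-byte struct page (the
usual size on 64-bit configurations), the memmap costs 64 / 4096 = ~1.56%
of the hot-plugged capacity, i.e. roughly 16GiB of DRAM for 1TiB of CXL
memory, unless memmap_on_memory places it on the hot-plugged range itself.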
So there are cases where we want CXL memory to include ZONE_NORMAL,
but performance degrades if we allow _all_ kinds of kernel allocations
to be served from CXL memory.
For ideal performance, it would be beneficial to either:
1) Restrict certain types of kernel memory (e.g. page tables, kernel
stacks, slabs) from being allocated from the slow tier, or
2) Allow migrating certain types of kernel memory from the slow tier to
the fast tier.
At LSF/MM/BPF, I would like to discuss potential directions for addressing
this problem: enabling CXL memory while minimizing its performance cost.
Restricting certain types of kernel allocations from slow tier
==============================================================
We could restrict some kernel allocations to the fast tier by passing a
nodemask to __alloc_pages() (with only fast-tier nodes set) or by using a
GFP flag like __GFP_FAST_TIER that does the same thing.
This keeps those allocations off the slow tier and thus avoids the
performance penalty of CXL's higher access latency.
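As a rough illustration (a minimal sketch, not a concrete proposal):
assuming a hypothetical fast_tier_nodes nodemask populated with DRAM
nodes at boot, and the __alloc_pages(gfp, order, preferred_nid, nodemask)
signature the kernel has used in recent releases, a fast-tier-only
allocation helper could look like:

  #include <linux/gfp.h>
  #include <linux/nodemask.h>
  #include <linux/topology.h>

  /* Hypothetical: nodemask of fast-tier (DRAM) nodes, filled at boot. */
  static nodemask_t fast_tier_nodes;

  /*
   * Allocate pages from the fast tier only, falling back to a normal
   * allocation when no fast tier is known.
   */
  static struct page *alloc_pages_fast_tier(gfp_t gfp, unsigned int order)
  {
          if (nodes_empty(fast_tier_nodes))
                  return alloc_pages(gfp, order);

          return __alloc_pages(gfp, order, numa_node_id(),
                               &fast_tier_nodes);
  }

A __GFP_FAST_TIER flag would push the same nodemask restriction into the
page allocator itself, so individual callers (page table, kernel stack,
and slab allocation sites) would not each need to be modified.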
However, binding all leaf page tables to the fast tier might not be ideal,
due to 1) increased latency from premature reclamation
and 2) premature OOM kills [1].
Migrating certain types of kernel allocations from slow to fast tier
====================================================================
Rather than binding kernel allocations to the fast tier and risking
premature reclamation and OOM kills, policies that migrate kernel pages
may be more effective, such as:
- Migrating page tables to the fast tier, triggered by data-page
  promotion [1] (sketched after this list)
- Migrating to the fast tier when memory pressure is low:
  - Migrating slab movable objects [2]
  - Migrating kernel stacks (if that's feasible)
although this sounds more intrusive, and we would need robust policies
that do not regress traditional (non-tiered) memory systems.
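To make the page-table item concrete, here is a minimal sketch of just
the trigger point. node_is_toptier() is an existing memory-tiering
helper; pgtable_promotion_queue_add() is hypothetical and stands in for
the hard part, actually moving a page-table page (detaching it, copying
it, and switching the PMD entry atomically under the page table lock):

  #include <linux/mm.h>
  #include <linux/memory-tiers.h>

  /* Hypothetical: defer the actual page-table migration to a worker. */
  void pgtable_promotion_queue_add(struct mm_struct *mm, pmd_t *pmd,
                                   struct page *ptepage);

  /*
   * Called (hypothetically) from the data-page promotion path: if the
   * leaf page table mapping a just-promoted page still lives on a
   * slow-tier node, queue it for promotion too.
   */
  static void maybe_promote_pte_page(struct mm_struct *mm, pmd_t *pmd)
  {
          struct page *ptepage = pmd_page(*pmd);

          if (!node_is_toptier(page_to_nid(ptepage)))
                  pgtable_promotion_queue_add(mm, pmd, ptepage);
  }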
Any opinions would be appreciated.
Thanks!
[1] https://dl.acm.org/doi/10.1145/3459898.3463907
[2] https://lore.kernel.org/linux-mm/20190411013441.5415-1-tobin@kernel.org