From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Gregory Price <gourry@gourry.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
nehagholkar@meta.com, abhishekd@meta.com, kernel-team@meta.com,
david@redhat.com, nphamcs@gmail.com, akpm@linux-foundation.org,
hannes@cmpxchg.org, kbusch@meta.com
Subject: Re: [RFC v2 PATCH 0/5] Promotion of Unmapped Page Cache Folios.
Date: Sun, 22 Dec 2024 15:09:44 +0800
Message-ID: <87wmfsi47b.fsf@DESKTOP-5N7EMDA>
In-Reply-To: <Z2bVWWuGe0aiv-t_@gourry-fedora-PF4VCD3F> (Gregory Price's message of "Sat, 21 Dec 2024 09:48:57 -0500")
Gregory Price <gourry@gourry.net> writes:
> On Sat, Dec 21, 2024 at 01:18:04PM +0800, Huang, Ying wrote:
>> Gregory Price <gourry@gourry.net> writes:
>>
>> >
>> > Single-reader DRAM: ~16.0-16.4s
>> > Single-reader CXL (after demotion): ~16.8-17s
>>
>> The difference is trivial. This makes me wonder: why do we need this
>> patchset?
>>
>
> That's a 3-6% performance difference in this contrived case.
This is small too.
> We're working to test a real workload we know suffers from this
> problem, as it is long-running. Should be early in the new year, hopefully.
Good!
To demonstrate the maximum possible performance gain, we can use a pure
file read/write benchmark such as fio and run it on pure DRAM and on pure
CXL. The difference between the two is then the maximum possible
performance gain we can get.
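For reference, that upper-bound measurement could be sketched roughly as
below. The commands are illustrative only: node numbers assume node 0 is
DRAM and node 1 is CXL, /mnt/test is a placeholder path, and root is
needed for drop_caches.

```shell
# Upper-bound sketch: run the same buffered read workload with the page
# cache constrained to each tier, then compare the reported bandwidth.
# Assumptions: node 0 = DRAM, node 1 = CXL; /mnt/test is a placeholder.

# Drop caches so each run repopulates the page cache on the bound node.
echo 3 > /proc/sys/vm/drop_caches

# Pure-DRAM run: bind all allocations (including page cache fills
# triggered by this task) to node 0.
numactl --membind=0 fio --name=dram-read --rw=read --bs=4k --size=4g \
    --ioengine=psync --directory=/mnt/test

echo 3 > /proc/sys/vm/drop_caches

# Pure-CXL run: identical workload, allocations bound to node 1.
numactl --membind=1 fio --name=cxl-read --rw=read --bs=4k --size=4g \
    --ioengine=psync --directory=/mnt/test
```

The gap between the two bandwidth numbers bounds what promotion could
recover for this access pattern.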
>> > Next we turned promotion on with only a single reader running.
>> >
>> > Before promotions:
>> > Node 0 MemFree: 636478112 kB
>> > Node 0 FilePages: 59009156 kB
>> > Node 1 MemFree: 250336004 kB
>> > Node 1 FilePages: 14979628 kB
>>
>> Why are there so many file pages on node 1 even though there are a lot
>> of free pages on node 0? Did you move some file pages from node 0 to
>> node 1?
>>
>
> This was explicit and explained in the test notes:
>
> First we ran with promotion disabled to show consistent overhead as
> a result of forcing a file out to CXL memory. We first ran a single
> reader to see uncontended performance, launched many readers to force
> demotions, then dropped back to a single reader to observe.
>
> The goal here was to simply demonstrate functionality and stability.
Got it.
>> > After promotions:
>> > Node 0 MemFree: 632267268 kB
>> > Node 0 FilePages: 72204968 kB
>> > Node 1 MemFree: 262567056 kB
>> > Node 1 FilePages: 2918768 kB
>> >
>> > Single-reader (after_promotion): ~16.5s
>
> This represents a 2.5-6% speedup depending on the spread.
>
>> >
>> > numa_migrate_prep: 93 - time(3969867917) count(42576860)
>> > migrate_misplaced_folio_prepare: 491 - time(3433174319) count(6985523)
>> > migrate_misplaced_folio: 1635 - time(11426529980) count(6985523)
>> >
>> > Thoughts on a good throttling heuristic would be appreciated here.
>>
>> We already have a throttle mechanism. For example, you can use
>>
>> $ echo 100 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
>>
>> to limit the promotion throughput to under 100 MB/s for each DRAM
>> node.
>>
>
> Can easily piggyback on that, just wasn't sure if overloading it was
> an acceptable idea.
It's the recommended setup in the original PMEM promotion
implementation. Please check commit c959924b0dc5 ("memory tiering:
adjust hot threshold automatically").
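For completeness, the tiering setup that commit assumes looks roughly
like this. This is a sketch, not a prescription: it requires root and a
kernel built with NUMA balancing and memory tiering support, and the
value 2 corresponds to NUMA_BALANCING_MEMORY_TIERING.

```shell
# Enable NUMA balancing in memory-tiering mode (2 =
# NUMA_BALANCING_MEMORY_TIERING), which turns on hot-page promotion.
echo 2 > /proc/sys/kernel/numa_balancing

# Cap promotion throughput at 100 MB/s per DRAM node; with commit
# c959924b0dc5, the hot threshold is adjusted automatically to stay
# under this rate limit.
echo 100 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
```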
> Although since that promotion rate limit is also
> per-task (as far as I know, will need to read into it a bit more) this
> is probably fine.
It's not per-task. Please read the code, especially
should_numa_migrate_memory().
---
Best Regards,
Huang, Ying