From: Gregory Price <gourry@gourry.net>
To: "Huang, Ying" <ying.huang@linux.alibaba.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	nehagholkar@meta.com, abhishekd@meta.com, kernel-team@meta.com,
	david@redhat.com, nphamcs@gmail.com, akpm@linux-foundation.org,
	hannes@cmpxchg.org, kbusch@meta.com
Subject: Re: [RFC v2 PATCH 0/5] Promotion of Unmapped Page Cache Folios.
Date: Fri, 27 Dec 2024 14:09:50 -0500
Message-ID: <Z277fuEdZldMdmQA@gourry-fedora-PF4VCD3F>
In-Reply-To: <Z27KdHq2cwPg0w7S@gourry-fedora-PF4VCD3F>

On Fri, Dec 27, 2024 at 10:40:36AM -0500, Gregory Price wrote:
> > Can we measure the largest improvement?  For example, run the benchmark
> > with all file pages in DRAM and CXL.mem via numa binding, and compare.
> 
> I can probably come up with something, will rework some stuff.
>

So I did as you suggested: I made a program that allocates a 16GB
buffer, initializes it, then membinds itself to node1 before accessing
the file to force it into pagecache. Then I ran a bunch of tests.
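
For reference, the test is roughly the following. This is a minimal
sketch of what I described above: the file path, file size, chunk size,
and loop structure are assumptions, and error handling is omitted.

  /* build: gcc -O2 test.c -o test -lnuma */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <time.h>
  #include <numa.h>

  #define BUF_SZ (16UL << 30)   /* 16GB local-DRAM buffer */
  #define CHUNK  (2UL << 20)    /* read the file in 2MB chunks */

  int main(void)
  {
          /* allocate and touch 16GB so it lands on the local node */
          char *buf = malloc(BUF_SZ);
          memset(buf, 1, BUF_SZ);

          /* membind to node1 so the file lands in far-node pagecache */
          struct bitmask *nodes = numa_allocate_nodemask();
          numa_bitmask_setbit(nodes, 1);
          numa_set_membind(nodes);

          int fd = open("/tmp/testfile", O_RDONLY); /* path assumed */
          char *chunk = malloc(CHUNK);
          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          while (read(fd, chunk, CHUNK) > 0)
                  ;
          clock_gettime(CLOCK_MONOTONIC, &t1);
          printf("Read loop took %.2f seconds\n",
                 (t1.tv_sec - t0.tv_sec) +
                 (t1.tv_nsec - t0.tv_nsec) / 1e9);
          return 0;
  }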

Completely unexpected result: ~25% overhead from an inexplicable source.

baseline - no membind()
./test
Read loop took 0.93 seconds

drop caches (echo 3 > /proc/sys/vm/drop_caches)

./test - w/ membind(1) just before file open
Read loop took 1.16 seconds

node 1 size: 262144 MB
node 1 free: 245756 MB <- file confirmed in cache


kill and relaunch without membind to avoid any funny business
./test
Read loop took 1.16 seconds

enable promotion
Read loop took 3.37 seconds <- migration overhead
... snip ...
Read loop took 1.17 seconds <- stabilizes here

node 1 size: 262144 MB
node 1 free: 262144 MB <- pagecache promoted

Absolutely bizarre result: there is 0% CXL usage occurring, but the
overhead we originally measured is still present.

This overhead persists even if I do the following (knobs sketched below)
  - disable pagecache promotion
  - disable numa_balancing
  - offline CXL memory entirely
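
For reference, those steps were roughly the following. The pagecache
promotion knob is the sysfs entry this series adds (the exact path here
is an assumption on my part); the other two are standard interfaces:

  # disable pagecache promotion (knob added by this series; path assumed)
  echo 0 > /sys/kernel/mm/numa/pagecache_promotion_enabled

  # disable NUMA balancing
  echo 0 > /proc/sys/kernel/numa_balancing

  # offline a CXL-backed memory block (repeated per block)
  echo offline > /sys/devices/system/memory/memoryN/state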

This is actually pretty wild. I presume this must imply the folio flags
are mucked up after migration and we're incurring a bunch of overhead
on access for no reason. At the very least it doesn't appear to be
an isolated-folio issue; per /proc/vmstat:

nr_isolated_anon 0
nr_isolated_file 0

I'll have to dig into this further; I wonder if this happens with
mapped memory as well.

~Gregory


