From: Balbir Singh <balbirs@nvidia.com>
To: Alistair Popple <apopple@nvidia.com>, linux-mm@kvack.org
Cc: lsf-pc@lists.linux-foundation.org, david@redhat.com,
willy@infradead.org, ziy@nvidia.com, jhubbard@nvidia.com,
jgg@nvidia.com
Subject: Re: [LSF/MM/BPF TOPIC] The future of ZONE_DEVICE pages
Date: Fri, 31 Jan 2025 14:29:40 +1100
Message-ID: <da7c589e-d512-450d-b6f8-188dc6ef038c@nvidia.com>
In-Reply-To: <ytwoxegtun3j2454zi4vybahqbvdhwmspq6eg2odvupqw2m2yr@cdbgu6sjfqze>
On 1/31/25 13:59, Alistair Popple wrote:
> I have a few topics that I would like to discuss around ZONE_DEVICE pages
> and their current and future usage in the kernel. Generally these pages are
> used to represent various forms of device memory (PCIe BAR space, coherent
> accelerator memory, persistent memory, unaddressable device memory). All
> of these require special treatment by the core MM so many features must be
> implemented specifically for ZONE_DEVICE pages.
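
For reference, those use cases map onto the memory_type variants in
include/linux/memremap.h (comments paraphrased; current as of recent
kernels):

enum memory_type {
	/* 0 is reserved to catch uninitialized types */
	MEMORY_DEVICE_PRIVATE = 1,	/* unaddressable device memory */
	MEMORY_DEVICE_COHERENT,		/* coherent accelerator memory */
	MEMORY_DEVICE_FS_DAX,		/* persistent memory / FS DAX */
	MEMORY_DEVICE_GENERIC,		/* e.g. device-dax */
	MEMORY_DEVICE_PCI_P2PDMA,	/* PCIe BAR space for P2P DMA */
};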
>
> I would like to get feedback on several ideas I've had for a while:
>
> Large page migration for ZONE_DEVICE pages
> ==========================================
>
> Currently large ZONE_DEVICE pages only exist for persistent memory use cases
> (DAX, FS DAX). This involves a special reference counting scheme which I hope to
> have fixed[1] by the time of LSF/MM/BPF. Fixing this allows for other higher
> order ZONE_DEVICE folios.
>
> Specifically I would like to introduce the possibility of migrating large CPU
> folios to unaddressable (DEVICE_PRIVATE) or coherent (DEVICE_COHERENT) memory.
> The current interfaces (migrate_vma) don't allow that as they require all folios
> to be split.
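
For context, the existing flow looks roughly like this (a condensed
sketch against recent kernels; my_alloc_device_page() and
my_copy_to_device() are stand-ins for driver-specific code, and the
on-stack arrays assume a small range for brevity). Note that src[] and
dst[] hold one migrate PFN entry per PAGE_SIZE page, which is what
forces large folios to be split up front:

static int migrate_range_to_device(struct vm_area_struct *vma,
				   unsigned long start, unsigned long end)
{
	/* One slot per PAGE_SIZE page; assumes range <= 64 pages */
	unsigned long src[64] = { 0 }, dst[64] = { 0 };
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src,
		.dst	= dst,
		.flags	= MIGRATE_VMA_SELECT_SYSTEM,
	};
	unsigned long i;

	/* Walks and unmaps the range; large folios get split here today */
	if (migrate_vma_setup(&args))
		return -EBUSY;

	for (i = 0; i < args.npages; i++) {
		struct page *dpage;

		if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		dpage = my_alloc_device_page();	/* driver-specific */
		if (!dpage)
			continue;	/* leave this page where it is */
		args.dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	/* The driver copies the data itself, e.g. with a DMA engine */
	my_copy_to_device(&args);	/* driver-specific */

	migrate_vma_pages(&args);	/* install the destination pages */
	migrate_vma_finalize(&args);
	return 0;
}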
>
> Some of the issues are:
>
> 1. What should the interface look like?
>
> These are non-lru pages, so likely there is overlap with "non-lru page migration
> in a memdesc world"[2]
>
> 2. How do we allow merging/splitting of pages during migration?
>
> This is necessary because when migrating back from device memory there may not
> be enough large CPU pages available (see the sketch after this list).
>
> 3. Any other issues?
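
On (2), presumably something like the fallback below is wanted. This is
a hypothetical sketch of what it might look like once large
DEVICE_PRIVATE folios exist -- split_folio() cannot be applied to
device-private folios today, and the my_migrate_*() helpers are
invented:

static int migrate_device_folio_to_ram(struct folio *src)
{
	struct folio *dst;

	/* Try to keep the allocation large on the way back to the CPU */
	dst = folio_alloc(GFP_HIGHUSER_MOVABLE | __GFP_NORETRY,
			  folio_order(src));
	if (dst)
		return my_migrate_folio(src, dst);	/* invented helper */

	/*
	 * No large CPU folio available: split the device folio and
	 * migrate the resulting order-0 folios individually.
	 */
	if (split_folio(src))
		return -EBUSY;
	return my_migrate_base_folios(src);		/* invented helper */
}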
I'd definitely be interested in the above topic. In general, I see a lot of
overlap between the folio and struct page code; folio is a good abstraction,
and I am hoping that some day we will just have folios to abstract the size
of a page.
>
> [1] - https://lore.kernel.org/linux-mm/cover.11189864684e31260d1408779fac9db80122047b.1736488799.git-series.apopple@nvidia.com/
> [2] - https://lore.kernel.org/linux-mm/2612ac8a-d0a9-452b-a53d-75ffc6166224@redhat.com/
>
> File-backed DEVICE_PRIVATE/COHERENT pages
> =========================================
>
> Currently DEVICE_PRIVATE and DEVICE_COHERENT pages are only supported for
> private anonymous memory. This prevents devices from having local access to
> shared or file-backed mappings, forcing them instead to rely on remote DMA
> access, which limits performance.
>
> I have been prototyping allowing ZONE_DEVICE pages in the page cache with
> a callback when the CPU requires access. This approach seems promising and
> relatively straightforward, but I would like some early feedback on either this
> or alternate approaches that I should investigate.
>
I assume this is for mapped page cache pages?
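If so, something along these lines? This is pure guesswork at the API,
since the prototype isn't posted yet -- the lookup helper and the
migrate_to_cpu() hook below are invented names, assuming a hypothetical
extension of dev_pagemap_ops:

static struct folio *filemap_get_cpu_folio(struct address_space *mapping,
					   pgoff_t index)
{
	struct folio *folio = filemap_get_folio(mapping, index);

	if (IS_ERR(folio))
		return folio;

	if (folio_is_device_private(folio)) {
		struct dev_pagemap *pgmap = folio->page.pgmap;

		/*
		 * Invented callback: ask the owning driver to copy the
		 * data to a CPU folio and replace the page cache entry,
		 * much like migrate_to_ram() does for anonymous faults.
		 */
		folio = pgmap->ops->migrate_to_cpu(folio);
	}
	return folio;
}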
> Combining P2PDMA and DEVICE_PRIVATE pages
> =========================================
>
> Currently device memory that cannot be directly accessed via the CPU can be
> represented by DEVICE_PRIVATE pages allowing it to be mapped and treated like
> a normal virtual page by userspace. Many devices also support accessing device
> memory directly from the CPU via a PCIe BAR.
>
> This access requires a P2PDMA page, meaning there are potentially two pages
> tracking the same piece of physical memory. This seems not only wasteful but
> fraught - for example, device drivers need to keep the two pages' lifetimes in
> sync. I
> would like to discuss ways of solving this.
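
To make the duplication concrete, this is roughly what a driver ends up
doing today with the existing APIs (the BAR number, resource name and
surrounding driver structure are made up). The same VRAM gets two
independent sets of struct pages whose lifetimes the driver must keep in
sync by hand:

static int my_drv_init_vram(struct pci_dev *pdev, size_t vram_size)
{
	struct my_drv *drv = pci_get_drvdata(pdev);
	struct resource *res;
	void *addr;

	/* Set #1: DEVICE_PRIVATE pages for migrate_vma-style migration */
	res = request_free_mem_region(&iomem_resource, vram_size,
				      "my_drv-vram");
	if (IS_ERR(res))
		return PTR_ERR(res);

	drv->pgmap.type = MEMORY_DEVICE_PRIVATE;
	drv->pgmap.range.start = res->start;
	drv->pgmap.range.end = res->end;
	drv->pgmap.nr_range = 1;
	drv->pgmap.ops = &my_drv_pgmap_ops;	/* supplies migrate_to_ram() */
	drv->pgmap.owner = drv;
	addr = devm_memremap_pages(&pdev->dev, &drv->pgmap);
	if (IS_ERR(addr))
		return PTR_ERR(addr);

	/* Set #2: P2PDMA pages over the BAR exposing the same VRAM */
	return pci_p2pdma_add_resource(pdev, 0 /* bar */, vram_size, 0);
}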
+1 for the topics
Balbir Singh