From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Jordan Niethe <jniethe@nvidia.com>
Cc: linux-mm@kvack.org,  balbirs@nvidia.com,
	 matthew.brost@intel.com, akpm@linux-foundation.org,
	 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	 david@redhat.com,  ziy@nvidia.com, apopple@nvidia.com,
	 lorenzo.stoakes@oracle.com,  lyude@redhat.com, dakr@kernel.org,
	 airlied@gmail.com,  simona@ffwll.ch, rcampbell@nvidia.com,
	 mpenttil@redhat.com,  jgg@nvidia.com, willy@infradead.org,
	 linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org,
	 jgg@ziepe.ca,  Felix.Kuehling@amd.com, jhubbard@nvidia.com
Subject: Re: [PATCH v3 00/13] Remove device private pages from physical address space
Date: Thu, 29 Jan 2026 21:49:40 +0800
Message-ID: <875x8kbkaz.fsf@DESKTOP-5N7EMDA>
In-Reply-To: <20260123062309.23090-1-jniethe@nvidia.com> (Jordan Niethe's message of "Fri, 23 Jan 2026 17:22:56 +1100")

Hi, Jordan,

Jordan Niethe <jniethe@nvidia.com> writes:

> Introduction
> ------------
>
> The existing design of device private memory imposes limitations which
> render it non-functional for certain systems and configurations - this
> series removes those limitations. The issues are:
>
>   1) Limited available physical address space
>   2) Conflicts with the aarch64 mm implementation
>
> Limited available address space
> -------------------------------
>
> Device private memory is implemented by first reserving a region of the
> physical address space. This is a problem: the physical address space is
> not a resource that is directly under the kernel's control. Suitable
> physical address space is constrained by the underlying hardware and
> firmware and may not always be available.
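>
> As a rough sketch of the existing scheme (current kernel interfaces,
> simplified and with error handling omitted; the my_pgmap_ops name is
> made up), a driver today has to do something like:
>
> 	/* Carve a chunk out of the physical address space first. */
> 	res = request_free_mem_region(&iomem_resource, size,
> 				      "device-private");
>
> 	pgmap->type = MEMORY_DEVICE_PRIVATE;
> 	pgmap->range.start = res->start;
> 	pgmap->range.end = res->end;
> 	pgmap->nr_range = 1;
> 	pgmap->ops = &my_pgmap_ops;	/* driver-provided */
>
> 	/* The whole approach fails if no suitable physical address
> 	 * space could be found above. */
> 	memremap_pages(pgmap, numa_node_id());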
>
> Device private memory assumes that it will be able to reserve a device
> memory sized chunk of physical address space. However, nothing
> guarantees that this will succeed, and there are a number of factors
> that increase the likelihood of failure. We need to consider what else
> may exist in the physical address space. Certain VM configurations have
> been observed to place very large PCI windows immediately after RAM -
> large enough that there is no physical address space available at all
> for device private memory. This is more likely to occur on systems with
> a 43-bit physical address width, which have only 8 TiB of physical
> address space to begin with.
>
> The fundamental issue is that the physical address space is not a
> resource the kernel can rely on being able to allocate from at will.
>
> aarch64 issues
> --------------
>
> The current device private memory implementation has further issues on
> aarch64, where vmemmap is sized to cover RAM only. Adding device
> private pages outside of that range means that, for a device private
> page, pfn_to_page() will read beyond the end of the vmemmap region,
> leading to potential memory corruption. As a result, device private
> memory does not work reliably on aarch64 [0].
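>
> For reference, with CONFIG_SPARSEMEM_VMEMMAP the generic memory model
> resolves pfn_to_page() as a plain offset into the vmemmap array (see
> include/asm-generic/memory_model.h):
>
> 	#define __pfn_to_page(pfn)	(vmemmap + (pfn))
>
> so a device private PFN beyond the range covered by vmemmap produces a
> struct page pointer past the end of the mapped vmemmap region.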
>
> New implementation
> ------------------
>
> This series changes device private memory so that it does not require
> allocating physical address space, avoiding these problems. Instead of
> using the physical address space, we introduce a "device private
> address space" and allocate from there.
>
> A consequence of placing the device private pages outside of the
> physical address space is that they no longer have a PFN. However, it is
> still necessary to be able to look up a corresponding device private
> page from a device private PTE entry, which means that we still require
> some way to index into this device private address space. Instead of a
> PFN, device private pages use an offset into this device private address
> space to look up device private struct pages.
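>
> Purely as an illustration of the idea (this is not the interface the
> series adds; the helper and table names here are made up), such a
> lookup could conceptually look like:
>
> 	/* hypothetical: device private offsets index their own table
> 	 * of struct pages rather than the memmap */
> 	static struct page *
> 	dev_private_offset_to_page(unsigned long offset)
> 	{
> 		return xa_load(&device_private_pages, offset);
> 	}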
>
> The problem that then needs to be addressed is how to avoid confusing
> these device private offsets with PFNs. It is the limited usage of the
> device private pages themselves which makes this possible. A device
> private page is only used for userspace mappings, so we do not need to
> be concerned with it being used within the mm more broadly. This means
> that the only way the core kernel looks up these pages is via the page
> table, where their PTE already indicates whether it refers to a device
> private page via its swap type, e.g. SWP_DEVICE_WRITE. We can use this
> information to determine whether the PTE contains a PFN, which should
> be looked up in the page map, or a device private offset, which should
> be looked up elsewhere.
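>
> To make the existing distinction concrete, a page table walker already
> does something along these lines today (helpers from
> include/linux/swapops.h, simplified):
>
> 	swp_entry_t entry = pte_to_swp_entry(pte);
>
> 	if (is_device_private_entry(entry)) {
> 		/*
> 		 * The swap type already tells us this is a device
> 		 * private entry, so the core mm can treat the value
> 		 * encoded here differently from an ordinary PFN.
> 		 */
> 		page = pfn_swap_entry_to_page(entry);
> 	} else {
> 		/* normal swap/migration entry handling */
> 	}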
>
> This applies when we are creating PTE entries for device private pages -
> because they have their own type they must already be handled
> separately, so it is a small step to convert them to a device private
> PFN now too.
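>
> For example, inserting a device private PTE already goes through a
> dedicated path roughly like (helper names from current kernels,
> simplified):
>
> 	if (write)
> 		entry = make_writable_device_private_entry(pfn);
> 	else
> 		entry = make_readable_device_private_entry(pfn);
> 	set_pte_at(mm, addr, ptep, swp_entry_to_pte(entry));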
>
> The first part of the series updates callers where device private
> offsets might now be encountered to track this extra state.
>
> The last patch contains the bulk of the work, where we change how we
> convert between device private pages and device private offsets and
> introduce a new interface for allocating device private pages without
> the need to reserve physical address space.
>
> By removing the device private pages from the physical address space,
> this series also opens up the possibility of moving away from tracking
> device private memory using struct pages in the future. This is
> desirable because on systems with large amounts of memory these device
> private struct pages use a significant amount of memory and take a
> significant amount of time to initialize.
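>
> (For a rough sense of scale: with 64-byte struct pages and 4 KiB base
> pages that is 64/4096, i.e. about 1.6% of device memory, so a
> hypothetical 192 GiB device would need roughly 3 GiB of struct pages.)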

Now device private pages are quite different from other pages; they are
even in a separate address space.  IMHO, it may be better to make that
as explicit as possible.  For example, would it be a good idea to put
them in their own zone, such as ZONE_DEVICE_PRIVATE?  It does not seem
natural to put pages from different address spaces into one zone.  And
a dedicated zone may make them easier to distinguish from other pages.
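
Just as a sketch of the idea (ZONE_DEVICE_PRIVATE does not exist today,
so this is purely hypothetical), a check like is_device_private_page()
could then become a plain zone test,

	static inline bool is_device_private_page(const struct page *page)
	{
		return page_zonenum(page) == ZONE_DEVICE_PRIVATE;
	}

instead of going through the pgmap type behind ZONE_DEVICE.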

[snip]

---
Best Regards,
Huang, Ying


