From: Alistair Popple <apopple@nvidia.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
nouveau@lists.freedesktop.org,
"Andrew Morton" <akpm@linux-foundation.org>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Jonathan Corbet" <corbet@lwn.net>, "Alex Shi" <alexs@kernel.org>,
"Yanteng Si" <si.yanteng@linux.dev>,
"Karol Herbst" <kherbst@redhat.com>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Jann Horn" <jannh@google.com>,
"Pasha Tatashin" <pasha.tatashin@soleen.com>,
"Peter Xu" <peterx@redhat.com>,
"Jason Gunthorpe" <jgg@nvidia.com>
Subject: Re: [PATCH v1 04/12] mm/rmap: implement make_device_exclusive() using folio_walk instead of rmap walk
Date: Thu, 30 Jan 2025 17:11:49 +1100
Message-ID: <7tzcpx23vufmp5cxutnzhjgdj7kwqrw5drwochpv5ern7zknhj@h2s6y2qjbr3f>
In-Reply-To: <20250129115411.2077152-5-david@redhat.com>

On Wed, Jan 29, 2025 at 12:54:02PM +0100, David Hildenbrand wrote:
> We require a writable PTE and only support anonymous folios: we can only
> have exactly one PTE pointing at that page, which we can just look up
> using a folio walk, avoiding the rmap walk and the anon VMA lock.
>
> So let's stop doing an rmap walk and perform a folio walk instead, so we
> can easily just modify a single PTE and avoid relying on rmap/mapcounts.
>
> We now effectively work on a single PTE instead of multiple PTEs of
> a large folio, allowing for conversion of individual PTEs from
> non-exclusive to device-exclusive -- note that the other way always
> worked on single PTEs.
>
> We can drop the MMU_NOTIFY_EXCLUSIVE MMU notifier call and document why
> it is not required: if there is already a device-exclusive entry, GUP
> will not find a present PTE, will trigger a fault, and will end up in
> remove_device_exclusive_entry(), which already takes care of the
> MMU_NOTIFY_EXCLUSIVE call.

I will have to look at this a bit more closely tomorrow, but this doesn't
seem right to me. We may be transitioning from a present PTE (i.e. a
writable anonymous mapping) to a non-present PTE (i.e. a device-exclusive
entry), and therefore any secondary processors (e.g. other GPUs, IOMMUs,
etc.) will need to update their copies of the PTE. So I think the
notifier call is still needed.
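
To illustrate what I have in mind, here is a rough sketch along the lines
of the mmu_notifier calls in the rmap-walk code this patch removes (names
taken from the deleted hunk below, adapted to the single-PTE case;
untested):

	struct mmu_notifier_range range;

	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
				      mm, addr, addr + PAGE_SIZE, owner);
	mmu_notifier_invalidate_range_start(&range);

	/*
	 * The ptep_clear_flush() and the set_pte_at() of the
	 * device-exclusive entry would go here, between the
	 * start and end calls.
	 */

	mmu_notifier_invalidate_range_end(&range);

That way secondary MMUs are told to drop their copies of the PTE before
the device-exclusive entry becomes visible.
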
> Note that the PTE is
> always writable, and we can always create a writable-device-exclusive
> entry.
>
> With this change, device-exclusive is fully compatible with THPs /
> large folios. We still require PMD-sized THPs to get PTE-mapped, and
> supporting PMD-mapped THP (without the PTE-remapping) is a different
> endeavour that might not be worth it at this point.
>
> This gets rid of the "folio_mapcount()" usage and lets us fix ordinary
> rmap walks (migration/swapout) next. Spell out that messing with the
> mapcount is wrong and must be fixed.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/rmap.c | 188 ++++++++++++++++--------------------------------------
> 1 file changed, 55 insertions(+), 133 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 676df4fba5b0..49ffac6d27f8 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2375,131 +2375,6 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
> }
>
> #ifdef CONFIG_DEVICE_PRIVATE
> -struct make_exclusive_args {
> - struct mm_struct *mm;
> - unsigned long address;
> - void *owner;
> - bool valid;
> -};
> -
> -static bool page_make_device_exclusive_one(struct folio *folio,
> - struct vm_area_struct *vma, unsigned long address, void *priv)
> -{
> - struct mm_struct *mm = vma->vm_mm;
> - DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> - struct make_exclusive_args *args = priv;
> - pte_t pteval;
> - struct page *subpage;
> - bool ret = true;
> - struct mmu_notifier_range range;
> - swp_entry_t entry;
> - pte_t swp_pte;
> - pte_t ptent;
> -
> - mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
> - vma->vm_mm, address, min(vma->vm_end,
> - address + folio_size(folio)),
> - args->owner);
> - mmu_notifier_invalidate_range_start(&range);
> -
> - while (page_vma_mapped_walk(&pvmw)) {
> - /* Unexpected PMD-mapped THP? */
> - VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> -
> - ptent = ptep_get(pvmw.pte);
> - if (!pte_present(ptent)) {
> - ret = false;
> - page_vma_mapped_walk_done(&pvmw);
> - break;
> - }
> -
> - subpage = folio_page(folio,
> - pte_pfn(ptent) - folio_pfn(folio));
> - address = pvmw.address;
> -
> - /* Nuke the page table entry. */
> - flush_cache_page(vma, address, pte_pfn(ptent));
> - pteval = ptep_clear_flush(vma, address, pvmw.pte);
> -
> - /* Set the dirty flag on the folio now the pte is gone. */
> - if (pte_dirty(pteval))
> - folio_mark_dirty(folio);
> -
> - /*
> - * Check that our target page is still mapped at the expected
> - * address.
> - */
> - if (args->mm == mm && args->address == address &&
> - pte_write(pteval))
> - args->valid = true;
> -
> - /*
> - * Store the pfn of the page in a special migration
> - * pte. do_swap_page() will wait until the migration
> - * pte is removed and then restart fault handling.
> - */
> - if (pte_write(pteval))
> - entry = make_writable_device_exclusive_entry(
> - page_to_pfn(subpage));
> - else
> - entry = make_readable_device_exclusive_entry(
> - page_to_pfn(subpage));
> - swp_pte = swp_entry_to_pte(entry);
> - if (pte_soft_dirty(pteval))
> - swp_pte = pte_swp_mksoft_dirty(swp_pte);
> - if (pte_uffd_wp(pteval))
> - swp_pte = pte_swp_mkuffd_wp(swp_pte);
> -
> - set_pte_at(mm, address, pvmw.pte, swp_pte);
> -
> - /*
> - * There is a reference on the page for the swap entry which has
> - * been removed, so shouldn't take another.
> - */
> - folio_remove_rmap_pte(folio, subpage, vma);
> - }
> -
> - mmu_notifier_invalidate_range_end(&range);
> -
> - return ret;
> -}
> -
> -/**
> - * folio_make_device_exclusive - Mark the folio exclusively owned by a device.
> - * @folio: The folio to replace page table entries for.
> - * @mm: The mm_struct where the folio is expected to be mapped.
> - * @address: Address where the folio is expected to be mapped.
> - * @owner: passed to MMU_NOTIFY_EXCLUSIVE range notifier callbacks
> - *
> - * Tries to remove all the page table entries which are mapping this
> - * folio and replace them with special device exclusive swap entries to
> - * grant a device exclusive access to the folio.
> - *
> - * Context: Caller must hold the folio lock.
> - * Return: false if the page is still mapped, or if it could not be unmapped
> - * from the expected address. Otherwise returns true (success).
> - */
> -static bool folio_make_device_exclusive(struct folio *folio,
> - struct mm_struct *mm, unsigned long address, void *owner)
> -{
> - struct make_exclusive_args args = {
> - .mm = mm,
> - .address = address,
> - .owner = owner,
> - .valid = false,
> - };
> - struct rmap_walk_control rwc = {
> - .rmap_one = page_make_device_exclusive_one,
> - .done = folio_not_mapped,
> - .anon_lock = folio_lock_anon_vma_read,
> - .arg = &args,
> - };
> -
> - rmap_walk(folio, &rwc);
> -
> - return args.valid && !folio_mapcount(folio);
> -}
> -
> /**
> * make_device_exclusive() - Mark an address for exclusive use by a device
> * @mm: mm_struct of associated target process
> @@ -2530,9 +2405,12 @@ static bool folio_make_device_exclusive(struct folio *folio,
> struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
> void *owner, struct folio **foliop)
> {
> - struct folio *folio;
> + struct folio *folio, *fw_folio;
> + struct vm_area_struct *vma;
> + struct folio_walk fw;
> struct page *page;
> - long npages;
> + swp_entry_t entry;
> + pte_t swp_pte;
>
> mmap_assert_locked(mm);
>
> @@ -2540,12 +2418,16 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
> * Fault in the page writable and try to lock it; note that if the
> * address would already be marked for exclusive use by the device,
> * the GUP call would undo that first by triggering a fault.
> + *
> + * If any other device would already map this page exclusively, the
> + * fault will trigger a conversion to an ordinary
> + * (non-device-exclusive) PTE and issue a MMU_NOTIFY_EXCLUSIVE.
> */
> - npages = get_user_pages_remote(mm, addr, 1,
> - FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
> - &page, NULL);
> - if (npages != 1)
> - return ERR_PTR(npages);
> + page = get_user_page_vma_remote(mm, addr,
> + FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
> + &vma);
> + if (IS_ERR(page))
> + return page;
> folio = page_folio(page);
>
> if (!folio_test_anon(folio) || folio_test_hugetlb(folio)) {
> @@ -2558,11 +2440,51 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
> return ERR_PTR(-EBUSY);
> }
>
> - if (!folio_make_device_exclusive(folio, mm, addr, owner)) {
> + /*
> + * Let's do a second walk and make sure we still find the same page
> + * mapped writable. If we don't find what we expect, we will trigger
> + * GUP again to fix it up. Note that a page of an anonymous folio can
> + * only be mapped writable using exactly one page table mapping
> + * ("exclusive"), so there cannot be other mappings.
> + */
> + fw_folio = folio_walk_start(&fw, vma, addr, 0);
> + if (fw_folio != folio || fw.page != page ||
> + fw.level != FW_LEVEL_PTE || !pte_write(fw.pte)) {
> + if (fw_folio)
> + folio_walk_end(&fw, vma);
> folio_unlock(folio);
> folio_put(folio);
> return ERR_PTR(-EBUSY);
> }
> +
> + /* Nuke the page table entry so we get the uptodate dirty bit. */
> + flush_cache_page(vma, addr, page_to_pfn(page));
> + fw.pte = ptep_clear_flush(vma, addr, fw.ptep);
> +
> + /* Set the dirty flag on the folio now the pte is gone. */
> + if (pte_dirty(fw.pte))
> + folio_mark_dirty(folio);
> +
> + /*
> + * Store the pfn of the page in a special device-exclusive non-swap pte.
> + * do_swap_page() will trigger the conversion back while holding the
> + * folio lock.
> + */
> + entry = make_writable_device_exclusive_entry(page_to_pfn(page));
> + swp_pte = swp_entry_to_pte(entry);
> + if (pte_soft_dirty(fw.pte))
> + swp_pte = pte_swp_mksoft_dirty(swp_pte);
> + /* The pte is writable, uffd-wp does not apply. */
> + set_pte_at(mm, addr, fw.ptep, swp_pte);
> +
> + /*
> + * TODO: The device-exclusive non-swap PTE holds a folio reference but
> + * does not count as a mapping (mapcount), which is wrong and must be
> + * fixed, otherwise RMAP walks don't behave as expected.
> + */
> + folio_remove_rmap_pte(folio, page, vma);
> +
> + folio_walk_end(&fw, vma);
> *foliop = folio;
> return page;
> }
> --
> 2.48.1
>
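
For completeness, my reading of the calling convention of the new
make_device_exclusive() in this patch, as a hand-written sketch (not
lifted from an existing caller; error handling trimmed):

	struct folio *folio;
	struct page *page;

	mmap_read_lock(mm);
	page = make_device_exclusive(mm, addr, owner, &folio);
	if (IS_ERR(page)) {
		mmap_read_unlock(mm);
		return PTR_ERR(page);
	}

	/* Mirror the PFN into the device page tables here. */

	folio_unlock(folio);	/* The folio comes back locked ... */
	folio_put(folio);	/* ... with an extra reference held. */
	mmap_read_unlock(mm);

On success the page stays mapped by a device-exclusive entry until the
next CPU fault on that address converts it back via do_swap_page().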