From: Simona Vetter <simona.vetter@ffwll.ch>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
nouveau@lists.freedesktop.org,
"Andrew Morton" <akpm@linux-foundation.org>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Jonathan Corbet" <corbet@lwn.net>, "Alex Shi" <alexs@kernel.org>,
"Yanteng Si" <si.yanteng@linux.dev>,
"Karol Herbst" <kherbst@redhat.com>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Jann Horn" <jannh@google.com>,
"Pasha Tatashin" <pasha.tatashin@soleen.com>,
"Peter Xu" <peterx@redhat.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Jason Gunthorpe" <jgg@nvidia.com>
Subject: Re: [PATCH v1 12/12] mm/rmap: keep mapcount untouched for device-exclusive entries
Date: Thu, 30 Jan 2025 14:19:50 +0100
Message-ID: <Z5t8dkujVv7xZiuV@phenom.ffwll.local>
In-Reply-To: <887df26d-b8bb-48df-af2f-21b220ef22e6@redhat.com>
On Thu, Jan 30, 2025 at 12:42:26PM +0100, David Hildenbrand wrote:
> On 30.01.25 11:37, Simona Vetter wrote:
> > On Wed, Jan 29, 2025 at 12:54:10PM +0100, David Hildenbrand wrote:
> > > Now that the conversion to device-exclusive no longer performs an
> > > rmap walk and the main page_vma_mapped_walk() users were taught to
> > > properly handle non-swap entries, let's treat device-exclusive entries
> > > just as if they were present, similar to how we already handle
> > > device-private entries.
> >
> > So the reason for handling device-private entries in rmap is so that
> > drivers can rely on try_to_migrate and related code to invalidate all the
> > various ptes even for device private memory. Otherwise no one should hit
> > this path, at least if my understanding is correct.
>
> Right, device-private entries probably only happen to be seen on the
> migration path so far.
>
> >
> > So I'm very much worried about opening a can of worms here because I think
> > this adds a genuine new case to all the various callers.
>
> To be clear: it can all already happen.
>
> Assume you have a THP (or any mTHP today). You can easily trigger the
> scenario that folio_mapcount() != 0 with active device-exclusive entries,
> and you start doing rmap walks and stumble over these device-exclusive
> entries and *not* handle them properly. Note that more and more systems are
> configured to just give you THP unless you explicitly opted out early using
> MADV_NOHUGEPAGE.
>
> Note that b756a3b5e7ea added the hunk below, which makes the rmap code still
> walk these device-exclusive entries, but didn't actually update the rmap
> walkers:
>
> @@ -102,7 +104,8 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
>
> /* Handle un-addressable ZONE_DEVICE memory */
> entry = pte_to_swp_entry(*pvmw->pte);
> - if (!is_device_private_entry(entry))
> + if (!is_device_private_entry(entry) &&
> + !is_device_exclusive_entry(entry))
> return false;
>
> pfn = swp_offset(entry);
>
> That was the right thing to do, because they resemble PROT_NONE entries and
> not migration entries or anything else that doesn't hold a folio reference.
Yeah, I got that part. What I meant is that doubling down on this needs a
full audit and cannot rely on "we've already had device-private entries
going through these paths for much longer", which was the impression I
got. I guess that audit worked out, thanks for doing it below :-)
And from my very rough understanding of mm, at least around all this gpu
stuff, tracking device-exclusive mappings like real cpu mappings makes
sense: they do indeed act like PROT_NONE with some magic to restore access
on fault.
I do wonder a bit, though, what else isn't properly tracked because it
should behave like PROT_NONE but doesn't. I guess we'll find those cases as
we hit them :-/
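Roughly how I picture that restore-on-fault part, paraphrasing from memory
rather than quoting the actual mm/memory.c code, so the exact spelling below
is just my sketch:
        /* do_swap_page(): the pte is non-present, decode the swap entry */
        entry = pte_to_swp_entry(vmf->orig_pte);
        if (is_device_exclusive_entry(entry)) {
                /*
                 * The folio stays referenced and "mapped" for all practical
                 * purposes, we only revoked CPU access: notify the driver
                 * and put a present pte back in place.
                 */
                vmf->page = pfn_swap_entry_to_page(entry);
                ret = remove_device_exclusive_entry(vmf);
        }
Which is exactly PROT_NONE-with-restore behaviour, just spelled as a
non-present swap entry.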
> Fortunately, it's only the page_vma_mapped_walk() callers that need care.
>
> mm/rmap.c is handled with this series.
>
> mm/page_vma_mapped.c should work already.
>
> mm/migrate.c: does not apply
>
> mm/page_idle.c: likely should just skip !pte_present().
>
> mm/ksm.c might be fine, but likely we should just reject !pte_present().
>
> kernel/events/uprobes.c likely should reject !pte_present().
>
> mm/damon/paddr.c likely should reject !pte_present().
>
>
> I briefly thought about a flag to indicate whether a page_vma_mapped_walk()
> caller supports these non-present entries, but likely just fixing them up is
> easier+cleaner.
>
> Now that I've looked at them all, I might just write patches for them.
>
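Just to check that I understand the shape of those fixups: for a walker that
simply doesn't care about such entries I'd expect roughly the following
(untested sketch of mine, not code from this series):
        while (page_vma_mapped_walk(&pvmw)) {
                /*
                 * pvmw.pte is only set for pte-mapped pages; skip anything
                 * that isn't actually present (device-exclusive, ...).
                 */
                if (pvmw.pte && !pte_present(ptep_get(pvmw.pte)))
                        continue;
                /* ... existing handling of present ptes ... */
        }
or page_vma_mapped_walk_done() + break for walkers that want to give up
entirely instead of skipping.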
> >
> > > This fixes swapout/migration of folios with device-exclusive entries.
> > >
> > > Likely there are still some page_vma_mapped_walk() callers that are not
> > > fully prepared for these entries, and where we simply want to refuse
> > > !pte_present() entries. They have to be fixed independently; the ones in
> > > mm/rmap.c are prepared.
> >
> > The other worry is that maybe breaking migration is a feature, at least in
> > parts.
>
> Maybe breaking swap and migration is a feature in some reality; in this
> reality it's a BUG :)
Oh yeah I agree on those :-)
> > If thp constantly reassembles a pmd entry because hey all the
> > memory is contig and userspace allocated a chunk of memory to place
> > atomics that alternate between cpu and gpu nicely separated by 4k pages,
> > then we'll thrash around invalidating ptes to no end. So might be more
> > fallout here.
>
> khugepaged will back off once it sees an exclusive entry, so collapsing
> could only happen once everything is non-exclusive. See
> __collapse_huge_page_isolate() as an example.
Ah ok. I think it might be good to add that to the commit message, so that
people who don't understand mm deeply (like me) aren't worried when they
stumble over this change again in the future while digging around.
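For the record, the part that reassured me is that the isolate path bails on
any non-present pte. Paraphrased from memory rather than quoted from the
current code, it's roughly:
        /* __collapse_huge_page_isolate(), sketch */
        pte_t pteval = ptep_get(_pte);
        if (!pte_present(pteval)) {
                /* also covers device-exclusive swap entries */
                result = SCAN_PTE_NON_PRESENT;
                goto out;
        }
so a pending device-exclusive entry stops the collapse until access is
restored.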
> It's really only page_vma_mapped_walk() callers that are affected by this
> change, not any other page table walkers.
I guess my mm understanding is just not up to that, but I couldn't figure
out why looking only at page_vma_mapped_walk() callers is good enough?
> It's unfortunate that we now have to fix it all up; that original series
> should never have been merged that way.
Yeah looks like a rather big miss.
-Sima
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch