From: David Hildenbrand <david@redhat.com>
To: Huan Yang <link@vivo.com>,
Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Rik van Riel <riel@surriel.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>,
Harry Yoo <harry.yoo@oracle.com>, Xu Xin <xu.xin16@zte.com.cn>,
Chengming Zhou <chengming.zhou@linux.dev>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Zi Yan <ziy@nvidia.com>,
Matthew Brost <matthew.brost@intel.com>,
Joshua Hahn <joshua.hahnjy@gmail.com>,
Rakie Kim <rakie.kim@sk.com>, Byungchul Park <byungchul@sk.com>,
Gregory Price <gourry@gourry.net>,
Ying Huang <ying.huang@linux.alibaba.com>,
Alistair Popple <apopple@nvidia.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Christian Brauner <brauner@kernel.org>,
Usama Arif <usamaarif642@gmail.com>, Yu Zhao <yuzhao@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/9] introduce PGTY_mgt_entry page_type
Date: Thu, 24 Jul 2025 10:59:18 +0200
Message-ID: <86516155-f2d9-4e8d-9d27-bdcb59e2d129@redhat.com>
In-Reply-To: <20250724084441.380404-1-link@vivo.com>
On 24.07.25 10:44, Huan Yang wrote:
> Summary
> ==
> This patchset reuses page_type to store the migrate entry count during
> the period from migrate entry setup to removal, enabling accelerated VMA
> traversal when removing migrate entries. It follows the same principle
> as the early termination that try_to_migrate already does once the folio
> is fully unmapped.
I absolutely detest (ab)using page types for that, so no from my side
unless I am missing something important.
>
> In my self-constructed test scenario, the migration time can be reduced
How relevant is that in practice?
> from more than 150 ms to around 30 ms, nearly a 70% performance
> improvement. Additionally, the flame graph shows that the proportion of
> remove_migration_ptes drops from more than 80% to a bit over 60%.
>
> Note: "migrate entry" here refers specifically to migrate PTE entries,
> since large folios support neither page types nor reuse of a zero
> mapcount.
>
> Principle
> ==
> When try_to_migrate removes all of a page's PTEs and replaces them with
> migrate PTE entries, it can decide whether the traversal of the remaining
> VMAs can be terminated early by checking whether the mapcount has dropped
> to zero. This optimization improves performance during migration.
>
> However, when remove_migration_ptes removes the migrate PTE entries and
> installs PTEs for the destination folio, no such information is available
> to decide whether the traversal of the remaining VMAs can end early.
> Therefore, all VMAs associated with the folio must be traversed.
Yes, we don't know how many migration entries are still pointing at the
page.
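For readers following along, here is a toy userspace model of that
early-termination pattern. It is not kernel code: walk_vmas() stands in
for the rmap walk, every name is invented, and fully_unmapped() plays the
role of the mapcount-is-zero check that lets try_to_migrate stop before
visiting every VMA.

#include <stdbool.h>
#include <stdio.h>

struct folio_model { int mapcount; };

static void unmap_one(struct folio_model *f, int vma)
{
        if (f->mapcount > 0) {
                f->mapcount--;  /* one PTE replaced by a migration entry */
                printf("vma %d: unmapped, mapcount now %d\n", vma, f->mapcount);
        }
}

static bool fully_unmapped(const struct folio_model *f)
{
        return f->mapcount == 0;        /* the "done" condition */
}

static void walk_vmas(struct folio_model *f, int nr_vmas)
{
        for (int vma = 0; vma < nr_vmas; vma++) {
                unmap_one(f, vma);
                if (fully_unmapped(f)) {        /* skip the remaining VMAs */
                        printf("stopping early after vma %d\n", vma);
                        return;
                }
        }
}

int main(void)
{
        struct folio_model f = { .mapcount = 2 };

        walk_vmas(&f, 10);      /* only 2 of the 10 VMAs actually map the folio */
        return 0;
}

The removal side has no such counter today, which is exactly the gap the
cover letter describes.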
>
> In reality, while a folio is fully unmapped and before all of its migrate
> PTE entries have been removed, the mapcount is always zero. Since
> page_type and the mapcount share a union (see folio_mapcount), we can
> reuse page_type to record the number of migrate PTE entries the folio
> currently has in the system, as long as it is not a large folio. This
> reuse does not affect callers of folio_mapcount, which will still return
> zero.
>
> Therefore, we can set the folio's page_type to PGTY_mgt_entry once
> try_to_migrate completes, the folio is fully unmapped, and it is not a
> large folio. The remaining 24 bits can then record the number of migrate
> PTE entries generated by try_to_migrate.
In the future the page type will no longer overlay the mapcount and
will, consequently, be sticky.
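Purely as an illustration of the proposed encoding, and not the actual
kernel layout (the shift matches the cover letter's "remaining 24 bits",
but the tag value and all helper names below are made up), the idea
amounts to packing a type tag into the top byte of a 32-bit word and a
counter into the low 24 bits:

#include <assert.h>
#include <stdint.h>

#define TYPE_SHIFT      24
#define COUNT_MASK      ((1u << TYPE_SHIFT) - 1)       /* low 24 bits */
#define TYPE_MGT_ENTRY  (0xf8u << TYPE_SHIFT)          /* hypothetical tag byte */

static uint32_t encode(uint32_t nr_entries)
{
        assert(nr_entries <= COUNT_MASK);
        return TYPE_MGT_ENTRY | nr_entries;     /* tag + count in one word */
}

static uint32_t count(uint32_t word)
{
        return word & COUNT_MASK;
}

static int is_mgt_entry(uint32_t word)
{
        return (word & ~COUNT_MASK) == TYPE_MGT_ENTRY;
}

int main(void)
{
        uint32_t w = encode(3);

        assert(is_mgt_entry(w) && count(w) == 3);
        w = TYPE_MGT_ENTRY | (count(w) - 1);    /* one migration entry removed */
        assert(count(w) == 2);
        return 0;
}

A sticky page type that no longer overlays the mapcount would presumably
not lend itself to this kind of transient counter.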
>
> Then, in remove_migration_ptes, when the nr_mgt_entry count drops to
> zero, we can terminate the VMA traversal early.
>
> It's important to note that we must initialize the folio's page_type to
> PGTY_mgt_entry and set the migrate entry count only while holding the
> rmap walk lock. While that lock is held, new VMA forks (which would add
> migrate entries) and VMA unmaps (which would remove migrate entries)
> are prevented.
The more I read about PGTY_mgt_entry, the more I hate it.
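Again as a toy model only, not kernel code and with invented names, the
removal-side walk described above would look roughly like this: decrement
the recorded count as each migration entry is restored and stop as soon
as it hits zero.

#include <stdio.h>

struct folio_model { unsigned int nr_mgt_entries; };

static void restore_one(struct folio_model *f, int vma)
{
        if (f->nr_mgt_entries) {
                f->nr_mgt_entries--;    /* migration entry replaced by a real PTE */
                printf("vma %d: restored, %u entries left\n", vma, f->nr_mgt_entries);
        }
}

static void remove_walk(struct folio_model *f, int nr_vmas)
{
        for (int vma = 0; vma < nr_vmas; vma++) {
                restore_one(f, vma);
                if (!f->nr_mgt_entries) {       /* counter hit zero: stop the walk */
                        printf("stopping early after vma %d\n", vma);
                        return;
                }
        }
}

int main(void)
{
        /* per the cover letter, the count is recorded under the rmap walk lock */
        struct folio_model f = { .nr_mgt_entries = 2 };

        remove_walk(&f, 10);
        return 0;
}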
>
> However, I suspect there is actually an additional race window here, for
> example with anon folios:
>
> Process Parent                          fork
>
> try_to_migrate
>                                         anon_vma_clone
>                                           write_lock
>                                           avc_insert_tree tail
>                                           ....
> folio_lock_anon_vma_read                copy_pte_range
>   vma_iter                                pte_lock
>   ....                                    pte_present copy
>   ...
>   pte_lock
>     new forked pte clean
>   ....
> remove_migration_ptes
>   rmap_walk_anon_lock
>
> If my understanding is correct and such a race window exists, it
> shouldn't cause any issues: newly added PTEs can still be properly
> removed and converted into migrate entries.
>
> But in this scenario:
>
> Process Parent                          fork
>
> try_to_migrate
>                                         anon_vma_clone
>                                           write_lock
>                                           avc_insert_tree
>                                           ....
> folio_lock_anon_vma_read                copy_pte_range
>   vma_iter
>   pte_lock
>     migrate entry set
>   ....                                    pte_lock
>                                           pte_nonpresent copy
>                                           ....
>   ....
> remove_migration_ptes
>   rmap_walk_anon_lock
Just a note: migration entries also apply to non-anon folios.
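To make the concern behind that second diagram concrete, here is a toy
illustration of the potential mismatch, with invented names and based
only on my reading of the scenario: the count is recorded before fork
copies the non-present (migration) entry into the child, so a walk
driven by that count would stop while one entry is still installed.

#include <stdio.h>

int main(void)
{
        unsigned int recorded = 1;      /* count taken under the rmap walk lock */
        unsigned int installed = 1;     /* the parent's migration entry */

        installed++;    /* fork copies the non-present entry into the child */

        /* removal walk driven by the recorded count */
        while (recorded) {
                recorded--;
                installed--;
        }

        printf("migration entries left behind: %u\n", installed);      /* prints 1 */
        return 0;
}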
--
Cheers,
David / dhildenb
Thread overview: 25+ messages
2025-07-24 8:44 Huan Yang
2025-07-24 8:44 ` [RFC PATCH 1/9] mm: introduce PAGE_TYPE_SHIFT Huan Yang
2025-07-24 8:44 ` [RFC PATCH 2/9] mm: add page_type value helper Huan Yang
2025-07-24 8:44 ` [RFC PATCH 3/9] mm/rmap: simplify rmap_walk invoke Huan Yang
2025-07-24 8:44 ` [RFC PATCH 4/9] mm/rmap: add args in rmap_walk_control done hook Huan Yang
2025-07-24 8:44 ` [RFC PATCH 5/9] mm/rmap: introduce exit hook Huan Yang
2025-07-24 8:44 ` [RFC PATCH 6/9] mm/rmap: introduce migrate_walk_arg Huan Yang
2025-07-24 8:44 ` [RFC PATCH 7/9] mm/migrate: rename rmap_walk_arg folio Huan Yang
2025-07-24 8:44 ` [RFC PATCH 8/9] mm/migrate: infrastructure for migrate entry page_type Huan Yang
2025-07-24 8:44 ` [RFC PATCH 9/9] mm/migrate: apply " Huan Yang
2025-07-24 8:59 ` David Hildenbrand [this message]
2025-07-24 9:09 ` [RFC PATCH 0/9] introduce PGTY_mgt_entry page_type Huan Yang
2025-07-24 9:12 ` David Hildenbrand
2025-07-24 9:20 ` David Hildenbrand
2025-07-24 9:32 ` David Hildenbrand
2025-07-24 9:36 ` Huan Yang
2025-07-24 9:45 ` Lorenzo Stoakes
2025-07-24 9:56 ` Huan Yang
2025-07-24 9:58 ` Lorenzo Stoakes
2025-07-24 10:01 ` Huan Yang
2025-07-24 9:15 ` Lorenzo Stoakes
2025-07-24 9:29 ` Huan Yang
2025-07-25 1:37 ` Huang, Ying
2025-07-25 1:47 ` Huan Yang
2025-07-25 9:26 ` David Hildenbrand