From: "Mika Penttilä" <mpenttil@redhat.com>
To: Balbir Singh <balbirs@nvidia.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>, Zi Yan <ziy@nvidia.com>,
Joshua Hahn <joshua.hahnjy@gmail.com>,
Rakie Kim <rakie.kim@sk.com>, Byungchul Park <byungchul@sk.com>,
Gregory Price <gourry@gourry.net>,
Ying Huang <ying.huang@linux.alibaba.com>,
Alistair Popple <apopple@nvidia.com>,
Oscar Salvador <osalvador@suse.de>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lyude Paul <lyude@redhat.com>,
Danilo Krummrich <dakr@kernel.org>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Ralph Campbell <rcampbell@nvidia.com>,
Matthew Brost <matthew.brost@intel.com>,
Francois Dugast <francois.dugast@intel.com>
Subject: Re: [v4 05/15] mm/migrate_device: handle partially mapped folios during collection
Date: Wed, 3 Sep 2025 11:26:08 +0300 [thread overview]
Message-ID: <ffac73b3-3c2f-402a-beb3-a98ba92c5335@redhat.com> (raw)
In-Reply-To: <6a178e78-9ccd-4845-b4ca-1e84f7d31b91@nvidia.com>
On 9/3/25 09:05, Balbir Singh wrote:
> On 9/3/25 14:40, Mika Penttilä wrote:
>> Hi,
>>
>> On 9/3/25 04:18, Balbir Singh wrote:
>>
>>> Extend migrate_vma_collect_pmd() to handle partially mapped large
>>> folios that require splitting before migration can proceed.
>>>
>>> During the PTE walk in the collection phase, if a large folio is only
>>> partially mapped in the migration range, it must be split to ensure
>>> the folio is correctly migrated.
>>>
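
For reference, the collection-time split described above boils down to roughly
the following inside the PTE walk of migrate_vma_collect_pmd(). This is a
simplified, hand-written sketch only: the variable names (vma, addr, pte, ptep,
ptl) are the ones conventionally used in that function, and the locking and
error handling are reduced to comments; it is not the actual diff.

	struct page *page = vm_normal_page(vma, addr, pte);
	struct folio *folio = page ? page_folio(page) : NULL;

	if (folio && folio_test_large(folio)) {
		int ret = -EBUSY;

		/*
		 * A large folio reached through individual PTEs is no
		 * longer (or never was) mapped by a single PMD here,
		 * e.g. after a partial unmap or a VMA split.  Split it
		 * so each base page can be collected and migrated on
		 * its own.
		 */
		folio_get(folio);
		pte_unmap_unlock(ptep, ptl);	/* splitting needs the PTL dropped */
		if (folio_trylock(folio)) {
			ret = split_folio(folio);	/* 0 on success */
			folio_unlock(folio);
		}
		folio_put(folio);
		/*
		 * On success the walk is restarted at this address so the
		 * new order-0 pages get collected; on failure the range is
		 * skipped.
		 */
	}
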
> <snip>
>
>>> +
>>> + /*
>>> + * The reason for finding pmd present with a
>>> + * large folio for the pte is partial unmaps.
>>> + * Split the folio now for the migration to be
>>> + * handled correctly
>>> + */
>> There are other reasons as well, such as VMA splits.
>>
> Yes, David had pointed that out as well. I meant to clean up the comment by changing
> "The" to "One"; I missed addressing it in the refactor, but it's easy to do.
And of course now you split all mTHPs as well, which is different from what we do today
(ignoring them). Splitting might be the right thing to do, but that is probably worth mentioning.
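
To spell out the difference: today a pte-mapped large folio is simply left alone,
i.e. it never gets migrated; conceptually (not the literal upstream code) that is

	if (folio_test_large(folio))
		goto next;	/* mpfn left at 0: the page is not migrated */

while after this patch such a folio is split at this point and its pages are
migrated individually.
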
> Thanks,
> Balbir Singh
>
--Mika
Thread overview: 23+ messages
2025-09-03 1:18 [v4 00/15] mm: support device-private THP Balbir Singh
2025-09-03 1:18 ` [v4 01/15] mm/zone_device: support large zone device private folios Balbir Singh
2025-09-11 11:43 ` David Hildenbrand
2025-09-03 1:18 ` [v4 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
2025-09-03 1:18 ` [v4 03/15] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
2025-09-03 1:18 ` [v4 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
2025-09-03 1:18 ` [v4 05/15] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
2025-09-03 4:40 ` Mika Penttilä
2025-09-03 6:05 ` Balbir Singh
2025-09-03 8:26 ` Mika Penttilä [this message]
2025-09-04 9:37 ` kernel test robot
2025-09-03 1:18 ` [v4 06/15] mm/migrate_device: implement THP migration of zone device pages Balbir Singh
2025-09-11 8:04 ` Wei Yang
2025-09-11 11:11 ` Mika Penttilä
2025-09-03 1:18 ` [v4 07/15] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
2025-09-03 1:18 ` [v4 08/15] lib/test_hmm: add zone device private THP test infrastructure Balbir Singh
2025-09-03 1:18 ` [v4 09/15] mm/memremap: add driver callback support for folio splitting Balbir Singh
2025-09-03 1:18 ` [v4 10/15] mm/migrate_device: add THP splitting during migration Balbir Singh
2025-09-03 1:18 ` [v4 11/15] lib/test_hmm: add large page allocation failure testing Balbir Singh
2025-09-03 1:18 ` [v4 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
2025-09-03 1:18 ` [v4 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests Balbir Singh
2025-09-03 1:18 ` [v4 14/15] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
2025-09-03 1:19 ` [v4 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration Balbir Singh