From: Mika Penttilä <mpenttil@redhat.com>
Date: Fri, 4 Jul 2025 08:17:33 +0300
Subject: Re: [v1 resend 08/12] mm/thp: add split during migration support
To: Balbir Singh , linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, Jérôme Glisse, Shuah Khan, David Hildenbrand, Barry Song, Baolin Wang, Ryan Roberts, Matthew Wilcox, Peter Xu, Zi Yan, Kefeng Wang, Jane Chu, Alistair Popple, Donet Tom
References: <20250703233511.2028395-1-balbirs@nvidia.com> <20250703233511.2028395-9-balbirs@nvidia.com>
In-Reply-To: <20250703233511.2028395-9-balbirs@nvidia.com>

On 7/4/25 02:35, Balbir Singh wrote:
> Support splitting pages during THP zone device migration as needed.
> The common case that arises is that after setup, during migrate
> the destination might not be able to allocate MIGRATE_PFN_COMPOUND
> pages.
>
> Add a new routine migrate_vma_split_pages() to support the splitting
> of already isolated pages. The pages being migrated are already unmapped
> and marked for migration during setup (via unmap). folio_split() and
> __split_unmapped_folio() take additional isolated arguments, to avoid
> unmapping and remaping these pages and unlocking/putting the folio.
>
> Cc: Karol Herbst
> Cc: Lyude Paul
> Cc: Danilo Krummrich
> Cc: David Airlie
> Cc: Simona Vetter
> Cc: "Jérôme Glisse"
> Cc: Shuah Khan
> Cc: David Hildenbrand
> Cc: Barry Song
> Cc: Baolin Wang
> Cc: Ryan Roberts
> Cc: Matthew Wilcox
> Cc: Peter Xu
> Cc: Zi Yan
> Cc: Kefeng Wang
> Cc: Jane Chu
> Cc: Alistair Popple
> Cc: Donet Tom
>
> Signed-off-by: Balbir Singh
> ---
>  include/linux/huge_mm.h | 11 ++++++--
>  mm/huge_memory.c        | 54 ++++++++++++++++++++-----------------
>  mm/migrate_device.c     | 59 ++++++++++++++++++++++++++++++++---------
>  3 files changed, 85 insertions(+), 39 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 65a1bdf29bb9..5f55a754e57c 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -343,8 +343,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>                 vm_flags_t vm_flags);
>
>  bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
> -int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> -                unsigned int new_order);
> +int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> +                unsigned int new_order, bool isolated);
>  int min_order_for_split(struct folio *folio);
>  int split_folio_to_list(struct folio *folio, struct list_head *list);
>  bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> @@ -353,6 +353,13 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>                 bool warns);
>  int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>                 struct list_head *list);
> +
> +static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> +                unsigned int new_order)
> +{
> +        return __split_huge_page_to_list_to_order(page, list, new_order, false);
> +}
> +
>  /*
>   * try_folio_split - try to split a @folio at @page using non uniform split.
>   * @folio: folio to be split
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d55e36ae0c39..e00ddfed22fa 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3424,15 +3424,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>                 new_folio->mapping = folio->mapping;
>                 new_folio->index = folio->index + i;
>
> -                /*
> -                 * page->private should not be set in tail pages. Fix up and warn once
> -                 * if private is unexpectedly set.
> -                 */
> -                if (unlikely(new_folio->private)) {
> -                        VM_WARN_ON_ONCE_PAGE(true, new_head);
> -                        new_folio->private = NULL;
> -                }
> -
>                 if (folio_test_swapcache(folio))
>                         new_folio->swap.val = folio->swap.val + i;
>
> @@ -3519,7 +3510,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>                 struct page *split_at, struct page *lock_at,
>                 struct list_head *list, pgoff_t end,
>                 struct xa_state *xas, struct address_space *mapping,
> -                bool uniform_split)
> +                bool uniform_split, bool isolated)
>  {
>         struct lruvec *lruvec;
>         struct address_space *swap_cache = NULL;
> @@ -3643,8 +3634,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>                         percpu_ref_get_many(&release->pgmap->ref,
>                                         (1 << new_order) - 1);
>
> -                lru_add_split_folio(origin_folio, release, lruvec,
> -                                list);
> +                if (!isolated)
> +                        lru_add_split_folio(origin_folio, release,
> +                                        lruvec, list);
>
>                 /* Some pages can be beyond EOF: drop them from cache */
>                 if (release->index >= end) {
> @@ -3697,6 +3689,12 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>         if (nr_dropped)
>                 shmem_uncharge(mapping->host, nr_dropped);
>
> +        /*
> +         * Don't remap and unlock isolated folios
> +         */
> +        if (isolated)
> +                return ret;
> +
>         remap_page(origin_folio, 1 << order,
>                         folio_test_anon(origin_folio) ?
>                                 RMP_USE_SHARED_ZEROPAGE : 0);
> @@ -3790,6 +3788,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>   * @lock_at: a page within @folio to be left locked to caller
>   * @list: after-split folios will be put on it if non NULL
>   * @uniform_split: perform uniform split or not (non-uniform split)
> + * @isolated: The pages are already unmapped
>   *
>   * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
>   * It is in charge of checking whether the split is supported or not and
> @@ -3800,7 +3799,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>  static int __folio_split(struct folio *folio, unsigned int new_order,
>                 struct page *split_at, struct page *lock_at,
> -                struct list_head *list, bool uniform_split)
> +                struct list_head *list, bool uniform_split, bool isolated)
>  {
>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>         XA_STATE(xas, &folio->mapping->i_pages, folio->index);
> @@ -3846,14 +3845,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>          * is taken to serialise against parallel split or collapse
>          * operations.
>          */
> -                anon_vma = folio_get_anon_vma(folio);
> -                if (!anon_vma) {
> -                        ret = -EBUSY;
> -                        goto out;
> +                if (!isolated) {
> +                        anon_vma = folio_get_anon_vma(folio);
> +                        if (!anon_vma) {
> +                                ret = -EBUSY;
> +                                goto out;
> +                        }
> +                        anon_vma_lock_write(anon_vma);
>                 }
>                 end = -1;
>                 mapping = NULL;
> -                anon_vma_lock_write(anon_vma);
>         } else {
>                 unsigned int min_order;
>                 gfp_t gfp;
> @@ -3920,7 +3921,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>                 goto out_unlock;
>         }
>
> -        unmap_folio(folio);
> +        if (!isolated)
> +                unmap_folio(folio);
>
>         /* block interrupt reentry in xa_lock and spinlock */
>         local_irq_disable();
> @@ -3973,14 +3975,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>
>                 ret = __split_unmapped_folio(folio, new_order,
>                                 split_at, lock_at, list, end, &xas, mapping,
> -                                uniform_split);
> +                                uniform_split, isolated);
>         } else {
>                 spin_unlock(&ds_queue->split_queue_lock);
>  fail:
>                 if (mapping)
>                         xas_unlock(&xas);
>                 local_irq_enable();
> -                remap_page(folio, folio_nr_pages(folio), 0);
> +                if (!isolated)
> +                        remap_page(folio, folio_nr_pages(folio), 0);
>                 ret = -EAGAIN;
>         }
>
> @@ -4046,12 +4049,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>   * Returns -EINVAL when trying to split to an order that is incompatible
>   * with the folio. Splitting to order 0 is compatible with all folios.
>   */
> -int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> -                unsigned int new_order)
> +int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> +                unsigned int new_order, bool isolated)
>  {
>         struct folio *folio = page_folio(page);
>
> -        return __folio_split(folio, new_order, &folio->page, page, list, true);
> +        return __folio_split(folio, new_order, &folio->page, page, list, true,
> +                        isolated);
>  }
>
>  /*
> @@ -4080,7 +4084,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
>                 struct page *split_at, struct list_head *list)
>  {
>         return __folio_split(folio, new_order, split_at, &folio->page, list,
> -                        false);
> +                        false, false);
>  }
>
>  int min_order_for_split(struct folio *folio)
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 41d0bd787969..acd2f03b178d 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -813,6 +813,24 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>                 src[i] &= ~MIGRATE_PFN_MIGRATE;
>         return 0;
>  }
> +
> +static void migrate_vma_split_pages(struct migrate_vma *migrate,
> +                unsigned long idx, unsigned long addr,
> +                struct folio *folio)
> +{
> +        unsigned long i;
> +        unsigned long pfn;
> +        unsigned long flags;
> +
> +        folio_get(folio);
> +        split_huge_pmd_address(migrate->vma, addr, true);
> +        __split_huge_page_to_list_to_order(folio_page(folio, 0), NULL, 0, true);

We already have a reference to the folio here, so why is folio_get() needed?

Splitting the page already splits the PMD for anon folios, so why the extra split_huge_pmd_address()?
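To make the question concrete, this is roughly what I would have expected the helper to look like, assuming the reference taken when the folio was isolated and unmapped during setup is enough to pin it across the split, and that splitting an already unmapped anon folio does not need a separate PMD split. Only a sketch of my reading of the patch, not a tested change:

static void migrate_vma_split_pages(struct migrate_vma *migrate,
                unsigned long idx, unsigned long addr,
                struct folio *folio)
{
        unsigned long i, pfn, flags;

        /* no extra folio_get()/split_huge_pmd_address() here */
        __split_huge_page_to_list_to_order(folio_page(folio, 0), NULL, 0, true);

        migrate->src[idx] &= ~MIGRATE_PFN_COMPOUND;
        flags = migrate->src[idx] & ((1UL << MIGRATE_PFN_SHIFT) - 1);
        pfn = migrate->src[idx] >> MIGRATE_PFN_SHIFT;
        for (i = 1; i < HPAGE_PMD_NR; i++)
                migrate->src[idx + i] = migrate_pfn(pfn + i) | flags;
}
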
> +        migrate->src[idx] &= ~MIGRATE_PFN_COMPOUND;
> +        flags = migrate->src[idx] & ((1UL << MIGRATE_PFN_SHIFT) - 1);
> +        pfn = migrate->src[idx] >> MIGRATE_PFN_SHIFT;
> +        for (i = 1; i < HPAGE_PMD_NR; i++)
> +                migrate->src[i+idx] = migrate_pfn(pfn + i) | flags;
> +}
>  #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
>  static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>                 unsigned long addr,
> @@ -822,6 +840,11 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>  {
>         return 0;
>  }
> +
> +static void migrate_vma_split_pages(struct migrate_vma *migrate,
> +                unsigned long idx, unsigned long addr,
> +                struct folio *folio)
> +{}
>  #endif
>
>  /*
> @@ -971,8 +994,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>                 struct migrate_vma *migrate)
>  {
>         struct mmu_notifier_range range;
> -        unsigned long i;
> +        unsigned long i, j;
>         bool notified = false;
> +        unsigned long addr;
>
>         for (i = 0; i < npages; ) {
>                 struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
> @@ -1014,12 +1038,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>                             (!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
>                                 nr = HPAGE_PMD_NR;
>                                 src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
> -                                src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
> -                                goto next;
> +                        } else {
> +                                nr = 1;
>                         }
>
> -                        migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
> -                                                &src_pfns[i]);
> +                        for (j = 0; j < nr && i + j < npages; j++) {
> +                                src_pfns[i+j] |= MIGRATE_PFN_MIGRATE;
> +                                migrate_vma_insert_page(migrate,
> +                                        addr + j * PAGE_SIZE,
> +                                        &dst_pfns[i+j], &src_pfns[i+j]);
> +                        }
>                         goto next;
>                 }
>
> @@ -1041,7 +1069,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>                                                 MIGRATE_PFN_COMPOUND);
>                                 goto next;
>                         }
> -                        src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
> +                        nr = 1 << folio_order(folio);
> +                        addr = migrate->start + i * PAGE_SIZE;
> +                        migrate_vma_split_pages(migrate, i, addr, folio);
>                 } else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
>                            (dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
>                            !(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
> @@ -1076,12 +1106,17 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>                 BUG_ON(folio_test_writeback(folio));
>
>                 if (migrate && migrate->fault_page == page)
> -                        extra_cnt = 1;
> -                r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
> -                if (r != MIGRATEPAGE_SUCCESS)
> -                        src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
> -                else
> -                        folio_migrate_flags(newfolio, folio);
> +                        extra_cnt++;
> +                for (j = 0; j < nr && i + j < npages; j++) {
> +                        folio = page_folio(migrate_pfn_to_page(src_pfns[i+j]));
> +                        newfolio = page_folio(migrate_pfn_to_page(dst_pfns[i+j]));
> +
> +                        r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
> +                        if (r != MIGRATEPAGE_SUCCESS)
> +                                src_pfns[i+j] &= ~MIGRATE_PFN_MIGRATE;
> +                        else
> +                                folio_migrate_flags(newfolio, folio);
> +                }
>  next:
>                 i += nr;
>         }
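
Also, just to confirm my reading of what the split does to the migrate arrays (example values only; pfn0 is a made-up pfn, and I am assuming x86-64 with 4K base pages, so HPAGE_PMD_NR == 512): a single compound source entry

        src[idx]        == migrate_pfn(pfn0) | MIGRATE_PFN_MIGRATE | MIGRATE_PFN_COMPOUND

becomes 512 order-0 entries with the same low flag bits and MIGRATE_PFN_COMPOUND cleared:

        src[idx]        == migrate_pfn(pfn0)       | MIGRATE_PFN_MIGRATE
        src[idx + 1]    == migrate_pfn(pfn0 + 1)   | MIGRATE_PFN_MIGRATE
        ...
        src[idx + 511]  == migrate_pfn(pfn0 + 511) | MIGRATE_PFN_MIGRATE

so that __migrate_device_pages() can then migrate each subpage individually in the per-j loop. Please correct me if I have misread that.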