From: "Huang, Ying" <ying.huang@intel.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: linux-mm@kvack.org
Subject: Re: [PATCH] mm: Convert migrate_pages() to work on folios
References: <20230513001101.276972-1-willy@infradead.org>
Date: Mon, 15 May 2023 15:12:06 +0800
In-Reply-To: <20230513001101.276972-1-willy@infradead.org> (Matthew Wilcox's message of "Sat, 13 May 2023 01:11:01 +0100")
Message-ID: <875y8u83w9.fsf@yhuang6-desk2.ccr.corp.intel.com>
"Matthew Wilcox (Oracle)" <willy@infradead.org> writes:

> Almost all of the callers & implementors of migrate_pages() were already
> converted to use folios.  compaction_alloc() & compaction_free() are
> trivial to convert a part of this patch and not worth splitting out.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

Especially the Chinese document :-)

Is it necessary to rename migrate_pages() to migrate_folios()?

Best Regards,
Huang, Ying

> ---
>  Documentation/mm/page_migration.rst           |   7 +-
>  .../translations/zh_CN/mm/page_migration.rst  |   2 +-
>  include/linux/migrate.h                       |  16 +-
>  mm/compaction.c                               |  15 +-
>  mm/mempolicy.c                                |  15 +-
>  mm/migrate.c                                  | 161 ++++++++----------
>  mm/vmscan.c                                   |  15 +-
>  7 files changed, 108 insertions(+), 123 deletions(-)
>
> diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
> index 313dce18893e..e35af7805be5 100644
> --- a/Documentation/mm/page_migration.rst
> +++ b/Documentation/mm/page_migration.rst
> @@ -73,14 +73,13 @@ In kernel use of migrate_pages()
>     It also prevents the swapper or other scans from encountering
>     the page.
>
> -2. We need to have a function of type new_page_t that can be
> +2. We need to have a function of type new_folio_t that can be
>     passed to migrate_pages(). This function should figure out
> -   how to allocate the correct new page given the old page.
> +   how to allocate the correct new folio given the old folio.
>
>  3. The migrate_pages() function is called which attempts
>     to do the migration. It will call the function to allocate
> -   the new page for each page that is considered for
> -   moving.
> +   the new folio for each folio that is considered for moving.
>
>  How migrate_pages() works
>  =========================
> diff --git a/Documentation/translations/zh_CN/mm/page_migration.rst b/Documentation/translations/zh_CN/mm/page_migration.rst
> index 076081dc1635..f95063826a15 100644
> --- a/Documentation/translations/zh_CN/mm/page_migration.rst
> +++ b/Documentation/translations/zh_CN/mm/page_migration.rst
> @@ -55,7 +55,7 @@ mbind()设置一个新的内存策略。一个进程的页面也可以通过sys_
>     消失。它还可以防止交换器或其他扫描器遇到该页。
>
> -2. 我们需要有一个new_page_t类型的函数，可以传递给migrate_pages()。这个函数应该计算
> +2. 我们需要有一个new_folio_t类型的函数，可以传递给migrate_pages()。这个函数应该计算
>     出如何在给定的旧页面中分配正确的新页面。
>
>  3. migrate_pages()函数被调用，它试图进行迁移。它将调用该函数为每个被考虑迁移的页面分
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6241a1596a75..6de5756d8533 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -7,8 +7,8 @@
>  #include
>  #include
>
> -typedef struct page *new_page_t(struct page *page, unsigned long private);
> -typedef void free_page_t(struct page *page, unsigned long private);
> +typedef struct folio *new_folio_t(struct folio *folio, unsigned long private);
> +typedef void free_folio_t(struct folio *folio, unsigned long private);
>
>  struct migration_target_control;
>
> @@ -67,10 +67,10 @@ int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
>  		struct folio *src, enum migrate_mode mode, int extra_count);
>  int migrate_folio(struct address_space *mapping, struct folio *dst,
>  		struct folio *src, enum migrate_mode mode);
> -int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
> +int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>  		unsigned long private, enum migrate_mode mode, int reason,
>  		unsigned int *ret_succeeded);
> -struct page *alloc_migration_target(struct page *page, unsigned long private);
> +struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode);
>
>  int migrate_huge_page_move_mapping(struct address_space *mapping,
> @@ -85,11 +85,11 @@ int folio_migrate_mapping(struct address_space *mapping,
>  #else
>
>  static inline void putback_movable_pages(struct list_head *l) {}
> -static inline int migrate_pages(struct list_head *l, new_page_t new,
> -		free_page_t free, unsigned long private, enum migrate_mode mode,
> -		int reason, unsigned int *ret_succeeded)
> +static inline int migrate_pages(struct list_head *l, new_folio_t new,
> +		free_folio_t free, unsigned long private,
> +		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>  	{ return -ENOSYS; }
> -static inline struct page *alloc_migration_target(struct page *page,
> +static inline struct folio *alloc_migration_target(struct folio *src,
>  		unsigned long private)
>  	{ return NULL; }
>  static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> diff --git a/mm/compaction.c b/mm/compaction.c
> index c8bcdea15f5f..3a8ac58c8af4 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1684,11 +1684,10 @@ static void isolate_freepages(struct compact_control *cc)
>   * This is a migrate-callback that "allocates" freepages by taking pages
>   * from the isolated freelists in the block we are migrating to.
>   */
> -static struct page *compaction_alloc(struct page *migratepage,
> -					unsigned long data)
> +static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  {
>  	struct compact_control *cc = (struct compact_control *)data;
> -	struct page *freepage;
> +	struct folio *dst;
>
>  	if (list_empty(&cc->freepages)) {
>  		isolate_freepages(cc);
> @@ -1697,11 +1696,11 @@ static struct page *compaction_alloc(struct page *migratepage,
>  			return NULL;
>  	}
>
> -	freepage = list_entry(cc->freepages.next, struct page, lru);
> -	list_del(&freepage->lru);
> +	dst = list_entry(cc->freepages.next, struct folio, lru);
> +	list_del(&dst->lru);
>  	cc->nr_freepages--;
>
> -	return freepage;
> +	return dst;
>  }
>
>  /*
> @@ -1709,11 +1708,11 @@ static struct page *compaction_alloc(struct page *migratepage,
>   * freelist.  All pages on the freelist are from the same zone, so there is no
>   * special handling needed for NUMA.
>   */
> -static void compaction_free(struct page *page, unsigned long data)
> +static void compaction_free(struct folio *dst, unsigned long data)
>  {
>  	struct compact_control *cc = (struct compact_control *)data;
>
> -	list_add(&page->lru, &cc->freepages);
> +	list_add(&dst->lru, &cc->freepages);
>  	cc->nr_freepages++;
>  }
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 1756389a0609..f06ca8c18e62 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1195,24 +1195,22 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>   * list of pages handed to migrate_pages()--which is how we get here--
>   * is in virtual address order.
>   */
> -static struct page *new_page(struct page *page, unsigned long start)
> +static struct folio *new_folio(struct folio *src, unsigned long start)
>  {
> -	struct folio *dst, *src = page_folio(page);
>  	struct vm_area_struct *vma;
>  	unsigned long address;
>  	VMA_ITERATOR(vmi, current->mm, start);
>  	gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL;
>
>  	for_each_vma(vmi, vma) {
> -		address = page_address_in_vma(page, vma);
> +		address = page_address_in_vma(&src->page, vma);
>  		if (address != -EFAULT)
>  			break;
>  	}
>
>  	if (folio_test_hugetlb(src)) {
> -		dst = alloc_hugetlb_folio_vma(folio_hstate(src),
> +		return alloc_hugetlb_folio_vma(folio_hstate(src),
>  				vma, address);
> -		return &dst->page;
>  	}
>
>  	if (folio_test_large(src))
> @@ -1221,9 +1219,8 @@ static struct page *new_page(struct page *page, unsigned long start)
>  	/*
>  	 * if !vma, vma_alloc_folio() will use task or system default policy
>  	 */
> -	dst = vma_alloc_folio(gfp, folio_order(src), vma, address,
> +	return vma_alloc_folio(gfp, folio_order(src), vma, address,
>  			folio_test_large(src));
> -	return &dst->page;
>  }
>  #else
>
> @@ -1239,7 +1236,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>  	return -ENOSYS;
>  }
>
> -static struct page *new_page(struct page *page, unsigned long start)
> +static struct folio *new_folio(struct folio *src, unsigned long start)
>  {
>  	return NULL;
>  }
> @@ -1334,7 +1331,7 @@ static long do_mbind(unsigned long start, unsigned long len,
>
>  	if (!list_empty(&pagelist)) {
>  		WARN_ON_ONCE(flags & MPOL_MF_LAZY);
> -		nr_failed = migrate_pages(&pagelist, new_page, NULL,
> +		nr_failed = migrate_pages(&pagelist, new_folio, NULL,
>  			start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
>  		if (nr_failed)
>  			putback_movable_pages(&pagelist);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 01cac26a3127..fdf4e00f7fe4 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1072,15 +1072,13 @@ static void migrate_folio_undo_src(struct folio *src,
>  }
>
>  /* Restore the destination folio to the original state upon failure */
> -static void migrate_folio_undo_dst(struct folio *dst,
> -				   bool locked,
> -				   free_page_t put_new_page,
> -				   unsigned long private)
> +static void migrate_folio_undo_dst(struct folio *dst, bool locked,
> +		free_folio_t put_new_folio, unsigned long private)
>  {
>  	if (locked)
>  		folio_unlock(dst);
> -	if (put_new_page)
> -		put_new_page(&dst->page, private);
> +	if (put_new_folio)
> +		put_new_folio(dst, private);
>  	else
>  		folio_put(dst);
>  }
>
> @@ -1104,14 +1102,13 @@ static void migrate_folio_done(struct folio *src,
>  }
>
>  /* Obtain the lock on page, remove all ptes. */
> -static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
> -			       unsigned long private, struct folio *src,
> -			       struct folio **dstp, enum migrate_mode mode,
> -			       enum migrate_reason reason, struct list_head *ret)
> +static int migrate_folio_unmap(new_folio_t get_new_folio,
> +		free_folio_t put_new_folio, unsigned long private,
> +		struct folio *src, struct folio **dstp, enum migrate_mode mode,
> +		enum migrate_reason reason, struct list_head *ret)
>  {
>  	struct folio *dst;
>  	int rc = -EAGAIN;
> -	struct page *newpage = NULL;
>  	int page_was_mapped = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__PageMovable(&src->page);
> @@ -1128,10 +1125,9 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
>  		return MIGRATEPAGE_SUCCESS;
>  	}
>
> -	newpage = get_new_page(&src->page, private);
> -	if (!newpage)
> +	dst = get_new_folio(src, private);
> +	if (!dst)
>  		return -ENOMEM;
> -	dst = page_folio(newpage);
>  	*dstp = dst;
>
>  	dst->private = NULL;
> @@ -1251,13 +1247,13 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
>  		ret = NULL;
>
>  	migrate_folio_undo_src(src, page_was_mapped, anon_vma, locked, ret);
> -	migrate_folio_undo_dst(dst, dst_locked, put_new_page, private);
> +	migrate_folio_undo_dst(dst, dst_locked, put_new_folio, private);
>
>  	return rc;
>  }
>
>  /* Migrate the folio to the newly allocated folio in dst. */
> -static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
> +static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  			      struct folio *src, struct folio *dst,
>  			      enum migrate_mode mode, enum migrate_reason reason,
>  			      struct list_head *ret)
> @@ -1329,7 +1325,7 @@ static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
>  	}
>
>  	migrate_folio_undo_src(src, page_was_mapped, anon_vma, true, ret);
> -	migrate_folio_undo_dst(dst, true, put_new_page, private);
> +	migrate_folio_undo_dst(dst, true, put_new_folio, private);
>
>  	return rc;
>  }
> @@ -1352,16 +1348,14 @@ static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
>   * because then pte is replaced with migration swap entry and direct I/O code
>   * will wait in the page fault for migration to complete.
>   */
> -static int unmap_and_move_huge_page(new_page_t get_new_page,
> -				free_page_t put_new_page, unsigned long private,
> -				struct page *hpage, int force,
> -				enum migrate_mode mode, int reason,
> -				struct list_head *ret)
> +static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> +		free_folio_t put_new_folio, unsigned long private,
> +		struct folio *src, int force, enum migrate_mode mode,
> +		int reason, struct list_head *ret)
>  {
> -	struct folio *dst, *src = page_folio(hpage);
> +	struct folio *dst;
>  	int rc = -EAGAIN;
>  	int page_was_mapped = 0;
> -	struct page *new_hpage;
>  	struct anon_vma *anon_vma = NULL;
>  	struct address_space *mapping = NULL;
>
> @@ -1371,10 +1365,9 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  		return MIGRATEPAGE_SUCCESS;
>  	}
>
> -	new_hpage = get_new_page(hpage, private);
> -	if (!new_hpage)
> +	dst = get_new_folio(src, private);
> +	if (!dst)
>  		return -ENOMEM;
> -	dst = page_folio(new_hpage);
>
>  	if (!folio_trylock(src)) {
>  		if (!force)
> @@ -1415,7 +1408,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  		 * semaphore in write mode here and set TTU_RMAP_LOCKED
>  		 * to let lower levels know we have taken the lock.
>  		 */
> -		mapping = hugetlb_page_mapping_lock_write(hpage);
> +		mapping = hugetlb_page_mapping_lock_write(&src->page);
>  		if (unlikely(!mapping))
>  			goto unlock_put_anon;
>
> @@ -1445,7 +1438,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>
>  	if (rc == MIGRATEPAGE_SUCCESS) {
>  		move_hugetlb_state(src, dst, reason);
> -		put_new_page = NULL;
> +		put_new_folio = NULL;
>  	}
>
>  out_unlock:
> @@ -1461,8 +1454,8 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  	 * it.  Otherwise, put_page() will drop the reference grabbed during
>  	 * isolation.
>  	 */
> -	if (put_new_page)
> -		put_new_page(new_hpage, private);
> +	if (put_new_folio)
> +		put_new_folio(dst, private);
>  	else
>  		folio_putback_active_hugetlb(dst);
>
> @@ -1509,8 +1502,8 @@ struct migrate_pages_stats {
>   * exist any more. It is caller's responsibility to call putback_movable_pages()
>   * only if ret != 0.
>   */
> -static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
> -			    free_page_t put_new_page, unsigned long private,
> +static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
> +			    free_folio_t put_new_folio, unsigned long private,
>  			    enum migrate_mode mode, int reason,
>  			    struct migrate_pages_stats *stats,
>  			    struct list_head *ret_folios)
> @@ -1548,9 +1541,9 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>  			continue;
>  		}
>
> -		rc = unmap_and_move_huge_page(get_new_page,
> -					      put_new_page, private,
> -					      &folio->page, pass > 2, mode,
> +		rc = unmap_and_move_huge_page(get_new_folio,
> +					      put_new_folio, private,
> +					      folio, pass > 2, mode,
>  					      reason, ret_folios);
>  		/*
>  		 * The rules are:
> @@ -1607,11 +1600,11 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>   * deadlock (e.g., for loop device).  So, if mode != MIGRATE_ASYNC, the
>   * length of the from list must be <= 1.
>   */
> -static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
> -		free_page_t put_new_page, unsigned long private,
> -		enum migrate_mode mode, int reason, struct list_head *ret_folios,
> -		struct list_head *split_folios, struct migrate_pages_stats *stats,
> -		int nr_pass)
> +static int migrate_pages_batch(struct list_head *from,
> +		new_folio_t get_new_folio, free_folio_t put_new_folio,
> +		unsigned long private, enum migrate_mode mode, int reason,
> +		struct list_head *ret_folios, struct list_head *split_folios,
> +		struct migrate_pages_stats *stats, int nr_pass)
>  {
>  	int retry = 1;
>  	int large_retry = 1;
> @@ -1671,8 +1664,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  			continue;
>  		}
>
> -		rc = migrate_folio_unmap(get_new_page, put_new_page, private,
> -					 folio, &dst, mode, reason, ret_folios);
> +		rc = migrate_folio_unmap(get_new_folio, put_new_folio,
> +				private, folio, &dst, mode, reason,
> +				ret_folios);
>  		/*
>  		 * The rules are:
>  		 *	Success: folio will be freed
> @@ -1786,7 +1780,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>
>  			cond_resched();
>
> -			rc = migrate_folio_move(put_new_page, private,
> +			rc = migrate_folio_move(put_new_folio, private,
>  						folio, dst, mode,
>  						reason, ret_folios);
>  			/*
> @@ -1845,7 +1839,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  		migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
>  				       true, ret_folios);
>  		list_del(&dst->lru);
> -		migrate_folio_undo_dst(dst, true, put_new_page, private);
> +		migrate_folio_undo_dst(dst, true, put_new_folio, private);
>  		dst = dst2;
>  		dst2 = list_next_entry(dst, lru);
>  	}
> @@ -1853,10 +1847,11 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  	return rc;
>  }
>
> -static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
> -		free_page_t put_new_page, unsigned long private,
> -		enum migrate_mode mode, int reason, struct list_head *ret_folios,
> -		struct list_head *split_folios, struct migrate_pages_stats *stats)
> +static int migrate_pages_sync(struct list_head *from, new_folio_t get_new_folio,
> +		free_folio_t put_new_folio, unsigned long private,
> +		enum migrate_mode mode, int reason,
> +		struct list_head *ret_folios, struct list_head *split_folios,
> +		struct migrate_pages_stats *stats)
>  {
>  	int rc, nr_failed = 0;
>  	LIST_HEAD(folios);
> @@ -1864,7 +1859,7 @@ static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>
>  	memset(&astats, 0, sizeof(astats));
>  	/* Try to migrate in batch with MIGRATE_ASYNC mode firstly */
> -	rc = migrate_pages_batch(from, get_new_page, put_new_page, private, MIGRATE_ASYNC,
> +	rc = migrate_pages_batch(from, get_new_folio, put_new_folio, private, MIGRATE_ASYNC,
>  				 reason, &folios, split_folios, &astats,
>  				 NR_MAX_MIGRATE_ASYNC_RETRY);
>  	stats->nr_succeeded += astats.nr_succeeded;
> @@ -1886,7 +1881,7 @@ static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>  	list_splice_tail_init(&folios, from);
>  	while (!list_empty(from)) {
>  		list_move(from->next, &folios);
> -		rc = migrate_pages_batch(&folios, get_new_page, put_new_page,
> +		rc = migrate_pages_batch(&folios, get_new_folio, put_new_folio,
>  					 private, mode, reason, ret_folios,
>  					 split_folios, stats, NR_MAX_MIGRATE_SYNC_RETRY);
>  		list_splice_tail_init(&folios, ret_folios);
> @@ -1903,11 +1898,11 @@ static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>   *		   supplied as the target for the page migration
>   *
>   * @from:		The list of folios to be migrated.
> - * @get_new_page:	The function used to allocate free folios to be used
> + * @get_new_folio:	The function used to allocate free folios to be used
>   *			as the target of the folio migration.
> - * @put_new_page:	The function used to free target folios if migration
> + * @put_new_folio:	The function used to free target folios if migration
>   *			fails, or NULL if no special handling is necessary.
> - * @private:		Private data to be passed on to get_new_page()
> + * @private:		Private data to be passed on to get_new_folio()
>   * @mode:		The migration mode that specifies the constraints for
>   *			folio migration, if any.
>   * @reason:		The reason for folio migration.
> @@ -1924,8 +1919,8 @@ static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>   * considered as the number of non-migrated large folio, no matter how many
>   * split folios of the large folio are migrated successfully.
>   */
> -int migrate_pages(struct list_head *from, new_page_t get_new_page,
> -		free_page_t put_new_page, unsigned long private,
> +int migrate_pages(struct list_head *from, new_folio_t get_new_folio,
> +		free_folio_t put_new_folio, unsigned long private,
>  		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>  {
>  	int rc, rc_gather;
> @@ -1940,7 +1935,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>
>  	memset(&stats, 0, sizeof(stats));
>
> -	rc_gather = migrate_hugetlbs(from, get_new_page, put_new_page, private,
> +	rc_gather = migrate_hugetlbs(from, get_new_folio, put_new_folio, private,
>  				     mode, reason, &stats, &ret_folios);
>  	if (rc_gather < 0)
>  		goto out;
> @@ -1963,12 +1958,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	else
>  		list_splice_init(from, &folios);
>  	if (mode == MIGRATE_ASYNC)
> -		rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
> -				mode, reason, &ret_folios, &split_folios, &stats,
> -				NR_MAX_MIGRATE_PAGES_RETRY);
> +		rc = migrate_pages_batch(&folios, get_new_folio, put_new_folio,
> +				private, mode, reason, &ret_folios,
> +				&split_folios, &stats,
> +				NR_MAX_MIGRATE_PAGES_RETRY);
>  	else
> -		rc = migrate_pages_sync(&folios, get_new_page, put_new_page, private,
> -				mode, reason, &ret_folios, &split_folios, &stats);
> +		rc = migrate_pages_sync(&folios, get_new_folio, put_new_folio,
> +				private, mode, reason, &ret_folios,
> +				&split_folios, &stats);
>  	list_splice_tail_init(&folios, &ret_folios);
>  	if (rc < 0) {
>  		rc_gather = rc;
> @@ -1981,8 +1978,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  		 * is counted as 1 failure already.  And, we only try to migrate
>  		 * with minimal effort, force MIGRATE_ASYNC mode and retry once.
>  		 */
> -		migrate_pages_batch(&split_folios, get_new_page, put_new_page, private,
> -				MIGRATE_ASYNC, reason, &ret_folios, NULL, &stats, 1);
> +		migrate_pages_batch(&split_folios, get_new_folio,
> +				put_new_folio, private, MIGRATE_ASYNC, reason,
> +				&ret_folios, NULL, &stats, 1);
>  		list_splice_tail_init(&split_folios, &ret_folios);
>  	}
>  	rc_gather += rc;
> @@ -2017,14 +2015,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	return rc_gather;
>  }
>
> -struct page *alloc_migration_target(struct page *page, unsigned long private)
> +struct folio *alloc_migration_target(struct folio *src, unsigned long private)
>  {
> -	struct folio *folio = page_folio(page);
>  	struct migration_target_control *mtc;
>  	gfp_t gfp_mask;
>  	unsigned int order = 0;
> -	struct folio *hugetlb_folio = NULL;
> -	struct folio *new_folio = NULL;
>  	int nid;
>  	int zidx;
>
> @@ -2032,33 +2027,30 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>  	gfp_mask = mtc->gfp_mask;
>  	nid = mtc->nid;
>  	if (nid == NUMA_NO_NODE)
> -		nid = folio_nid(folio);
> +		nid = folio_nid(src);
>
> -	if (folio_test_hugetlb(folio)) {
> -		struct hstate *h = folio_hstate(folio);
> +	if (folio_test_hugetlb(src)) {
> +		struct hstate *h = folio_hstate(src);
>
>  		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> -		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
> +		return alloc_hugetlb_folio_nodemask(h, nid,
>  						mtc->nmask, gfp_mask);
> -		return &hugetlb_folio->page;
>  	}
>
> -	if (folio_test_large(folio)) {
> +	if (folio_test_large(src)) {
>  		/*
>  		 * clear __GFP_RECLAIM to make the migration callback
>  		 * consistent with regular THP allocations.
>  		 */
>  		gfp_mask &= ~__GFP_RECLAIM;
>  		gfp_mask |= GFP_TRANSHUGE;
> -		order = folio_order(folio);
> +		order = folio_order(src);
>  	}
> -	zidx = zone_idx(folio_zone(folio));
> +	zidx = zone_idx(folio_zone(src));
>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
>  		gfp_mask |= __GFP_HIGHMEM;
>
> -	new_folio = __folio_alloc(gfp_mask, order, nid, mtc->nmask);
> -
> -	return &new_folio->page;
> +	return __folio_alloc(gfp_mask, order, nid, mtc->nmask);
>  }
>
>  #ifdef CONFIG_NUMA
> @@ -2509,13 +2501,12 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>  	return false;
>  }
>
> -static struct page *alloc_misplaced_dst_page(struct page *page,
> +static struct folio *alloc_misplaced_dst_folio(struct folio *src,
>  					   unsigned long data)
>  {
>  	int nid = (int) data;
> -	int order = compound_order(page);
> +	int order = folio_order(src);
>  	gfp_t gfp = __GFP_THISNODE;
> -	struct folio *new;
>
>  	if (order > 0)
>  		gfp |= GFP_TRANSHUGE_LIGHT;
> @@ -2524,9 +2515,7 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
>  			__GFP_NOWARN;
>  		gfp &= ~__GFP_RECLAIM;
>  	}
> -	new = __folio_alloc_node(gfp, order, nid);
> -
> -	return &new->page;
> +	return __folio_alloc_node(gfp, order, nid);
>  }
>
>  static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> @@ -2604,7 +2593,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  		goto out;
>
>  	list_add(&page->lru, &migratepages);
> -	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
> +	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
>  				     NULL, node, MIGRATE_ASYNC,
>  				     MR_NUMA_MISPLACED, &nr_succeeded);
>  	if (nr_remaining) {
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d257916f39e5..a41fd3333773 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1620,9 +1620,10 @@ static void folio_check_dirty_writeback(struct folio *folio,
>  		mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
>  }
>
> -static struct page *alloc_demote_page(struct page *page, unsigned long private)
> +static struct folio *alloc_demote_folio(struct folio *src,
> +		unsigned long private)
>  {
> -	struct page *target_page;
> +	struct folio *dst;
>  	nodemask_t *allowed_mask;
>  	struct migration_target_control *mtc;
>
> @@ -1640,14 +1641,14 @@ static struct page *alloc_demote_page(struct page *page, unsigned long private)
>  	 */
>  	mtc->nmask = NULL;
>  	mtc->gfp_mask |= __GFP_THISNODE;
> -	target_page = alloc_migration_target(page, (unsigned long)mtc);
> -	if (target_page)
> -		return target_page;
> +	dst = alloc_migration_target(src, (unsigned long)mtc);
> +	if (dst)
> +		return dst;
>
>  	mtc->gfp_mask &= ~__GFP_THISNODE;
>  	mtc->nmask = allowed_mask;
>
> -	return alloc_migration_target(page, (unsigned long)mtc);
> +	return alloc_migration_target(src, (unsigned long)mtc);
>  }
>
>  /*
> @@ -1682,7 +1683,7 @@ static unsigned int demote_folio_list(struct list_head *demote_folios,
>  	node_get_allowed_targets(pgdat, &allowed_mask);
>
>  	/* Demotion ignores all cpuset and mempolicy settings */
> -	migrate_pages(demote_folios, alloc_demote_page, NULL,
> +	migrate_pages(demote_folios, alloc_demote_folio, NULL,
>  			(unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
>  			&nr_succeeded);