From: Barry Song <21cnbao@gmail.com>
Date: Thu, 10 Oct 2024 23:35:45 +0800
Subject: Re: [PATCH] mm: remove unused hugepage for vma_alloc_folio()
To: Kefeng Wang
Cc: Andrew Morton, linux-mm@kvack.org, Ryan Roberts, David Hildenbrand, Hugh Dickins, willy@infradead.org
In-Reply-To: <20241010061556.1846751-1-wangkefeng.wang@huawei.com>
References: <20241010061556.1846751-1-wangkefeng.wang@huawei.com>
On Thu, Oct 10, 2024 at 2:16 PM Kefeng Wang wrote:
>
> The hugepage parameter has been deprecated since commit ddc1a5cbc05d
> ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"); for
> PMD-sized THP, vma_alloc_folio() still tries only the preferred node
> if possible, by checking the order of the folio allocation.
>
> Signed-off-by: Kefeng Wang

Reviewed-by: Barry Song

> ---
>  arch/alpha/include/asm/page.h   |  2 +-
>  arch/arm64/mm/fault.c           |  2 +-
>  arch/m68k/include/asm/page_no.h |  2 +-
>  arch/s390/include/asm/page.h    |  2 +-
>  arch/x86/include/asm/page.h     |  2 +-
>  include/linux/gfp.h             |  6 +++---
>  include/linux/highmem.h         |  2 +-
>  mm/huge_memory.c                |  2 +-
>  mm/ksm.c                        |  2 +-
>  mm/memory.c                     | 10 ++++------
>  mm/mempolicy.c                  |  3 +--
>  mm/userfaultfd.c                |  2 +-
>  12 files changed, 17 insertions(+), 20 deletions(-)
>
> diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
> index 70419e6be1a3..3dffa2a461d7 100644
> --- a/arch/alpha/include/asm/page.h
> +++ b/arch/alpha/include/asm/page.h
> @@ -18,7 +18,7 @@ extern void clear_page(void *page);
>  #define clear_user_page(page, vaddr, pg)        clear_page(page)
>
>  #define vma_alloc_zeroed_movable_folio(vma, vaddr) \
> -       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
> +       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
>
>  extern void copy_page(void * _to, void * _from);
>  #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index c2f89a678ac0..ef63651099a9 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -1023,7 +1023,7 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>         if (vma->vm_flags & VM_MTE)
>                 flags |= __GFP_ZEROTAGS;
>
> -       return vma_alloc_folio(flags, 0, vma, vaddr, false);
> +       return vma_alloc_folio(flags, 0, vma, vaddr);
>  }
>
>  void tag_clear_highpage(struct page *page)
> diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
> index af3a10973233..63c0e706084b 100644
> --- a/arch/m68k/include/asm/page_no.h
> +++ b/arch/m68k/include/asm/page_no.h
> @@ -14,7 +14,7 @@ extern unsigned long memory_end;
>  #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
>
>  #define vma_alloc_zeroed_movable_folio(vma, vaddr) \
> -       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
> +       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
>
>  #define __pa(vaddr)             ((unsigned long)(vaddr))
>  #define __va(paddr)             ((void *)((unsigned long)(paddr)))
> diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
> index 73e1e03317b4..d02058f96bcf 100644
> --- a/arch/s390/include/asm/page.h
> +++ b/arch/s390/include/asm/page.h
> @@ -74,7 +74,7 @@ static inline void copy_page(void *to, void *from)
>  #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
>
>  #define vma_alloc_zeroed_movable_folio(vma, vaddr) \
> -       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
> +       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
>
>  /*
>   * These are used to make use of C type-checking..
> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> index 1b93ff80b43b..c9fe207916f4 100644
> --- a/arch/x86/include/asm/page.h
> +++ b/arch/x86/include/asm/page.h
> @@ -35,7 +35,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
>  }
>
>  #define vma_alloc_zeroed_movable_folio(vma, vaddr) \
> -       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
> +       vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
>
>  #ifndef __pa
>  #define __pa(x)         __phys_addr((unsigned long)(x))
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index a951de920e20..b65724c3427d 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -306,7 +306,7 @@ struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
>  struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>                 struct mempolicy *mpol, pgoff_t ilx, int nid);
>  struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
> -               unsigned long addr, bool hugepage);
> +               unsigned long addr);
>  #else
>  static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
>  {
> @@ -326,7 +326,7 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
>  {
>         return folio_alloc_noprof(gfp, order);
>  }
> -#define vma_alloc_folio_noprof(gfp, order, vma, addr, hugepage)         \
> +#define vma_alloc_folio_noprof(gfp, order, vma, addr)                   \
>         folio_alloc_noprof(gfp, order)
>  #endif
>
> @@ -341,7 +341,7 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
>  static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
>                 struct vm_area_struct *vma, unsigned long addr)
>  {
> -       struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr, false);
> +       struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr);
>
>         return &folio->page;
>  }
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 930a591b9b61..bec9bd715acf 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -226,7 +226,7 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>  {
>         struct folio *folio;
>
> -       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr, false);
> +       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
>         if (folio)
>                 clear_user_highpage(&folio->page, vaddr);
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 30912a93f7dc..7f254fd2a3a0 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1342,7 +1342,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>                 return ret;
>         }
>         gfp = vma_thp_gfp_mask(vma);
> -       folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> +       folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr);
>         if (unlikely(!folio)) {
>                 count_vm_event(THP_FAULT_FALLBACK);
>                 count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> diff --git a/mm/ksm.c b/mm/ksm.c
> index eea5a426be2c..4d482d011745 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2970,7 +2970,7 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
>         if (!folio_test_uptodate(folio))
>                 return folio;           /* let do_swap_page report the error */
>
> -       new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
> +       new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr);
>         if (new_folio &&
>             mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL)) {
>                 folio_put(new_folio);
> diff --git a/mm/memory.c b/mm/memory.c
> index fe21bd3beff5..9ba1fcdb9bb5 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1059,8 +1059,7 @@ static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
>         if (need_zero)
>                 new_folio = vma_alloc_zeroed_movable_folio(vma, addr);
>         else
> -               new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
> -                               addr, false);
> +               new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr);
>
>         if (!new_folio)
>                 return NULL;
> @@ -4017,8 +4016,7 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
>         struct folio *folio;
>         swp_entry_t entry;
>
> -       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
> -                       vmf->address, false);
> +       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address);
>         if (!folio)
>                 return NULL;
>
> @@ -4174,7 +4172,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>         gfp = vma_thp_gfp_mask(vma);
>         while (orders) {
>                 addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> -               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +               folio = vma_alloc_folio(gfp, order, vma, addr);
>                 if (folio) {
>                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>                                                             gfp, entry))
> @@ -4716,7 +4714,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>         gfp = vma_thp_gfp_mask(vma);
>         while (orders) {
>                 addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> -               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +               folio = vma_alloc_folio(gfp, order, vma, addr);
>                 if (folio) {
>                         if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>                                 count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index a8aa83a97ad1..bb37cd1a51d8 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2290,7 +2290,6 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>   * @order: Order of the folio.
>   * @vma: Pointer to VMA.
>   * @addr: Virtual address of the allocation.  Must be inside @vma.
> - * @hugepage: Unused (was: For hugepages try only preferred node if possible).
>   *
>   * Allocate a folio for a specific address in @vma, using the appropriate
>   * NUMA policy.  The caller must hold the mmap_lock of the mm_struct of the
> @@ -2301,7 +2300,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>   * Return: The folio on success or NULL if allocation fails.
>   */
>  struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
> -               unsigned long addr, bool hugepage)
> +               unsigned long addr)
>  {
>         struct mempolicy *pol;
>         pgoff_t ilx;
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 48b87c62fc3d..60a0be33766f 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -251,7 +251,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>         if (!*foliop) {
>                 ret = -ENOMEM;
>                 folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
> -                               dst_addr, false);
> +                               dst_addr);
>                 if (!folio)
>                         goto out;
>
> --
> 2.27.0
>