From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lance Yang <ioworker0@gmail.com>
Date: Tue, 2 Apr 2024 19:30:13 +0800
Subject: Re: [PATCH v5 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
	Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
	Barry Song <21cnbao@gmail.com>, Chris Li, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Barry Song
References: <20240327144537.4165578-1-ryan.roberts@arm.com> <20240327144537.4165578-7-ryan.roberts@arm.com>

On Tue, Apr 2, 2024 at 7:20 PM Ryan Roberts wrote:
>
> On 01/04/2024 13:25, Lance Yang wrote:
> > On Wed, Mar 27, 2024 at 10:46 PM Ryan Roberts wrote:
> >>
> >> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
> >> folio that is fully and contiguously mapped in the pageout/cold vm
> >> range. This change means that large folios will be maintained all the
> >> way to swap storage. This both improves performance during swap-out, by
> >> eliding the cost of splitting the folio, and sets us up nicely for
> >> maintaining the large folio when it is swapped back in (to be covered in
> >> a separate series).
> >>
> >> Folios that are not fully mapped in the target range are still split,
> >> but note that behavior is changed so that if the split fails for any
> >> reason (folio locked, shared, etc) we now leave it as is and move to the
> >> next pte in the range and continue work on the subsequent folios.
> >> Previously any failure of this sort would cause the entire operation to
> >> give up and no folios mapped at higher addresses were paged out or made
> >> cold. Given large folios are becoming more common, this old behavior
> >> would likely have led to wasted opportunities.
> >>
> >> While we are at it, change the code that clears young from the ptes to
> >> use ptep_test_and_clear_young(), via the new mkold_ptes() batch helper
> >> function. This is more efficient than get_and_clear/modify/set,
> >> especially for contpte mappings on arm64, where the old approach would
> >> require unfolding/refolding and the new approach can be done in place.
> >>
> >> Reviewed-by: Barry Song
> >> Signed-off-by: Ryan Roberts
> >> ---
> >>  include/linux/pgtable.h | 30 ++++++++++++++
> >>  mm/internal.h           | 12 +++++-
> >>  mm/madvise.c            | 88 ++++++++++++++++++++++++-----------------
> >>  mm/memory.c             |  4 +-
> >>  4 files changed, 93 insertions(+), 41 deletions(-)
> >>
> >> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >> index 8185939df1e8..391f56a1b188 100644
> >> --- a/include/linux/pgtable.h
> >> +++ b/include/linux/pgtable.h
> >> @@ -361,6 +361,36 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >>  }
> >>  #endif
> >>
> >> +#ifndef mkold_ptes
> >> +/**
> >> + * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
> >> + * @vma: VMA the pages are mapped into.
> >> + * @addr: Address the first page is mapped at.
> >> + * @ptep: Page table pointer for the first entry.
> >> + * @nr: Number of entries to mark old.
> >> + *
> >> + * May be overridden by the architecture; otherwise, implemented as a simple
> >> + * loop over ptep_test_and_clear_young().
> >> + *
> >> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> >> + * some PTEs might be write-protected.
> >> + *
> >> + * Context: The caller holds the page table lock.  The PTEs map consecutive
> >> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> >> + */
> >> +static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
> >> +               pte_t *ptep, unsigned int nr)
> >> +{
> >> +       for (;;) {
> >> +               ptep_test_and_clear_young(vma, addr, ptep);
> >
> > IIUC, if the first PTE is a CONT-PTE, then calling ptep_test_and_clear_young()
> > will clear the young bit for the entire contig range to avoid having
> > to unfold. So, the other PTEs within the range don't need to be cleared
> > again.
> >
> > Maybe we should consider overriding mkold_ptes for arm64?
>
> Yes, completely agree. I was saving this for a separate submission, though, to
> reduce the complexity of this initial series as much as possible. Let me know
> if you disagree and want to see that change as part of this series.

Feel free to save the change for a separate submission :)
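
Something along these lines is what I had in mind - just an untested sketch
to illustrate the idea, not a real implementation. It glosses over a start
address that is not aligned to a contpte block, and it assumes the existing
arm64 contpte helpers (pte_cont(), __ptep_get() and CONT_PTES):

/* arch/arm64/include/asm/pgtable.h - untested sketch only */
#define mkold_ptes mkold_ptes
static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
                              pte_t *ptep, unsigned int nr)
{
        while (nr) {
                /*
                 * For a contpte block, a single ptep_test_and_clear_young()
                 * already clears young on all CONT_PTES entries without
                 * unfolding, so step over the whole block rather than
                 * calling it once per PTE.
                 */
                unsigned int step = pte_cont(__ptep_get(ptep)) ? CONT_PTES : 1;

                if (step > nr)
                        step = nr;
                ptep_test_and_clear_young(vma, addr, ptep);
                ptep += step;
                addr += step * PAGE_SIZE;
                nr -= step;
        }
}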

>
> >
> > Thanks,
> > Lance
> >
> >> +               if (--nr == 0)
> >> +                       break;
> >> +               ptep++;
> >> +               addr += PAGE_SIZE;
> >> +       }
> >> +}
> >> +#endif
> >> +
> >>  #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> >>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
> >>  static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> >> diff --git a/mm/internal.h b/mm/internal.h
> >> index eadb79c3a357..efee8e4cd2af 100644
> >> --- a/mm/internal.h
> >> +++ b/mm/internal.h
> >> @@ -130,6 +130,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >>   * @flags: Flags to modify the PTE batch semantics.
> >>   * @any_writable: Optional pointer to indicate whether any entry except the
> >>   *                first one is writable.
> >> + * @any_young: Optional pointer to indicate whether any entry except the
> >> + *             first one is young.
> >>   *
> >>   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> >>   * pages of the same large folio.
> >> @@ -145,16 +147,18 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >>   */
> >>  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>                 pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> >> -               bool *any_writable)
> >> +               bool *any_writable, bool *any_young)
> >>  {
> >>         unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> >>         const pte_t *end_ptep = start_ptep + max_nr;
> >>         pte_t expected_pte, *ptep;
> >> -       bool writable;
> >> +       bool writable, young;
> >>         int nr;
> >>
> >>         if (any_writable)
> >>                 *any_writable = false;
> >> +       if (any_young)
> >> +               *any_young = false;
> >>
> >>         VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> >>         VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> >> @@ -168,6 +172,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>                 pte = ptep_get(ptep);
> >>                 if (any_writable)
> >>                         writable = !!pte_write(pte);
> >> +               if (any_young)
> >> +                       young = !!pte_young(pte);
> >>                 pte = __pte_batch_clear_ignored(pte, flags);
> >>
> >>                 if (!pte_same(pte, expected_pte))
> >> @@ -183,6 +189,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>
> >>                 if (any_writable)
> >>                         *any_writable |= writable;
> >> +               if (any_young)
> >> +                       *any_young |= young;
> >>
> >>                 nr = pte_batch_hint(ptep, pte);
> >>                 expected_pte = pte_advance_pfn(expected_pte, nr);
> >> diff --git a/mm/madvise.c b/mm/madvise.c
> >> index 070bedb4996e..bd00b83e7c50 100644
> >> --- a/mm/madvise.c
> >> +++ b/mm/madvise.c
> >> @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>         LIST_HEAD(folio_list);
> >>         bool pageout_anon_only_filter;
> >>         unsigned int batch_count = 0;
> >> +       int nr;
> >>
> >>         if (fatal_signal_pending(current))
> >>                 return -EINTR;
> >> @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>                 return 0;
> >>         flush_tlb_batched_pending(mm);
> >>         arch_enter_lazy_mmu_mode();
> >> -       for (; addr < end; pte++, addr += PAGE_SIZE) {
> >> +       for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
> >> +               nr = 1;
> >>                 ptent = ptep_get(pte);
> >>
> >>                 if (++batch_count == SWAP_CLUSTER_MAX) {
> >> @@ -447,55 +449,67 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>                         continue;
> >>
> >>                 /*
> >> -                * Creating a THP page is expensive so split it only if we
> >> -                * are sure it's worth. Split it if we are only owner.
> >> +                * If we encounter a large folio, only split it if it is not
> >> +                * fully mapped within the range we are operating on. Otherwise
> >> +                * leave it as is so that it can be swapped out whole. If we
> >> +                * fail to split a folio, leave it in place and advance to the
> >> +                * next pte in the range.
> >>                  */
> >>                 if (folio_test_large(folio)) {
> >> -                       int err;
> >> -
> >> -                       if (folio_likely_mapped_shared(folio))
> >> -                               break;
> >> -                       if (pageout_anon_only_filter && !folio_test_anon(folio))
> >> -                               break;
> >> -                       if (!folio_trylock(folio))
> >> -                               break;
> >> -                       folio_get(folio);
> >> -                       arch_leave_lazy_mmu_mode();
> >> -                       pte_unmap_unlock(start_pte, ptl);
> >> -                       start_pte = NULL;
> >> -                       err = split_folio(folio);
> >> -                       folio_unlock(folio);
> >> -                       folio_put(folio);
> >> -                       if (err)
> >> -                               break;
> >> -                       start_pte = pte =
> >> -                               pte_offset_map_lock(mm, pmd, addr, &ptl);
> >> -                       if (!start_pte)
> >> -                               break;
> >> -                       arch_enter_lazy_mmu_mode();
> >> -                       pte--;
> >> -                       addr -= PAGE_SIZE;
> >> -                       continue;
> >> +                       const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> >> +                               FPB_IGNORE_SOFT_DIRTY;
> >> +                       int max_nr = (end - addr) / PAGE_SIZE;
> >> +                       bool any_young;
> >> +
> >> +                       nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> >> +                                            fpb_flags, NULL, &any_young);
> >> +                       if (any_young)
> >> +                               ptent = pte_mkyoung(ptent);
> >> +
> >> +                       if (nr < folio_nr_pages(folio)) {
> >> +                               int err;
> >> +
> >> +                               if (folio_likely_mapped_shared(folio))
> >> +                                       continue;
> >> +                               if (pageout_anon_only_filter && !folio_test_anon(folio))
> >> +                                       continue;
> >> +                               if (!folio_trylock(folio))
> >> +                                       continue;
> >> +                               folio_get(folio);
> >> +                               arch_leave_lazy_mmu_mode();
> >> +                               pte_unmap_unlock(start_pte, ptl);
> >> +                               start_pte = NULL;
> >> +                               err = split_folio(folio);
> >> +                               folio_unlock(folio);
> >> +                               folio_put(folio);
> >> +                               if (err)
> >> +                                       continue;
> >> +                               start_pte = pte =
> >> +                                       pte_offset_map_lock(mm, pmd, addr, &ptl);
> >> +                               if (!start_pte)
> >> +                                       break;
> >> +                               arch_enter_lazy_mmu_mode();
> >> +                               nr = 0;
> >> +                               continue;
> >> +                       }
> >>                 }
> >>
> >>                 /*
> >>                  * Do not interfere with other mappings of this folio and
> >> -                * non-LRU folio.
> >> +                * non-LRU folio. If we have a large folio at this point, we
> >> +                * know it is fully mapped so if its mapcount is the same as its
> >> +                * number of pages, it must be exclusive.
> >>                  */
> >> -               if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
> >> +               if (!folio_test_lru(folio) ||
> >> +                   folio_mapcount(folio) != folio_nr_pages(folio))
> >>                         continue;
> >>
> >>                 if (pageout_anon_only_filter && !folio_test_anon(folio))
> >>                         continue;
> >>
> >> -               VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >> -
> >>                 if (!pageout && pte_young(ptent)) {
> >> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >> -                                                       tlb->fullmm);
> >> -                       ptent = pte_mkold(ptent);
> >> -                       set_pte_at(mm, addr, pte, ptent);
> >> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >> +                       mkold_ptes(vma, addr, pte, nr);
> >> +                       tlb_remove_tlb_entries(tlb, pte, nr, addr);
> >>                 }
> >>
> >>                 /*
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index 9d844582ba38..b5b48f4cf2af 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> >>                 flags |= FPB_IGNORE_SOFT_DIRTY;
> >>
> >>         nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> >> -                            &any_writable);
> >> +                            &any_writable, NULL);
> >>         folio_ref_add(folio, nr);
> >>         if (folio_test_anon(folio)) {
> >>                 if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> >> @@ -1553,7 +1553,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> >>          */
> >>         if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> >>                 nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> >> -                                    NULL);
> >> +                                    NULL, NULL);
> >>
> >>                 zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> >>                                        addr, details, rss, force_flush,
> >> --
> >> 2.25.1
> >>
>
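
By the way, for anyone who wants to exercise this path from userspace, a
minimal test might look like the sketch below. It is untested and makes a
few assumptions: a kernel with mTHP support and at least one mTHP size
enabled (e.g. echo always >
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled), swap
configured, and MADV_PAGEOUT available (Linux 5.4+). With an mTHP size
enabled, the anonymous faults below allocate PTE-mapped large folios, and
since the madvise() call covers the whole range, every folio is fully
mapped and should now be paged out without being split.

#include <err.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (1UL << 20)	/* 1 MiB of anonymous memory */

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		err(1, "mmap");

	/* Fault the range in so large folios are allocated. */
	memset(p, 0x5a, LEN);

	/* Fully mapped large folios: reclaimed whole, no split. */
	if (madvise(p, LEN, MADV_PAGEOUT))
		err(1, "MADV_PAGEOUT");

	return 0;
}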