From: Lance Yang
Date: Fri, 27 Jun 2025 22:34:44 +0800
Subject: Re: [PATCH v1 4/4] mm: remove boolean output parameters from folio_pte_batch_ext()
References: <20250627115510.3273675-1-david@redhat.com> <20250627115510.3273675-5-david@redhat.com>
In-Reply-To: <20250627115510.3273675-5-david@redhat.com>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
 "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Zi Yan, Matthew Brost,
 Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
 Alistair Popple, Pedro Falcato, Rik van Riel, Harry Yoo

On Fri, Jun 27, 2025 at 7:55 PM David Hildenbrand wrote:
>
> Instead, let's just allow the caller to specify through flags whether
> bits should be merged into the original PTE.
>
> For the madvise() case, simplify by having only a single parameter for
> merging young+dirty. For madvise_cold_or_pageout_pte_range() merging the
> dirty bit is not required, but also not harmful. This code is, after all,
> not performance-critical enough to force every micro-optimization.

IIRC, this is work you've wanted to do for a long time - maybe even a
year ago? Less conditional logic is always a good thing ;)

Thanks,
Lance

>
> As we now have two pte_t * parameters, use PageTable() to make sure we
> are actually given a pointer to a copy of the PTE, not a pointer into
> an actual page table.
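The flag-based merging reads nicely to me. Just to double-check my
understanding of the new semantics, here is a tiny userspace model I put
together -- the pte_t, the FPB_MERGE_* values and all helpers below are
mock stand-ins of my own, not the kernel definitions; it only illustrates
how the bits requested via flags get folded back into the caller's copy
of the first PTE:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pte_t;            /* mock: bit 0=write, 1=young, 2=dirty */

#define MOCK_PTE_WRITE          (1UL << 0)
#define MOCK_PTE_YOUNG          (1UL << 1)
#define MOCK_PTE_DIRTY          (1UL << 2)

typedef unsigned int fpb_t;
#define FPB_MERGE_WRITE         ((fpb_t)1)   /* mock flag values */
#define FPB_MERGE_YOUNG_DIRTY   ((fpb_t)2)

/*
 * Fold the requested bits from the whole batch back into the caller's
 * copy of the first PTE (*ptentp), mirroring what folio_pte_batch_ext()
 * now does internally instead of reporting bool out-parameters.
 */
static void mock_merge_batch(const pte_t *batch, unsigned int nr,
                             pte_t *ptentp, fpb_t flags)
{
        bool any_writable = false, any_young = false, any_dirty = false;
        unsigned int i;

        for (i = 0; i < nr; i++) {
                if (flags & FPB_MERGE_WRITE)
                        any_writable |= !!(batch[i] & MOCK_PTE_WRITE);
                if (flags & FPB_MERGE_YOUNG_DIRTY) {
                        any_young |= !!(batch[i] & MOCK_PTE_YOUNG);
                        any_dirty |= !!(batch[i] & MOCK_PTE_DIRTY);
                }
        }

        if (any_writable)
                *ptentp |= MOCK_PTE_WRITE;
        if (any_young)
                *ptentp |= MOCK_PTE_YOUNG;
        if (any_dirty)
                *ptentp |= MOCK_PTE_DIRTY;
}

int main(void)
{
        /* first PTE old/clean/read-only; later entries young, dirty, writable */
        pte_t batch[3] = { 0, MOCK_PTE_YOUNG, MOCK_PTE_DIRTY | MOCK_PTE_WRITE };
        pte_t ptent = batch[0];

        mock_merge_batch(batch, 3, &ptent, FPB_MERGE_YOUNG_DIRTY);
        printf("young=%d dirty=%d write=%d\n",
               !!(ptent & MOCK_PTE_YOUNG), !!(ptent & MOCK_PTE_DIRTY),
               !!(ptent & MOCK_PTE_WRITE));
        /* prints "young=1 dirty=1 write=0": the write bit is left alone
         * because FPB_MERGE_WRITE was not requested, which matches the
         * madvise() usage in this patch. */
        return 0;
}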
>
> Signed-off-by: David Hildenbrand
> ---
>  mm/internal.h | 58 +++++++++++++++++++++++++++++++--------------------
>  mm/madvise.c  | 26 +++++------------------
>  mm/memory.c   |  8 ++-----
>  mm/util.c     |  2 +-
>  4 files changed, 43 insertions(+), 51 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 6000b683f68ee..fe69e21b34a24 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -208,6 +208,18 @@ typedef int __bitwise fpb_t;
>  /* Compare PTEs honoring the soft-dirty bit. */
>  #define FPB_HONOR_SOFT_DIRTY            ((__force fpb_t)BIT(1))
>
> +/*
> + * Merge PTE write bits: if any PTE in the batch is writable, modify the
> + * PTE at @ptentp to be writable.
> + */
> +#define FPB_MERGE_WRITE                 ((__force fpb_t)BIT(2))
> +
> +/*
> + * Merge PTE young and dirty bits: if any PTE in the batch is young or dirty,
> + * modify the PTE at @ptentp to be young or dirty, respectively.
> + */
> +#define FPB_MERGE_YOUNG_DIRTY           ((__force fpb_t)BIT(3))
> +
>  static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>  {
>          if (!(flags & FPB_HONOR_DIRTY))
> @@ -220,16 +232,11 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>  /**
>   * folio_pte_batch_ext - detect a PTE batch for a large folio
>   * @folio: The large folio to detect a PTE batch for.
> + * @vma: The VMA. Only relevant with FPB_MERGE_WRITE, otherwise can be NULL.
>   * @ptep: Page table pointer for the first entry.
> - * @pte: Page table entry for the first page.
> + * @ptentp: Pointer at a copy of the first page table entry.
>   * @max_nr: The maximum number of table entries to consider.
>   * @flags: Flags to modify the PTE batch semantics.
> - * @any_writable: Optional pointer to indicate whether any entry except the
> - *                first one is writable.
> - * @any_young: Optional pointer to indicate whether any entry except the
> - *                first one is young.
> - * @any_dirty: Optional pointer to indicate whether any entry except the
> - *                first one is dirty.
>   *
>   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
>   * pages of the same large folio in a single VMA and a single page table.
> @@ -242,28 +249,26 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   * must be limited by the caller so scanning cannot exceed a single VMA and
>   * a single page table.
>   *
> + * Depending on the FPB_MERGE_* flags, the pte stored at @ptentp will
> + * be modified.
> + *
>   * This function will be inlined to optimize based on the input parameters;
>   * consider using folio_pte_batch() instead if applicable.
>   *
>   * Return: the number of table entries in the batch.
>   */
>  static inline unsigned int folio_pte_batch_ext(struct folio *folio,
> -                pte_t *ptep, pte_t pte, unsigned int max_nr, fpb_t flags,
> -                bool *any_writable, bool *any_young, bool *any_dirty)
> +                struct vm_area_struct *vma, pte_t *ptep, pte_t *ptentp,
> +                unsigned int max_nr, fpb_t flags)
>  {
> +        bool any_writable = false, any_young = false, any_dirty = false;
> +        pte_t expected_pte, pte = *ptentp;
>          unsigned int nr, cur_nr;
> -        pte_t expected_pte;
> -
> -        if (any_writable)
> -                *any_writable = false;
> -        if (any_young)
> -                *any_young = false;
> -        if (any_dirty)
> -                *any_dirty = false;
>
>          VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>          VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
>          VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
> +        VM_WARN_ON(virt_addr_valid(ptentp) && PageTable(virt_to_page(ptentp)));
>
>          /* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
>          max_nr = min_t(unsigned long, max_nr,
> @@ -279,12 +284,12 @@ static inline unsigned int folio_pte_batch_ext(struct folio *folio,
>                  if (!pte_same(__pte_batch_clear_ignored(pte, flags), expected_pte))
>                          break;
>
> -                if (any_writable)
> -                        *any_writable |= pte_write(pte);
> -                if (any_young)
> -                        *any_young |= pte_young(pte);
> -                if (any_dirty)
> -                        *any_dirty |= pte_dirty(pte);
> +                if (flags & FPB_MERGE_WRITE)
> +                        any_writable |= pte_write(pte);
> +                if (flags & FPB_MERGE_YOUNG_DIRTY) {
> +                        any_young |= pte_young(pte);
> +                        any_dirty |= pte_dirty(pte);
> +                }
>
>                  cur_nr = pte_batch_hint(ptep, pte);
>                  expected_pte = pte_advance_pfn(expected_pte, cur_nr);
> @@ -292,6 +297,13 @@ static inline unsigned int folio_pte_batch_ext(struct folio *folio,
>                  nr += cur_nr;
>          }
>
> +        if (any_writable)
> +                *ptentp = pte_mkwrite(*ptentp, vma);
> +        if (any_young)
> +                *ptentp = pte_mkyoung(*ptentp);
> +        if (any_dirty)
> +                *ptentp = pte_mkdirty(*ptentp);
> +
>          return min(nr, max_nr);
>  }
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 9b9c35a398ed0..dce8f5e8555cb 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -344,13 +344,12 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
>
>  static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
>                                            struct folio *folio, pte_t *ptep,
> -                                          pte_t pte, bool *any_young,
> -                                          bool *any_dirty)
> +                                          pte_t *ptentp)
>  {
>          int max_nr = (end - addr) / PAGE_SIZE;
>
> -        return folio_pte_batch_ext(folio, ptep, pte, max_nr, 0, NULL,
> -                                   any_young, any_dirty);
> +        return folio_pte_batch_ext(folio, NULL, ptep, ptentp, max_nr,
> +                                   FPB_MERGE_YOUNG_DIRTY);
>  }
>
>  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> @@ -488,13 +487,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>                   * next pte in the range.
>                   */
>                  if (folio_test_large(folio)) {
> -                        bool any_young;
> -
> -                        nr = madvise_folio_pte_batch(addr, end, folio, pte,
> -                                                     ptent, &any_young, NULL);
> -                        if (any_young)
> -                                ptent = pte_mkyoung(ptent);
> -
> +                        nr = madvise_folio_pte_batch(addr, end, folio, pte, &ptent);
>                          if (nr < folio_nr_pages(folio)) {
>                                  int err;
>
> @@ -724,11 +717,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>                   * next pte in the range.
>                   */
>                  if (folio_test_large(folio)) {
> -                        bool any_young, any_dirty;
> -
> -                        nr = madvise_folio_pte_batch(addr, end, folio, pte,
> -                                                     ptent, &any_young, &any_dirty);
> -
> +                        nr = madvise_folio_pte_batch(addr, end, folio, pte, &ptent);
>                          if (nr < folio_nr_pages(folio)) {
>                                  int err;
>
> @@ -753,11 +742,6 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>                                  nr = 0;
>                                  continue;
>                          }
> -
> -                        if (any_young)
> -                                ptent = pte_mkyoung(ptent);
> -                        if (any_dirty)
> -                                ptent = pte_mkdirty(ptent);
>                  }
>
>                  if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 43d35d6675f2e..985d09bee44fd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -972,10 +972,9 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>                   pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
>                   int max_nr, int *rss, struct folio **prealloc)
>  {
> +        fpb_t flags = FPB_MERGE_WRITE;
>          struct page *page;
>          struct folio *folio;
> -        bool any_writable;
> -        fpb_t flags = 0;
>          int err, nr;
>
>          page = vm_normal_page(src_vma, addr, pte);
> @@ -995,8 +994,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>                  if (vma_soft_dirty_enabled(src_vma))
>                          flags |= FPB_HONOR_SOFT_DIRTY;
>
> -                nr = folio_pte_batch_ext(folio, src_pte, pte, max_nr, flags,
> -                                         &any_writable, NULL, NULL);
> +                nr = folio_pte_batch_ext(folio, src_vma, src_pte, &pte, max_nr, flags);
>                  folio_ref_add(folio, nr);
>                  if (folio_test_anon(folio)) {
>                          if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> @@ -1010,8 +1008,6 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>                          folio_dup_file_rmap_ptes(folio, page, nr, dst_vma);
>                          rss[mm_counter_file(folio)] += nr;
>                  }
> -                if (any_writable)
> -                        pte = pte_mkwrite(pte, src_vma);
>                  __copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte, pte,
>                                      addr, nr);
>                  return nr;
> diff --git a/mm/util.c b/mm/util.c
> index d29dcc135ad28..19d1a5814fac7 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -1197,6 +1197,6 @@ EXPORT_SYMBOL(compat_vma_mmap_prepare);
>  unsigned int folio_pte_batch(struct folio *folio, pte_t *ptep, pte_t pte,
>                  unsigned int max_nr)
>  {
> -        return folio_pte_batch_ext(folio, ptep, pte, max_nr, 0, NULL, NULL, NULL);
> +        return folio_pte_batch_ext(folio, NULL, ptep, &pte, max_nr, 0);
>  }
>  #endif /* CONFIG_MMU */
> --
> 2.49.0
>
>
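P.S. One more note, on the PageTable() sanity check: if I read it right,
the point is that folio_pte_batch_ext() now writes through @ptentp, so it
must be handed a local copy of the PTE and never a pointer into the page
table itself. Purely to illustrate the shape of that rule, a toy userspace
sketch (my own mock; the real check uses virt_addr_valid(), virt_to_page()
and PageTable(), which have no equivalent here):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef unsigned long pte_t;

/* Does @p point somewhere inside the (mock) page table @table? */
static bool points_into_table(const pte_t *p, const pte_t *table,
                              unsigned int nents)
{
        uintptr_t a = (uintptr_t)p;

        return a >= (uintptr_t)table && a < (uintptr_t)(table + nents);
}

static void batch_writes_through_copy(pte_t *table, unsigned int nents,
                                      pte_t *ptentp)
{
        /* mirrors the intent of the VM_WARN_ON(): passing a pointer into
         * the table itself would be a caller bug */
        assert(!points_into_table(ptentp, table, nents));
        *ptentp |= 1UL;         /* only the caller's copy is modified */
}

int main(void)
{
        pte_t table[4] = { 0 };
        pte_t ptent = table[0];         /* local copy of the first entry */

        batch_writes_through_copy(table, 4, &ptent);    /* fine */
        /* batch_writes_through_copy(table, 4, &table[0]); would trip it */
        return 0;
}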