From: Barry Song <21cnbao@gmail.com>
Date: Thu, 7 Mar 2024 20:01:51 +0800
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
To: David Hildenbrand
Cc: Ryan Roberts, Lance Yang, Vishal Moola, akpm@linux-foundation.org,
	zokeefe@google.com, shy828301@gmail.com, mhocko@suse.com,
	fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
	songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240307061425.21013-1-ioworker0@gmail.com>
	<03458c20-5544-411b-9b8d-b4600a9b802f@arm.com>
	<501c9f77-1459-467a-8619-78e86b46d300@arm.com>
	<8f84c7d6-982a-4933-a7a7-3f640df64991@redhat.com>

On Thu, Mar 7, 2024 at 7:45 PM David Hildenbrand wrote:
>
> On 07.03.24 12:42, Ryan Roberts wrote:
> > On 07/03/2024 11:31, David Hildenbrand wrote:
> >> On 07.03.24 12:26, Barry Song wrote:
> >>> On Thu, Mar 7, 2024 at 7:13 PM Ryan Roberts wrote:
> >>>>
> >>>> On 07/03/2024 10:54, David Hildenbrand wrote:
> >>>>> On 07.03.24 11:54, David Hildenbrand wrote:
> >>>>>> On 07.03.24 11:50, Ryan Roberts wrote:
> >>>>>>> On 07/03/2024 09:33, Barry Song wrote:
> >>>>>>>> On Thu, Mar 7, 2024 at 10:07 PM Ryan Roberts wrote:
> >>>>>>>>>
> >>>>>>>>> On 07/03/2024 08:10, Barry Song wrote:
> >>>>>>>>>> On Thu, Mar 7, 2024 at 9:00 PM Lance Yang wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Hey Barry,
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks for taking time to review!
> >>>>>>>>>>>
> >>>>>>>>>>> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@gmail.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>> [...]
> >>>>>>>>>>>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> >>>>>>>>>>>>> +                                                 struct folio *folio,
> >>>>>>>>>>>>> +                                                 pte_t *start_pte)
> >>>>>>>>>>>>> +{
> >>>>>>>>>>>>> +       int nr_pages = folio_nr_pages(folio);
> >>>>>>>>>>>>> +       fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +       for (int i = 0; i < nr_pages; i++)
> >>>>>>>>>>>>> +               if (page_mapcount(folio_page(folio, i)) != 1)
> >>>>>>>>>>>>> +                       return false;
> >>>>>>>>>>>>
> >>>>>>>>>>>> We have moved to folio_estimated_sharers(); although it is not
> >>>>>>>>>>>> precise, we no longer do this check with a loop that depends on
> >>>>>>>>>>>> every subpage's mapcount.
> >>>>>>>>>>>
> >>>>>>>>>>> If we don't check the subpage's mapcount, and there is a CoW folio
> >>>>>>>>>>> associated with this folio and the CoW folio is smaller than this
> >>>>>>>>>>> folio, should we still mark this folio as lazyfree?
> >>>>>>>>>>
> >>>>>>>>>> I agree, this is true. However, we've somehow accepted the fact that
> >>>>>>>>>> folio_likely_mapped_shared() can produce false negatives or false
> >>>>>>>>>> positives to balance the overhead. So I really don't know :-)
> >>>>>>>>>>
> >>>>>>>>>> Maybe David and Vishal can give some comments here.
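For reference, folio_estimated_sharers() is roughly the following
one-liner at the moment (a paraphrase of mm/internal.h around the v6.8
cycle; double-check your tree for the exact form). It samples only the
first subpage, which is exactly why it can be fooled when other
subpages are mapped differently:

static inline int folio_estimated_sharers(struct folio *folio)
{
	/* Sample only the first subpage's mapcount: cheap, but
	 * imprecise for partially remapped or partially CoWed
	 * large folios. */
	return page_mapcount(folio_page(folio, 0));
}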
> >>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>> BTW, do we need to rebase our work against David's changes[1]?
> >>>>>>>>>>>>
> >>>>>>>>>>>> [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
> >>>>>>>>>>>
> >>>>>>>>>>> Yes, we should rebase our work against David's changes.
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +       return nr_pages == folio_pte_batch(folio, addr, start_pte,
> >>>>>>>>>>>>> +                                          ptep_get(start_pte),
> >>>>>>>>>>>>> +                                          nr_pages, flags, NULL);
> >>>>>>>>>>>>> +}
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>>  static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>>>>>>>>>                                   unsigned long end, struct mm_walk *walk)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>>>>>>>>>                  */
> >>>>>>>>>>>>>                 if (folio_test_large(folio)) {
> >>>>>>>>>>>>>                         int err;
> >>>>>>>>>>>>> +                       unsigned long next_addr, align;
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> -                       if (folio_estimated_sharers(folio) != 1)
> >>>>>>>>>>>>> -                               break;
> >>>>>>>>>>>>> -                       if (!folio_trylock(folio))
> >>>>>>>>>>>>> -                               break;
> >>>>>>>>>>>>> +                       if (folio_estimated_sharers(folio) != 1 ||
> >>>>>>>>>>>>> +                           !folio_trylock(folio))
> >>>>>>>>>>>>> +                               goto skip_large_folio;
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> I don't think we can skip all the PTEs for nr_pages, as some of them
> >>>>>>>>>>>> might be pointing to other folios.
> >>>>>>>>>>>>
> >>>>>>>>>>>> For example, for a large folio with 16 PTEs, you do MADV_DONTNEED(15-16)
> >>>>>>>>>>>> and then write the memory of PTE15 and PTE16; you get page faults, so
> >>>>>>>>>>>> PTE15 and PTE16 will point to two different small folios. We can only
> >>>>>>>>>>>> skip when we are sure that nr_pages == folio_pte_batch().
> >>>>>>>>>>>
> >>>>>>>>>>> Agreed. Thanks for pointing that out.
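That scenario is easy to reproduce from userspace. A minimal sketch,
assuming 4 KiB pages and an mTHP configuration where the first fault is
backed by one 64 KiB folio (the demo only shows the address-space side
of the scenario, not the folio internals):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	const size_t pagesz = 4096, len = 16 * pagesz;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 1, len);	/* fault in; ideally one 64KiB folio */

	/* MADV_DONTNEED on PTE15-PTE16, then write them: the refaults
	 * allocate new small folios behind those two PTEs. */
	madvise(buf + 14 * pagesz, 2 * pagesz, MADV_DONTNEED);
	buf[14 * pagesz] = 2;
	buf[15 * pagesz] = 2;

	/* A later MADV_FREE over the whole range now walks PTEs backed
	 * by up to three different folios, so blindly skipping nr_pages
	 * PTEs after seeing the large folio would be wrong. */
	madvise(buf, len, MADV_FREE);
	return 0;
}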
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +                       align = folio_nr_pages(folio) * PAGE_SIZE;
> >>>>>>>>>>>>> +                       next_addr = ALIGN_DOWN(addr + align, align);
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +                       /*
> >>>>>>>>>>>>> +                        * If we mark only the subpages as lazyfree, or
> >>>>>>>>>>>>> +                        * cannot mark the entire large folio as lazyfree,
> >>>>>>>>>>>>> +                        * then just split it.
> >>>>>>>>>>>>> +                        */
> >>>>>>>>>>>>> +                       if (next_addr > end || next_addr - addr != align ||
> >>>>>>>>>>>>> +                           !can_mark_large_folio_lazyfree(addr, folio, pte))
> >>>>>>>>>>>>> +                               goto split_large_folio;
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +                       /*
> >>>>>>>>>>>>> +                        * Avoid unnecessary folio splitting if the large
> >>>>>>>>>>>>> +                        * folio is entirely within the given range.
> >>>>>>>>>>>>> +                        */
> >>>>>>>>>>>>> +                       folio_clear_dirty(folio);
> >>>>>>>>>>>>> +                       folio_unlock(folio);
> >>>>>>>>>>>>> +                       for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> >>>>>>>>>>>>> +                               ptent = ptep_get(pte);
> >>>>>>>>>>>>> +                               if (pte_young(ptent) || pte_dirty(ptent)) {
> >>>>>>>>>>>>> +                                       ptent = ptep_get_and_clear_full(
> >>>>>>>>>>>>> +                                               mm, addr, pte, tlb->fullmm);
> >>>>>>>>>>>>> +                                       ptent = pte_mkold(ptent);
> >>>>>>>>>>>>> +                                       ptent = pte_mkclean(ptent);
> >>>>>>>>>>>>> +                                       set_pte_at(mm, addr, pte, ptent);
> >>>>>>>>>>>>> +                                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>>>>>>>>>>> +                               }
> >>>>>>>>>>>>
> >>>>>>>>>>>> Can we do this in batches? For a CONT-PTE mapped large folio, you are
> >>>>>>>>>>>> unfolding and folding again. It seems quite expensive.
> >>>>>>>>>
> >>>>>>>>> I'm not convinced we should be doing this in batches. We want the initial
> >>>>>>>>> folio_pte_batch() to be as loose as possible regarding permissions so that
> >>>>>>>>> we reduce our chances of splitting folios to the min (e.g. ignore SW bits
> >>>>>>>>> like soft dirty, etc). I think it might be possible that some PTEs are RO
> >>>>>>>>> and others RW too (e.g. due to CoW - although with the current CoW impl,
> >>>>>>>>> probably not. But it's fragile to assume that). Anyway, if we do an initial
> >>>>>>>>> batch that ignores all
> >>>>>>>>
> >>>>>>>> You are correct. I believe this scenario could indeed occur. For instance,
> >>>>>>>> if process A forks process B and then unmaps itself, leaving B as the
> >>>>>>>> sole process owning the large folio. The current wp_page_reuse() function
> >>>>>>>> will reuse PTEs one by one as each specific subpage is written.
> >>>>>>>
> >>>>>>> Hmm - I thought it would only reuse if the total mapcount for the folio
> >>>>>>> was 1. And since it is a large folio with each page mapped once in proc B,
> >>>>>>> I thought every subpage write would cause a copy except the last one? I
> >>>>>>> haven't looked at the code for a while. But I had it in my head that this
> >>>>>>> is an area we need to improve for mTHP.
> >>>
> >>> So sad I am wrong again 😢
> >>>
> >>>>>>
> >>>>>> wp_page_reuse() will currently reuse a PTE part of a large folio only if
> >>>>>> a single PTE remains mapped (refcount == 0).
> >>>>>
> >>>>> ^ == 1
> >>>
> >>> Seems this needs improvement. It is a waste that the last subpage can
> >>
> >> My take on that is WIP:
> >>
> >> https://lore.kernel.org/all/20231124132626.235350-1-david@redhat.com/T/#u
> >>
> >>> reuse the whole large folio. I was doing it in a quite different way:
> >>> if the large folio had only one subpage left, I would do the copy and
> >>> release the large folio[1]; and if I could reuse the whole large folio
> >>> with CONT-PTE, I would reuse the whole large folio[2]. In mainline,
> >>> we don't have this CONT-PTE luxury exposed to mm, so I guess we cannot
> >>> do [2] easily, but [1] seems to be an optimization.
> >>
> >> Yeah, I had essentially the same idea: just free up the large folio if most
> >> of the stuff is unmapped. But that's rather a corner-case optimization, so I
> >> did not proceed with that.
> >>
> >
> > I'm not sure it's a corner case, really? - process forks, then both parent
> > and child write to all pages in what was previously a fully & contiguously
> > mapped large folio?
>
> Well, with 2 MiB my assumption was that while it can happen, it's rather
> rare. With smaller THP it might get more likely, agreed.
>
> >
> > Regardless, why is it an optimization to do the copy for the last subpage
> > and synchronously free the large folio? It's already partially mapped, so
> > it is on the deferred split list and can be split if memory is tight.

We don't want the reclamation overhead later, and we want the memory
immediately available to others. Reclamation always causes latency and
affects user experience, and split_folio() is not cheap :-) If the
number of this kind of large folio is huge, the waste can be huge for
a while.

It is not a corner case for large-folio swap-in: while someone writes
one subpage, I swap in a large folio, and wp_reuse will immediately be
called. This can cause waste quite often.
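As an aside on what marking a range lazyfree actually buys: after
MADV_FREE, clean anonymous pages can be dropped by reclaim with no swap
I/O, and kernels that report it account the amount as "LazyFree:" in
/proc/<pid>/smaps. A minimal sketch to observe this, assuming Linux
with MADV_FREE support:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	const size_t len = 16 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 1, len);		/* dirty the anonymous pages */
	madvise(buf, len, MADV_FREE);	/* now reclaimable without I/O */

	/* Print the per-VMA LazyFree accounting for this process. */
	char line[256];
	FILE *f = fopen("/proc/self/smaps", "r");
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "LazyFree:"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}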
One outcome of this discussion is that I realize I should investigate
this issue immediately in the swap-in series, as my off-tree code has
optimized reuse but mainline hasn't.

> At least for 2 MiB THP, it might make sense to make that large folio
> available immediately again, even without memory pressure. Even
> compaction would not compact it.

It is also true for 64KiB: we want other processes to be able to
allocate 64KiB successfully as often as possible, and to reduce the
rate of falling back to small folios. By releasing the 64KiB directly
to the buddy allocator rather than splitting it and returning 15 * 4KiB
in the shrinker, we reduce buddy fragmentation too.

> --
> Cheers,
>
> David / dhildenb

Thanks
Barry