From: Barry Song <21cnbao@gmail.com>
To: usamaarif642@gmail.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, chengming.zhou@linux.dev,
    david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
    kanchana.p.sridhar@intel.com, kernel-team@meta.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org,
    nphamcs@gmail.com, riel@surriel.com, ryan.roberts@arm.com,
    senozhatsky@chromium.org, shakeel.butt@linux.dev, v-songbaohua@oppo.com,
    willy@infradead.org, ying.huang@intel.com, yosryahmed@google.com
Subject: Re: [RFC 0/4] mm: zswap: add support for zswapin of large folios
Date: Thu, 24 Oct 2024 12:35:48 +1300
Message-Id: <20241023233548.23348-1-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Thu, Oct 24, 2024 at 9:36 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Thu, Oct 24, 2024 at 8:47 AM Usama Arif wrote:
> >
> >
> >
> > On 23/10/2024 19:52, Barry Song wrote:
> > > On Thu, Oct 24, 2024 at 7:31 AM Usama Arif wrote:
> > >>
> > >>
> > >>
> > >> On 23/10/2024 19:02, Yosry Ahmed wrote:
> > >>> [..]
> > >>>>>> I suspect the regression occurs because you're running an edge case
> > >>>>>> where the memory cgroup stays nearly full most of the time (this isn't
> > >>>>>> an inherent issue with large folio swap-in). As a result, swapping in
> > >>>>>> mTHP quickly triggers a memcg overflow, causing a swap-out. The
> > >>>>>> next swap-in then recreates the overflow, leading to a repeating
> > >>>>>> cycle.
> > >>>>>>
> > >>>>>
> > >>>>> Yes, agreed! Looking at the swap counters, I think this is what is going
> > >>>>> on as well.
> > >>>>>
> > >>>>>> We need a way to stop the cup from repeatedly filling to the brim and
> > >>>>>> overflowing. While not a definitive fix, the following change might help
> > >>>>>> improve the situation:
> > >>>>>>
> > >>>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > >>>>>> index 17af08367c68..f2fa0eeb2d9a 100644
> > >>>>>> --- a/mm/memcontrol.c
> > >>>>>> +++ b/mm/memcontrol.c
> > >>>>>> @@ -4559,7 +4559,10 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> > >>>>>>                 memcg = get_mem_cgroup_from_mm(mm);
> > >>>>>>         rcu_read_unlock();
> > >>>>>>
> > >>>>>> -       ret = charge_memcg(folio, memcg, gfp);
> > >>>>>> +       if (folio_test_large(folio) && mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH)
> > >>>>>> +               ret = -ENOMEM;
> > >>>>>> +       else
> > >>>>>> +               ret = charge_memcg(folio, memcg, gfp);
> > >>>>>>
> > >>>>>>         css_put(&memcg->css);
> > >>>>>>         return ret;
> > >>>>>> }
> > >>>>>>
> > >>>>>
> > >>>>> The diff makes sense to me. Let me test later today and get back to you.
> > >>>>>
> > >>>>> Thanks!
> > >>>>>
> > >>>>>> Please confirm if it makes the kernel build with memcg limitation
> > >>>>>> faster. If so, let's work together to figure out an official patch :-)
> > >>>>>> The above code hasn't considered the parent memcg's overflow, so it's
> > >>>>>> not an ideal fix.
> > >>>>>>
> > >>>>
> > >>>> Thanks Barry, I think this fixes the regression, and even gives an improvement!
> > >>>> I think the below might be better to do:
> > >>>>
> > >>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > >>>> index c098fd7f5c5e..0a1ec55cc079 100644
> > >>>> --- a/mm/memcontrol.c
> > >>>> +++ b/mm/memcontrol.c
> > >>>> @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> > >>>>                 memcg = get_mem_cgroup_from_mm(mm);
> > >>>>         rcu_read_unlock();
> > >>>>
> > >>>> -       ret = charge_memcg(folio, memcg, gfp);
> > >>>> +       if (folio_test_large(folio) &&
> > >>>> +           mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
> > >>>> +               ret = -ENOMEM;
> > >>>> +       else
> > >>>> +               ret = charge_memcg(folio, memcg, gfp);
> > >>>>
> > >>>>         css_put(&memcg->css);
> > >>>>         return ret;
> > >>>>
> > >>>>
> > >>>> AMD 16K+32K THP=always
> > >>>> metric         mm-unstable      mm-unstable + large folio zswapin series    mm-unstable + large folio zswapin + no swap thrashing fix
> > >>>> real           1m23.038s        1m23.050s                                   1m22.704s
> > >>>> user           53m57.210s       53m53.437s                                  53m52.577s
> > >>>> sys            7m24.592s        7m48.843s                                   7m22.519s
> > >>>> zswpin         612070           999244                                      815934
> > >>>> zswpout        2226403          2347979                                     2054980
> > >>>> pgfault        20667366         20481728                                    20478690
> > >>>> pgmajfault     385887           269117                                      309702
> > >>>>
> > >>>> AMD 16K+32K+64K THP=always
> > >>>> metric         mm-unstable      mm-unstable + large folio zswapin series   mm-unstable + large folio zswapin + no swap thrashing fix
> > >>>> real           1m22.975s        1m23.266s                                  1m22.549s
> > >>>> user           53m51.302s       53m51.069s                                 53m46.471s
> > >>>> sys            7m40.168s        7m57.104s                                  7m25.012s
> > >>>> zswpin         676492           1258573                                    1225703
> > >>>> zswpout        2449839          2714767                                    2899178
> > >>>> pgfault        17540746         17296555                                   17234663
> > >>>> pgmajfault     429629           307495                                     287859
> > >>>>
> > >>>
> > >>> Thanks Usama and Barry for looking into this. It seems like this would
> > >>> fix a regression with large folio swapin regardless of zswap. Can the
> > >>> same result be reproduced on zram without this series?
> > >>
> > >>
> > >> Yes, it's a regression in large folio swapin support regardless of zswap/zram.
> > >>
> > >> Need to do 3 tests, one with probably the below diff to remove large folio support,
> > >> one with current upstream and one with upstream + swap thrashing fix.
> > >>
> > >> We only use zswap and don't have a zram setup (and I am a bit lazy to create one :)).
> > >> Any zram volunteers to try this?
> > >
> > > Hi Usama,
> > >
> > > I tried a quick experiment:
> > >
> > > echo 1 > /sys/module/zswap/parameters/enabled
> > > echo 0 > /sys/module/zswap/parameters/enabled
> > >
> > > This was to test the zRAM scenario. Enabling zswap even
> > > once disables mTHP swap-in. :)
> > >
> > > I noticed a similar regression with zRAM alone, but the change resolved
> > > the issue and even sped up the kernel build compared to the setup without
> > > mTHP swap-in.
> >
> > Thanks for trying, this is amazing!
> > >
> > > However, I’m still working on a proper patch to address this. The current
> > > approach:
> > >
> > > mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio))
> > >
> > > isn’t sufficient, as it doesn’t cover cases where group A contains group B, and
> > > we’re operating within group B. The problem occurs not at the boundary of
> > > group B but at the boundary of group A.
> >
> > I am not sure I completely followed this. As MEMCG_CHARGE_BATCH=64, if we are
> > trying to swapin a 16kB page, we basically check if at least 64/4 = 16 folios can be
> > charged to the cgroup, which is reasonable. If we try to swapin a 1M folio, we just
> > check if we can charge at least 1 folio. Are you saying that checking just 1 folio
> > is not enough in this case and can still cause thrashing, i.e. we should check more?
>
> My understanding is that cgroups are hierarchical. Even if we don’t
> hit the memory limit of the folio’s direct memcg, we could still reach
> the limit of one of its parent memcgs. Imagine a structure like:
>
> /sys/fs/cgroup/a/b/c/d
>
> If we’re compiling the kernel in d, there’s a chance that while d
> isn’t at its limit, its parents (c, b, or a) could be. Currently, the
> check only applies to d.

To clarify, I mean something like this:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 17af08367c68..cc6d21848ee8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4530,6 +4530,29 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
        return 0;
 }
 
+/*
+ * When the memory cgroup is nearly full, swapping in large folios can
+ * easily lead to swap thrashing, as the memcg operates on the edge of
+ * being full. We maintain a margin to allow for quick fallback to
+ * smaller folios during the swap-in process.
+ */
+static inline bool mem_cgroup_swapin_margin_protected(struct mem_cgroup *memcg,
+                                                      struct folio *folio)
+{
+       unsigned int nr;
+
+       if (!folio_test_large(folio))
+               return false;
+
+       nr = max_t(unsigned int, folio_nr_pages(folio), MEMCG_CHARGE_BATCH);
+       for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
+               if (mem_cgroup_margin(memcg) < nr)
+                       return true;
+       }
+
+       return false;
+}
+
 /**
  * mem_cgroup_swapin_charge_folio - Charge a newly allocated folio for swapin.
  * @folio: folio to charge.
@@ -4547,7 +4570,8 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 {
        struct mem_cgroup *memcg;
        unsigned short id;
-       int ret;
+       int ret = -ENOMEM;
+       bool margin_prot;
 
        if (mem_cgroup_disabled())
                return 0;
@@ -4557,9 +4581,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
        memcg = mem_cgroup_from_id(id);
        if (!memcg || !css_tryget_online(&memcg->css))
                memcg = get_mem_cgroup_from_mm(mm);
+       margin_prot = mem_cgroup_swapin_margin_protected(memcg, folio);
        rcu_read_unlock();
 
-       ret = charge_memcg(folio, memcg, gfp);
+       if (!margin_prot)
+               ret = charge_memcg(folio, memcg, gfp);
 
        css_put(&memcg->css);
        return ret;
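
For illustration only, the check above can be modelled outside the kernel
roughly as below. This is a toy userspace sketch, not kernel code: the
struct, the limits and the example numbers are made up, and only the
parent walk plus the max(folio_nr_pages, MEMCG_CHARGE_BATCH) threshold
mirror the patch.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for memcg objects, only to illustrate the hierarchy walk. */
struct toy_memcg {
        const char *name;
        long limit_pages;               /* memory.max in 4KiB pages */
        long usage_pages;               /* current charge in 4KiB pages */
        struct toy_memcg *parent;       /* NULL for the root */
};

#define TOY_CHARGE_BATCH 64             /* stands in for MEMCG_CHARGE_BATCH */

static long toy_margin(const struct toy_memcg *memcg)
{
        return memcg->limit_pages - memcg->usage_pages;
}

/*
 * Refuse a large-folio swap-in charge when any ancestor's free margin is
 * below max(folio_nr_pages, TOY_CHARGE_BATCH) -- the same idea as the
 * mem_cgroup_swapin_margin_protected() helper in the patch above.
 */
static bool toy_margin_protected(struct toy_memcg *memcg, long folio_nr_pages)
{
        long nr = folio_nr_pages > TOY_CHARGE_BATCH ? folio_nr_pages : TOY_CHARGE_BATCH;

        if (folio_nr_pages <= 1)        /* order-0 folios are never refused */
                return false;

        for (; memcg; memcg = memcg->parent)
                if (toy_margin(memcg) < nr)
                        return true;

        return false;
}

int main(void)
{
        /* Parent a is nearly full even though child d still has plenty of room. */
        struct toy_memcg a = { .name = "a", .limit_pages = 25600, .usage_pages = 25590 };
        struct toy_memcg d = { .name = "d", .limit_pages = 12800, .usage_pages = 1000, .parent = &a };

        /* 16KiB folio = 4 pages: threshold is max(4, 64) = 64 pages of margin. */
        printf("16KiB swap-in refused: %d\n", toy_margin_protected(&d, 4));
        /* 1MiB folio = 256 pages: threshold is 256 pages of margin. */
        printf("1MiB swap-in refused: %d\n", toy_margin_protected(&d, 256));
        return 0;
}

With the near-full parent a in this example, both the 16KiB and the 1MiB
swap-in are refused, so the swap-in would fall back to smaller folios
(ultimately order-0) instead of pushing the memcg over its limit and
triggering another round of swap-out.
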
> >
> > If we want to maintain consistency for all folios another option is
> > mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH * folio_nr_pages(folio)
> > but I think this is too extreme, we would be checking if 64M can be charged to
> > the cgroup just to swapin 1M.
> >
> > >
> > > I believe there’s still room for improvement. For example, if a 64KB charge
> > > attempt fails, there’s no need to waste time trying 32KB or 16KB. We can
> > > directly fall back to 4KB, as 32KB and 16KB will also fail based on our
> > > margin detection logic.
> > >
> >
> > Yes, that makes sense. Would something like below work to fix that:
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index c098fd7f5c5e..0a1ec55cc079 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> >                 memcg = get_mem_cgroup_from_mm(mm);
> >         rcu_read_unlock();
> >
> > -       ret = charge_memcg(folio, memcg, gfp);
> > +       if (folio_test_large(folio) &&
> > +           mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
> > +               ret = -ENOMEM;
> > +       else
> > +               ret = charge_memcg(folio, memcg, gfp);
> >
> >         css_put(&memcg->css);
> >         return ret;
> > diff --git a/mm/memory.c b/mm/memory.c
> > index fecdd044bc0b..b6ce6605dc63 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4123,6 +4123,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >         pte_t *pte;
> >         gfp_t gfp;
> >         int order;
> > +       int ret;
> >
> >         /*
> >          * If uffd is active for the vma we need per-page fault fidelity to
> > @@ -4170,9 +4171,13 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >                 addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> >                 folio = vma_alloc_folio(gfp, order, vma, addr, true);
> >                 if (folio) {
> > -                       if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> > -                                                           gfp, entry))
> > +                       ret = mem_cgroup_swapin_charge_folio(folio, vma->vm_mm, gfp, entry);
> > +                       if (!ret) {
> >                                 return folio;
> > +                       } else if (ret == -ENOMEM) {
> > +                               folio_put(folio);
> > +                               goto fallback;
> > +                       }
> >                         folio_put(folio);
> >                 }
> >                 order = next_order(&orders, order);
> >
>
> Yes, does it make your kernel build even faster?
>
> Thanks
> Barry