From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 24 Oct 2024 10:29:42 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Barry Song <21cnbao@gmail.com>
Cc: usamaarif642@gmail.com, akpm@linux-foundation.org, chengming.zhou@linux.dev,
	david@redhat.com, hanchuanhua@oppo.com, kanchana.p.sridhar@intel.com,
	kernel-team@meta.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, minchan@kernel.org, nphamcs@gmail.com, riel@surriel.com,
	ryan.roberts@arm.com, senozhatsky@chromium.org, shakeel.butt@linux.dev,
	v-songbaohua@oppo.com, willy@infradead.org, ying.huang@intel.com,
	yosryahmed@google.com
Subject: Re: [RFC 0/4] mm: zswap: add support for zswapin of large folios
Message-ID: <20241024142942.GA279597@cmpxchg.org>
References: <20241023233548.23348-1-21cnbao@gmail.com>
In-Reply-To: <20241023233548.23348-1-21cnbao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Thu, Oct 24, 2024 at 12:35:48PM +1300, Barry Song wrote:
> On Thu, Oct 24, 2024 at 9:36 AM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Thu, Oct 24, 2024 at 8:47 AM Usama Arif wrote:
> > >
> > > On 23/10/2024 19:52, Barry Song wrote:
> > > > On Thu, Oct 24, 2024 at 7:31 AM Usama Arif wrote:
> > > >>
> > > >> On 23/10/2024 19:02, Yosry Ahmed wrote:
> > > >>> [..]
> > > >>>>>> I suspect the regression occurs because you're running an edge case
> > > >>>>>> where the memory cgroup stays nearly full most of the time (this isn't
> > > >>>>>> an inherent issue with large folio swap-in). As a result, swapping in
> > > >>>>>> mTHP quickly triggers a memcg overflow, causing a swap-out. The
> > > >>>>>> next swap-in then recreates the overflow, leading to a repeating
> > > >>>>>> cycle.
> > > >>>>>
> > > >>>>> Yes, agreed! Looking at the swap counters, I think this is what is going
> > > >>>>> on as well.
> > > >>>>>
> > > >>>>>> We need a way to stop the cup from repeatedly filling to the brim and
> > > >>>>>> overflowing. While not a definitive fix, the following change might help
> > > >>>>>> improve the situation:
> > > >>>>>>
> > > >>>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > >>>>>> index 17af08367c68..f2fa0eeb2d9a 100644
> > > >>>>>> --- a/mm/memcontrol.c
> > > >>>>>> +++ b/mm/memcontrol.c
> > > >>>>>> @@ -4559,7 +4559,10 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> > > >>>>>>                 memcg = get_mem_cgroup_from_mm(mm);
> > > >>>>>>         rcu_read_unlock();
> > > >>>>>>
> > > >>>>>> -       ret = charge_memcg(folio, memcg, gfp);
> > > >>>>>> +       if (folio_test_large(folio) &&
> > > >>>>>> +           mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH)
> > > >>>>>> +               ret = -ENOMEM;
> > > >>>>>> +       else
> > > >>>>>> +               ret = charge_memcg(folio, memcg, gfp);
> > > >>>>>>
> > > >>>>>>         css_put(&memcg->css);
> > > >>>>>>         return ret;
> > > >>>>>> }
> > > >>>>>
> > > >>>>> The diff makes sense to me. Let me test later today and get back to you.
> > > >>>>>
> > > >>>>> Thanks!
> > > >>>>>
> > > >>>>>> Please confirm if it makes the kernel build with memcg limitation
> > > >>>>>> faster. If so, let's work together to figure out an official patch :-)
> > > >>>>>> The above code hasn't considered the parent memcg's overflow, so it's
> > > >>>>>> not an ideal fix.
> > > >>>>
> > > >>>> Thanks Barry, I think this fixes the regression, and even gives an improvement!
> > > >>>> I think the below might be better to do:
> > > >>>>
> > > >>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > >>>> index c098fd7f5c5e..0a1ec55cc079 100644
> > > >>>> --- a/mm/memcontrol.c
> > > >>>> +++ b/mm/memcontrol.c
> > > >>>> @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> > > >>>>                 memcg = get_mem_cgroup_from_mm(mm);
> > > >>>>         rcu_read_unlock();
> > > >>>>
> > > >>>> -       ret = charge_memcg(folio, memcg, gfp);
> > > >>>> +       if (folio_test_large(folio) &&
> > > >>>> +           mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
> > > >>>> +               ret = -ENOMEM;
> > > >>>> +       else
> > > >>>> +               ret = charge_memcg(folio, memcg, gfp);
> > > >>>>
> > > >>>>         css_put(&memcg->css);
> > > >>>>         return ret;
> > > >>>>
> > > >>>> AMD 16K+32K THP=always
> > > >>>> metric         mm-unstable      mm-unstable + large folio zswapin series    mm-unstable + large folio zswapin + no swap thrashing fix
> > > >>>> real           1m23.038s        1m23.050s                                   1m22.704s
> > > >>>> user           53m57.210s       53m53.437s                                  53m52.577s
> > > >>>> sys            7m24.592s        7m48.843s                                   7m22.519s
> > > >>>> zswpin         612070           999244                                      815934
> > > >>>> zswpout        2226403          2347979                                     2054980
> > > >>>> pgfault        20667366         20481728                                    20478690
> > > >>>> pgmajfault     385887           269117                                      309702
> > > >>>>
> > > >>>> AMD 16K+32K+64K THP=always
> > > >>>> metric         mm-unstable      mm-unstable + large folio zswapin series   mm-unstable + large folio zswapin + no swap thrashing fix
> > > >>>> real           1m22.975s        1m23.266s                                  1m22.549s
> > > >>>> user           53m51.302s       53m51.069s                                 53m46.471s
> > > >>>> sys            7m40.168s        7m57.104s                                  7m25.012s
> > > >>>> zswpin         676492           1258573                                    1225703
> > > >>>> zswpout        2449839          2714767                                    2899178
> > > >>>> pgfault        17540746         17296555                                   17234663
> > > >>>> pgmajfault     429629           307495                                     287859
> > > >>>
> > > >>> Thanks Usama and Barry for looking into this. It seems like this would
> > > >>> fix a regression with large folio swapin regardless of zswap. Can the
> > > >>> same result be reproduced on zram without this series?
> > > >>
> > > >> Yes, it's a regression in large folio swapin support regardless of zswap/zram.
> > > >>
> > > >> Need to do 3 tests: one with probably the below diff to remove large folio
> > > >> support, one with current upstream, and one with upstream + the swap
> > > >> thrashing fix.
> > > >>
> > > >> We only use zswap and don't have a zram setup (and I am a bit lazy to create one :)).
> > > >> Any zram volunteers to try this?
> > > >
> > > > Hi Usama,
> > > >
> > > > I tried a quick experiment:
> > > >
> > > > echo 1 > /sys/module/zswap/parameters/enabled
> > > > echo 0 > /sys/module/zswap/parameters/enabled
> > > >
> > > > This was to test the zRAM scenario. Enabling zswap even
> > > > once disables mTHP swap-in. :)
> > > >
> > > > I noticed a similar regression with zRAM alone, but the change resolved
> > > > the issue and even sped up the kernel build compared to the setup without
> > > > mTHP swap-in.
> > >
> > > Thanks for trying, this is amazing!
> > >
> > > > However, I’m still working on a proper patch to address this. The current
> > > > approach:
> > > >
> > > >     mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio))
> > > >
> > > > isn’t sufficient, as it doesn’t cover cases where group A contains group B,
> > > > and we’re operating within group B. The problem occurs not at the boundary
> > > > of group B but at the boundary of group A.
> > >
> > > I am not sure I completely followed this. As MEMCG_CHARGE_BATCH=64, if we are
> > > trying to swap in a 16kB folio, we basically check if at least 64/4 = 16 folios
> > > can be charged to the cgroup, which is reasonable. If we try to swap in a 1M
> > > folio, we just check if we can charge at least 1 folio. Are you saying that
> > > checking just 1 folio is not enough in this case and can still cause thrashing,
> > > i.e. we should check more?
> >
> > My understanding is that cgroups are hierarchical. Even if we don’t hit the
> > memory limit of the folio’s direct memcg, we could still reach the limit of
> > one of its parent memcgs. Imagine a structure like:
> >
> > /sys/fs/cgroup/a/b/c/d
> >
> > If we’re compiling the kernel in d, there’s a chance that while d isn’t at
> > its limit, its parents (c, b, or a) could be. Currently, the check only
> > applies to d.
>
> To clarify, I mean something like this:
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 17af08367c68..cc6d21848ee8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4530,6 +4530,29 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
>  	return 0;
>  }
>
> +/*
> + * When the memory cgroup is nearly full, swapping in large folios can
> + * easily lead to swap thrashing, as the memcg operates on the edge of
> + * being full. We maintain a margin to allow for quick fallback to
> + * smaller folios during the swap-in process.
> + */
> +static inline bool mem_cgroup_swapin_margin_protected(struct mem_cgroup *memcg,
> +						      struct folio *folio)
> +{
> +	unsigned int nr;
> +
> +	if (!folio_test_large(folio))
> +		return false;
> +
> +	nr = max_t(unsigned int, folio_nr_pages(folio), MEMCG_CHARGE_BATCH);
> +	for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
> +		if (mem_cgroup_margin(memcg) < nr)
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
>  /**
>   * mem_cgroup_swapin_charge_folio - Charge a newly allocated folio for swapin.
>   * @folio: folio to charge.
> @@ -4547,7 +4570,8 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>  {
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
> -	int ret;
> +	int ret = -ENOMEM;
> +	bool margin_prot;
>
>  	if (mem_cgroup_disabled())
>  		return 0;
> @@ -4557,9 +4581,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>  	memcg = mem_cgroup_from_id(id);
>  	if (!memcg || !css_tryget_online(&memcg->css))
>  		memcg = get_mem_cgroup_from_mm(mm);
> +	margin_prot = mem_cgroup_swapin_margin_protected(memcg, folio);
>  	rcu_read_unlock();
>
> -	ret = charge_memcg(folio, memcg, gfp);
> +	if (!margin_prot)
> +		ret = charge_memcg(folio, memcg, gfp);
>
>  	css_put(&memcg->css);
>  	return ret;

I'm not quite following. The charging code DOES the margin check. If you
just want to avoid reclaim, pass gfp without __GFP_DIRECT_RECLAIM, and it
will return -ENOMEM if there is no margin.

alloc_swap_folio() passes the THP mask, which should not include the
reclaim flag by default (GFP_TRANSHUGE_LIGHT). Unless you run with
defrag=always. Is that what's going on?