linux-mm.kvack.org archive mirror
From: Usama Arif <usamaarif642@gmail.com>
To: Yosry Ahmed <yosryahmed@google.com>, Barry Song <21cnbao@gmail.com>
Cc: senozhatsky@chromium.org, minchan@kernel.org,
	hanchuanhua@oppo.com, v-songbaohua@oppo.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	hannes@cmpxchg.org, david@redhat.com, willy@infradead.org,
	kanchana.p.sridhar@intel.com, nphamcs@gmail.com,
	chengming.zhou@linux.dev, ryan.roberts@arm.com,
	ying.huang@intel.com, riel@surriel.com, shakeel.butt@linux.dev,
	kernel-team@meta.com, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org
Subject: Re: [RFC 0/4] mm: zswap: add support for zswapin of large folios
Date: Wed, 23 Oct 2024 19:31:46 +0100	[thread overview]
Message-ID: <3dca2498-363c-4ba5-a7e6-80c5e5532db5@gmail.com> (raw)
In-Reply-To: <CAJD7tkbrjV3Px8h1p950VZFi9FnzxZPn2Kg+vZD69eEcsQvtxg@mail.gmail.com>



On 23/10/2024 19:02, Yosry Ahmed wrote:
> [..]
>>>> I suspect the regression occurs because you're running an edge case
>>>> where the memory cgroup stays nearly full most of the time (this isn't
>>>> an inherent issue with large folio swap-in). As a result, swapping in
>>>> mTHP quickly triggers a memcg overflow, causing a swap-out. The
>>>> next swap-in then recreates the overflow, leading to a repeating
>>>> cycle.
>>>>
>>>
>>> Yes, agreed! Looking at the swap counters, I think this is what is going
>>> on as well.
>>>
>>>> We need a way to stop the cup from repeatedly filling to the brim and
>>>> overflowing. While not a definitive fix, the following change might help
>>>> improve the situation:
>>>>
>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>> index 17af08367c68..f2fa0eeb2d9a 100644
>>>> --- a/mm/memcontrol.c
>>>> +++ b/mm/memcontrol.c
>>>> @@ -4559,7 +4559,10 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>>>>                 memcg = get_mem_cgroup_from_mm(mm);
>>>>         rcu_read_unlock();
>>>>
>>>> -       ret = charge_memcg(folio, memcg, gfp);
>>>> +       if (folio_test_large(folio) && mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH)
>>>> +               ret = -ENOMEM;
>>>> +       else
>>>> +               ret = charge_memcg(folio, memcg, gfp);
>>>>
>>>>         css_put(&memcg->css);
>>>>         return ret;
>>>> }
>>>>
>>>
>>> The diff makes sense to me. Let me test later today and get back to you.
>>>
>>> Thanks!
>>>
>>>> Please confirm whether it makes the kernel build with the memcg limitation
>>>> faster. If so, let's work together to figure out an official patch :-)
>>>> The above code doesn't consider the parent memcg's overflow, so it's not
>>>> an ideal fix.
>>>>
>>
>> Thanks Barry, I think this fixes the regression, and even gives an improvement!
>> I think the below might be a slightly better approach:
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index c098fd7f5c5e..0a1ec55cc079 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>>                 memcg = get_mem_cgroup_from_mm(mm);
>>         rcu_read_unlock();
>>
>> -       ret = charge_memcg(folio, memcg, gfp);
>> +       if (folio_test_large(folio) &&
>> +           mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
>> +               ret = -ENOMEM;
>> +       else
>> +               ret = charge_memcg(folio, memcg, gfp);
>>
>>         css_put(&memcg->css);
>>         return ret;
>>
>>
>> AMD 16K+32K THP=always
>> metric         mm-unstable      mm-unstable + large folio zswapin series    mm-unstable + large folio zswapin + no swap thrashing fix
>> real           1m23.038s        1m23.050s                                   1m22.704s
>> user           53m57.210s       53m53.437s                                  53m52.577s
>> sys            7m24.592s        7m48.843s                                   7m22.519s
>> zswpin         612070           999244                                      815934
>> zswpout        2226403          2347979                                     2054980
>> pgfault        20667366         20481728                                    20478690
>> pgmajfault     385887           269117                                      309702
>>
>> AMD 16K+32K+64K THP=always
>> metric         mm-unstable      mm-unstable + large folio zswapin series   mm-unstable + large folio zswapin + no swap thrashing fix
>> real           1m22.975s        1m23.266s                                  1m22.549s
>> user           53m51.302s       53m51.069s                                 53m46.471s
>> sys            7m40.168s        7m57.104s                                  7m25.012s
>> zswpin         676492           1258573                                    1225703
>> zswpout        2449839          2714767                                    2899178
>> pgfault        17540746         17296555                                   17234663
>> pgmajfault     429629           307495                                     287859
>>
> 
> Thanks Usama and Barry for looking into this. It seems like this would
> fix a regression with large folio swapin regardless of zswap. Can the
> same result be reproduced on zram without this series?


Yes, it's a regression in large folio swapin support regardless of zswap/zram.

We need to do 3 tests: one with (probably) the below diff to remove large folio
swapin support, one with current upstream, and one with upstream + the swap
thrashing fix.

We only use zswap and don't have a zram setup (and I am a bit lazy to create one :)).
Any zram volunteers to try this?

diff --git a/mm/memory.c b/mm/memory.c
index fecdd044bc0b..62f6b087beb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4124,6 +4124,8 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
        gfp_t gfp;
        int order;
 
+       goto fallback;
+
        /*
         * If uffd is active for the vma we need per-page fault fidelity to
         * maintain the uffd semantics. 
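
On Barry's point further up about the parent memcg's overflow not being taken
into account: one option would be to require the same margin at every level of
the hierarchy. Rough, untested sketch (the helper name is made up; it would sit
next to mem_cgroup_margin() in mm/memcontrol.c):

/*
 * Untested sketch: refuse a large swapin folio if this memcg or any
 * ancestor is within MEMCG_CHARGE_BATCH pages (or the folio size,
 * whichever is larger) of its limit.
 */
static bool swapin_has_margin(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct mem_cgroup *iter;

	for (iter = memcg; iter; iter = parent_mem_cgroup(iter)) {
		if (mem_cgroup_margin(iter) <
		    max_t(unsigned long, MEMCG_CHARGE_BATCH, nr_pages))
			return false;
	}
	return true;
}

mem_cgroup_swapin_charge_folio() would then do:

	if (folio_test_large(folio) &&
	    !swapin_has_margin(memcg, folio_nr_pages(folio)))
		ret = -ENOMEM;
	else
		ret = charge_memcg(folio, memcg, gfp);

Whether walking every ancestor on each large swapin is acceptable, and whether
refusing the charge (so the fault falls back to smaller orders) is better than
trying reclaim first, is up for discussion.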



Thread overview: 37+ messages
2024-10-18 10:48 Usama Arif
2024-10-18 10:48 ` [RFC 1/4] mm/zswap: skip swapcache for swapping in zswap pages Usama Arif
2024-10-21 21:09   ` Yosry Ahmed
2024-10-22 19:49     ` Usama Arif
2024-10-23  0:45       ` Yosry Ahmed
2024-10-25 18:19         ` Nhat Pham
2024-10-25 19:10           ` Yosry Ahmed
2024-10-21 21:11   ` Yosry Ahmed
2024-10-22 19:59     ` Usama Arif
2024-10-23  0:47       ` Yosry Ahmed
2024-10-18 10:48 ` [RFC 2/4] mm/zswap: modify zswap_decompress to accept page instead of folio Usama Arif
2024-10-18 10:48 ` [RFC 3/4] mm/zswap: add support for large folio zswapin Usama Arif
2024-10-21  5:49   ` Barry Song
2024-10-21 10:44     ` Usama Arif
2024-10-21 10:55       ` Barry Song
2024-10-21 12:21         ` Usama Arif
2024-10-21 20:28           ` Barry Song
2024-10-21 20:57             ` Usama Arif
2024-10-21 21:34               ` Yosry Ahmed
2024-10-18 10:48 ` [RFC 4/4] mm/zswap: count successful large folio zswap loads Usama Arif
2024-10-21  5:09 ` [RFC 0/4] mm: zswap: add support for zswapin of large folios Barry Song
2024-10-21 10:40   ` Usama Arif
2024-10-22 15:26     ` Usama Arif
2024-10-22 20:46       ` Barry Song
2024-10-22 21:17         ` Usama Arif
2024-10-22 22:07           ` Barry Song
2024-10-23 10:26             ` Barry Song
2024-10-23 10:48               ` Usama Arif
2024-10-23 13:08                 ` Usama Arif
2024-10-23 18:02                   ` Yosry Ahmed
2024-10-23 18:31                     ` Usama Arif [this message]
2024-10-23 18:52                       ` Barry Song
2024-10-23 19:47                         ` Usama Arif
2024-10-23 20:36                           ` Barry Song
2024-10-23 23:35                           ` Barry Song
2024-10-24 14:29                             ` Johannes Weiner
2024-10-24 17:48                               ` Barry Song
