linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: Chengming Zhou <chengming.zhou@linux.dev>,
	Mark Brown <broonie@kernel.org>
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
	roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Chengming Zhou <zhouchengming@bytedance.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v5 6/9] slub: Delay freezing of partial slabs
Date: Wed, 22 Nov 2023 14:19:41 +0100	[thread overview]
Message-ID: <42867716-5d3d-0252-5fd2-0f8b62498523@suse.cz> (raw)
In-Reply-To: <2af8c92f-0de8-4528-af43-6c6e8c1ebdf3@linux.dev>

On 11/22/23 12:54, Chengming Zhou wrote:
> On 2023/11/22 19:40, Vlastimil Babka wrote:
>> On 11/22/23 12:35, Chengming Zhou wrote:
>>> On 2023/11/22 17:37, Vlastimil Babka wrote:
>>>> On 11/20/23 19:49, Mark Brown wrote:
>>>>> On Thu, Nov 02, 2023 at 03:23:27AM +0000, chengming.zhou@linux.dev wrote:
>>>>>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>>>>>
>>>>>> Currently we freeze slabs when moving them from the node partial list
>>>>>> to the cpu partial list; this method needs two cmpxchg_double operations:
>>>>>>
>>>>>> 1. freeze slab (acquire_slab()) under the node list_lock
>>>>>> 2. get_freelist() when the slab is picked for use in ___slab_alloc()
>>>>>
>>>>> Recently -next has been failing to boot on a Raspberry Pi 3 with an arm
>>>>> multi_v7_defconfig and a NFS rootfs, a bisect appears to point to this
>>>>> patch (in -next as c8d312e039030edab25836a326bcaeb2a3d4db14) as having
>>>>> introduced the issue.  I've included the full bisect log below.
>>>>>
>>>>> When the problem occurs we see RCU stalls while logging in, for example:
>>>>
>>>> Can you try this, please?
>>>>
>>>
>>> Great! I manually disabled __CMPXCHG_DOUBLE to reproduce the problem,
>>> and this patch fixes the machine hang.
>>>
>>> BTW, I also did the performance testcase on the machine with 128 CPUs.
>>>
>>> stress-ng --rawpkt 128 --rawpkt-ops 100000000
>>>
>>> base    patched
>>> 2.22s   2.35s
>>> 2.21s   3.14s
>>> 2.19s   4.75s
>>>
>>> I found that the performance numbers of this atomic version are not stable.
>> 
>> That's surprisingly bad. Is that measured also with __CMPXCHG_DOUBLE
>> disabled, or just the patch? The PG_workingset flag change should be
> 
> The performance test is just the patch.
> 
>> uncontended as we are doing it under list_lock, and with __CMPXCHG_DOUBLE
>> there should be no interference from PG_locked.
>> 
> 
> Yes, I don't know. Maybe it's related to my kernel config, making the
> atomic operations more expensive? Will look again..

I doubt that can explain going from 2.19s to 4.75s; there must have been
some interference on the machine?

> And I also tested the atomic-optional version below and found the
> performance numbers much more stable.

This gets rather ugly and fragile so I'd maybe rather go back to the
__unused field approach :/

> diff --git a/mm/slub.c b/mm/slub.c
> index a307d319e82c..e11d34d51a14 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -531,7 +531,7 @@ static __always_inline void slab_unlock(struct slab *slab)
>         struct page *page = slab_page(slab);
> 
>         VM_BUG_ON_PAGE(PageTail(page), page);
> -       __bit_spin_unlock(PG_locked, &page->flags);
> +       bit_spin_unlock(PG_locked, &page->flags);
>  }
> 
>  static inline bool
> @@ -2136,12 +2136,18 @@ static inline bool slab_test_node_partial(const struct slab *slab)
> 
>  static inline void slab_set_node_partial(struct slab *slab)
>  {
> -       __set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +       if (slab->slab_cache->flags & __CMPXCHG_DOUBLE)
> +               __set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +       else
> +               set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
>  }
> 
>  static inline void slab_clear_node_partial(struct slab *slab)
>  {
> -       __clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +       if (slab->slab_cache->flags & __CMPXCHG_DOUBLE)
> +               __clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +       else
> +               clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
>  }



