linux-mm.kvack.org archive mirror
From: David Rientjes <rientjes@google.com>
To: mengensun88@gmail.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	alexjlzheng@tencent.com,  MengEn Sun <mengensun@tencent.com>
Subject: Re: [PATCH] mm/page_alloc: add cond_resched in __drain_all_pages()
Date: Wed, 25 Dec 2024 15:03:16 -0800 (PST)
Message-ID: <3b000941-b1b6-befa-4ec9-2bff63d557c1@google.com>
In-Reply-To: <1735107961-9938-1-git-send-email-mengensun@tencent.com>

On Wed, 25 Dec 2024, mengensun88@gmail.com wrote:

> From: MengEn Sun <mengensun@tencent.com>
> 
> Since version v5.19-rc7, draining remote per-CPU pools (PCP) no
> longer relies on workqueues; instead, the current CPU is
> responsible for draining the PCPs of all CPUs.
> 
> However, because __drain_all_pages() lacks scheduling
> points, this can lead to soft lockups in some extreme
> cases.
> 
> We observed the following soft-lockup stack on a 64-core,
> 223GB machine during testing:
> watchdog: BUG: soft lockup - CPU#29 stuck for 23s! [stress-ng-vm]
> RIP: 0010:native_queued_spin_lock_slowpath+0x5b/0x1c0
> _raw_spin_lock
> drain_pages_zone
> drain_pages
> drain_all_pages
> __alloc_pages_slowpath
> __alloc_pages_nodemask
> alloc_pages_vma
> do_huge_pmd_anonymous_page
> handle_mm_fault
> 
> Fixes: <443c2accd1b66> ("mm/page_alloc: remotely drain per-cpu lists")

The < > around the commit id should be removed.

> Reviewed-by: JinLiang Zheng <alexjlzheng@tencent.com>
> Signed-off-by: MengEn Sun <mengensun@tencent.com>
> ---
>  mm/page_alloc.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c6c7bb3ea71b..d05b32ec1e40 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2487,6 +2487,7 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
>  			drain_pages_zone(cpu, zone);
>  		else
>  			drain_pages(cpu);
> +		cond_resched();
>  	}
>  
>  	mutex_unlock(&pcpu_drain_mutex);

This is another example of a soft lockup that we haven't observed ourselves, 
even though we have systems with many more cores than 64.

Is this happening because of contention on pcp->lock or zone->lock?  I 
would assume the latter, but best to confirm.
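
One way to confirm which lock is contended is the kernel's lock statistics 
(a sketch, assuming a debug kernel built with CONFIG_LOCK_STAT=y; lock-class 
names appear in /proc/lock_stat as they do in the source, e.g. &zone->lock):

```shell
# Enable lock statistics collection (requires CONFIG_LOCK_STAT=y)
echo 1 > /proc/sys/kernel/lock_stat

# ... reproduce the stress-ng workload that triggered the lockup ...

# Inspect contention counts for the two candidate locks
grep -E 'zone->lock|pcp' /proc/lock_stat

# Disable collection again to avoid the runtime overhead
echo 0 > /proc/sys/kernel/lock_stat
```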

I think this is just papering over a scalability problem with zone->lock.  
How many NUMA nodes and zones does this 223GB system have?
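For reference, the node and zone layout can be read straight from 
/proc/zoneinfo on the affected machine (a sketch; each populated zone is 
introduced by a "Node N, zone NAME" header line):

```shell
# Count distinct NUMA nodes (every zone entry starts with "Node N, zone X")
awk '/^Node/ {gsub(",", "", $2); print $2}' /proc/zoneinfo | sort -u | wc -l

# List every node/zone pair the kernel manages
awk '/^Node/ {gsub(",", "", $2); print $2, $4}' /proc/zoneinfo
```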

If this is a problem with zone->lock, this problem should likely be 
addressed more holistically.



Thread overview: 3+ messages
2024-12-25  6:26 mengensun88
2024-12-25 23:03 ` David Rientjes [this message]
2025-01-07 17:39   ` MengEn Sun
