From: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
To: Li Zhijian <lizhijian@fujitsu.com>, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
Yasunori Gotou <y-goto@fujitsu.com>,
David Hildenbrand <david@redhat.com>,
Yao Xingtao <yaoxt.fnst@fujitsu.com>,
Lucas Stach <l.stach@pengutronix.de>
Subject: Re: [PATCH RFC] mm/page_alloc: Fix pcp->count race between drain_pages_zone() vs __rmqueue_pcplist()
Date: Thu, 18 Jul 2024 13:16:35 +0200 [thread overview]
Message-ID: <6f2151a9-da23-4917-b985-8de6b0852e37@kernel.org> (raw)
In-Reply-To: <20240716073929.843277-1-lizhijian@fujitsu.com>
On 7/16/24 9:39 AM, Li Zhijian wrote:
> It's expected that no page should be left in the pcp_list after calling
> zone_pcp_disable() in offline_pages(). Previously, it was observed that
> offline_pages() got stuck [1] due to some pages remaining in the pcp_list.
>
> Cause:
> There is a race condition between drain_pages_zone() and __rmqueue_pcplist()
> involving the pcp->count variable. See below scenario:
>
> CPU0                                   CPU1
> ----------------                       ---------------
>                                        spin_lock(&pcp->lock);
>                                        __rmqueue_pcplist() {
> zone_pcp_disable() {
>                                          /* list is empty */
>                                          if (list_empty(list)) {
>                                            /* add pages to pcp_list */
>                                            alloced = rmqueue_bulk()
>   mutex_lock(&pcp_batch_high_lock)
>   ...
>   __drain_all_pages() {
>     drain_pages_zone() {
>       /* read pcp->count, it's 0 here */
>       count = READ_ONCE(pcp->count)
>       /* 0 means nothing to drain */
>                                          /* update pcp->count */
>                                          pcp->count += alloced << order;
>       ...
>       ...
>                                        spin_unlock(&pcp->lock);
>
> In this case, even after zone_pcp_disable() returns, some pages still
> remain in the pcp_list. Since these pages are neither movable nor
> isolated, offline_pages() gets stuck as a result.
>
> Solution:
> Expand the scope of the pcp->lock to also protect pcp->count in
> drain_pages_zone(), ensuring no pages are left in the pcp list.
>
> [1] https://lore.kernel.org/linux-mm/6a07125f-e720-404c-b2f9-e55f3f166e85@fujitsu.com/
>
> Cc: David Hildenbrand <david@redhat.com>
> Reported-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
> Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
> ---
>  mm/page_alloc.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9ecf99190ea2..1780df31d5f5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2323,16 +2323,17 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>  static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  {
>  	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> -	int count = READ_ONCE(pcp->count);
> +	int count;
>  
> +	spin_lock(&pcp->lock);
> +	count = pcp->count;
>  	while (count) {
>  		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
>  		count -= to_drain;
>  
> -		spin_lock(&pcp->lock);
>  		free_pcppages_bulk(zone, to_drain, pcp, 0);
> -		spin_unlock(&pcp->lock);
>  	}
> +	spin_unlock(&pcp->lock);
This way seems to partially defeat the purpose of 55f77df7d715 ("mm:
page_alloc: control latency caused by zone PCP draining") - the zone lock
hold time will still be limited by the batch, but the pcp lock hold time no
longer is. It should still be possible to relock between the iterations? To
prevent the race, I think the main part is determining pcp->count under the
lock; releasing and retaking the lock should still be fine as long as
pcp->count is re-read after relocking.
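Something along these lines perhaps (an untested sketch, using the same
helpers as your patch, only to illustrate the idea; a do/while re-reads
pcp->count under a freshly taken lock on each iteration, so the refill by a
concurrent __rmqueue_pcplist() can't be missed, while the lock hold time per
iteration stays bounded by the batch):

```c
static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
	int count;

	do {
		/*
		 * Take the lock for each batch and re-read pcp->count
		 * under it, so a concurrent refill between iterations
		 * is observed and drained as well.
		 */
		spin_lock(&pcp->lock);
		count = pcp->count;
		if (count) {
			int to_drain = min(count,
				pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);

			free_pcppages_bulk(zone, to_drain, pcp, 0);
		}
		spin_unlock(&pcp->lock);
	} while (count);
}
```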
>  }
>  
>  /*
Thread overview: 4+ messages
2024-07-16 7:39 Li Zhijian
2024-07-18 11:16 ` Vlastimil Babka (SUSE) [this message]
2024-07-18 14:02 ` David Hildenbrand
2024-07-19 2:28 ` Zhijian Li (Fujitsu)