From: Michal Hocko <mhocko@suse.com>
To: Aboorva Devarajan <aboorvad@linux.ibm.com>
Cc: akpm@linux-foundation.org, vbabka@suse.cz, surenb@google.com,
jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/page_alloc: make percpu_pagelist_high_fraction reads lock-free
Date: Wed, 3 Dec 2025 09:21:08 +0100 [thread overview]
Message-ID: <aS_y9AuJQFydLEXo@tiehlicka> (raw)
In-Reply-To: <20251201060009.1420792-1-aboorvad@linux.ibm.com>
On Mon 01-12-25 11:30:09, Aboorva Devarajan wrote:
> When page isolation loops indefinitely during memory offline, reading
> /proc/sys/vm/percpu_pagelist_high_fraction blocks on pcp_batch_high_lock,
> causing hung task warnings.
>
> Make procfs reads lock-free: percpu_pagelist_high_fraction is a simple
> integer whose reads are naturally atomic. Writers still serialize via
> the mutex.
>
> This prevents hung task warnings when reading the procfs file during
> long-running memory offline operations.
>
> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Looks OK. I would just add a short comment explaining that in the code.
See below.
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/page_alloc.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ed82ee55e66a..7c8d773ed4af 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6611,11 +6611,14 @@ static int percpu_pagelist_high_fraction_sysctl_handler(const struct ctl_table *
> int old_percpu_pagelist_high_fraction;
> int ret;
>
/*
 * Avoid taking pcp_batch_high_lock for reads: the value is read
 * atomically and a race with offlining is harmless.
 */
> + if (!write)
> + return proc_dointvec_minmax(table, write, buffer, length, ppos);
> +
> mutex_lock(&pcp_batch_high_lock);
> old_percpu_pagelist_high_fraction = percpu_pagelist_high_fraction;
>
> ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
> - if (!write || ret < 0)
> + if (ret < 0)
> goto out;
>
> /* Sanity checking to avoid pcp imbalance */
> --
> 2.50.1
--
Michal Hocko
SUSE Labs
prev parent reply other threads:[~2025-12-03 8:21 UTC|newest]
Thread overview: 16+ messages
2025-12-01 6:00 Aboorva Devarajan
2025-12-01 17:41 ` Andrew Morton
2025-12-03 8:27 ` Michal Hocko
2025-12-03 8:35 ` Gregory Price
2025-12-03 8:42 ` Michal Hocko
2025-12-03 8:51 ` David Hildenbrand (Red Hat)
2025-12-03 9:02 ` Gregory Price
2025-12-03 9:08 ` David Hildenbrand (Red Hat)
2025-12-03 9:23 ` Gregory Price
2025-12-03 9:26 ` Gregory Price
2025-12-03 11:28 ` David Hildenbrand (Red Hat)
2025-12-03 8:59 ` Gregory Price
2025-12-03 9:15 ` David Hildenbrand (Red Hat)
2025-12-03 9:42 ` Michal Hocko
2025-12-03 11:22 ` David Hildenbrand (Red Hat)
2025-12-03 8:21 ` Michal Hocko [this message]