linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: Aboorva Devarajan <aboorvad@linux.ibm.com>
Cc: vbabka@suse.cz, surenb@google.com, mhocko@suse.com,
	jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/page_alloc: make percpu_pagelist_high_fraction reads lock-free
Date: Mon, 1 Dec 2025 09:41:12 -0800	[thread overview]
Message-ID: <20251201094112.07eb1e588b6da2ee70c4641d@linux-foundation.org> (raw)
In-Reply-To: <20251201060009.1420792-1-aboorvad@linux.ibm.com>

On Mon,  1 Dec 2025 11:30:09 +0530 Aboorva Devarajan <aboorvad@linux.ibm.com> wrote:

> When page isolation loops indefinitely during memory offline, reading
> /proc/sys/vm/percpu_pagelist_high_fraction blocks on pcp_batch_high_lock,
> causing hung task warnings.

That's pretty bad behavior.

I wonder if there are other problems which can be caused by this
lengthy hold time.

It would be better to address the lengthy hold time rather than having
to work around it in one impacted site.

> Make procfs reads lock-free: percpu_pagelist_high_fraction is a simple
> integer, so reads are naturally atomic; writers still serialize via the mutex.
> 
> This prevents hung task warnings when reading the procfs file during
> long-running memory offline operations.
> 
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6611,11 +6611,14 @@ static int percpu_pagelist_high_fraction_sysctl_handler(const struct ctl_table *
>  	int old_percpu_pagelist_high_fraction;
>  	int ret;
>  
> +	if (!write)
> +		return proc_dointvec_minmax(table, write, buffer, length, ppos);
> +
>  	mutex_lock(&pcp_batch_high_lock);
>  	old_percpu_pagelist_high_fraction = percpu_pagelist_high_fraction;
>  
>  	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
> -	if (!write || ret < 0)
> +	if (ret < 0)
>  		goto out;
>  
>  	/* Sanity checking to avoid pcp imbalance */

That being said, I'll grab the patch and put a cc:stable on it, and we'll
see what people think about this hold-time issue.
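
For readers following along, here is a rough sketch of how
percpu_pagelist_high_fraction_sysctl_handler() would read with the hunk above
applied. It is reconstructed from the quoted context only: the parameter types
are assumed to follow the standard proc handler signature, the body past the
"Sanity checking" comment is elided in the patch and stays elided here, and the
out: label and unlock are implied by the quoted goto rather than shown.

static int percpu_pagelist_high_fraction_sysctl_handler(const struct ctl_table *table,
		int write, void *buffer, size_t *length, loff_t *ppos)
{
	int old_percpu_pagelist_high_fraction;
	int ret;

	/* Reads copy out a plain int, so they no longer take pcp_batch_high_lock. */
	if (!write)
		return proc_dointvec_minmax(table, write, buffer, length, ppos);

	/* Writers still serialize against each other and against memory offline. */
	mutex_lock(&pcp_batch_high_lock);
	old_percpu_pagelist_high_fraction = percpu_pagelist_high_fraction;

	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
	if (ret < 0)
		goto out;

	/* Sanity checking to avoid pcp imbalance */
	/* ... remainder of the function unchanged and elided, as in the patch ... */
out:
	mutex_unlock(&pcp_batch_high_lock);
	return ret;
}

The point of the early return is that a pure read never touches
pcp_batch_high_lock at all, so it cannot stall behind a writer (such as a
long-running memory offline) that holds the mutex.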



Thread overview: 16+ messages
2025-12-01  6:00 Aboorva Devarajan
2025-12-01 17:41 ` Andrew Morton [this message]
2025-12-03  8:27   ` Michal Hocko
2025-12-03  8:35     ` Gregory Price
2025-12-03  8:42       ` Michal Hocko
2025-12-03  8:51         ` David Hildenbrand (Red Hat)
2025-12-03  9:02           ` Gregory Price
2025-12-03  9:08             ` David Hildenbrand (Red Hat)
2025-12-03  9:23               ` Gregory Price
2025-12-03  9:26                 ` Gregory Price
2025-12-03 11:28                 ` David Hildenbrand (Red Hat)
2025-12-03  8:59         ` Gregory Price
2025-12-03  9:15           ` David Hildenbrand (Red Hat)
2025-12-03  9:42             ` Michal Hocko
2025-12-03 11:22               ` David Hildenbrand (Red Hat)
2025-12-03  8:21 ` Michal Hocko
