linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Peter Zijlstra <peterz@infradead.org>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Xianying Wang <wangxianying546@gmail.com>,
	akpm@linux-foundation.org, surenb@google.com, mhocko@suse.com,
	jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	"Liang, Kan" <kan.liang@linux.intel.com>,
	linux-perf-users@vger.kernel.org
Subject: Re: [BUG] WARNING in __alloc_frozen_pages_noprof
Date: Wed, 26 Nov 2025 12:19:21 +0100	[thread overview]
Message-ID: <20251126111921.GU4067720@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <4cb9f727-734b-43fa-92d2-80559df76c84@suse.cz>

On Wed, Nov 26, 2025 at 10:46:38AM +0100, Vlastimil Babka wrote:
> +CC perf people as AFAIU the problem originates there. Should the limit
> be lowered, or the allocations e.g. switched to kvmalloc, to avoid
> requesting impossibly high order allocations?
> 
>         /*
>          * There are several places where we assume that the order value is sane
>          * so bail out early if the request is out of bound.
>          */
>         if (WARN_ON_ONCE_GFP(order > MAX_PAGE_ORDER, gfp))
>                 return NULL;
> 
> 
> 
> On 11/19/25 10:07 AM, Xianying Wang wrote:
> > Hi,
> > 
> > I hit the following warning in the page allocator when opening a perf
> > event with callchain sampling after increasing
> > kernel.perf_event_max_stack.This warning can be triggered by first
> > writing a large value into kernel.perf_event_max_stack and then
> > opening a perf event with callchain sampling enabled.
> > 
> > The reproducer does two things:
> > 
> > 1) It writes a large (but still accepted) value to the sysctl:
> > 
> > echo 0x40132 > /proc/sys/kernel/perf_event_max_stack
> > 

Yeah, that is far too large. I suppose the actual max is somewhere near
8k, which would give 64k data for just the callchain -- given that a
single perf buffer entry is limited to 64k (IIRC) and all that.


  reply	other threads:[~2025-11-26 11:19 UTC|newest]

Thread overview: 4+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-11-19  9:07 Xianying Wang
2025-11-26  9:46 ` Vlastimil Babka
2025-11-26 11:19   ` Peter Zijlstra [this message]
2025-11-26 19:00     ` Namhyung Kim

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20251126111921.GU4067720@noisy.programming.kicks-ass.net \
    --to=peterz@infradead.org \
    --cc=acme@kernel.org \
    --cc=adrian.hunter@intel.com \
    --cc=akpm@linux-foundation.org \
    --cc=alexander.shishkin@linux.intel.com \
    --cc=hannes@cmpxchg.org \
    --cc=irogers@google.com \
    --cc=jackmanb@google.com \
    --cc=jolsa@kernel.org \
    --cc=kan.liang@linux.intel.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-perf-users@vger.kernel.org \
    --cc=mark.rutland@arm.com \
    --cc=mhocko@suse.com \
    --cc=mingo@redhat.com \
    --cc=namhyung@kernel.org \
    --cc=surenb@google.com \
    --cc=vbabka@suse.cz \
    --cc=wangxianying546@gmail.com \
    --cc=ziy@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox