From: Vlastimil Babka <vbabka@suse.cz>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>, bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
	peterz@infradead.org, bigeasy@linutronix.de, rostedt@goodmis.org,
	houtao1@huawei.com, hannes@cmpxchg.org, shakeel.butt@linux.dev,
	mhocko@suse.com, willy@infradead.org, tglx@linutronix.de,
	jannh@google.com, tj@kernel.org, linux-mm@kvack.org,
	kernel-team@fb.com
Subject: Re: [PATCH bpf-next v5 1/7] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
Date: Wed, 15 Jan 2025 12:19:26 +0100	[thread overview]
Message-ID: <9fb94763-69b2-45bd-bc54-aef82037a68c@suse.cz> (raw)
In-Reply-To: <20250115021746.34691-2-alexei.starovoitov@gmail.com>

On 1/15/25 03:17, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> Tracing BPF programs execute from tracepoints and kprobes where
> running context is unknown, but they need to request additional
> memory. The prior workarounds were using pre-allocated memory and
> BPF specific freelists to satisfy such allocation requests.
> Instead, introduce gfpflags_allow_spinning() condition that signals
> to the allocator that running context is unknown.
> Then rely on percpu free list of pages to allocate a page.
> try_alloc_pages() -> get_page_from_freelist() -> rmqueue() ->
> rmqueue_pcplist() will spin_trylock to grab the page from percpu
> free list. If it fails (due to re-entrancy or list being empty)
> then rmqueue_bulk()/rmqueue_buddy() will attempt to
> spin_trylock zone->lock and grab the page from there.
> spin_trylock() is not safe in RT when in NMI or in hard IRQ.
> Bail out early in such cases.
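To spell out the fallback chain in code form (a simplified sketch, not the
literal patch: the real series threads an ALLOC_TRYLOCK alloc flag through
the existing rmqueue() path, and pcp_pop_page() / buddy_pop_page() below are
hypothetical stand-ins for __rmqueue_pcplist() / __rmqueue() plus their
bookkeeping):

static struct page *pcp_pop_page(struct per_cpu_pages *pcp, unsigned int order);
static struct page *buddy_pop_page(struct zone *zone, unsigned int order);

/* Sketch of the trylock-only page grab described above. */
static struct page *rmqueue_trylock_sketch(struct zone *zone, unsigned int order)
{
	struct per_cpu_pages *pcp = this_cpu_ptr(zone->per_cpu_pageset);
	struct page *page = NULL;
	unsigned long flags;

	/* 1) Try the percpu free list; the trylock fails on re-entrancy
	 *    (allocator -> tracepoint -> allocator). */
	if (spin_trylock(&pcp->lock)) {
		page = pcp_pop_page(pcp, order);
		spin_unlock(&pcp->lock);
	}

	/* 2) Percpu list empty or contended: trylock zone->lock and take
	 *    a page from the buddy lists instead. */
	if (!page && spin_trylock_irqsave(&zone->lock, flags)) {
		page = buddy_pop_page(zone, order);
		spin_unlock_irqrestore(&zone->lock, flags);
	}

	/* 3) Both trylocks failed: return NULL, callers must tolerate it. */
	return page;
}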
> 
> The support for gfpflags_allow_spinning() mode for free_page and memcg
> comes in the next patches.
> 
> This is a first step towards supporting BPF requirements in SLUB
> and getting rid of bpf_mem_alloc.
> That goal was discussed at LSFMM: https://lwn.net/Articles/974138/
> 
> Acked-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Some nits below:

> ---
>  include/linux/gfp.h | 22 ++++++++++
>  mm/internal.h       |  1 +
>  mm/page_alloc.c     | 98 +++++++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 118 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index b0fe9f62d15b..b41bb6e01781 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -39,6 +39,25 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
>  	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
>  }
>  
> +static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
> +{
> +	/*
> +	 * !__GFP_DIRECT_RECLAIM -> direct reclaim is not allowed.
> +	 * !__GFP_KSWAPD_RECLAIM -> it's not safe to wake up kswapd.
> +	 * All GFP_* flags including GFP_NOWAIT use one or both flags.
> +	 * try_alloc_pages() is the only API that doesn't specify either flag.
> +	 *
> +	 * This is stronger than GFP_NOWAIT or GFP_ATOMIC because
> +	 * those are guaranteed to never block on a sleeping lock.
> +	 * Here we are enforcing that the allaaction doesn't ever spin

					  allocation

> +	 * on any locks (i.e. only trylocks). There is no high-level
> +	 * GFP_$FOO flag for this use in try_alloc_pages() as the
> +	 * regular page allocator doesn't fully support this
> +	 * allocation mode.
> +	 */
> +	return !(gfp_flags & __GFP_RECLAIM);
> +}

This function seems unused so far; I guess the following patches will use it.
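Presumably along these lines (a hypothetical sketch only; free_pages_nolock()
is introduced in patch 2, and the call site below is made up for
illustration):

/* Hypothetical later user: pick a trylock-only free path when the
 * passed-in gfp says the calling context may not spin on a lock. */
static void free_page_any_context(struct page *page, unsigned int order,
				  gfp_t gfp)
{
	if (gfpflags_allow_spinning(gfp))
		__free_pages(page, order);	/* may spin on zone->lock */
	else
		free_pages_nolock(page, order);	/* patch 2: trylock or defer */
}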

> @@ -4509,7 +4517,8 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  
>  	might_alloc(gfp_mask);
>  
> -	if (should_fail_alloc_page(gfp_mask, order))
> +	if (!(*alloc_flags & ALLOC_TRYLOCK) &&
> +	    should_fail_alloc_page(gfp_mask, order))

Is it because should_fail_alloc_page() might take some lock or whatnot?
Maybe add a comment?
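Something like this, perhaps (just a suggestion for the wording, assuming the
lock concern above is indeed the reason):

	/*
	 * Fault injection is not supported in ALLOC_TRYLOCK mode:
	 * should_fail_alloc_page() and the fail_page_alloc machinery
	 * behind it may take locks the caller cannot afford to spin on.
	 */
	if (!(*alloc_flags & ALLOC_TRYLOCK) &&
	    should_fail_alloc_page(gfp_mask, order))
		return false;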

>  		return false;
>  
>  	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);
> @@ -7023,3 +7032,86 @@ static bool __free_unaccepted(struct page *page)
>  }
>  
>  #endif /* CONFIG_UNACCEPTED_MEMORY */
> +
> +/**
> + * try_alloc_pages_noprof - opportunistic reentrant allocation from any context
> + * @nid: node to allocate from
> + * @order: allocation order size
> + *
> + * Allocates pages of a given order from the given node. This is safe to
> + * call from any context (from atomic, NMI, and also reentrant
> + * allocator -> tracepoint -> try_alloc_pages_noprof).
> + * Allocation is best effort and expected to fail easily, so callers must
> + * not rely on its success. Failures are not reported via warn_alloc().
> + *
> + * Return: allocated page or NULL on failure.
> + */
> +struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> +{
> +	/*
> +	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
> +	 * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
> +	 * is not safe in arbitrary context.
> +	 *
> +	 * These two are the conditions for gfpflags_allow_spinning() being true.
> +	 *
> +	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
> +	 * to warn. Also warn would trigger printk() which is unsafe from
> +	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
> +	 * since the running context is unknown.
> +	 *
> +	 * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
> +	 * is safe in any context. Also zeroing the page is mandatory for
> +	 * BPF use cases.
> +	 *
> +	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
> +	 * specify it here to highlight that try_alloc_pages()
> +	 * doesn't want to deplete reserves.
> +	 */
> +	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
> +	unsigned int alloc_flags = ALLOC_TRYLOCK;
> +	struct alloc_context ac = { };
> +	struct page *page;
> +
> +	/*
> +	 * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.
> +	 * If spin_trylock() is called from hard IRQ the current task may be
> +	 * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
> +	 * task as the owner of another rt_spin_lock which will confuse PI
> +	 * logic, so return immediately if called from hard IRQ or NMI.
> +	 *
> +	 * Note, irqs_disabled() case is ok. This function can be called
> +	 * from raw_spin_lock_irqsave region.
> +	 */
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
> +		return NULL;
> +	if (!pcp_allowed_order(order))
> +		return NULL;
> +
> +#ifdef CONFIG_UNACCEPTED_MEMORY
> +	/* Bail out, since try_to_accept_memory_one() needs to take a lock */
> +	if (has_unaccepted_memory())
> +		return NULL;
> +#endif
> +	/* Bail out, since _deferred_grow_zone() needs to take a lock */
> +	if (deferred_pages_enabled())
> +		return NULL;

Is it fine for BPF that things will fail to allocate until all memory is
deferred-initialized and accepted? I guess it's easy to teach those places
later to evaluate if they can take the lock.

> +
> +	if (nid == NUMA_NO_NODE)
> +		nid = numa_node_id();
> +
> +	prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
> +			    &alloc_gfp, &alloc_flags);
> +
> +	/*
> +	 * Best effort allocation from percpu free list.
> +	 * If it's empty attempt to spin_trylock zone->lock.
> +	 */
> +	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);

What about set_page_owner() from post_alloc_hook() and its stackdepot
saving? I guess it's not an issue until try_alloc_pages() gets used later, so
just a mental note that it has to be resolved before then. Or is it actually safe?

> +
> +	/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
> +
> +	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
> +	kmsan_alloc_page(page, order, alloc_gfp);
> +	return page;
> +}
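For what it's worth, a call site would look roughly like this (hypothetical
caller; try_alloc_pages() is assumed to be the usual alloc_hooks() wrapper
around the _noprof variant from the header hunk not quoted here):

/* Hypothetical tracing-context user: failure must be tolerated. */
static void *grab_scratch_page(void)
{
	struct page *page = try_alloc_pages(NUMA_NO_NODE, 0);

	if (!page)
		return NULL;	/* best effort: no retry, no warning */

	/* The page arrives zeroed: __GFP_ZERO is set internally. */
	return page_address(page);
}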



Thread overview: 37+ messages
2025-01-15  2:17 [PATCH bpf-next v5 0/7] bpf, mm: Introduce try_alloc_pages() Alexei Starovoitov
2025-01-15  2:17 ` [PATCH bpf-next v5 1/7] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation Alexei Starovoitov
2025-01-15 11:19   ` Vlastimil Babka [this message]
2025-01-15 23:00     ` Alexei Starovoitov
2025-01-15 23:47       ` Shakeel Butt
2025-01-16  2:44         ` Alexei Starovoitov
2025-01-15 23:16     ` Shakeel Butt
2025-01-17 18:19   ` Sebastian Andrzej Siewior
2025-01-15  2:17 ` [PATCH bpf-next v5 2/7] mm, bpf: Introduce free_pages_nolock() Alexei Starovoitov
2025-01-15 11:47   ` Vlastimil Babka
2025-01-15 23:15     ` Alexei Starovoitov
2025-01-16  8:31       ` Vlastimil Babka
2025-01-17 18:20   ` Sebastian Andrzej Siewior
2025-01-15  2:17 ` [PATCH bpf-next v5 3/7] locking/local_lock: Introduce local_trylock_irqsave() Alexei Starovoitov
2025-01-15  2:23   ` Alexei Starovoitov
2025-01-15  7:22     ` Sebastian Sewior
2025-01-15 14:22   ` Vlastimil Babka
2025-01-16  2:20     ` Alexei Starovoitov
2025-01-17 20:33   ` Sebastian Andrzej Siewior
2025-01-21 15:59     ` Vlastimil Babka
2025-01-21 16:43       ` Sebastian Andrzej Siewior
2025-01-22  1:35         ` Alexei Starovoitov
2025-01-15  2:17 ` [PATCH bpf-next v5 4/7] memcg: Use trylock to access memcg stock_lock Alexei Starovoitov
2025-01-15 16:07   ` Vlastimil Babka
2025-01-16  0:12   ` Shakeel Butt
2025-01-16  2:22     ` Alexei Starovoitov
2025-01-16 20:07       ` Joshua Hahn
2025-01-17 17:36         ` Johannes Weiner
2025-01-15  2:17 ` [PATCH bpf-next v5 5/7] mm, bpf: Use memcg in try_alloc_pages() Alexei Starovoitov
2025-01-15 17:51   ` Vlastimil Babka
2025-01-16  0:24   ` Shakeel Butt
2025-01-15  2:17 ` [PATCH bpf-next v5 6/7] mm: Make failslab, kfence, kmemleak aware of trylock mode Alexei Starovoitov
2025-01-15 17:57   ` Vlastimil Babka
2025-01-16  2:23     ` Alexei Starovoitov
2025-01-15  2:17 ` [PATCH bpf-next v5 7/7] bpf: Use try_alloc_pages() to allocate pages for bpf needs Alexei Starovoitov
2025-01-15 18:02   ` Vlastimil Babka
2025-01-16  2:25     ` Alexei Starovoitov
