From: Michal Hocko <mhocko@suse.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf@vger.kernel.org, andrii@kernel.org, memxor@gmail.com,
akpm@linux-foundation.org, peterz@infradead.org, vbabka@suse.cz,
bigeasy@linutronix.de, rostedt@goodmis.org, houtao1@huawei.com,
hannes@cmpxchg.org, shakeel.butt@linux.dev, willy@infradead.org,
tglx@linutronix.de, jannh@google.com, tj@kernel.org,
linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH bpf-next v4 1/6] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
Date: Tue, 14 Jan 2025 11:31:33 +0100 [thread overview]
Message-ID: <Z4Y9BVygLcRTjhMh@tiehlicka> (raw)
In-Reply-To: <20250114021922.92609-2-alexei.starovoitov@gmail.com>
On Mon 13-01-25 18:19:17, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Tracing BPF programs execute from tracepoints and kprobes where
> the running context is unknown, but they need to request additional
> memory. Prior workarounds used pre-allocated memory and
> BPF-specific freelists to satisfy such allocation requests.
> Instead, introduce gfpflags_allow_spinning(), a condition that signals
> to the allocator that the running context is unknown.
> Then rely on the percpu free list of pages, from which
> rmqueue_pcplist() should be able to pop a page.
> If that fails (due to IRQ re-entrancy or the list being empty),
> try_alloc_pages() attempts to spin_trylock zone->lock
> and refill the percpu freelist as usual.
> A BPF program may execute with IRQs disabled, and zone->lock is
> a sleeping lock on RT, so trylock is the only option. In theory we
> could introduce a percpu reentrance counter and increment it every time
> spin_lock_irqsave(&zone->lock, flags) is used, but we cannot rely
> on it. Even if this cpu is not in the page_alloc path,
> spin_lock_irqsave() is not safe, since the BPF prog might be called
> from a tracepoint where preemption is disabled. So trylock only.
>
> Note, free_page and memcg are not taught about gfpflags_allow_spinning()
> condition. The support comes in the next patches.
>
> This is a first step towards supporting BPF requirements in SLUB
> and getting rid of bpf_mem_alloc.
> That goal was discussed at LSFMM: https://lwn.net/Articles/974138/
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
LGTM, although I am not entirely clear on the kmsan_alloc_page part.
As long as that part is correct you can add
Acked-by: Michal Hocko <mhocko@suse.com>
Other than that, try_alloc_pages_noprof begs for some user documentation, e.g.:
/**
 * try_alloc_pages_noprof - opportunistic reentrant allocation from any context
 * @nid: node to allocate from
 * @order: allocation order size
 *
 * Allocates pages of a given order from the given node. This is safe to
 * call from any context (from atomic, NMI, and also from a reentrant
 * allocator -> tracepoint -> try_alloc_pages_noprof).
 * Allocation is best effort and expected to fail easily, so nobody should
 * rely on the success. Failures are not reported via warn_alloc().
 *
 * Return: allocated page or NULL on failure.
 */
> +struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> +{
> + /*
> +	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
> + * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
> + * is not safe in arbitrary context.
> + *
> + * These two are the conditions for gfpflags_allow_spinning() being true.
> + *
> + * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
> + * to warn. Also warn would trigger printk() which is unsafe from
> + * various contexts. We cannot use printk_deferred_enter() to mitigate,
> + * since the running context is unknown.
> + *
> + * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
> + * is safe in any context. Also zeroing the page is mandatory for
> + * BPF use cases.
> + *
> + * Though __GFP_NOMEMALLOC is not checked in the code path below,
> + * specify it here to highlight that try_alloc_pages()
> + * doesn't want to deplete reserves.
> + */
> + gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
> + unsigned int alloc_flags = ALLOC_TRYLOCK;
> + struct alloc_context ac = { };
> + struct page *page;
> +
> + /*
> + * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.
> + * If spin_trylock() is called from hard IRQ the current task may be
> + * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
> + * task as the owner of another rt_spin_lock which will confuse PI
> +	 * logic, so return immediately if called from hard IRQ or NMI.
> + *
> + * Note, irqs_disabled() case is ok. This function can be called
> + * from raw_spin_lock_irqsave region.
> + */
> + if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
> + return NULL;
> + if (!pcp_allowed_order(order))
> + return NULL;
> +
> +#ifdef CONFIG_UNACCEPTED_MEMORY
> + /* Bailout, since try_to_accept_memory_one() needs to take a lock */
> + if (has_unaccepted_memory())
> + return NULL;
> +#endif
> + /* Bailout, since _deferred_grow_zone() needs to take a lock */
> + if (deferred_pages_enabled())
> + return NULL;
> +
> + if (nid == NUMA_NO_NODE)
> + nid = numa_node_id();
> +
> + prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
> + &alloc_gfp, &alloc_flags);
> +
> + /*
> + * Best effort allocation from percpu free list.
> + * If it's empty attempt to spin_trylock zone->lock.
> + */
> + page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
> +
> + /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
> +
> + trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
> + kmsan_alloc_page(page, order, alloc_gfp);
> + return page;
> +}
> --
> 2.43.5
--
Michal Hocko
SUSE Labs