Date: Fri, 17 Jan 2025 19:19:31 +0100
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: Alexei Starovoitov
Cc: bpf@vger.kernel.org, andrii@kernel.org, memxor@gmail.com,
 akpm@linux-foundation.org, peterz@infradead.org, vbabka@suse.cz,
 rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org,
 shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
 tglx@linutronix.de, jannh@google.com, tj@kernel.org,
 linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH bpf-next v5 1/7] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
Message-ID: <20250117181931.54EJgni0@linutronix.de>
References: <20250115021746.34691-1-alexei.starovoitov@gmail.com>
 <20250115021746.34691-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20250115021746.34691-2-alexei.starovoitov@gmail.com>

On 2025-01-14 18:17:40 [-0800], Alexei Starovoitov wrote:
> From: Alexei Starovoitov
>
> Tracing BPF programs execute from tracepoints and kprobes where the
> running context is unknown, but they need to request additional
> memory. The prior workarounds were using pre-allocated memory and
> BPF-specific freelists to satisfy such allocation requests. Instead,
> introduce a gfpflags_allow_spinning() condition that signals to the
> allocator that the running context is unknown. Then rely on the
> per-CPU free list of pages to allocate a page.
> try_alloc_pages() -> get_page_from_freelist() -> rmqueue() ->
> rmqueue_pcplist() will spin_trylock to grab the page from the percpu
> free list. If that fails (due to re-entrancy or the list being
> empty), then rmqueue_bulk()/rmqueue_buddy() will attempt to
> spin_trylock zone->lock and grab the page from there.
> spin_trylock() is not safe in RT when in NMI or in hard IRQ.
> Bail out early in such a case.
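
As an aside for anyone following along: the predicate itself can be
tiny. A minimal sketch of how gfpflags_allow_spinning() could be
implemented, assuming it keys off __GFP_RECLAIM (the union of the two
reclaim bits named above). This is a sketch, not the verbatim hunk from
the patch:

static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
{
	/*
	 * Spinning on allocator locks is allowed only when the caller
	 * can reclaim: at least one of __GFP_DIRECT_RECLAIM or
	 * __GFP_KSWAPD_RECLAIM must be set. try_alloc_pages() passes
	 * neither, which marks the running context as unknown.
	 */
	return !!(gfp_flags & __GFP_RECLAIM);
}

The standard GFP_* constants, including GFP_NOWAIT and GFP_ATOMIC, set
at least one of the two reclaim bits, so existing callers keep spinning
as before.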
> The support for gfpflags_allow_spinning() mode for free_page and memcg
> comes in the next patches.
>
> This is a first step towards supporting BPF requirements in SLUB
> and getting rid of bpf_mem_alloc.
> That goal was discussed at LSFMM: https://lwn.net/Articles/974138/
>
> Acked-by: Michal Hocko

Acked-by: Sebastian Andrzej Siewior

could you…

> Signed-off-by: Alexei Starovoitov

…

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1cb4b8c8886d..74c2a7af1a77 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7023,3 +7032,86 @@ static bool __free_unaccepted(struct page *page)
>  }
>  
>  #endif /* CONFIG_UNACCEPTED_MEMORY */
> +
> +/**
> + * try_alloc_pages_noprof - opportunistic reentrant allocation from any context
> + * @nid - node to allocate from
> + * @order - allocation order size
> + *
> + * Allocates pages of a given order from the given node. This is safe to
> + * call from any context (from atomic, NMI, and also reentrant
> + * allocator -> tracepoint -> try_alloc_pages_noprof).
> + * Allocation is best effort and expected to fail easily, so nobody should
> + * rely on its success. Failures are not reported via warn_alloc().

Could you maybe add a pointer like "See AlwaysFailRestrictions below."
or something similar, to make the user aware of the comment below where
certain always-fail restrictions are mentioned, such as PREEMPT_RT + NMI
or deferred_pages_enabled()? It might not be easy to be aware of these
otherwise.

I'm curious how this turns out in the long run :)
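
On that note, since failures are silent and expected, callers have to
be written NULL-first. A hypothetical usage sketch follows;
grab_scratch_page() is made up for illustration, and try_alloc_pages()
is assumed to be the usual alloc_hooks() wrapper that the series adds
around the _noprof function below:

#include <linux/gfp.h>	/* try_alloc_pages() */
#include <linux/mm.h>	/* page_address() */

static void *grab_scratch_page(int nid)
{
	/*
	 * May fail for many reasons: contended pcp/zone locks,
	 * PREEMPT_RT in NMI/hardirq, deferred struct page init.
	 * No warning is printed either way.
	 */
	struct page *page = try_alloc_pages(nid, 0);

	if (!page)
		return NULL;	/* expected; fall back to a pre-allocated pool */
	return page_address(page);	/* already zeroed via __GFP_ZERO */
}

Note that freeing from the same restricted context needs the matching
support that only arrives in the later patches of this series.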
> + *
> + * Return: allocated page or NULL on failure.
> + */
> +struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> +{
> +	/*
> +	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
> +	 * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
> +	 * is not safe in arbitrary context.
> +	 *
> +	 * These two are the conditions for gfpflags_allow_spinning() being true.
> +	 *
> +	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
> +	 * to warn. Also warn would trigger printk() which is unsafe from
> +	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
> +	 * since the running context is unknown.
> +	 *
> +	 * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
> +	 * is safe in any context. Also zeroing the page is mandatory for
> +	 * BPF use cases.
> +	 *
> +	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
> +	 * specify it here to highlight that try_alloc_pages()
> +	 * doesn't want to deplete reserves.
> +	 */
> +	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
> +	unsigned int alloc_flags = ALLOC_TRYLOCK;
> +	struct alloc_context ac = { };
> +	struct page *page;
> +
> +	/*
> +	 * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.

PREEMPT_RT please. s/may/will/

> +	 * If spin_trylock() is called from hard IRQ the current task may be
> +	 * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
> +	 * task as the owner of another rt_spin_lock which will confuse PI
> +	 * logic, so return immediately if called from hard IRQ or NMI.
> +	 *
> +	 * Note, the irqs_disabled() case is ok. This function can be called
> +	 * from a raw_spin_lock_irqsave region.
> +	 */
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
> +		return NULL;
> +	if (!pcp_allowed_order(order))
> +		return NULL;
…

Sebastian