Date: Tue, 14 Jan 2025 11:31:33 +0100
From: Michal Hocko <mhocko@suse.com>
To: Alexei Starovoitov
Cc: bpf@vger.kernel.org, andrii@kernel.org, memxor@gmail.com,
	akpm@linux-foundation.org, peterz@infradead.org, vbabka@suse.cz,
	bigeasy@linutronix.de, rostedt@goodmis.org, houtao1@huawei.com,
	hannes@cmpxchg.org, shakeel.butt@linux.dev, willy@infradead.org,
	tglx@linutronix.de, jannh@google.com, tj@kernel.org,
	linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH bpf-next v4 1/6] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
References: <20250114021922.92609-1-alexei.starovoitov@gmail.com>
	<20250114021922.92609-2-alexei.starovoitov@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20250114021922.92609-2-alexei.starovoitov@gmail.com>
On Mon 13-01-25 18:19:17, Alexei Starovoitov wrote:
> From: Alexei Starovoitov
> 
> Tracing BPF programs execute from tracepoints and kprobes where
> running context is unknown, but they need to request additional
> memory.
> The prior workarounds were using pre-allocated memory and
> BPF specific freelists to satisfy such allocation requests.
> Instead, introduce gfpflags_allow_spinning() condition that signals
> to the allocator that running context is unknown.
> Then rely on percpu free list of pages to allocate a page.
> The rmqueue_pcplist() should be able to pop the page from it.
> If it fails (due to IRQ re-entrancy or list being empty) then
> try_alloc_pages() attempts to spin_trylock zone->lock
> and refill percpu freelist as normal.
> BPF program may execute with IRQs disabled and zone->lock is
> sleeping in RT, so trylock is the only option. In theory we can
> introduce percpu reentrance counter and increment it every time
> spin_lock_irqsave(&zone->lock, flags) is used, but we cannot rely
> on it. Even if this cpu is not in the page_alloc path the
> spin_lock_irqsave() is not safe, since BPF prog might be called
> from tracepoint where preemption is disabled. So trylock only.
> 
> Note, free_page and memcg are not taught about gfpflags_allow_spinning()
> condition. The support comes in the next patches.
> 
> This is a first step towards supporting BPF requirements in SLUB
> and getting rid of bpf_mem_alloc.
> That goal was discussed at LSFMM: https://lwn.net/Articles/974138/
> 
> Signed-off-by: Alexei Starovoitov

LGTM, I am not entirely clear on the kmsan_alloc_page part though. As
long as that part is correct you can add
Acked-by: Michal Hocko <mhocko@suse.com>

Other than that try_alloc_pages_noprof begs some user documentation.

/**
 * try_alloc_pages_noprof - opportunistic reentrant allocation from any context
 * @nid - node to allocate from
 * @order - allocation order size
 *
 * Allocates pages of a given order from the given node. This is safe to
 * call from any context (from atomic, NMI, and also reentrant
 * allocator -> tracepoint -> try_alloc_pages_noprof).
 * Allocation is best effort and expected to fail easily, so nobody should
 * rely on the success. Failures are not reported via warn_alloc().
 *
 * Return: allocated page or NULL on failure.
 */

> +struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> +{
> +	/*
> +	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
> +	 * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
> +	 * is not safe in arbitrary context.
> +	 *
> +	 * These two are the conditions for gfpflags_allow_spinning() being true.
> +	 *
> +	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
> +	 * to warn. Also warn would trigger printk() which is unsafe from
> +	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
> +	 * since the running context is unknown.
> +	 *
> +	 * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
> +	 * is safe in any context. Also zeroing the page is mandatory for
> +	 * BPF use cases.
> +	 *
> +	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
> +	 * specify it here to highlight that try_alloc_pages()
> +	 * doesn't want to deplete reserves.
> +	 */
> +	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
> +	unsigned int alloc_flags = ALLOC_TRYLOCK;
> +	struct alloc_context ac = { };
> +	struct page *page;
> +
> +	/*
> +	 * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.
> +	 * If spin_trylock() is called from hard IRQ the current task may be
> +	 * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
> +	 * task as the owner of another rt_spin_lock which will confuse PI
> +	 * logic, so return immediately if called from hard IRQ or NMI.
> +	 *
> +	 * Note, irqs_disabled() case is ok. This function can be called
> +	 * from raw_spin_lock_irqsave region.
> +	 */
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
> +		return NULL;
> +	if (!pcp_allowed_order(order))
> +		return NULL;
> +
> +#ifdef CONFIG_UNACCEPTED_MEMORY
> +	/* Bailout, since try_to_accept_memory_one() needs to take a lock */
> +	if (has_unaccepted_memory())
> +		return NULL;
> +#endif
> +	/* Bailout, since _deferred_grow_zone() needs to take a lock */
> +	if (deferred_pages_enabled())
> +		return NULL;
> +
> +	if (nid == NUMA_NO_NODE)
> +		nid = numa_node_id();
> +
> +	prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
> +			    &alloc_gfp, &alloc_flags);
> +
> +	/*
> +	 * Best effort allocation from percpu free list.
> +	 * If it's empty attempt to spin_trylock zone->lock.
> +	 */
> +	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
> +
> +	/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
> +
> +	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
> +	kmsan_alloc_page(page, order, alloc_gfp);
> +	return page;
> +}
> -- 
> 2.43.5

-- 
Michal Hocko
SUSE Labs
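P.S. For readers outside the kernel tree, the trylock-only discipline
discussed above (pop from a lock-free fast-path list, and on a miss only
*try* to take the refill lock, returning NULL instead of ever blocking)
can be sketched in plain userspace C. Everything here is invented for
illustration: the names (try_alloc_block, refill_lock, free_list) and
the single shared list stand in for the per-cpu pcp list and zone->lock;
this is not the kernel code.

```c
/*
 * Userspace sketch of the trylock-only allocation pattern.
 * All identifiers are illustrative stand-ins, not kernel APIs.
 */
#include <pthread.h>
#include <stdlib.h>

#define BATCH 4

static void *free_list[BATCH];  /* stand-in for the percpu free list */
static int free_count;
/* stand-in for zone->lock; only ever taken with trylock */
static pthread_mutex_t refill_lock = PTHREAD_MUTEX_INITIALIZER;

/* Refill the fast-path list; caller must hold refill_lock. */
static void refill_locked(void)
{
	while (free_count < BATCH)
		free_list[free_count++] = malloc(4096);
}

/*
 * Best-effort allocation: pop from the local list, and if it is
 * empty only *try* to take the refill lock. On contention return
 * NULL immediately instead of sleeping or spinning, mirroring the
 * try_alloc_pages() semantics: failure is expected and cheap.
 */
static void *try_alloc_block(void)
{
	if (free_count > 0)
		return free_list[--free_count];

	if (pthread_mutex_trylock(&refill_lock) != 0)
		return NULL;	/* lock busy: fail, never block */

	refill_locked();
	pthread_mutex_unlock(&refill_lock);
	return free_count ? free_list[--free_count] : NULL;
}
```

The point of the sketch is the failure mode: while refill_lock is held
by anyone, try_alloc_block() returns NULL rather than waiting, which is
what makes the pattern usable from contexts that must not sleep.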