Date: Fri, 4 Oct 2024 16:28:01 -0700
From: Namhyung Kim
To: Song Liu
Cc: Roman Gushchin, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, LKML,
	bpf@vger.kernel.org, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
	Arnaldo Carvalho de Melo, Kees Cook
Subject: Re: [PATCH v4 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc
References: <20241002180956.1781008-1-namhyung@kernel.org>
 <20241002180956.1781008-3-namhyung@kernel.org>

On Fri, Oct 04, 2024 at 03:57:26PM -0700, Song Liu wrote:
> On Fri, Oct 4, 2024 at 2:58 PM Namhyung Kim wrote:
> >
> > On Fri, Oct 04, 2024 at 02:36:30PM -0700, Song Liu wrote:
> > > On Fri, Oct 4, 2024 at 2:25 PM Roman Gushchin wrote:
> > > >
> > > > On Fri, Oct 04, 2024 at 01:10:58PM -0700, Song Liu wrote:
> > > > > On Wed, Oct 2, 2024 at 11:10 AM Namhyung Kim wrote:
> > > > > >
> > > > > > The bpf_get_kmem_cache() is to get a slab cache information from a
> > > > > > virtual address like virt_to_cache().  If the address is a pointer
> > > > > > to a slab object, it'd return a valid kmem_cache pointer, otherwise
> > > > > > NULL is returned.
> > > > > >
> > > > > > It doesn't grab a reference count of the kmem_cache so the caller is
> > > > > > responsible to manage the access.
> > > > > > The intended use case for now is to symbolize locks in slab objects
> > > > > > from the lock contention tracepoints.
> > > > > >
> > > > > > Suggested-by: Vlastimil Babka
> > > > > > Acked-by: Roman Gushchin (mm/*)
> > > > > > Acked-by: Vlastimil Babka #mm/slab
> > > > > > Signed-off-by: Namhyung Kim
> > > > > > ---
> > > > > >  kernel/bpf/helpers.c |  1 +
> > > > > >  mm/slab_common.c     | 19 +++++++++++++++++++
> > > > > >  2 files changed, 20 insertions(+)
> > > > > >
> > > > > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > > > > index 4053f279ed4cc7ab..3709fb14288105c6 100644
> > > > > > --- a/kernel/bpf/helpers.c
> > > > > > +++ b/kernel/bpf/helpers.c
> > > > > > @@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
> > > > > >  BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > > >  BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
> > > > > >  BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
> > > > > > +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
> > > > > >  BTF_KFUNCS_END(common_btf_ids)
> > > > > >
> > > > > >  static const struct btf_kfunc_id_set common_kfunc_set = {
> > > > > > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > > > > > index 7443244656150325..5484e1cd812f698e 100644
> > > > > > --- a/mm/slab_common.c
> > > > > > +++ b/mm/slab_common.c
> > > > > > @@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(ksize);
> > > > > >
> > > > > > +#ifdef CONFIG_BPF_SYSCALL
> > > > > > +#include <linux/btf.h>
> > > > > > +
> > > > > > +__bpf_kfunc_start_defs();
> > > > > > +
> > > > > > +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
> > > > > > +{
> > > > > > +	struct slab *slab;
> > > > > > +
> > > > > > +	if (!virt_addr_valid(addr))
> > > > > > +		return NULL;
> > > > > > +
> > > > > > +	slab = virt_to_slab((void *)(long)addr);
> > > > > > +	return slab ? slab->slab_cache : NULL;
> > > > > > +}
> > > > >
> > > > > Do we need to hold a refcount to the slab_cache?  Given
> > > > > we make this kfunc available everywhere, including
> > > > > sleepable contexts, I think it is necessary.
> > > >
> > > > It's a really good question.
> > > >
> > > > If the callee somehow owns the slab object, as in the example
> > > > provided in the series (current task), it's not necessarily.
> > > >
> > > > If a user can pass a random address, you're right, we need to
> > > > grab the slab_cache's refcnt.  But then we also can't guarantee
> > > > that the object still belongs to the same slab_cache, the
> > > > function becomes racy by the definition.
> > >
> > > To be safe, we can limit the kfunc to sleepable context only.  Then
> > > we can lock slab_mutex for virt_to_slab, and hold a refcount
> > > to slab_cache.  We will need a KF_RELEASE kfunc to release
> > > the refcount later.
> >
> > Then it needs to call kmem_cache_destroy() for release which contains
> > rcu_barrier. :(
> >
> > > IIUC, this limitation (sleepable context only) shouldn't be a problem
> > > for perf use case?
> >
> > No, it would be called from the lock contention path including
> > spinlocks. :(
> >
> > Can we limit it to non-sleepable ctx and not to pass an arbitrary address
> > somehow (or not to save the result pointer)?
>
> I hacked something like the following.  It is not ideal, because we are
> taking a spinlock_t pointer instead of a void pointer.  To use this with a
> void pointer, we will need some verifier changes.

Thanks a lot for doing this!!  I'll take a look at the verifier to see what
needs to be done.

Namhyung
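For illustration only, here is a minimal sketch of the kind of consumer this
kfunc is meant for: a tracing program on the lock contention tracepoint that
feeds the contended lock address into bpf_get_kmem_cache() (using the u64
signature from the patch quoted above) and counts events per slab cache.  The
program name, the map name and the ctx[0] access pattern are assumptions made
for the sketch, not anything taken from this series.

/*
 * Illustrative sketch, not part of this series: count lock contention
 * events whose lock address lives in a slab object, keyed by the
 * kmem_cache pointer returned from bpf_get_kmem_cache().
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* u64-based signature from the patch quoted above */
extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, struct kmem_cache *);
	__type(value, u64);
} contention_by_cache SEC(".maps");	/* map name made up for the sketch */

SEC("tp_btf/contention_begin")
int count_slab_lock_contention(u64 *ctx)
{
	u64 lock_addr = ctx[0];		/* void *lock argument of the tracepoint */
	struct kmem_cache *s;
	u64 *cnt, one = 1;

	/* NULL means the address is not backed by a slab object. */
	s = bpf_get_kmem_cache(lock_addr);
	if (!s)
		return 0;

	cnt = bpf_map_lookup_elem(&contention_by_cache, &s);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	else
		bpf_map_update_elem(&contention_by_cache, &s, &one, BPF_ANY);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Userspace would then translate the collected kmem_cache pointers into cache
names, e.g. with the kmem_cache iterator added earlier in the series.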
>
> diff --git i/kernel/bpf/helpers.c w/kernel/bpf/helpers.c
> index 3709fb142881..7311a26ecb01 100644
> --- i/kernel/bpf/helpers.c
> +++ w/kernel/bpf/helpers.c
> @@ -3090,7 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
>  BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
>  BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
> -BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
>  BTF_KFUNCS_END(common_btf_ids)
>
>  static const struct btf_kfunc_id_set common_kfunc_set = {
> diff --git i/mm/slab_common.c w/mm/slab_common.c
> index 5484e1cd812f..3e3e5f172f2e 100644
> --- i/mm/slab_common.c
> +++ w/mm/slab_common.c
> @@ -1327,14 +1327,15 @@ EXPORT_SYMBOL(ksize);
>
>  __bpf_kfunc_start_defs();
>
> -__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
> +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(spinlock_t *addr)
>  {
>  	struct slab *slab;
> +	unsigned long a = (unsigned long)addr;
>
> -	if (!virt_addr_valid(addr))
> +	if (!virt_addr_valid(a))
>  		return NULL;
>
> -	slab = virt_to_slab((void *)(long)addr);
> +	slab = virt_to_slab(addr);
>  	return slab ? slab->slab_cache : NULL;
>  }
>
> @@ -1346,4 +1347,3 @@ EXPORT_TRACEPOINT_SYMBOL(kmalloc);
>  EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
>  EXPORT_TRACEPOINT_SYMBOL(kfree);
>  EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
> -
> diff --git i/tools/testing/selftests/bpf/progs/kmem_cache_iter.c w/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
> index 3f6ec15a1bf6..8238155a5055 100644
> --- i/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
> +++ w/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
> @@ -16,7 +16,7 @@ struct {
>  	__uint(max_entries, 1024);
>  } slab_hash SEC(".maps");
>
> -extern struct kmem_cache *bpf_get_kmem_cache(__u64 addr) __ksym;
> +extern struct kmem_cache *bpf_get_kmem_cache(spinlock_t *addr) __ksym;
>
>  /* result, will be checked by userspace */
>  int found;
> @@ -46,21 +46,23 @@ int slab_info_collector(struct bpf_iter__kmem_cache *ctx)
>  SEC("raw_tp/bpf_test_finish")
>  int BPF_PROG(check_task_struct)
>  {
> -	__u64 curr = bpf_get_current_task();
> +	struct task_struct *curr = bpf_get_current_task_btf();
>  	struct kmem_cache *s;
>  	char *name;
>
> -	s = bpf_get_kmem_cache(curr);
> +	s = bpf_get_kmem_cache(&curr->alloc_lock);
>  	if (s == NULL) {
>  		found = -1;
>  		return 0;
>  	}
>
> +	bpf_rcu_read_lock();
>  	name = bpf_map_lookup_elem(&slab_hash, &s);
>  	if (name && !bpf_strncmp(name, 11, "task_struct"))
>  		found = 1;
>  	else
>  		found = -2;
> +	bpf_rcu_read_unlock();
>
>  	return 0;
>  }
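For context on why both signatures return NULL for addresses that are not slab
objects: after the virt_addr_valid() check, everything hinges on virt_to_slab()
only yielding a slab for pages whose folio carries the slab flag.  The snippet
below is a simplified paraphrase written for this discussion (the real helper
lives in mm/slab.h; the "_sketch" suffix and the exact shape are mine), not a
quote of the tree.

/*
 * Simplified paraphrase of virt_to_slab() (mm/slab.h), for context only.
 * The "_sketch" name is not a real kernel symbol.
 */
static inline struct slab *virt_to_slab_sketch(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* The page backing this address was not allocated from a slab cache. */
	if (!folio_test_slab(folio))
		return NULL;

	/* struct slab overlays struct folio for slab pages. */
	return folio_slab(folio);
}

Since virt_addr_valid() guarantees a backing struct page exists, the lookup
itself should not fault; the open question in the thread is only whether the
object may have been freed and its page recycled in the meantime.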