From: Song Liu <song@kernel.org>
Date: Fri, 4 Oct 2024 15:57:26 -0700
Subject: Re: [PATCH v4 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc
To: Namhyung Kim
Cc: Roman Gushchin, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
 KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, LKML,
 bpf@vger.kernel.org, Andrew Morton, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Vlastimil Babka,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
 Arnaldo Carvalho de Melo, Kees Cook
References: <20241002180956.1781008-1-namhyung@kernel.org>
 <20241002180956.1781008-3-namhyung@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Fri, Oct 4, 2024 at 2:58 PM Namhyung Kim wrote:
>
> On Fri, Oct 04, 2024 at 02:36:30PM -0700, Song Liu wrote:
> > On Fri, Oct 4, 2024 at 2:25 PM Roman Gushchin wrote:
> > >
> > > On Fri, Oct 04, 2024 at 01:10:58PM -0700, Song Liu wrote:
> > > > On Wed, Oct 2, 2024 at 11:10 AM Namhyung Kim wrote:
> > > > >
> > > > > The bpf_get_kmem_cache() is to get
> > > > > slab cache information from a virtual address, like
> > > > > virt_to_cache(). If the address is a pointer to a slab object,
> > > > > it'd return a valid kmem_cache pointer; otherwise NULL is
> > > > > returned.
> > > > >
> > > > > It doesn't grab a reference count of the kmem_cache, so the
> > > > > caller is responsible for managing the access. The intended use
> > > > > case for now is to symbolize locks in slab objects from the
> > > > > lock contention tracepoints.
> > > > >
> > > > > Suggested-by: Vlastimil Babka
> > > > > Acked-by: Roman Gushchin (mm/*)
> > > > > Acked-by: Vlastimil Babka #mm/slab
> > > > > Signed-off-by: Namhyung Kim
> > > > > ---
> > > > >  kernel/bpf/helpers.c | 1 +
> > > > >  mm/slab_common.c | 19 +++++++++++++++++++
> > > > >  2 files changed, 20 insertions(+)
> > > > >
> > > > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > > > index 4053f279ed4cc7ab..3709fb14288105c6 100644
> > > > > --- a/kernel/bpf/helpers.c
> > > > > +++ b/kernel/bpf/helpers.c
> > > > > @@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
> > > > >  BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > >  BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
> > > > >  BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
> > > > > +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
> > > > >  BTF_KFUNCS_END(common_btf_ids)
> > > > >
> > > > >  static const struct btf_kfunc_id_set common_kfunc_set = {
> > > > > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > > > > index 7443244656150325..5484e1cd812f698e 100644
> > > > > --- a/mm/slab_common.c
> > > > > +++ b/mm/slab_common.c
> > > > > @@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
> > > > >  }
> > > > >  EXPORT_SYMBOL(ksize);
> > > > >
> > > > > +#ifdef CONFIG_BPF_SYSCALL
> > > > > +#include
> > > > > +
> > > > > +__bpf_kfunc_start_defs();
> > > > > +
> > > > > +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
> > > > > +{
> > > > > +	struct slab *slab;
> > > > > +
> > > > > +	if (!virt_addr_valid(addr))
> > > > > +		return NULL;
> > > > > +
> > > > > +	slab = virt_to_slab((void *)(long)addr);
> > > > > +	return slab ? slab->slab_cache : NULL;
> > > > > +}
> > > >
> > > > Do we need to hold a refcount to the slab_cache? Given that
> > > > we make this kfunc available everywhere, including
> > > > sleepable contexts, I think it is necessary.
> > >
> > > It's a really good question.
> > >
> > > If the callee somehow owns the slab object, as in the example
> > > provided in the series (current task), it's not necessarily
> > > needed.
> > >
> > > If a user can pass a random address, you're right, we need to
> > > grab the slab_cache's refcnt. But then we also can't guarantee
> > > that the object still belongs to the same slab_cache; the
> > > function becomes racy by definition.
> >
> > To be safe, we can limit the kfunc to sleepable contexts only. Then
> > we can lock slab_mutex for virt_to_slab, and hold a refcount
> > to the slab_cache. We will need a KF_RELEASE kfunc to release
> > the refcount later.
>
> Then it needs to call kmem_cache_destroy() for release, which
> contains an rcu_barrier. :(
>
> > IIUC, this limitation (sleepable context only) shouldn't be a
> > problem for the perf use case?
>
> No, it would be called from the lock contention path, including
> spinlocks. :(
>
> Can we limit it to non-sleepable ctx and not pass an arbitrary
> address somehow (or not save the result pointer)?

I hacked up something like the following. It is not ideal, because we
are taking a spinlock_t pointer instead of a void pointer. To use this
with a void pointer, we will need some verifier changes.
Thanks,
Song

diff --git i/kernel/bpf/helpers.c w/kernel/bpf/helpers.c
index 3709fb142881..7311a26ecb01 100644
--- i/kernel/bpf/helpers.c
+++ w/kernel/bpf/helpers.c
@@ -3090,7 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
 BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
-BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
 BTF_KFUNCS_END(common_btf_ids)

 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git i/mm/slab_common.c w/mm/slab_common.c
index 5484e1cd812f..3e3e5f172f2e 100644
--- i/mm/slab_common.c
+++ w/mm/slab_common.c
@@ -1327,14 +1327,15 @@ EXPORT_SYMBOL(ksize);

 __bpf_kfunc_start_defs();

-__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
+__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(spinlock_t *addr)
 {
 	struct slab *slab;
+	unsigned long a = (unsigned long)addr;

-	if (!virt_addr_valid(addr))
+	if (!virt_addr_valid(a))
 		return NULL;

-	slab = virt_to_slab((void *)(long)addr);
+	slab = virt_to_slab(addr);
 	return slab ?
slab->slab_cache : NULL;
 }

@@ -1346,4 +1347,3 @@ EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
 EXPORT_TRACEPOINT_SYMBOL(kfree);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
-
diff --git i/tools/testing/selftests/bpf/progs/kmem_cache_iter.c w/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
index 3f6ec15a1bf6..8238155a5055 100644
--- i/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
+++ w/tools/testing/selftests/bpf/progs/kmem_cache_iter.c
@@ -16,7 +16,7 @@ struct {
 	__uint(max_entries, 1024);
 } slab_hash SEC(".maps");

-extern struct kmem_cache *bpf_get_kmem_cache(__u64 addr) __ksym;
+extern struct kmem_cache *bpf_get_kmem_cache(spinlock_t *addr) __ksym;

 /* result, will be checked by userspace */
 int found;

@@ -46,21 +46,23 @@ int slab_info_collector(struct bpf_iter__kmem_cache *ctx)

 SEC("raw_tp/bpf_test_finish")
 int BPF_PROG(check_task_struct)
 {
-	__u64 curr = bpf_get_current_task();
+	struct task_struct *curr = bpf_get_current_task_btf();
 	struct kmem_cache *s;
 	char *name;

-	s = bpf_get_kmem_cache(curr);
+	s = bpf_get_kmem_cache(&curr->alloc_lock);
 	if (s == NULL) {
 		found = -1;
 		return 0;
 	}

+	bpf_rcu_read_lock();
 	name = bpf_map_lookup_elem(&slab_hash, &s);
 	if (name && !bpf_strncmp(name, 11, "task_struct"))
 		found = 1;
 	else
 		found = -2;
+	bpf_rcu_read_unlock();

 	return 0;
 }