From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 15 Mar 2024 17:06:32 +0000
Subject: Re: [PATCH v5 23/37] mm/slab: add allocation accounting into slab
 allocation and free paths
To: Vlastimil Babka
References: <20240306182440.2003814-1-surenb@google.com>
 <20240306182440.2003814-24-surenb@google.com>
 <1f51ffe8-e5b9-460f-815e-50e3a81c57bf@suse.cz>
Cc: akpm@linux-foundation.org, kent.overstreet@linux.dev, mhocko@suse.com,
 hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 penguin-kernel@i-love.sakura.ne.jp, corbet@lwn.net, void@manifault.com,
 peterz@infradead.org, juri.lelli@redhat.com, catalin.marinas@arm.com,
 will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com,
 dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
 david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org,
 nathan@kernel.org, dennis@kernel.org, jhubbard@nvidia.com, tj@kernel.org,
 muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org,
 pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com,
 dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com,
 keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com,
 gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com,
 vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org,
 bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com,
 penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com,
 glider@google.com, elver@google.com, dvyukov@google.com,
 shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com,
 aliceryhl@google.com, rientjes@google.com, minchan@google.com,
 kaleshsingh@google.com, kernel-team@android.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
 linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-modules@vger.kernel.org,
 kasan-dev@googlegroups.com, cgroups@vger.kernel.org

On Fri, Mar 15, 2024 at 4:52 PM Vlastimil Babka wrote:
>
> On 3/15/24 16:43, Suren Baghdasaryan wrote:
> > On Fri, Mar 15, 2024 at 3:58 AM Vlastimil Babka wrote:
> >>
> >> On 3/6/24 19:24, Suren Baghdasaryan wrote:
> >> > Account slab allocations using codetag reference embedded into slabobj_ext.
> >> >
> >> > Signed-off-by: Suren Baghdasaryan
> >> > Co-developed-by: Kent Overstreet
> >> > Signed-off-by: Kent Overstreet
> >> > Reviewed-by: Kees Cook
> >>
> >> Reviewed-by: Vlastimil Babka
> >>
> >> Nit below:
> >>
> >> > @@ -3833,6 +3913,7 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
> >> >                             unsigned int orig_size)
> >> >  {
> >> >         unsigned int zero_size = s->object_size;
> >> > +       struct slabobj_ext *obj_exts;
> >> >         bool kasan_init = init;
> >> >         size_t i;
> >> >         gfp_t init_flags = flags & gfp_allowed_mask;
> >> > @@ -3875,6 +3956,12 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
> >> >                 kmemleak_alloc_recursive(p[i], s->object_size, 1,
> >> >                                          s->flags, init_flags);
> >> >                 kmsan_slab_alloc(s, p[i], init_flags);
> >> > +               obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]);
> >> > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> >> > +               /* obj_exts can be allocated for other reasons */
> >> > +               if (likely(obj_exts) && mem_alloc_profiling_enabled())
>
> Could you at least flip these two checks then so the static key one goes first?

Yes, definitely. I was thinking about removing need_slab_obj_ext()
from prepare_slab_obj_exts_hook() and adding this instead of the above
code:

+               if (need_slab_obj_ext()) {
+                       obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+                       /*
+                        * Currently obj_exts is used only for allocation
+                        * profiling. If other users appear then
+                        * mem_alloc_profiling_enabled() check should be
+                        * added here.
+                        */
+                       if (likely(obj_exts))
+                               alloc_tag_add(&obj_exts->ref, current->alloc_tag, s->size);
+#endif
+               }

Does that look good?
> >> > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> >> > +               /* obj_exts can be allocated for other reasons */
> >> > +               if (likely(obj_exts) && mem_alloc_profiling_enabled())
> >> > +                       alloc_tag_add(&obj_exts->ref, current->alloc_tag, s->size);
> >> > +#endif
> >>
> >> I think you could still do this a bit better:
> >>
> >> Check mem_alloc_profiling_enabled() once before the whole block calling
> >> prepare_slab_obj_exts_hook() and alloc_tag_add()
> >> Remove need_slab_obj_ext() check from prepare_slab_obj_exts_hook()
> >
> > Agree about checking mem_alloc_profiling_enabled() early and one time,
> > except I would like to use need_slab_obj_ext() instead of
> > mem_alloc_profiling_enabled() for that check. Currently they are
> > equivalent but if there are more slab_obj_ext users in the future then
> > there will be cases when we need to prepare_slab_obj_exts_hook() even
> > when mem_alloc_profiling_enabled()==false. need_slab_obj_ext() will be
> > easy to extend for such cases.
>
> I thought we don't generally future-proof internal implementation details
> like this until it's actually needed. But at least what I suggested above
> would help, thanks.
>
> > Thanks,
> > Suren.
> >
> >>
> >> >         }
> >> >
> >> >         memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> >> > @@ -4353,6 +4440,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
> >> >                 unsigned long addr)
> >> >  {
> >> >         memcg_slab_free_hook(s, slab, &object, 1);
> >> > +       alloc_tagging_slab_free_hook(s, slab, &object, 1);
> >> >
> >> >         if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
> >> >                 do_slab_free(s, slab, object, object, 1, addr);
> >> > @@ -4363,6 +4451,7 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
> >> >                 void *tail, void **p, int cnt, unsigned long addr)
> >> >  {
> >> >         memcg_slab_free_hook(s, slab, p, cnt);
> >> > +       alloc_tagging_slab_free_hook(s, slab, p, cnt);
> >> >         /*
> >> >          * With KASAN enabled slab_free_freelist_hook modifies the freelist
> >> >          * to remove objects, whose reuse must be delayed.
> >> >