Date: Wed, 7 Sep 2022 23:57:34 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Feng Tang
Cc: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin, Dmitry Vyukov,
    Jonathan Corbet, Dave Hansen, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com
Subject: Re: [PATCH v5 2/4] mm/slub: only zero the requested size of buffer for kzalloc
References: <20220907071023.3838692-1-feng.tang@intel.com>
 <20220907071023.3838692-3-feng.tang@intel.com>
In-Reply-To: <20220907071023.3838692-3-feng.tang@intel.com>

On Wed, Sep 07, 2022 at 03:10:21PM +0800, Feng Tang wrote:
> kzalloc/kmalloc will round up the request size to a fixed size
> (mostly power of 2), so the allocated memory could be more than
> requested. Currently kzalloc family APIs will zero all the
> allocated memory.
>
> To detect out-of-bound usage of the extra allocated memory, only
> zero the requested part, so that sanity check could be added to
> the extra space later.
>
> For kzalloc users who will call ksize() later and utilize this
> extra space, please be aware that the space is not zeroed any
> more.

Can this break existing users? Or should we initialize the extra
bytes to zero when someone calls ksize()?

If it is not going to break anything, I think we should add a comment
documenting this, something like "... kzalloc() will initialize to
zero only for @size bytes ...".
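To make the concern concrete, below is a minimal sketch (a
hypothetical caller, not code from this patch or from the tree) of
the ksize() pattern whose guarantee changes:

#include <linux/printk.h>
#include <linux/slab.h>

/* Hypothetical caller, for illustration only. */
static int ksize_tail_user(void)
{
	char *buf = kzalloc(40, GFP_KERNEL);
	size_t n;

	if (!buf)
		return -ENOMEM;

	/*
	 * SLUB serves the 40-byte request from the kmalloc-64 cache
	 * (the exact size depends on the configuration), so ksize()
	 * reports the rounded-up allocation size, 64 here.
	 */
	n = ksize(buf);

	/*
	 * Before this patch, bytes [40, 64) were zeroed as well; with
	 * it, only the first 40 bytes are. A caller that grows into
	 * the ksize() tail assuming it is zero-filled now reads
	 * uninitialized memory.
	 */
	if (buf[n - 1] != 0)	/* possible after this patch */
		pr_info("ksize() tail is no longer zeroed\n");

	kfree(buf);
	return 0;
}

If no such callers exist, documenting the new behavior as suggested
above may be enough; if they do, the extra space would have to keep
being zeroed (e.g. when ksize() is called).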
> Signed-off-by: Feng Tang
> ---
>  mm/slab.c | 6 +++---
>  mm/slab.h | 9 +++++++--
>  mm/slub.c | 6 +++---
>  3 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index a5486ff8362a..73ecaa7066e1 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3253,7 +3253,7 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
>  	init = slab_want_init_on_alloc(flags, cachep);
>
>  out:
> -	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
> +	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init, 0);
>  	return objp;
>  }
>
> @@ -3506,13 +3506,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	 * Done outside of the IRQ disabled section.
>  	 */
>  	slab_post_alloc_hook(s, objcg, flags, size, p,
> -				slab_want_init_on_alloc(flags, s));
> +				slab_want_init_on_alloc(flags, s), 0);
>  	/* FIXME: Trace call missing. Christoph would like a bulk variant */
>  	return size;
>  error:
>  	local_irq_enable();
>  	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
> -	slab_post_alloc_hook(s, objcg, flags, i, p, false);
> +	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
>  	kmem_cache_free_bulk(s, i, p);
>  	return 0;
>  }
> diff --git a/mm/slab.h b/mm/slab.h
> index d0ef9dd44b71..20f9e2a9814f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -730,12 +730,17 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
>
>  static inline void slab_post_alloc_hook(struct kmem_cache *s,
>  					struct obj_cgroup *objcg, gfp_t flags,
> -					size_t size, void **p, bool init)
> +					size_t size, void **p, bool init,
> +					unsigned int orig_size)
>  {
>  	size_t i;
>
>  	flags &= gfp_allowed_mask;
>
> +	/* If original request size(kmalloc) is not set, use object_size */
> +	if (!orig_size)
> +		orig_size = s->object_size;

I think it would be more readable to pass s->object_size at the call
sites than to pass zero here.

> +
>  	/*
>  	 * As memory initialization might be integrated into KASAN,
>  	 * kasan_slab_alloc and initialization memset must be
> @@ -746,7 +751,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>  	for (i = 0; i < size; i++) {
>  		p[i] = kasan_slab_alloc(s, p[i], flags, init);
>  		if (p[i] && init && !kasan_has_integrated_init())
> -			memset(p[i], 0, s->object_size);
> +			memset(p[i], 0, orig_size);
>  		kmemleak_alloc_recursive(p[i], s->object_size, 1,
>  					 s->flags, flags);
>  	}
> diff --git a/mm/slub.c b/mm/slub.c
> index effd994438e6..f523601d3fcf 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3376,7 +3376,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
>  	init = slab_want_init_on_alloc(gfpflags, s);
>
>  out:
> -	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
> +	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
>
>  	return object;
>  }
> @@ -3833,11 +3833,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	 * Done outside of the IRQ disabled fastpath loop.
>  	 */
>  	slab_post_alloc_hook(s, objcg, flags, size, p,
> -			     slab_want_init_on_alloc(flags, s));
> +			     slab_want_init_on_alloc(flags, s), 0);
>  	return i;
>  error:
>  	slub_put_cpu_ptr(s->cpu_slab);
> -	slab_post_alloc_hook(s, objcg, flags, i, p, false);
> +	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
>  	kmem_cache_free_bulk(s, i, p);
>  	return 0;
>  }
> --
> 2.34.1
>

-- 
Thanks,
Hyeonggon