From: Xiu Jianfeng <xiujianfeng@huaweicloud.com>
Subject: Re: [PATCH 5/5] slab: Allocate and use per-call-site caches
Date: Sat, 17 Aug 2024 09:30:58 +0800
Message-ID: <1ddb539a-79ed-d992-76cf-061acb4df11e@huaweicloud.com>
To: Kees Cook, Vlastimil Babka
Cc: Suren Baghdasaryan, Kent Overstreet, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
 "GONG, Ruiqi", Jann Horn, Matteo Rizzo, jvoisin,
 linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
In-Reply-To: <20240809073309.2134488-5-kees@kernel.org>
References: <20240809072532.work.266-kees@kernel.org>
 <20240809073309.2134488-5-kees@kernel.org>
Content-Type: text/plain; charset=UTF-8
Hi Kees,

On 2024/8/9 15:33, Kees Cook wrote:
> Use separate per-call-site kmem_cache or kmem_buckets. These are
> allocated on demand to avoid wasting memory for unused caches.
>
> A few caches need to be allocated very early to support allocating the
> caches themselves: kstrdup(), kvasprintf(), and pcpu_mem_zalloc(). Any
> GFP_ATOMIC allocations are currently left to be allocated from
> KMALLOC_NORMAL.
>
> With a distro config, /proc/slabinfo grows from ~400 entries to ~2200.
>
> Since this feature (CONFIG_SLAB_PER_SITE) is redundant to
> CONFIG_RANDOM_KMALLOC_CACHES, mark it as incompatible. Add Kconfig help
> text that compares the features.
>
> Improvements needed:
> - Retain call site gfp flags in alloc_tag meta field to:
>   - pre-allocate all GFP_ATOMIC caches (since their caches cannot
>     be allocated on demand unless we want them to be GFP_ATOMIC
>     themselves...)
>   - Separate MEMCG allocations as well
> - Allocate individual caches within kmem_buckets on demand to
>   further reduce memory usage overhead.
>
> Signed-off-by: Kees Cook
> ---
> Cc: Suren Baghdasaryan
> Cc: Kent Overstreet
> Cc: Vlastimil Babka
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc: Roman Gushchin
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: linux-mm@kvack.org
> ---
>  include/linux/alloc_tag.h |   8 +++
>  lib/alloc_tag.c           | 121 +++++++++++++++++++++++++++++++++++---
>  mm/Kconfig                |  19 +++++-
>  mm/slab_common.c          |   1 +
>  mm/slub.c                 |  31 +++++++++-
>  5 files changed, 170 insertions(+), 10 deletions(-)
>
> [...]
> diff --git a/mm/slub.c b/mm/slub.c
> index 3520acaf9afa..d14102c4b4d7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4135,6 +4135,35 @@ void *__kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
>  }
>  EXPORT_SYMBOL(__kmalloc_large_node_noprof);
>
> +static __always_inline
> +struct kmem_cache *choose_slab(size_t size, kmem_buckets *b, gfp_t flags,
> +			       unsigned long caller)
> +{
> +#ifdef CONFIG_SLAB_PER_SITE
> +	struct alloc_tag *tag = current->alloc_tag;

There is a compile error here if CONFIG_MEM_ALLOC_PROFILING is disabled
when I test this patchset:
mm/slub.c: In function ‘choose_slab’:
mm/slub.c:4187:40: error: ‘struct task_struct’ has no member named ‘alloc_tag’
 4187 |         struct alloc_tag *tag = current->alloc_tag;
      |                                        ^~
  CC      mm/page_reporting.o

Maybe CONFIG_SLAB_PER_SITE should depend on CONFIG_MEM_ALLOC_PROFILING.

> +
> +	if (!b && tag && tag->meta.sized &&
> +	    kmalloc_type(flags, caller) == KMALLOC_NORMAL &&
> +	    (flags & GFP_ATOMIC) != GFP_ATOMIC) {
> +		void *p = READ_ONCE(tag->meta.cache);
> +
> +		if (!p && slab_state >= UP) {
> +			alloc_tag_site_init(&tag->ct, true);
> +			p = READ_ONCE(tag->meta.cache);
> +		}
> +
> +		if (tag->meta.sized < SIZE_MAX) {
> +			if (p)
> +				return p;
> +			/* Otherwise continue with default buckets. */
> +		} else {
> +			b = p;
> +		}
> +	}
> +#endif
> +	return kmalloc_slab(size, b, flags, caller);
> +}
> +
>  static __always_inline
>  void *__do_kmalloc_node(size_t size, kmem_buckets *b, gfp_t flags, int node,
>  			unsigned long caller)
> @@ -4152,7 +4181,7 @@ void *__do_kmalloc_node(size_t size, kmem_buckets *b, gfp_t flags, int node,
>  	if (unlikely(!size))
>  		return ZERO_SIZE_PTR;
>
> -	s = kmalloc_slab(size, b, flags, caller);
> +	s = choose_slab(size, b, flags, caller);
>
>  	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
>  	ret = kasan_kmalloc(s, ret, size, flags);
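Concretely, the dependency fix could look something like the fragment
below. This is only a sketch of the suggestion above, not the patch's
actual mm/Kconfig hunk (which I haven't quoted here); the prompt text
and the !RANDOM_KMALLOC_CACHES exclusion are illustrative, based on the
commit message saying the two features are mutually incompatible:

```kconfig
config SLAB_PER_SITE
	bool "Allocate separate kmalloc caches per call site"
	depends on MEM_ALLOC_PROFILING
	depends on !RANDOM_KMALLOC_CACHES
	help
	  Allocate a dedicated kmem_cache or kmem_buckets for each
	  kmalloc() call site, so current->alloc_tag (which only exists
	  under MEM_ALLOC_PROFILING) is always available to choose_slab().
```

With `depends on MEM_ALLOC_PROFILING`, the `current->alloc_tag` access in
choose_slab() can never be compiled in a configuration where the member
doesn't exist, which would avoid the build error above.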