From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 24 Nov 2022 20:33:12 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
    Roman Gushchin, Andrew Morton, Linus Torvalds, Matthew Wilcox,
    patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 02/12] mm, slub: add CONFIG_SLUB_TINY
References: <20221121171202.22080-1-vbabka@suse.cz>
 <20221121171202.22080-3-vbabka@suse.cz>
In-Reply-To: <20221121171202.22080-3-vbabka@suse.cz>

On Mon, Nov 21, 2022 at 06:11:52PM +0100, Vlastimil Babka wrote:
> For tiny systems that have used SLOB until now, SLUB might be
> impractical due to its higher memory usage. To help with that, introduce
> an option CONFIG_SLUB_TINY that modifies SLUB to use less memory.
> This is done by sacrificing scalability, security and debugging
> features, therefore not recommended for any system with more than 16MB
> RAM.
> 
> This commit introduces the option and uses it to set other related
> options in a way that reduces memory usage.
> 
> Signed-off-by: Vlastimil Babka
> ---
>  mm/Kconfig       | 21 +++++++++++++++++----
>  mm/Kconfig.debug |  2 +-
>  2 files changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 57e1d8c5b505..5941cb34e30d 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -230,6 +230,19 @@ config SLOB
>  
>  endchoice
>  
> +config SLUB_TINY
> +	bool "Configure SLUB for minimal memory footprint"
> +	depends on SLUB && EXPERT
> +	select SLAB_MERGE_DEFAULT
> +	help
> +	   Configures the SLUB allocator in a way to achieve minimal memory
> +	   footprint, sacrificing scalability, debugging and other features.
> +	   This is intended only for the smallest system that had used the
> +	   SLOB allocator and is not recommended for systems with more than
> +	   16MB RAM.
> +
> +	   If unsure, say N.
> +
>  config SLAB_MERGE_DEFAULT
>  	bool "Allow slab caches to be merged"
>  	default y
> @@ -247,7 +260,7 @@ config SLAB_MERGE_DEFAULT
>  
>  config SLAB_FREELIST_RANDOM
>  	bool "Randomize slab freelist"
> -	depends on SLAB || SLUB
> +	depends on SLAB || SLUB && !SLUB_TINY
>  	help
>  	  Randomizes the freelist order used on creating new pages. This
>  	  security feature reduces the predictability of the kernel slab
> @@ -255,7 +268,7 @@ config SLAB_FREELIST_RANDOM
>  
>  config SLAB_FREELIST_HARDENED
>  	bool "Harden slab freelist metadata"
> -	depends on SLAB || SLUB
> +	depends on SLAB || SLUB && !SLUB_TINY
>  	help
>  	  Many kernel heap attacks try to target slab cache metadata and
>  	  other infrastructure. This options makes minor performance
> @@ -267,7 +280,7 @@ config SLAB_FREELIST_HARDENED
>  config SLUB_STATS
>  	default n
>  	bool "Enable SLUB performance statistics"
> -	depends on SLUB && SYSFS
> +	depends on SLUB && SYSFS && !SLUB_TINY
>  	help
>  	  SLUB statistics are useful to debug SLUBs allocation behavior in
>  	  order find ways to optimize the allocator. This should never be
> @@ -279,7 +292,7 @@ config SLUB_STATS
>  
>  config SLUB_CPU_PARTIAL
>  	default y
> -	depends on SLUB && SMP
> +	depends on SLUB && SMP && !SLUB_TINY
>  	bool "SLUB per cpu partial cache"
>  	help
>  	  Per cpu partial caches accelerate objects allocation and freeing
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index ce8dded36de9..fca699ad1fb0 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -56,7 +56,7 @@ config DEBUG_SLAB
>  config SLUB_DEBUG
>  	default y
>  	bool "Enable SLUB debugging support" if EXPERT
> -	depends on SLUB && SYSFS
> +	depends on SLUB && SYSFS && !SLUB_TINY
>  	select STACKDEPOT if STACKTRACE_SUPPORT
>  	help
>  	  SLUB has extensive debug support features. Disabling these can
> -- 
> 2.38.1

Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

small comment:

	SLAB || (SLUB && !SLUB_TINY)

would be easier to interpret than

	SLAB || SLUB && !SLUB_TINY
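Both forms should evaluate the same way - as far as I know Kconfig
gives && higher precedence than ||, like C does - so the parentheses
would be purely for readability. As a sketch, the first hunk would
then read:

	config SLAB_FREELIST_RANDOM
		bool "Randomize slab freelist"
		depends on SLAB || (SLUB && !SLUB_TINY)

-- 
Thanks,
Hyeonggon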