Date: Wed, 23 Feb 2022 03:37:59 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: David Rientjes
Cc: linux-mm@kvack.org, Roman Gushchin, Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org, Joonsoo Kim, Christoph Lameter, Pekka Enberg
Subject: Re: [PATCH 4/5] mm/slub: Limit min_partial only in cache creation
References: <20220221105336.522086-1-42.hyeyoo@gmail.com> <20220221105336.522086-5-42.hyeyoo@gmail.com> <91cc8ab-a0f0-2687-df99-10b2267c7a9@google.com>
In-Reply-To: <91cc8ab-a0f0-2687-df99-10b2267c7a9@google.com>

On Tue, Feb 22, 2022 at 03:48:16PM -0800, David Rientjes wrote:
> On Mon, 21 Feb 2022, Hyeonggon Yoo wrote:
> 
> > SLUB sets the number of minimum partial slabs per node (min_partial) using
> > set_min_partial(). SLUB holds at least min_partial slabs even if they are
> > empty, to avoid excessive use of the page allocator.
> > 
> > set_min_partial() limits the value of min_partial to between MIN_PARTIAL
> > and MAX_PARTIAL.
> > As set_min_partial() can also be called by min_partial_store(), only
> > limit the value of min_partial in kmem_cache_open() so that it can be
> > changed at runtime to the value a user wants.
> > 
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> I think this makes sense and there is no reason to limit the bounds that
> may be set at runtime with undocumented behavior.

Thank you for the comment.

> However, since set_min_partial() is only setting the value in the
> kmem_cache, could we remove the helper function entirely and fold it into
> its two callers?

Right. We don't need to keep this as a separate function.
I'll update this in the next version. Thanks!

> > ---
> >  mm/slub.c | 11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 3a4458976ab7..a4964deccb61 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -4002,10 +4002,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
> > 
> >  static void set_min_partial(struct kmem_cache *s, unsigned long min)
> >  {
> > -	if (min < MIN_PARTIAL)
> > -		min = MIN_PARTIAL;
> > -	else if (min > MAX_PARTIAL)
> > -		min = MAX_PARTIAL;
> >  	s->min_partial = min;
> >  }
> > 
> > @@ -4184,6 +4180,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> > 
> >  static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> >  {
> > +	int min_partial;
> > +
> >  	s->flags = kmem_cache_flags(s->size, flags, s->name);
> >  #ifdef CONFIG_SLAB_FREELIST_HARDENED
> >  	s->random = get_random_long();
> > @@ -4215,7 +4213,10 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> >  	 * The larger the object size is, the more slabs we want on the partial
> >  	 * list to avoid pounding the page allocator excessively.
> >  	 */
> > -	set_min_partial(s, ilog2(s->size) / 2);
> > +	min_partial = min(MAX_PARTIAL, ilog2(s->size) / 2);
> > +	min_partial = max(MIN_PARTIAL, min_partial);
> > +
> > +	set_min_partial(s, min_partial);
> > 
> >  	set_cpu_partial(s);
> > 
> > -- 
> > 2.33.1

-- 
Hyeonggon