From: David Rientjes <rientjes@google.com>
Date: Tue, 22 Feb 2022 15:48:16 -0800 (PST)
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Roman Gushchin, Andrew Morton, Vlastimil Babka,
    linux-kernel@vger.kernel.org, Joonsoo Kim, Christoph Lameter,
    Pekka Enberg
Subject: Re: [PATCH 4/5] mm/slub: Limit min_partial only in
cache creation
In-Reply-To: <20220221105336.522086-5-42.hyeyoo@gmail.com>
Message-ID: <91cc8ab-a0f0-2687-df99-10b2267c7a9@google.com>
References: <20220221105336.522086-1-42.hyeyoo@gmail.com> <20220221105336.522086-5-42.hyeyoo@gmail.com>

On Mon, 21 Feb 2022, Hyeonggon Yoo wrote:

> SLUB sets the minimum number of partial slabs to keep per node
> (min_partial) using set_min_partial(). SLUB holds at least min_partial
> slabs, even if they are empty, to avoid excessive use of the page
> allocator.
>
> set_min_partial() clamps the value of min_partial between MIN_PARTIAL
> and MAX_PARTIAL. As set_min_partial() can also be called from
> min_partial_store(), only clamp the value in kmem_cache_open() so that
> a user can set min_partial to whatever value they want at runtime.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I think this makes sense and there is no reason to clamp a value set at
runtime to undocumented bounds.  However, since set_min_partial() now only
assigns the value into the kmem_cache, could we remove the helper function
entirely and fold it into its two callers?  (A rough sketch of what I mean
follows the quoted diff below.)

> ---
>  mm/slub.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3a4458976ab7..a4964deccb61 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4002,10 +4002,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
>
>  static void set_min_partial(struct kmem_cache *s, unsigned long min)
>  {
> -	if (min < MIN_PARTIAL)
> -		min = MIN_PARTIAL;
> -	else if (min > MAX_PARTIAL)
> -		min = MAX_PARTIAL;
>  	s->min_partial = min;
>  }
>
> @@ -4184,6 +4180,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>
>  static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
>  {
> +	int min_partial;
> +
>  	s->flags = kmem_cache_flags(s->size, flags, s->name);
>  #ifdef CONFIG_SLAB_FREELIST_HARDENED
>  	s->random = get_random_long();
> @@ -4215,7 +4213,10 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
>  	/*
>  	 * The larger the object size is, the more slabs we want on the partial
>  	 * list to avoid pounding the page allocator excessively.
>  	 */
> -	set_min_partial(s, ilog2(s->size) / 2);
> +	min_partial = min(MAX_PARTIAL, ilog2(s->size) / 2);
> +	min_partial = max(MIN_PARTIAL, min_partial);
> +
> +	set_min_partial(s, min_partial);
>
>  	set_cpu_partial(s);
>
> --
> 2.33.1
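
For illustration, folding the helper into its two callers might look like
the following.  This is only a sketch against your patch, written from
memory of the surrounding mm/slub.c code (min_partial_store() in
particular), so the exact context may differ:

	/* In kmem_cache_open(): clamp to the documented defaults, then
	 * assign directly; no helper needed. */
	min_partial = min(MAX_PARTIAL, ilog2(s->size) / 2);
	min_partial = max(MIN_PARTIAL, min_partial);
	s->min_partial = min_partial;	/* was: set_min_partial(s, min_partial); */

	/* In the sysfs handler, the runtime value is assigned unclamped. */
	static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
					 size_t length)
	{
		unsigned long min;
		int err;

		err = kstrtoul(buf, 10, &min);
		if (err)
			return err;

		s->min_partial = min;	/* was: set_min_partial(s, min); */
		return length;
	}

That would remove set_min_partial() entirely and make it obvious at each
call site which bounds, if any, apply.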