From: Wonhyuk Yang
Date: Mon, 9 May 2022 18:42:40 +0900
Subject: Re: [Patch v3] mm/slub: Remove repeated action in calculate_order()
To: Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <49b0d611-e116-c78d-cf14-6d5f96ae500e@suse.cz>
References: <20220430002555.3881-1-vvghjk1234@gmail.com> <49b0d611-e116-c78d-cf14-6d5f96ae500e@suse.cz>
Content-Type: text/plain; charset="UTF-8"
On Mon, May 2, 2022 at 7:00 PM Vlastimil Babka wrote:
>
> On 4/30/22 02:25, Wonhyuk Yang wrote:
> > To calculate the order, calc_slab_order() is called repeatedly,
> > changing fract_leftover. Thus, the branch that does not depend on
> > fract_leftover is executed repeatedly. Make it run only once.
> >
> > Also, when min_objects reaches 1, we set fract_leftover to 1. In
> > this case, we can calculate the order as max(slub_min_order,
> > get_order(size)) instead of calling calc_slab_order().
> >
> > No functional impact expected.
> >
> > Signed-off-by: Wonhyuk Yang
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> >
> >  mm/slub.c | 18 +++++++-----------
> >  1 file changed, 7 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ed5c2c03a47a..1fe4d62b72b8 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3795,9 +3795,6 @@ static inline unsigned int calc_slab_order(unsigned int size,
> >  	unsigned int min_order = slub_min_order;
> >  	unsigned int order;
> >
> > -	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
> > -		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
> > -
> >  	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
> >  			order <= max_order; order++) {
> >
> > @@ -3820,6 +3817,11 @@ static inline int calculate_order(unsigned int size)
> >  	unsigned int max_objects;
> >  	unsigned int nr_cpus;
> >
> > +	if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
> > +		order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
> > +		goto out;
> > +	}
>
> Hm interestingly, both before and after your patch, MAX_OBJS_PER_PAGE might
> be theoretically overflowed not by slub_min_order, but then with higher
> orders. Seems to be prevented only as a side-effect of fragmentation close
> to none, thus higher orders not attempted. Would be maybe less confusing to
> check that explicitly. Even if that's wasteful, this is not really
> perf-critical code.

Yes, I agree that checking the overflow of the object count explicitly is
better, even if an overflow there is almost impossible. But it was being
checked repeatedly on every call to calc_slab_order(), which seems
unnecessary, doesn't it?

> > +
> >  	/*
> >  	 * Attempt to find best configuration for a slab. This
> >  	 * works by first attempting to generate a layout with
> > @@ -3865,14 +3867,8 @@ static inline int calculate_order(unsigned int size)
> >  	 * We were unable to place multiple objects in a slab. Now
> >  	 * lets see if we can place a single object there.
> >  	 */
> > -	order = calc_slab_order(size, 1, slub_max_order, 1);
> > -	if (order <= slub_max_order)
> > -		return order;
> > -
> > -	/*
> > -	 * Doh this slab cannot be placed using slub_max_order.
> > -	 */
> > -	order = calc_slab_order(size, 1, MAX_ORDER, 1);
> > +	order = max_t(unsigned int, slub_min_order, get_order(size));
>
> If we failed to assign order above, then AFAICS it means even slub_min_order
> will not give us more than 1 object per slub. Thus it doesn't make sense to
> use it in a max() formula, and we can just use get_order(), no?

That sounds reasonable. When we reach that point, we don't need to keep
slub_min_order.

> > +out:
> >  	if (order < MAX_ORDER)
> >  		return order;
> >  	return -ENOSYS;
>