From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-qk0-f198.google.com (mail-qk0-f198.google.com [209.85.220.198])
	by kanga.kvack.org (Postfix) with ESMTP id 5C95D6B000C
	for ; Wed, 25 Apr 2018 17:04:56 -0400 (EDT)
Received: by mail-qk0-f198.google.com with SMTP id o68so1610897qke.3
	for ; Wed, 25 Apr 2018 14:04:56 -0700 (PDT)
Received: from mx1.redhat.com (mx3-rdu2.redhat.com. [66.187.233.73])
	by mx.google.com with ESMTPS id u123si309036qkb.241.2018.04.25.14.04.54
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 25 Apr 2018 14:04:55 -0700 (PDT)
Date: Wed, 25 Apr 2018 17:04:49 -0400 (EDT)
From: Mikulas Patocka
Subject: Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE
In-Reply-To:
Message-ID:
References: <20c58a03-90a8-7e75-5fc7-856facfb6c8a@suse.cz>
 <20180413151019.GA5660@redhat.com> <20180416142703.GA22422@redhat.com>
 <20180416144638.GA22484@redhat.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
List-ID:
To: Christopher Lameter
Cc: Mike Snitzer , Vlastimil Babka , Matthew Wilcox , Pekka Enberg ,
 linux-mm@kvack.org, dm-devel@redhat.com, David Rientjes , Joonsoo Kim ,
 Andrew Morton , linux-kernel@vger.kernel.org

On Wed, 18 Apr 2018, Christopher Lameter wrote:

> On Tue, 17 Apr 2018, Mikulas Patocka wrote:
>
> > I can make a slub-only patch with no extra flag (on a freshly booted
> > system it increases only the order of the caches "TCPv6" and
> > "sighand_cache" by one - so it should not have unexpected effects):
> >
> > Doing a generic solution for slab would be more complicated because slab
> > assumes that all slabs have the same order, so it can't fall back to
> > lower-order allocations.
>
> Well again, SLAB uses compound pages and thus would be able to detect the
> size of the page. It may be some work but it could be done.
> >
> > Index: linux-2.6/mm/slub.c
> > ===================================================================
> > --- linux-2.6.orig/mm/slub.c	2018-04-17 19:59:49.000000000 +0200
> > +++ linux-2.6/mm/slub.c	2018-04-17 20:58:23.000000000 +0200
> > @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
> >  static inline int calculate_order(unsigned int size, unsigned int reserved)
> >  {
> >  	unsigned int order;
> > +	unsigned int test_order;
> >  	unsigned int min_objects;
> >  	unsigned int max_objects;
> >
> > @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
> >  			order = slab_order(size, min_objects,
> >  					slub_max_order, fraction, reserved);
> >  			if (order <= slub_max_order)
> > -				return order;
> > +				goto ret_order;
> >  			fraction /= 2;
> >  		}
> >  		min_objects--;
> > @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
> >  	 */
> >  	order = slab_order(size, 1, slub_max_order, 1, reserved);
>
> The slab order is determined in slab_order()
>
> >  	if (order <= slub_max_order)
> > -		return order;
> > +		goto ret_order;
> >
> >  	/*
> >  	 * Doh this slab cannot be placed using slub_max_order.
> >  	 */
> >  	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> > -	if (order < MAX_ORDER)
> > -		return order;
> > -	return -ENOSYS;
> > +	if (order >= MAX_ORDER)
> > +		return -ENOSYS;
> > +
> > +ret_order:
> > +	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> > +		unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> > +		unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> > +		if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> > +			break;
> > +		if (test_order_objects > order_objects << (test_order - order))
> > +			order = test_order;
> > +	}
> > +	return order;
>
> Could you move that logic into slab_order()? It does something awfully
> similar.

But slab_order() (and its caller) limits the order to "max_order", and we
want more.
Perhaps slab_order() should be dropped and calculate_order() totally
rewritten?

Mikulas