From: Bharata B Rao <bharata@linux.ibm.com>
To: Roman Gushchin <guro@fb.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, cl@linux.com,
	rientjes@google.com, iamjoonsoo.kim@lge.com,
	akpm@linux-foundation.org, vbabka@suse.cz, shakeelb@google.com,
	hannes@cmpxchg.org, aneesh.kumar@linux.ibm.com
Subject: Re: Higher slub memory consumption on 64K page-size systems?
Date: Mon, 2 Nov 2020 17:03:06 +0530
Message-ID: <20201102113306.GA584494@in.ibm.com>
In-Reply-To: <20201029000757.GE827280@carbon.dhcp.thefacebook.com>

On Wed, Oct 28, 2020 at 05:07:57PM -0700, Roman Gushchin wrote:
> On Wed, Oct 28, 2020 at 11:20:30AM +0530, Bharata B Rao wrote:
> > I have mostly looked at reducing the slab memory consumption here.
> > But I do understand that default tunable values have been arrived
> > at based on some benchmark numbers. Are there ways or possibilities
> > to reduce the slub memory consumption with the existing level of
> > performance is what I would like to understand and explore.
> 
> Hi Bharata!
> 
> I wonder how the distribution of the consumed memory by slab_caches
> differs between 4k and 64k pages. In particular, I wonder if
> page-sized and larger kmallocs make the difference (or a big part of it)?
> There are many places in the kernel which are doing something like
> kmalloc(PAGE_SIZE).

Here is a comparison of the top slab caches in terms of memory usage
between the 4K and 64K configurations:

Case 1: After boot
==================
4K page-size
------------
Name                   Objects Objsize           Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
inode_cache              23382     592           14.1M       400/0/33   54 3   0  97 a
dentry                   29484     192            5.7M      592/0/110   42 1   0  98 a
kmalloc-1k                5358    1024            5.6M       130/9/42   32 3   5  97
task_struct                371    9856            4.1M        88/6/40    3 3   4  87
kmalloc-512               6640     512            3.4M       159/3/49   32 2   1  99
...
kmalloc-4k                 530    4096            2.2M        42/6/27    8 3   8  96

64K page-size
-------------
Name                   Objects Objsize           Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
pgtable-2^11               935   16384           38.7M       16/16/58   16 3  21  39
inode_cache              23980     592           14.4M       203/0/17  109 0   0  98 a
thread_stack               709   16384           12.0M         6/1/17   32 3   4  96
task_struct               1012    9856           10.4M         4/1/16   53 3   5  95
kmalloc-64k                144   65536            9.4M         2/0/16    8 3   0 100

Case 2: After hackbench run
===========================
4K page-size
------------
Name                   Objects Objsize           Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
inode_cache              21823     592           13.3M       361/3/46   54 3   0  96 a
kmalloc-512              10309     512            9.4M    433/325/146   32 2  56  55
kmalloc-1k                6207    1024            6.5M      121/12/78   32 3   6  97
dentry                   28923     192            5.9M     468/48/261   42 1   6  92 a
task_struct                418    9856            5.1M      106/24/51    3 3  15  80
...
kmalloc-4k                 510    4096            2.1M       41/10/26    8 3  14  95

64K page-size
-------------
Name                   Objects Objsize           Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
kmalloc-8k                3081    8192           84.9M     241/241/83   32 2  74  29
thread_stack              2919   16384           52.4M       15/10/85   32 3  10  91
pgtable-2^11              1281   16384           50.8M       20/20/77   16 3  20  41
task_struct               3771    9856           40.3M         9/6/68   53 3   7  92
vm_area_struct           92295     200           18.9M        8/8/281  327 0   2  97
...
kmalloc-64k                144   65536            9.4M         2/0/16    8 3   0 100

I can't see any specific pattern in kmalloc cache usage in either of the
above cases (boot vs hackbench run). In the boot case, the higher memory
consumption of the 64K configuration can probably be attributed to the
bigger page size itself. In the hackbench case, however, the large number
of partial slabs contributes significantly to the increased memory
consumption of the 64K configuration.
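
To put a rough number on that, here is a quick back-of-the-envelope
calculation using the kmalloc-8k row from the 64K table above (a
standalone userspace illustration, not kernel code; the figures are just
the ones reported above):

/*
 * Back-of-the-envelope estimate using the kmalloc-8k numbers above:
 * 64K base pages, order-2 slabs (the "O" column) and 241 partial slabs
 * (the "Part" field of Slabs/Part/Cpu).  Illustration only.
 */
#include <stdio.h>

int main(void)
{
	long page_size  = 64 * 1024;		/* 64K base page */
	long slab_bytes = page_size << 2;	/* order 2 -> 256K per slab */
	long partial    = 241;			/* partial slabs after hackbench */

	printf("memory pinned by kmalloc-8k partial slabs: ~%ld MB\n",
	       partial * slab_bytes / (1024 * 1024));
	/* ~60 MB of the ~85 MB total, which lines up with the low
	 * slab efficiency (%Ef = 29) reported for this cache. */
	return 0;
}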

> 
> Re slub tuning: in general we do care about the number of objects
> in a partial list, less about the number of pages. If we can have the
> same amount of objects but on fewer pages, it's even better.

Right, but how do we achieve that when a small number of in-use objects
is spread across many partial slabs? This is specifically what we see
after a workload run (hackbench in this case).

> So I don't see any reasons why we shouldn't scale down these tunables
> if the PAGE_SIZE > 4K.
> Idk if it makes sense to switch to byte-sized tunables or just to hardcode
> custom default values for the 64k page case. The latter is probably
> is easier.

Right, tuning the minimum number of objects used when calculating the
slab page order, together with tuning the cpu_partial value, shows a
consistent reduction in slab memory consumption (I have shown this in
the previous mail).
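
For illustration, here is a rough userspace model of the kind of derating
being discussed: size-based cpu_partial defaults similar in spirit to
SLUB's heuristic, scaled down when the base page size is larger than 4K.
The thresholds and the scaling rule here are assumptions for the sake of
the example, not the actual mm/slub.c code:

/*
 * Sketch only: size-based cpu_partial defaults (modelled loosely on the
 * SLUB heuristic) derated for larger base pages.  The derating rule is
 * a hypothetical example, not the mm/slub.c implementation.
 */
#include <stdio.h>

static unsigned int cpu_partial_default(unsigned int object_size,
					unsigned int page_size)
{
	unsigned int nr;

	/* Bigger objects get fewer per-cpu partial slabs. */
	if (object_size >= page_size)
		nr = 2;
	else if (object_size >= 1024)
		nr = 6;
	else if (object_size >= 256)
		nr = 13;
	else
		nr = 30;

	/*
	 * Hypothetical derating: a 64K page already holds 16x the
	 * objects of a 4K one, so keep proportionally fewer slabs
	 * per cpu.
	 */
	if (page_size > 4096) {
		nr /= page_size / 4096;
		if (!nr)
			nr = 1;
	}

	return nr;
}

int main(void)
{
	printf("kmalloc-512 cpu_partial: 4K=%u 64K=%u\n",
	       cpu_partial_default(512, 4096),
	       cpu_partial_default(512, 64 * 1024));
	printf("kmalloc-8k  cpu_partial: 4K=%u 64K=%u\n",
	       cpu_partial_default(8192, 4096),
	       cpu_partial_default(8192, 64 * 1024));
	return 0;
}

Whether such derated values end up hardcoded for the 64K case or derived
from PAGE_SIZE as above maps to the two options you mention.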

Thanks for your comments.

Regards,
Bharata.


