linux-mm.kvack.org archive mirror
From: Wanpeng Li <liwanp@linux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Pekka Enberg <penberg@kernel.org>,
	Christoph Lameter <cl@linux.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Joonsoo Kim <js1304@gmail.com>,
	David Rientjes <rientjes@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/4] slab: implement byte sized indexes for the freelist of a slab
Date: Wed, 4 Sep 2013 10:17:46 +0800	[thread overview]
Message-ID: <20130904021746.GA13186@hacker.(null)> (raw)
In-Reply-To: <1378111138-30340-1-git-send-email-iamjoonsoo.kim@lge.com>

Hi Joonsoo,
On Mon, Sep 02, 2013 at 05:38:54PM +0900, Joonsoo Kim wrote:
>This patchset implements byte sized indexes for the freelist of a slab.
>
>Currently, the freelist of a slab consists of unsigned int sized indexes.
>Most slabs have fewer than 256 objects, so much of that space is wasted.
>To reduce this overhead, this patchset implements byte sized indexes for
>the freelist of a slab. With it, we save 3 bytes per object.
>
>This introduces one likely branch into the functions that set/get
>objects on the freelist, but we may get more benefits from
>this change.
>
>Below are some numbers from 'cat /proc/slabinfo' related to my previous posting
>and this patchset.
>
>
>* Before *
># name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables [snip...]
>kmalloc-512          525    640    512    8    1 : tunables   54   27    0 : slabdata     80     80      0   
>kmalloc-256          210    210    256   15    1 : tunables  120   60    0 : slabdata     14     14      0   
>kmalloc-192         1016   1040    192   20    1 : tunables  120   60    0 : slabdata     52     52      0   
>kmalloc-96           560    620    128   31    1 : tunables  120   60    0 : slabdata     20     20      0   
>kmalloc-64          2148   2280     64   60    1 : tunables  120   60    0 : slabdata     38     38      0   
>kmalloc-128          647    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0   
>kmalloc-32         11360  11413     32  113    1 : tunables  120   60    0 : slabdata    101    101      0   
>kmem_cache           197    200    192   20    1 : tunables  120   60    0 : slabdata     10     10      0   
>
>* After my previous posting(overload struct slab over struct page) *
># name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables [snip...]
>kmalloc-512          527    600    512    8    1 : tunables   54   27    0 : slabdata     75     75      0   
>kmalloc-256          210    210    256   15    1 : tunables  120   60    0 : slabdata     14     14      0   
>kmalloc-192         1040   1040    192   20    1 : tunables  120   60    0 : slabdata     52     52      0   
>kmalloc-96           750    750    128   30    1 : tunables  120   60    0 : slabdata     25     25      0   
>kmalloc-64          2773   2773     64   59    1 : tunables  120   60    0 : slabdata     47     47      0   
>kmalloc-128          660    690    128   30    1 : tunables  120   60    0 : slabdata     23     23      0   
>kmalloc-32         11200  11200     32  112    1 : tunables  120   60    0 : slabdata    100    100      0   
>kmem_cache           197    200    192   20    1 : tunables  120   60    0 : slabdata     10     10      0   
>
>kmem_caches whose objects are less than or equal to 128 bytes have one more
>object in a slab. You can see it in the objperslab column.

Looking at objperslab, I think there is actually one less object in a slab.

Regards,
Wanpeng Li 

>
>We can improve further with this patchset.
>
>* My previous posting + this patchset *
># name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables [snip...]
>kmalloc-512          521    648    512    8    1 : tunables   54   27    0 : slabdata     81     81      0
>kmalloc-256          208    208    256   16    1 : tunables  120   60    0 : slabdata     13     13      0
>kmalloc-192         1029   1029    192   21    1 : tunables  120   60    0 : slabdata     49     49      0
>kmalloc-96           529    589    128   31    1 : tunables  120   60    0 : slabdata     19     19      0
>kmalloc-64          2142   2142     64   63    1 : tunables  120   60    0 : slabdata     34     34      0
>kmalloc-128          660    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0
>kmalloc-32         11716  11780     32  124    1 : tunables  120   60    0 : slabdata     95     95      0
>kmem_cache           197    210    192   21    1 : tunables  120   60    0 : slabdata     10     10      0
>
>kmem_caches whose objects are less than or equal to 256 bytes have
>one or more extra objects per slab than before. In the case of kmalloc-32,
>we have 12 more objects, so 384 bytes (12 * 32) are saved, which is roughly
>a 9% saving of memory. Of course, this percentage decreases as the number
>of objects in a slab decreases.
>
>Please let me know the experts' opinions :)
>Thanks.
>
>This patchset is based on Christoph's idea.
>https://lkml.org/lkml/2013/8/23/315
>
>Patches are on top of my previous posting.
>https://lkml.org/lkml/2013/8/22/137
>
>Joonsoo Kim (4):
>  slab: factor out calculate nr objects in cache_estimate
>  slab: introduce helper functions to get/set free object
>  slab: introduce byte sized index for the freelist of a slab
>  slab: make more slab management structure off the slab
>
> mm/slab.c |  138 +++++++++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 103 insertions(+), 35 deletions(-)
>
>-- 
>1.7.9.5
>
>--
>To unsubscribe, send a message with 'unsubscribe linux-mm' in
>the body to majordomo@kvack.org.  For more info on Linux MM,
>see: http://www.linux-mm.org/ .
>Don't email: <dont@kvack.org>


  parent reply	other threads:[~2013-09-04  2:18 UTC|newest]

Thread overview: 59+ messages
2013-08-22  8:44 [PATCH 00/16] slab: overload struct slab over struct page to reduce memory usage Joonsoo Kim
2013-08-22  8:44 ` [PATCH 01/16] slab: correct pfmemalloc check Joonsoo Kim
2013-09-11 14:30   ` Christoph Lameter
2013-09-12  6:51     ` Joonsoo Kim
2013-08-22  8:44 ` [PATCH 02/16] slab: change return type of kmem_getpages() to struct page Joonsoo Kim
2013-08-22 17:49   ` Christoph Lameter
2013-08-23  6:40     ` Joonsoo Kim
2013-09-11 14:31   ` Christoph Lameter
2013-08-22  8:44 ` [PATCH 03/16] slab: remove colouroff in struct slab Joonsoo Kim
2013-09-11 14:32   ` Christoph Lameter
2013-08-22  8:44 ` [PATCH 04/16] slab: remove nodeid " Joonsoo Kim
2013-08-22 17:51   ` Christoph Lameter
2013-08-23  6:49     ` Joonsoo Kim
2013-08-22  8:44 ` [PATCH 05/16] slab: remove cachep in struct slab_rcu Joonsoo Kim
2013-08-22 17:53   ` Christoph Lameter
2013-08-23  6:53     ` Joonsoo Kim
2013-08-23 13:42       ` Christoph Lameter
2013-08-23 14:24         ` JoonSoo Kim
2013-08-23 15:41           ` Christoph Lameter
2013-08-23 16:12             ` JoonSoo Kim
2013-09-02  8:38               ` [PATCH 0/4] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
2013-09-02  8:38                 ` [PATCH 1/4] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
2013-09-02  8:38                 ` [PATCH 2/4] slab: introduce helper functions to get/set free object Joonsoo Kim
2013-09-02  8:38                 ` [PATCH 3/4] slab: introduce byte sized index for the freelist of a slab Joonsoo Kim
2013-09-02  8:38                 ` [PATCH 4/4] slab: make more slab management structure off the slab Joonsoo Kim
2013-09-03 14:15                 ` [PATCH 0/4] slab: implement byte sized indexes for the freelist of a slab Christoph Lameter
2013-09-04  8:33                   ` Joonsoo Kim
2013-09-05  6:55                     ` Joonsoo Kim
2013-09-05 14:33                       ` Christoph Lameter
2013-09-06  5:58                         ` Joonsoo Kim
2013-09-04  2:17                 ` Wanpeng Li [this message]
2013-09-04  2:17                 ` Wanpeng Li
     [not found]                 ` <5226985f.4475320a.1c61.2623SMTPIN_ADDED_BROKEN@mx.google.com>
2013-09-04  8:28                   ` Joonsoo Kim
2013-09-11 14:33   ` [PATCH 05/16] slab: remove cachep in struct slab_rcu Christoph Lameter
2013-08-22  8:44 ` [PATCH 06/16] slab: put forward freeing slab management object Joonsoo Kim
2013-09-11 14:35   ` Christoph Lameter
2013-08-22  8:44 ` [PATCH 07/16] slab: overloading the RCU head over the LRU for RCU free Joonsoo Kim
2013-08-27 22:06   ` Jonathan Corbet
2013-08-28  6:36     ` Joonsoo Kim
2013-09-11 14:39   ` Christoph Lameter
2013-09-12  6:55     ` Joonsoo Kim
2013-09-12 14:21       ` Christoph Lameter
2013-08-22  8:44 ` [PATCH 08/16] slab: use well-defined macro, virt_to_slab() Joonsoo Kim
2013-09-11 14:40   ` Christoph Lameter
2013-08-22  8:44 ` [PATCH 09/16] slab: use __GFP_COMP flag for allocating slab pages Joonsoo Kim
2013-08-22 18:00   ` Christoph Lameter
2013-08-23  6:55     ` Joonsoo Kim
2013-08-22  8:44 ` [PATCH 10/16] slab: change the management method of free objects of the slab Joonsoo Kim
2013-08-22  8:44 ` [PATCH 11/16] slab: remove kmem_bufctl_t Joonsoo Kim
2013-08-22  8:44 ` [PATCH 12/16] slab: remove SLAB_LIMIT Joonsoo Kim
2013-08-22  8:44 ` [PATCH 13/16] slab: replace free and inuse in struct slab with newly introduced active Joonsoo Kim
2013-08-22  8:44 ` [PATCH 14/16] slab: use struct page for slab management Joonsoo Kim
2013-08-22  8:44 ` [PATCH 15/16] slab: remove useless statement for checking pfmemalloc Joonsoo Kim
2013-08-22  8:44 ` [PATCH 16/16] slab: rename slab_bufctl to slab_freelist Joonsoo Kim
2013-08-22 16:47 ` [PATCH 00/16] slab: overload struct slab over struct page to reduce memory usage Christoph Lameter
2013-08-23  6:35   ` Joonsoo Kim
2013-09-04  3:38     ` Wanpeng Li
2013-09-04  3:38     ` Wanpeng Li
     [not found]     ` <5226ab2c.02092b0a.5eed.ffffd7e4SMTPIN_ADDED_BROKEN@mx.google.com>
2013-09-04  8:25       ` Joonsoo Kim

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='20130904021746.GA13186@hacker.(null)' \
    --to=liwanp@linux.vnet.ibm.com \
    --cc=akpm@linux-foundation.org \
    --cc=cl@linux.com \
    --cc=iamjoonsoo.kim@lge.com \
    --cc=js1304@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=penberg@kernel.org \
    --cc=rientjes@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox