From: Mark Seger <Mark.Seger@hp.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: linux-mm@kvack.org
Subject: Re: SLUB
Date: Thu, 20 Dec 2007 18:36:12 -0500
Message-ID: <476AFC6C.3080903@hp.com>
In-Reply-To: <Pine.LNX.4.64.0712201138280.30648@schroedinger.engr.sgi.com>

>> Perhaps someone would like to take this discussion off-line with me and even
>> collaborate with me on enhancements for slub in collectl?
Sounds good to me; I just didn't want to annoy anyone...
> I think we better keep it public (so that it goes into the archive). Here's
> a short description of the fields in /sys/kernel/slab/<slabcache> that you
> would need:
>
> -r--r--r-- 1 root root 4096 Dec 20 11:41 object_size
>
> The size of an object. Subtract object_size from slab_size and you have
> the per-object overhead generated by alignment and slab metadata. Does not
> change; you only need to read this once.
>
> -r--r--r-- 1 root root 4096 Dec 20 11:41 objects
>
> Number of objects in use. This changes and you may want to monitor it.
>
> -r--r--r-- 1 root root 4096 Dec 20 11:41 slab_size
>
> Total memory used for a single object. Read this only once.
>
> -r--r--r-- 1 root root 4096 Dec 20 11:41 slabs
>
> Number of slab pages in use for this slab cache. May change if the slab
> is extended.

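Just to make sure I'm reading these the way you intend, here's roughly what
I'd do in perl to pull them for a single cache (only a sketch, not anything
collectl does today; the cache name just comes from the command line):

#!/usr/bin/perl -w
# Rough sketch: read the four /sys/kernel/slab fields described above for
# one cache and print the per-object overhead (slab_size - object_size).
use strict;

my $cache = shift or die "usage: $0 <slabcache>\n";

sub readval {
    my ($file) = @_;
    open(my $fh, '<', "/sys/kernel/slab/$cache/$file")
        or die "can't open /sys/kernel/slab/$cache/$file: $!\n";
    chomp(my $val = <$fh>);
    close($fh);
    ($val) = $val =~ /^(\d+)/;    # keep just the leading number in case the
    return $val;                  # field carries anything extra after it
}

my $object_size = readval('object_size');   # constant - read once
my $slab_size   = readval('slab_size');     # constant - read once
my $objects     = readval('objects');       # changes - worth monitoring
my $slabs       = readval('slabs');         # changes if the slab is extended

printf "%-20s objsize=%d slabsize=%d overhead/obj=%d objects=%d slabs=%d\n",
       $cache, $object_size, $slab_size, $slab_size - $object_size,
       $objects, $slabs;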
What I'm not sure about is how this maps to the old slab info.
Specifically, I believe the old model reported the size taken up by the
slabs (number of slabs X number of objects/slab X object size), plus a
second size for the actual number of objects in use, so in my report that
looked like this:

#                      <-----------Objects----------><---------Slab Allocation------>
#Name                  InUse   Bytes   Alloc   Bytes   InUse   Bytes   Total   Bytes
nfs_direct_cache           0       0       0       0       0       0       0       0
nfs_write_data            36   27648      40   30720       8   32768       8   32768

The slab allocation was real memory allocated for the slabs (which should
come close to Slab: in /proc/meminfo, right?) while the object bytes were
those actually in use.  Is it worth continuing this model or do things
work differently?  It sounds like I can still do this with the numbers
you've pointed me to above, and I now realize I only need to monitor the
number of slabs and the number of objects since the others are constants.
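Put differently, here's the shape of the sampling loop I have in mind:
read the two constant sizes once at startup and then re-read only objects
and slabs each interval.  Whether slabs times slab_size really matches the
old slab-allocation total is exactly the part I'm unsure of, so treat that
last column as a guess:

#!/usr/bin/perl -w
# Sketch of the sampling loop: object_size and slab_size are read once,
# only objects and slabs are re-read each interval.  The final column
# assumes total bytes = slabs * slab_size, which may not be the right
# formula.
use strict;

sub readval {
    my ($cache, $file) = @_;
    open(my $fh, '<', "/sys/kernel/slab/$cache/$file") or return 0;
    chomp(my $val = <$fh>);
    close($fh);
    ($val) = $val =~ /^(\d+)/;
    return $val;
}

my @caches = @ARGV;
my %const;
for my $cache (@caches) {
    $const{$cache}{objsize}  = readval($cache, 'object_size');
    $const{$cache}{slabsize} = readval($cache, 'slab_size');
}

while (1) {
    for my $cache (@caches) {
        my $objects = readval($cache, 'objects');
        my $slabs   = readval($cache, 'slabs');
        printf "%-20s %7d %9d %7d %9d\n", $cache,
               $objects, $objects * $const{$cache}{objsize},    # in use, bytes
               $slabs,   $slabs   * $const{$cache}{slabsize};   # slabs, total bytes?
    }
    sleep(10);
}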

To get back to my original question, I'd like to make sure that I'm 
reporting useful information and not just data for the sake of it.  In 
one of your postings I saw a report you had that showed:

slubinfo - version: 1.0
# name            <objects> <order> <objsize> <slabs>/<partial>/<cpu> <flags> <nodes>

How useful are order, cpu, flags and nodes?
Do people really care about how much memory is taken up by objects vs 
slabs?  If not, I could see reporting for each slab:
- object size
- number of objects
- slab size
- number of slabs
- total memory (slab size X number of slabs)
- whatever else people might think to be useful, such as order, cpu,
  flags, etc. (see the sketch after this list)
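If the extras do turn out to be useful, picking them up looks just as
cheap.  Something like this, reusing the readval() helper from the sketch
above (the file names for order and the cpu/partial counts are my guesses
at the sysfs layout, so consider them placeholders):

# Possible extras per cache, read the same way as the other fields.
# The file names here are guesses; whether any of them are worth
# reporting is exactly what I'm asking.
my %extra;
for my $file ('order', 'objs_per_slab', 'partial', 'cpu_slabs') {
    $extra{$file} = readval($cache, $file);
}
printf "  order=%d objs/slab=%d partial=%d cpu_slabs=%d\n",
       @extra{'order', 'objs_per_slab', 'partial', 'cpu_slabs'};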

Another thing I noticed is that a number of the slabs are simply links to
the same base name.  Is it sufficient to just report the base names and
not the ones linked to them?  Seems reasonable to me...
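Something like the following is what I had in mind for skipping the links
so each cache only shows up once (again, just a sketch):

#!/usr/bin/perl -w
# Sketch: walk /sys/kernel/slab and keep only the real cache directories,
# skipping the symlinked aliases so each cache gets reported once.
use strict;

opendir(my $dh, '/sys/kernel/slab') or die "no /sys/kernel/slab: $!\n";
my @caches;
for my $entry (sort readdir($dh)) {
    next if $entry =~ /^\./;                 # skip . and ..
    next if -l "/sys/kernel/slab/$entry";    # skip the aliases (symlinks)
    push @caches, $entry;
}
closedir($dh);
print scalar(@caches), " caches left after dropping the linked names\n";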

The interesting thing about collectl is that it's written in perl (though
I'm trying to be very careful to keep it efficient, and it tends to use
<0.1% cpu when run as a daemon), so the good news is it's pretty easy to
get something implemented, depending on my free time.  If we can get some
level of agreement on what seems useful, I could get a version up fairly
quickly for people to start playing with if there's any interest.

-mark


