From: David Chow <davidchow@shaolinmicro.com>
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: linux-mm@kvack.org
Subject: Re: slab cache
Date: Fri, 14 Jun 2002 00:34:12 +0800	[thread overview]
Message-ID: <3D08C984.3010308@shaolinmicro.com> (raw)
In-Reply-To: <20020612162941.M12834@redhat.com>

Stephen C. Tweedie wrote:

>Hi,
>
>On Wed, Jun 12, 2002 at 11:05:29PM +0800, David Chow wrote:
>
>>>Using 4k buffers does not limit your ability to use larger data
>>>structures --- you can still chain 4k buffers together by creating an
>>>array of struct page* pointers via which you can access the data.
>>>
>
>>Yes, but for me that is very hard. In compression code most of the
>>data is not even byte-aligned, and much of it is operated on bitwise,
>>so it would need big changes to the existing code.
>>
>
>Perhaps, but the VM basically doesn't give you any primitives that you
>can use for arbitrarily large chunks of linear data; things like
>vmalloc are limited in the amount of data they can use, total, and it
>is _slow_ to set up and tear down vmalloc mappings.
>
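(For reference, a minimal sketch of the page-array chaining Stephen
describes, written against a 2.4-era kernel API; the struct and
function names here are made up for illustration:)

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/highmem.h>

/* A logically large buffer backed by discontiguous 4k pages. */
struct chunk_buf {
	int		nr_pages;
	struct page	**pages;	/* one entry per 4k page */
};

static struct chunk_buf *chunk_alloc(int nr_pages)
{
	struct chunk_buf *buf;
	int i;

	buf = kmalloc(sizeof(*buf), GFP_KERNEL);
	if (!buf)
		return NULL;
	buf->nr_pages = nr_pages;
	buf->pages = kmalloc(nr_pages * sizeof(struct page *), GFP_KERNEL);
	if (!buf->pages) {
		kfree(buf);
		return NULL;
	}
	for (i = 0; i < nr_pages; i++) {
		buf->pages[i] = alloc_page(GFP_KERNEL);
		if (!buf->pages[i])
			goto fail;
	}
	return buf;

fail:
	while (--i >= 0)
		__free_page(buf->pages[i]);
	kfree(buf->pages);
	kfree(buf);
	return NULL;
}

/* Access byte 'off' of the logical buffer via a temporary mapping. */
static unsigned char chunk_get_byte(struct chunk_buf *buf, unsigned long off)
{
	struct page *page = buf->pages[off >> PAGE_SHIFT];
	unsigned char *va = kmap(page);
	unsigned char c = va[off & (PAGE_SIZE - 1)];

	kunmap(page);
	return c;
}
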
>>I use get_free_page to allocate memory in 4k units to avoid putting
>>stress on the VM, but I have no idea about the difference between
>>get_free_page and the slab cache. All my linear buffer code already
>>uses arrays of page pointers; is there any benefit to changing it to
>>use the slab cache? Please advise, thanks.
>>
>
>It might be if you are allocating and deallocating large numbers of
>them in bunches, since the slab cache can then keep a few pages cached
>for immediate reuse rather than going to the global page allocator for
>every single page.  The per-cpu slab stuff would also help to keep the
>pages concerned hot in the cache of the local cpu, and that is likely
>to be a big performance improvement in some cases.
>
>--Stephen
>
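(A short sketch of what using the slab allocator for fixed-size buffers
looks like, assuming the 2.4 kmem_cache API; the "cbuf" names are
hypothetical:)

#include <linux/init.h>
#include <linux/slab.h>

static kmem_cache_t *cbuf_cachep;

static int __init cbuf_init(void)
{
	/* SLAB_HWCACHE_ALIGN aligns objects to hardware cache lines,
	 * which avoids false sharing between hot objects. */
	cbuf_cachep = kmem_cache_create("cbuf", 4096, 0,
					SLAB_HWCACHE_ALIGN, NULL, NULL);
	if (!cbuf_cachep)
		return -ENOMEM;
	return 0;
}

static void cbuf_work(void)
{
	void *obj = kmem_cache_alloc(cbuf_cachep, GFP_KERNEL);

	if (!obj)
		return;
	/* ... use the 4k buffer ... */

	/* Freed objects go back to the per-cpu array first, so a
	 * quick re-allocation on the same cpu gets a cache-hot object. */
	kmem_cache_free(cbuf_cachep, obj);
}
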

Thanks for the comment. Since you mention caching, do you mean the CPU's
L2 cache? I don't dynamically allocate and deallocate pages; I have a
fixed-size cache per CPU, and even where I use vmalloc I do it only once
at module initialization and free it only on unload, so allocation
performance doesn't matter to me. It would be interesting, though, to do
something that gives those buffers a better chance of staying in the
CPU's L2 cache. In my experience, CPUs with 512K of cache are a lot faster.
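
(A sketch of the allocate-once scheme described above: one fixed buffer
per cpu, vmalloc'd at module init and freed only at unload. Assumes a
2.4-era kernel; the names and the 256k size are illustrative only:)

#include <linux/init.h>
#include <linux/smp.h>
#include <linux/vmalloc.h>

#define CBUF_SIZE	(256 * 1024)

static void *percpu_buf[NR_CPUS];

static int __init percpu_buf_init(void)
{
	int i;

	/* One vmalloc per cpu at load time; no allocation in the
	 * fast path afterwards. */
	for (i = 0; i < smp_num_cpus; i++) {
		percpu_buf[i] = vmalloc(CBUF_SIZE);
		if (!percpu_buf[i])
			goto fail;
	}
	return 0;

fail:
	while (--i >= 0)
		vfree(percpu_buf[i]);
	return -ENOMEM;
}

static void __exit percpu_buf_exit(void)
{
	int i;

	for (i = 0; i < smp_num_cpus; i++)
		vfree(percpu_buf[i]);
}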

-- David

Thread overview: 6+ messages
2002-06-09 14:52 David Chow
2002-06-10  8:57 ` Stephen C. Tweedie
2002-06-12 15:05   ` David Chow
2002-06-12 15:29     ` Stephen C. Tweedie
2002-06-13 16:34       ` David Chow [this message]
2002-06-13 16:45         ` Stephen C. Tweedie
