From: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
To: Andrew Morton <akpm@osdl.org>
Cc: haveblue@us.ibm.com, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, lhms-devel@lists.sourceforge.net,
wli@holomorphy.com
Subject: Re: [Lhms-devel] [RFC] buddy allocator without bitmap [2/4]
Date: Fri, 27 Aug 2004 14:20:48 +0900 [thread overview]
Message-ID: <412EC4B0.1040901@jp.fujitsu.com> (raw)
In-Reply-To: <20040826215927.0af2dee9.akpm@osdl.org>
Andrew Morton wrote:
> Certainly, executing an atomic op in a tight loop will show a lot of
> difference. But that doesn't mean that making these operations non-atomic
> makes a significant difference to overall kernel performance!
>
Thanks.
My test before posting the patch called mmap()/munmap() with 4-16 megabytes.
munmap() of such sizes causes many calls of __free_pages_bulk(), and
many pages are coalesced at once.
This means atomic ops in a heavily called tight loop
(I called them 3 times in the innermost loop...)
and my test showed bad performance ;).
> But whatever - it all adds up. The microoptimisation is fine - let's go
> that way.
>
I'd like to add macros to make my code clearer.
>
>>Result:
>>[root@kanex2 atomic]# nice -10 ./test-atomics
>>score 0 is 64011 note: cache hit, no atomic
>>score 1 is 543011 note: cache hit, atomic
>>score 2 is 303901 note: cache hit, mixture
>>score 3 is 344261 note: cache miss, no atomic
>>score 4 is 1131085 note: cache miss, atomic
>>score 5 is 593443 note: cache miss, mixture
>>score 6 is 118455 note: cache hit, dependency, noatomic
>>score 7 is 416195 note: cache hit, dependency, mixture
>>
>>smaller score is better.
>>score 0-2 shows set_bit/__set_bit performance during good cache hit rate.
>>score 3-5 shows set_bit/__set_bit performance during bad cache hit rate.
>>score 6-7 shows set_bit/__set_bit performance during good cache hit
>>but there is data dependency on each access in the tight loop.
>
>
> I _think_ the above means atomic ops are 10x more costly, yes?
>
Yes, when the L2 cache hits, I think.
--
--the clue is these footmarks leading to the door.--
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Thread overview: 15+ messages
2004-08-26 12:03 Hiroyuki KAMEZAWA
2004-08-26 15:50 ` [Lhms-devel] " Dave Hansen
2004-08-26 23:05 ` Hiroyuki KAMEZAWA
2004-08-26 23:11 ` Dave Hansen
2004-08-26 23:28 ` Hiroyuki KAMEZAWA
2004-08-27 0:18 ` Andrew Morton
2004-08-27 0:27 ` Hiroyuki KAMEZAWA
2004-08-27 4:48 ` Hiroyuki KAMEZAWA
2004-08-27 4:59 ` Andrew Morton
2004-08-27 5:20 ` Hiroyuki KAMEZAWA [this message]
2004-08-27 5:04 ` Dave Hansen
2004-08-27 5:31 ` Hiroyuki KAMEZAWA
2004-08-27 5:31 ` Dave Hansen
2004-08-27 5:47 ` Dave Hansen
2004-08-27 6:09 ` Hiroyuki KAMEZAWA