From: kamezawa.hiroyu@jp.fujitsu.com
To: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: kamezawa.hiroyu@jp.fujitsu.com, linux-mm@kvack.org,
balbir@linux.vnet.ibm.com, nishimura@mxp.nes.nec.co.jp,
xemul@openvz.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: Re: Re: [PATCH 9/13] memcg: lookup page cgroup (and remove pointer from struct page)
Date: Tue, 23 Sep 2008 00:57:18 +0900 (JST) [thread overview]
Message-ID: <32459434.1222099038142.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <1222098450.8533.41.camel@nimitz>
----- Original Message -----
>> >
>> I admit this calculation is too simplistic. Hmm, basing it on
>> totalram_pages is better. OK.
>
>No, I was setting a trap. ;)
>
Bomb!
>If you use totalram_pages, I'll just complain that it doesn't work if a
>memory hotplug machine drastically changes its size. You'll end up with
>pretty darn big hash buckets.
>
As I wrote, this is just the _generic_ implementation.
I'll add FLATMEM and SPARSEMEM support later.
I never want to reimplement SPARSEMEM_EXTREME by myself; I'd rather
depend on SPARSEMEM's internal implementation, which I know well.
>You basically can't get away with the fact that you (potentially) have
>really sparse addresses to play with here. Using a hash table is
>exactly the same as using an array such as sparsemem except you randomly
>index into it instead of using straight arithmetic.
>
See the next patch; the per-cpu look-aside cache works well.
>My gut says that you'll need to do exactly the same things sparsemem did
>here, which is at *least* have a two-level lookup before you get to the
>linear search. The two-level lookup also makes the hotplug problem
>easier.
>
>As I look at this, I always have to bounce between these tradeoffs:
>
>1. deal with sparse address spaces (keeps you from using max_pfn)
>2. scale as that sparse address space has memory hotplugged into it
> (keeps you from using boot-time present_pages)
>3. deal with performance impacts from new data structures created to
> deal with the other two :)
>
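As a user-space sketch, the two-level lookup described above (roughly the shape SPARSEMEM_EXTREME uses for its section table) might look like this. All names, sizes, and structures here are illustrative, not the actual kernel code:

```c
#include <stdlib.h>

/* Illustrative two-level table mapping pfn -> per-section metadata.
 * A sparse root array of pointers covers the whole address space;
 * only populated slots get a second-level leaf, so a sparse layout
 * (e.g. 1GB at 0x0 plus 1GB at 1TB) costs one NULL per empty slot. */
#define SECTION_SHIFT     15    /* 128MB sections with 4KB pages */
#define SECTIONS_PER_ROOT 256
#define ROOT_SIZE         1024  /* covers 32TB of address space */

struct section_meta {
	unsigned long base_pfn;
	void *page_cgroup_map;  /* would point at the real array */
};

static struct section_meta *root[ROOT_SIZE];

/* Called at boot or on memory hotplug for each present section. */
static int populate_section(unsigned long pfn)
{
	unsigned long sec = pfn >> SECTION_SHIFT;
	unsigned long ri = sec / SECTIONS_PER_ROOT;

	if (!root[ri]) {
		root[ri] = calloc(SECTIONS_PER_ROOT,
				  sizeof(struct section_meta));
		if (!root[ri])
			return -1;
	}
	root[ri][sec % SECTIONS_PER_ROOT].base_pfn = sec << SECTION_SHIFT;
	return 0;
}

/* Two dependent loads; no hashing, no chain walk. */
static struct section_meta *lookup_section(unsigned long pfn)
{
	unsigned long sec = pfn >> SECTION_SHIFT;
	struct section_meta *leaf = root[sec / SECTIONS_PER_ROOT];

	return leaf ? &leaf[sec % SECTIONS_PER_ROOT] : NULL;
}
```

Hotplug then only has to populate new root slots; existing entries never move, which sidesteps the resize problem a hash table sized at boot would have.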
>> >Can you lay out how much memory this will use on a machine like Dave
>> >Miller's which has 1GB of memory at 0x0 and 1GB of memory at 1TB up in
>> >the address space?
>>
>> >Also, how large do the hash buckets get in the average case?
>> >
>> on my 48GB box, hashtable was 16384bytes. (in dmesg log.)
>> (section size was 128MB.)
>
>I'm wondering how long the linear searches of those hlists get.
>
In the above case, just one step: the table has 16384 / 8 = 2048
buckets, and a 48GB box with 128MB sections has only 384 sections, so
each chain holds at most one entry.
On ppc, sections are 16MB, so the hash table will be bigger, but the
walks are still not very long.
In any case, walk length is not a big problem, because the look-aside
buffer absorbs most lookups.
I'll add FLATMEM/SPARSEMEM support later. Could you wait a while?
Because every access goes through lookup_page_cgroup() after this, we
can change the backing implementation freely.
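For reference, the hash-plus-look-aside scheme under discussion can be sketched in user space like this. The bucket count matches the 16384-byte table mentioned above on 64-bit; the real look-aside cache is per-cpu, and all names here are illustrative:

```c
#include <stddef.h>

#define SECTION_SHIFT 15        /* 128MB sections with 4KB pages */
#define NR_BUCKETS    2048      /* 2048 * 8 bytes = the 16384-byte table */

struct pc_section {
	struct pc_section *next;    /* hash chain */
	unsigned long sec;          /* section number this entry covers */
	/* struct page_cgroup *map; would live here */
};

static struct pc_section *buckets[NR_BUCKETS];
static struct pc_section *lookaside;    /* per-cpu in the real thing */

static void pc_insert(struct pc_section *s, unsigned long pfn)
{
	unsigned long sec = pfn >> SECTION_SHIFT;

	s->sec = sec;
	s->next = buckets[sec % NR_BUCKETS];
	buckets[sec % NR_BUCKETS] = s;
}

static struct pc_section *pc_lookup(unsigned long pfn)
{
	unsigned long sec = pfn >> SECTION_SHIFT;
	struct pc_section *s = lookaside;

	if (s && s->sec == sec)         /* fast path: cache hit */
		return s;
	for (s = buckets[sec % NR_BUCKETS]; s; s = s->next)
		if (s->sec == sec)
			break;
	if (s)
		lookaside = s;          /* remember for the next lookup */
	return s;
}
```

The chain walk only runs on a look-aside miss, which is why the average chain length matters less than it first appears; the flip side is that resizing the bucket array under memory hotplug is awkward, which is the second tradeoff listed above.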
Thanks,
-Kame
Thread overview: 37+ messages
2008-09-22 10:51 [PATCH 0/13] memory cgroup updates v4 KAMEZAWA Hiroyuki
2008-09-22 10:55 ` [PATCH 1/13] memcg: avoid accounting special mapping KAMEZAWA Hiroyuki
2008-09-22 10:57 ` [PATCH 2/13] memcg: account fault-in swap under lock KAMEZAWA Hiroyuki
2008-09-22 10:58 ` [PATCH 3/13] memcg: nolimit root cgroup KAMEZAWA Hiroyuki
2008-09-22 11:00 ` [PATCH 4/13] memcg: force_empty moving account KAMEZAWA Hiroyuki
2008-09-22 14:23 ` Peter Zijlstra
2008-09-22 14:50 ` kamezawa.hiroyu
2008-09-22 14:56 ` Peter Zijlstra
2008-09-22 15:06 ` kamezawa.hiroyu
2008-09-22 15:32 ` Peter Zijlstra
2008-09-22 15:43 ` kamezawa.hiroyu
2008-09-22 11:02 ` [PATCH 5/13] memcg: cleanup to make mapping null before unchage KAMEZAWA Hiroyuki
2008-09-22 11:03 ` [PATCH 6/13] memcg: optimze per cpu accounting for memcg KAMEZAWA Hiroyuki
2008-09-22 11:05 ` [PATCH 3.5/13] memcg: make page_cgroup flags to be atomic KAMEZAWA Hiroyuki
2008-09-22 11:09 ` [PATCH 3.6/13] memcg: add function to move account KAMEZAWA Hiroyuki
2008-09-24 6:50 ` Daisuke Nishimura
2008-09-24 7:11 ` KAMEZAWA Hiroyuki
2008-09-22 11:12 ` [PATCH 9/13] memcg: lookup page cgroup (and remove pointer from struct page) KAMEZAWA Hiroyuki
2008-09-22 14:52 ` Dave Hansen
2008-09-22 15:14 ` kamezawa.hiroyu
2008-09-22 15:47 ` Dave Hansen
2008-09-22 15:57 ` kamezawa.hiroyu [this message]
2008-09-22 16:10 ` Dave Hansen
2008-09-22 17:34 ` kamezawa.hiroyu
2008-09-22 15:47 ` Peter Zijlstra
2008-09-22 16:04 ` kamezawa.hiroyu
2008-09-22 16:06 ` Peter Zijlstra
2008-09-23 23:48 ` KAMEZAWA Hiroyuki
2008-09-24 2:09 ` Balbir Singh
2008-09-24 3:09 ` KAMEZAWA Hiroyuki
2008-09-24 8:31 ` Balbir Singh
2008-09-24 8:46 ` KAMEZAWA Hiroyuki
2008-09-22 11:13 ` [PATCH 10/13] memcg: page_cgroup look aside table KAMEZAWA Hiroyuki
2008-09-22 11:17 ` [PATCH 11/13] memcg: lazy LRU free (NEW) KAMEZAWA Hiroyuki
2008-09-22 11:22 ` [PATCH 12/13] memcg: lazy LRU add KAMEZAWA Hiroyuki
2008-09-22 11:24 ` [PATCH 13/13] memcg: swap accounting fix KAMEZAWA Hiroyuki
2008-09-22 11:28 ` [PATCH 0/13] memory cgroup updates v4 KAMEZAWA Hiroyuki