From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: "hugh.dickins@tiscali.co.uk" <hugh.dickins@tiscali.co.uk>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
akpm@linux-foundation.org, Tejun Heo <tj@kernel.org>
Subject: Re: [MM] Make mm counters per cpu instead of atomic
Date: Fri, 6 Nov 2009 08:42:38 +0900
Message-ID: <20091106084238.cbecd8ef.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <alpine.DEB.1.10.0911051008260.25718@V090114053VZO-1>
On Thu, 5 Nov 2009 10:10:56 -0500 (EST)
Christoph Lameter <cl@linux-foundation.org> wrote:
> On Thu, 5 Nov 2009, KAMEZAWA Hiroyuki wrote:
>
> > Hmm, I don't fully understand the _new_ percpu, but...
> > Logically (even if not realistic), x86-32 supports up to 512(?) cpus
> > in Kconfig (BIGSMP).
>
> x86-32 only supports 32 processors. Plus per cpu areas are only allocated
> for the possible processors.
>
My number just comes from Kconfig.
> > Then, if 65536 processes run, this consumes
> >
> > 65536 (nr_proc) * 8 (size) * 512 (cpus) = 256 MBytes.
>
> With 32 possible cpus this results in 16m of per cpu space use.
>
If swap_usage is added, that grows to 24m, 25% of the vmalloc area.
(But yes, returning -ENOMEM from fork() is fine with me; 65536 processes
is an extreme case.)
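
For reference, a quick back-of-envelope check of these numbers (a minimal
sketch, assuming the 8 bytes above cover two 4-byte counters, file_rss and
anon_rss, and that swap_usage would add a third 4-byte counter per cpu;
the counter sizes are my assumption, not taken from the patch):

    /* Rough footprint of the per-cpu mm counters; plain userspace
     * arithmetic, not kernel code. */
    #include <stdio.h>

    int main(void)
    {
        long nr_proc   = 65536; /* number of mm_structs (processes)          */
        long two_ctr   = 8;     /* file_rss + anon_rss, 4 bytes each (assumed) */
        long three_ctr = 12;    /* plus a 4-byte swap_usage counter (assumed)  */

        printf("512 cpus (Kconfig max), 2 counters: %ld MB\n",
               (nr_proc * two_ctr * 512) >> 20);    /* 256 MB */
        printf(" 32 cpus (possible),    2 counters: %ld MB\n",
               (nr_proc * two_ctr * 32) >> 20);     /*  16 MB */
        printf(" 32 cpus (possible),    3 counters: %ld MB\n",
               (nr_proc * three_ctr * 32) >> 20);   /*  24 MB */
        return 0;
    }

This reproduces the 256m, 16m, and 24m figures quoted in this thread.
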
Thanks,
-Kame