From: Peter Zijlstra <peterz@infradead.org>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>,
Nick Piggin <npiggin@suse.de>,
linux-kernel@vger.kernel.org, Hugh Dickins <hugh@veritas.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Rik van Riel <riel@redhat.com>,
Lee Schermerhorn <lee.schermerhorn@hp.com>,
linux-mm@kvack.org, Christoph Lameter <cl@linux-foundation.org>,
Gautham Shenoy <ego@in.ibm.com>, Oleg Nesterov <oleg@tv-sign.ru>,
Rusty Russell <rusty@rustcorp.com.au>, mpm <mpm@selenic.com>
Subject: Re: [RFC][PATCH] lru_add_drain_all() don't use schedule_on_each_cpu()
Date: Sun, 26 Oct 2008 17:17:52 +0100
Message-ID: <1225037872.32713.22.camel@twins>
In-Reply-To: <2f11576a0810260851h15cb7e1ahb454b70a2e99e1a8@mail.gmail.com>
On Mon, 2008-10-27 at 00:51 +0900, KOSAKI Motohiro wrote:
> >> >> @@ -611,4 +613,8 @@ void __init swap_setup(void)
> >> >> #ifdef CONFIG_HOTPLUG_CPU
> >> >> hotcpu_notifier(cpu_swap_callback, 0);
> >> >> #endif
> >> >> +
> >> >> + vm_wq = create_workqueue("vm_work");
> >> >> + BUG_ON(!vm_wq);
> >> >> +
> >> >> }
> >> >
> >> > While I really hate adding yet another per-cpu thread for this, I don't
> >> > see another way out atm.
> >>
> >> Can I ask the reason for your hate?
> >> If I don't know it, making an improvement patch is very difficult for me.
> >
> > There seems to be no drive to keep them down; ps -ef output is utterly
> > dominated by kernel threads on a freshly booted machine with many cpus.
>
> True, but I don't think it is a big problem, because:
>
> 1. people can filter it out with grep easily.
> 2. it is just a "sense of beauty" issue, not a real pain.
> 3. current ps output is already utterly filled with kernel threads on
>    large servers :)
>    so the patch doesn't introduce a new problem.
Sure, it's already bad, which is why I think we should see to it that it
doesn't get worse. We could also make kthreads use CLONE_PID, in which
case they'd all get collapsed into one entry, but that would be a
user-visible change which might upset folks even more.
> > And while they are not _that_ expensive to have around, they are not
> > free either; I imagine the tiny-linux folks have an interest in
> > keeping these down too.
>
> In my embedded work experience, I haven't heard that.
> Those folks care strongly about memory size and cpu usage, but don't
> care so much about the number of threads.
>
> Yes, too many threads waste a lot of memory for their stacks, but the
> patch introduces only one thread on an embedded device.
Right, and that would be about 4k + sizeof(task_struct) per thread; some
people might be bothered, but most won't care.
> Perhaps I misunderstand your intention, so can you point me at the
> previous discussion URL?
My google skillz fail me, but once in a while people complain that we
have too many kernel threads.
Anyway, if we can re-use this per-cpu workqueue for other things as well,
I guess there is even less of an objection.
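
For the record, a rough sketch of how lru_add_drain_all() could queue
its per-cpu drain work on that dedicated vm_wq. This is just my
reconstruction mirroring the schedule_on_each_cpu() structure, not the
patch as posted:

#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/cpu.h>
#include <linux/swap.h>

static struct workqueue_struct *vm_wq;	/* created in swap_setup() */

/* runs on each cpu from vm_wq and drains that cpu's pagevecs */
static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
	lru_add_drain();
}

int lru_add_drain_all(void)
{
	struct work_struct *works;
	int cpu;

	works = alloc_percpu(struct work_struct);
	if (!works)
		return -ENOMEM;

	get_online_cpus();		/* keep the online cpu set stable */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, lru_add_drain_per_cpu);
		queue_work_on(cpu, vm_wq, work);
	}
	flush_workqueue(vm_wq);		/* wait for every cpu's drain */
	put_online_cpus();
	free_percpu(works);

	return 0;
}

Other per-cpu VM work could then be queued on the same vm_wq in exactly
the same way, instead of each user growing its own set of threads.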