Date: Tue, 13 Oct 2009 17:05:45 +0900
From: KAMEZAWA Hiroyuki
To: Daisuke Nishimura
Cc: Andrew Morton, "linux-mm@kvack.org", "balbir@linux.vnet.ibm.com",
	h-shimamoto@ct.jp.nec.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9)
Message-Id: <20091013170545.3af1cf7b.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20091013165719.c5781bfa.nishimura@mxp.nes.nec.co.jp>
References: <20091009165826.59c6f6e3.kamezawa.hiroyu@jp.fujitsu.com>
	<20091009170105.170e025f.kamezawa.hiroyu@jp.fujitsu.com>
	<20091009165002.629a91d2.akpm@linux-foundation.org>
	<72e9a96ea399491948f396dab01b4c77.squirrel@webmail-b.css.fujitsu.com>
	<20091013165719.c5781bfa.nishimura@mxp.nes.nec.co.jp>
List-ID: owner-linux-mm@kvack.org

On Tue, 13 Oct 2009 16:57:19
+0900
Daisuke Nishimura wrote:
> On Sun, 11 Oct 2009 11:37:35 +0900 (JST), "KAMEZAWA Hiroyuki" wrote:
> > Andrew Morton wrote:
> > > On Fri, 9 Oct 2009 17:01:05 +0900
> > > KAMEZAWA Hiroyuki wrote:
> > >
> > >> +static void drain_all_stock_async(void)
> > >> +{
> > >> +	int cpu;
> > >> +	/* This function is for scheduling "drain" in asynchronous way.
> > >> +	 * The result of "drain" is not directly handled by callers. Then,
> > >> +	 * if someone is calling drain, we don't have to call drain more.
> > >> +	 * Anyway, work_pending() will catch if there is a race. We just do
> > >> +	 * loose check here.
> > >> +	 */
> > >> +	if (atomic_read(&memcg_drain_count))
> > >> +		return;
> > >> +	/* Notify other cpus that system-wide "drain" is running */
> > >> +	atomic_inc(&memcg_drain_count);
> Shouldn't we use atomic_inc_not_zero() ?
> (Do you mean this problem by "is not very good" below ?)
>
As the comment says, "we just do loose check". There is no terrible race
except for wasting cpu time.

I'm now thinking about the following.
==
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		if (work_pending(&stock->work))
			continue;
+		atomic_inc(&memcg_drain_count);
		INIT_WORK(&stock->work, drain_local_stock);
		schedule_work_on(cpu, &stock->work);
	}
==
Or use a cpumask to avoid scheduling twice. atomic_dec will be added to
the worker routine, after drain.

I'm now preparing slides for JLS (ah, yes, the deadline has gone), so
please give me time. If you want to review it, please let me know.

Thanks,
-Kame

>
> Thanks,
> Daisuke Nishimura.
> > > >> +	get_online_cpus();
> > > >> +	for_each_online_cpu(cpu) {
> > > >> +		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> > > >> +		if (work_pending(&stock->work))
> > > >> +			continue;
> > > >> +		INIT_WORK(&stock->work, drain_local_stock);
> > > >> +		schedule_work_on(cpu, &stock->work);
> > > >> +	}
> > > >> +	put_online_cpus();
> > > >> +	atomic_dec(&memcg_drain_count);
> > > >> +	/* We don't wait for flush_work */
> > > >> +}
> > >
> > > It's unusual to run INIT_WORK() each time we use a work_struct.
> > > Usually we will run INIT_WORK a single time, then just repeatedly use
> > > that structure. Because after the work has completed, it is still in a
> > > ready-to-use state.
> > >
> > > Running INIT_WORK() repeatedly against the same work_struct adds a risk
> > > that we'll scribble on an in-use work_struct, which would make a big
> > > mess.
> > >
> > Ah, ok. I'll prepare a fix. (And I think atomic_dec/inc placement is not
> > very good....I'll do total review, again.)
> >
> > Thank you for review.
> >
> > Regards,
> > -Kame
> >
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
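[Editor's note: putting Andrew's INIT_WORK point and Kame's per-work atomic_inc proposal together, the fix would take roughly the following shape. This is a kernel-style sketch reconstructed from the review comments above, not the final committed code; the init-hook name `memcg_stock_init` is illustrative, and it assumes the `memcg_stock` per-cpu variable, `drain_local_stock`, and `memcg_drain_count` from the quoted patch.]

```c
/* Sketch only: initialize each per-cpu work_struct exactly once at
 * boot, so drain_all_stock_async() never re-runs INIT_WORK() on a
 * work_struct that may still be queued (Andrew's "scribble on an
 * in-use work_struct" risk). */
static int __init memcg_stock_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_WORK(&per_cpu(memcg_stock, cpu).work, drain_local_stock);
	return 0;
}

static void drain_all_stock_async(void)
{
	int cpu;

	/* Loose check, as in the patch: a redundant drain is harmless. */
	if (atomic_read(&memcg_drain_count))
		return;
	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		if (work_pending(&stock->work))
			continue;
		/* One inc per scheduled work, per Kame's proposal; the
		 * worker does the matching atomic_dec after draining. */
		atomic_inc(&memcg_drain_count);
		schedule_work_on(cpu, &stock->work);
	}
	put_online_cpus();
	/* We don't wait for flush_work */
}
```

With the inc moved inside the loop and the dec into the worker, the counter tracks outstanding drain works rather than bracketing the scheduling loop, which also addresses Kame's "atomic_dec/inc placement is not very good" remark.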