linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	h-shimamoto@ct.jp.nec.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9)
Date: Fri, 9 Oct 2009 16:50:02 -0700
Message-ID: <20091009165002.629a91d2.akpm@linux-foundation.org>
In-Reply-To: <20091009170105.170e025f.kamezawa.hiroyu@jp.fujitsu.com>

On Fri, 9 Oct 2009 17:01:05 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:

> +static void drain_all_stock_async(void)
> +{
> +	int cpu;
> +	/* This function is for scheduling "drain" in asynchronous way.
> +	 * The result of "drain" is not directly handled by callers. Then,
> +	 * if someone is calling drain, we don't have to call drain more.
> +	 * Anyway, work_pending() will catch if there is a race. We just do
> +	 * loose check here.
> +	 */
> +	if (atomic_read(&memcg_drain_count))
> +		return;
> +	/* Notify other cpus that system-wide "drain" is running */
> +	atomic_inc(&memcg_drain_count);
> +	get_online_cpus();
> +	for_each_online_cpu(cpu) {
> +		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> +		if (work_pending(&stock->work))
> +			continue;
> +		INIT_WORK(&stock->work, drain_local_stock);
> +		schedule_work_on(cpu, &stock->work);
> +	}
> + 	put_online_cpus();
> +	atomic_dec(&memcg_drain_count);
> +	/* We don't wait for flush_work */
> +}

It's unusual to run INIT_WORK() each time we use a work_struct.
Usually we run INIT_WORK() a single time and then just reuse that
structure repeatedly, because after the work has completed it is still
in a ready-to-use state.

Running INIT_WORK() repeatedly against the same work_struct adds a risk
that we'll scribble on an in-use work_struct, which would make a big
mess.
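
For illustration, here is a minimal sketch of that usual pattern (not the
actual patch): initialize each per-cpu work item exactly once at init time,
and only schedule it afterwards.  The memcg_stock, memcg_stock_pcp and
drain_local_stock names follow the quoted patch; the init hook and the
trimmed-down struct layout are assumptions, and the memcg_drain_count
bookkeeping is omitted for brevity.

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct memcg_stock_pcp {
	/* cached-charge fields from the patch omitted */
	struct work_struct work;
};

static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

static void drain_local_stock(struct work_struct *dummy);	/* as in the patch */

/* One-time setup: afterwards each work_struct stays in a reusable state. */
static void __init memcg_stock_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_WORK(&per_cpu(memcg_stock, cpu).work, drain_local_stock);
}

static void drain_all_stock_async(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		/* Only schedule here; never re-run INIT_WORK() on a live item. */
		if (!work_pending(&stock->work))
			schedule_work_on(cpu, &stock->work);
	}
	put_online_cpus();
}

The bugfix posted later in this thread, "memcg: don't do INIT_WORK()
repeatedly against the same work_struct", addresses the same point.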



Thread overview: 9+ messages
2009-10-09  7:58 [PATCH 1/2] memcg: coalescing uncharge at unmap/truncate (Oct/9) KAMEZAWA Hiroyuki
2009-10-09  8:01 ` [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9) KAMEZAWA Hiroyuki
2009-10-09 23:50   ` Andrew Morton [this message]
2009-10-11  2:37     ` KAMEZAWA Hiroyuki
2009-10-13  7:57       ` Daisuke Nishimura
2009-10-13  8:05         ` KAMEZAWA Hiroyuki
2009-10-14  6:42           ` Daisuke Nishimura
2009-10-14  7:02             ` KAMEZAWA Hiroyuki
2009-10-16  0:32               ` [BUGFIX][PATCH -mmotm] memcg: don't do INIT_WORK() repeatedly against the same work_struct Daisuke Nishimura
