linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: linux-mm@kvack.org, Balbir Singh <bsingharora@gmail.com>,
	Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges
Date: Thu, 21 Jul 2011 19:54:11 +0900	[thread overview]
Message-ID: <20110721195411.f4fa9f91.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <0ed59a22cc84037d6e42b258981c75e3a6063899.1311241300.git.mhocko@suse.cz>

On Thu, 21 Jul 2011 10:28:10 +0200
Michal Hocko <mhocko@suse.cz> wrote:

> If we fail to charge an allocation for a cgroup we usually have to fall
> back into direct reclaim (mem_cgroup_hierarchical_reclaim).
> The charging code, however, currently doesn't care about per-cpu charge
> caches which might hold up to (nr_cpus - 1) * CHARGE_BATCH precharged
> pages (the current cache is already drained, otherwise we wouldn't get
> to mem_cgroup_do_charge).
> That can be quite a lot on boxes with large numbers of CPUs, so we can end
> up reclaiming even though there are charges that could be used. This
> will typically happen in multi-threaded applications pinned to many CPUs
> which allocate memory heavily.
> 

Do you have an example and numbers from your tests?

> Currently we are draining caches during reclaim
> (mem_cgroup_hierarchical_reclaim) but this can be already late as we
> could have already reclaimed from other groups in the hierarchy.
> 
> The solution for this would be to synchronously drain charges early when
> we fail to charge and retry the charge once more.
> I think it still makes sense to keep async draining in the reclaim path
> as it is used from other code paths as well (e.g. limit resize). It will
> not do any work if we drained previously anyway.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.cz>

I don't like this solution, at all.

Assume a 2-CPU SMP box (a special case) and 2 applications running under
a memcg.

 - one is running in SCHED_FIFO.
 - the other is in mem_cgroup_do_charge() and calls drain_all_stock_sync().

Then the second application stalls until the SCHED_FIFO application releases the CPU.

In general, I don't think waiting for schedule_work() on multiple cpus
is quicker than a short memory reclaim. Adding flush_work() here means
that a context switch is required before calling direct reclaim. That's bad.
(At least, please check __GFP_NOWAIT.)


Please find another way; I think calling a synchronous drain here is overkill.
In most cases there are no important file caches and reclaim is quick.
(And the async draining runs anyway.)

How about automatically adjusting CHARGE_BATCH, making it smaller when the
system is near its limit? Or flushing ->stock periodically?


Thanks,
-Kame


Thread overview: 26+ messages
2011-07-21  9:41 [PATCH 0/4] memcg: cleanup per-cpu charge caches + fix unnecessary reclaim if there are still " Michal Hocko
2011-07-21  7:38 ` [PATCH 1/4] memcg: do not try to drain per-cpu caches without pages Michal Hocko
2011-07-21 10:12   ` KAMEZAWA Hiroyuki
2011-07-21 11:36     ` Michal Hocko
2011-07-21 23:44       ` KAMEZAWA Hiroyuki
2011-07-22  9:19         ` Michal Hocko
2011-07-22  9:28           ` KAMEZAWA Hiroyuki
2011-07-22  9:58             ` Michal Hocko
2011-07-22 10:23               ` Michal Hocko
2011-07-21  7:50 ` [PATCH 2/4] memcg: unify sync and async per-cpu charge cache draining Michal Hocko
2011-07-21 10:25   ` KAMEZAWA Hiroyuki
2011-07-21 11:36     ` Michal Hocko
2011-07-21  7:58 ` [PATCH 3/4] memcg: get rid of percpu_charge_mutex lock Michal Hocko
2011-07-21 10:30   ` KAMEZAWA Hiroyuki
2011-07-21 11:47     ` Michal Hocko
2011-07-21 12:42       ` Michal Hocko
2011-07-21 23:49         ` KAMEZAWA Hiroyuki
2011-07-22  9:21           ` Michal Hocko
2011-07-22  0:27         ` Daisuke Nishimura
2011-07-22  9:41           ` Michal Hocko
2011-07-21  8:28 ` [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges Michal Hocko
2011-07-21 10:54   ` KAMEZAWA Hiroyuki [this message]
2011-07-21 12:30     ` Michal Hocko
2011-07-21 23:56       ` KAMEZAWA Hiroyuki
2011-07-22  0:18         ` KAMEZAWA Hiroyuki
2011-07-22  9:54         ` Michal Hocko
