From: Michal Hocko <mhocko@suse.cz>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: linux-mm@kvack.org, Balbir Singh <bsingharora@gmail.com>,
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges
Date: Thu, 21 Jul 2011 14:30:12 +0200
Message-ID: <20110721123012.GD27855@tiehlicka.suse.cz>
In-Reply-To: <20110721195411.f4fa9f91.kamezawa.hiroyu@jp.fujitsu.com>
On Thu 21-07-11 19:54:11, KAMEZAWA Hiroyuki wrote:
> On Thu, 21 Jul 2011 10:28:10 +0200
> Michal Hocko <mhocko@suse.cz> wrote:
>
> > If we fail to charge an allocation for a cgroup we usually have to fall
> > back into direct reclaim (mem_cgroup_hierarchical_reclaim).
> > The charging code, however, currently doesn't care about per-cpu charge
> > caches which might hold up to (nr_cpus - 1) * CHARGE_BATCH pre-charged
> > pages (the current CPU's cache is already drained, otherwise we wouldn't
> > get to mem_cgroup_do_charge).
> > That can be quite a lot on boxes with many CPUs, so we can end up
> > reclaiming even though there are charges that could be used. This will
> > typically happen with a multi-threaded application pinned to many CPUs
> > which allocates memory heavily.
> >
>
> Do you have an example and numbers from your tests?
As I said, I haven't seen anything that would visibly affect performance,
but I have seen situations where we reclaimed even though there were
pre-charges on other CPUs.
> > Currently we are draining caches during reclaim
> > (mem_cgroup_hierarchical_reclaim) but this can already be too late as we
> > could have already reclaimed from other groups in the hierarchy.
> >
> > The solution is to synchronously drain charges early when we fail to
> > charge and then retry the charge once more.
> > I think it still makes sense to keep the async draining in the reclaim
> > path as it is used from other code paths as well (e.g. limit resizing).
> > It will not do any work if we have already drained anyway.
> >
> > Signed-off-by: Michal Hocko <mhocko@suse.cz>
>
> I don't like this solution, at all.
>
> Assume a 2-CPU SMP machine (a special case) and 2 applications running
> under a memcg.
>
> - one is running in SCHED_FIFO.
> - the other runs into mem_cgroup_do_charge() and calls drain_all_stock_sync().
>
> Then the second application stops until the SCHED_FIFO application
> releases the CPU.
It would have to back off during reclaim anyway (because we check
cond_resched during reclaim), right?
> In general, I don't think waiting for schedule_work() on multiple CPUs
> is quicker than a short memory reclaim.
You are right, but if you consider small groups then the reclaim can
make the situation much worse.
> Adding flush_work() here means that a context switch is required before
> calling direct reclaim.
Is that really a problem? We would context switch during reclaim if
there is something else that wants CPU anyway.
Maybe we could drain only if we would get a reasonable number of pages
back? That would require two passes over the per-cpu caches to find the
number - not nice. Or we could drain only those caches that hold at least
some threshold of pages.
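Something along these lines, maybe (a purely hypothetical sketch;
DRAIN_THRESHOLD is made up here and a real version would also have to
walk the hierarchy rather than compare the memcg pointer directly):

	/* Hypothetical: only bother draining caches that hold a
	 * meaningful number of pre-charged pages. */
	#define DRAIN_THRESHOLD	(CHARGE_BATCH / 2)

	static void drain_big_stocks(struct mem_cgroup *root_mem)
	{
		int cpu;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

			if (stock->cached == root_mem &&
			    stock->nr_pages >= DRAIN_THRESHOLD)
				schedule_work_on(cpu, &stock->work);
		}
		put_online_cpus();
	}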
> That's bad. (At least, please check __GFP_NOWAIT.)
Definitely a good idea. Fixed.
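The fix boils down to gating the synchronous drain on the caller being
allowed to sleep, e.g. (a sketch; in kernels of that era __GFP_WAIT was
the "may sleep" bit):

	/* In mem_cgroup_do_charge(), before falling back to reclaim:
	 * only block on a synchronous drain when the caller may sleep. */
	if (gfp_mask & __GFP_WAIT) {
		drain_all_stock_sync();
		/* then retry the charge once more before reclaiming */
	}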
> Please find another way, I think calling a synchronous drain here is overkill.
> In most cases there are no important file caches and reclaim is quick.
This is, however, really hard to know in advance. If there are used-once
unmapped file pages then they are certainly much easier to reclaim.
Maybe I could check the statistics and decide whether to drain according
to the pages we have in the group. Let me think about that.
> (And async draining runs.)
>
> How about automatically adjusting CHARGE_BATCH and making it smaller when
> the system is near the limit?
Hmm, we are already bypassing batching if we are close to the limit,
aren't we? If we get to the reclaim we fall back to an nr_pages allocation
and so we do not refill the stock.
Maybe we could check how much we have reclaimed and update the batch
size accordingly.
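For reference, the existing fallback is roughly this (simplified from
__mem_cgroup_try_charge() of that time, retry limits omitted):

	/* Simplified: on a retry the batch drops to the exact request,
	 * so near the limit we do not pre-charge extra pages into the
	 * per-cpu stock. */
	unsigned int batch = max(CHARGE_BATCH, nr_pages);
again:
	ret = mem_cgroup_do_charge(mem, gfp_mask, batch, oom_check);
	if (ret == CHARGE_RETRY && batch > nr_pages) {
		batch = nr_pages;	/* do not refill the stock */
		goto again;
	}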
> Or flushing ->stock periodically?
>
>
> Thanks,
> -Kame
Thanks!
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic