From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: balbir@linux.vnet.ibm.com
Cc: Evgeniy Ivanov <lolkaantimat@gmail.com>,
linux-mm@kvack.org,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
gthelen@google.com
Subject: Re: Question about cgroup hierarchy and reducing memory limit
Date: Tue, 30 Nov 2010 09:03:33 +0900 [thread overview]
Message-ID: <20101130090333.0c8c1728.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20101129140233.GA4199@balbir.in.ibm.com>
On Mon, 29 Nov 2010 19:32:33 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-11-29 15:58:58]:
>
> > On Thu, 25 Nov 2010 13:51:06 +0300
> > Evgeniy Ivanov <lolkaantimat@gmail.com> wrote:
> >
> > > That would be great, thanks!
> > > For now we decided either to use decreasing limits in script with
> > > timeout or controlling the limit just by root group.
> > >
> >
> > I wrote a patch as below, but I also found that "success" in shrinking the
> > limit easily means an OOM kill, because we don't have wait-for-writeback logic.
> >
> > Now, -EBUSY seems to be a safeguard against OOM kill.
> > I'd like to wait for the merge of dirty_ratio logic and test this again.
> > I hope it helps.
> >
> > Thanks,
> > -Kame
> > ==
> > When changing the limit of a memory cgroup, we see many -EBUSY returns when
> > 1. The cgroup is small.
> > 2. Some tasks are accessing pages very frequently.
> >
> > It's not very convenient. This patch makes the memcg enter "shrinking" mode
> > while the limit is being reduced. In shrinking mode, this patch does:
> >
> > a) block new allocation.
> > b) ignore page reference bit at shrinking.
> >
> > The admin should know what he is doing...
> >
> > Needed:
> > - dirty_ratio to avoid OOM.
> > - Documentation update.
> >
> > Note:
> > - Sudden shrinking of memory limit tends to cause OOM.
> > We need dirty_ratio patch before merging this.
> >
> > Reported-by: Evgeniy Ivanov <lolkaantimat@gmail.com>
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > ---
> > include/linux/memcontrol.h | 6 +++++
> > mm/memcontrol.c | 48 +++++++++++++++++++++++++++++++++++++++++++++
> > mm/vmscan.c | 2 +
> > 3 files changed, 56 insertions(+)
> >
> > Index: mmotm-1117/mm/memcontrol.c
> > ===================================================================
> > --- mmotm-1117.orig/mm/memcontrol.c
> > +++ mmotm-1117/mm/memcontrol.c
> > @@ -239,6 +239,7 @@ struct mem_cgroup {
> > unsigned int swappiness;
> > /* OOM-Killer disable */
> > int oom_kill_disable;
> > + atomic_t shrinking;
> >
> > /* set when res.limit == memsw.limit */
> > bool memsw_is_minimum;
> > @@ -1814,6 +1815,25 @@ static int __cpuinit memcg_cpu_hotplug_c
> > return NOTIFY_OK;
> > }
> >
> > +static DECLARE_WAIT_QUEUE_HEAD(memcg_shrink_waitq);
> > +
> > +bool mem_cgroup_shrinking(struct mem_cgroup *mem)
>
> I prefer is_mem_cgroup_shrinking
>
Hmm, ok.
> > +{
> > + return atomic_read(&mem->shrinking) > 0;
> > +}
> > +
> > +void mem_cgroup_shrink_wait(struct mem_cgroup *mem)
> > +{
> > + wait_queue_t wait;
> > +
> > + init_wait(&wait);
> > + prepare_to_wait(&memcg_shrink_waitq, &wait, TASK_INTERRUPTIBLE);
> > + smp_rmb();
>
> Why the rmb?
>
my fault.
> > + if (mem_cgroup_shrinking(mem))
> > + schedule();
>
> We need to check for signals if we sleep with TASK_INTERRUPTIBLE, but
> that complicates the entire path as well. May be the question to ask
> is - why is this TASK_INTERRUPTIBLE, what is the expected delay. Could
> this be a fairness issue as well?
>
The signal check is done in do_charge() automatically.
> > + finish_wait(&memcg_shrink_waitq, &wait);
> > +}
> > +
> >
> > /* See __mem_cgroup_try_charge() for details */
> > enum {
> > @@ -1832,6 +1852,17 @@ static int __mem_cgroup_do_charge(struct
> > unsigned long flags = 0;
> > int ret;
> >
> > + /*
> > + * If shrinking is true, the admin is now reducing the limit of this
> > + * memcg and reclaiming memory eagerly. This _new_ charge would increase
> > + * usage and prevent the new limit from being set. Add a delay here to
> > + * make reducing the limit easier.
> > + */
> > + if (unlikely(mem_cgroup_shrinking(mem)) && (gfp_mask & __GFP_WAIT)) {
> > + mem_cgroup_shrink_wait(mem);
> > + return CHARGE_RETRY;
> > + }
> > +
>
> Oh! oh! I'd hate to do this in the fault path
>
Why? We have the per-cpu charge stock now, so the influence of this is minimal.
We almost never hit this path.
If it becomes a problem, I'll use a per-cpu value, but that seems to be overkill.
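For illustration, the gate being discussed can be modeled in userspace roughly as below. This is a pthread-based sketch with hypothetical names (memcg_model, shrink_begin, and so on); it is not the kernel wait-queue API, and it uses a condition variable where the patch uses memcg_shrink_waitq and an atomic counter:

```c
/* Userspace sketch of the "shrinking gate": chargers that are allowed
 * to sleep wait while an admin is shrinking the limit, and are woken
 * when shrinking completes.  All names here are illustrative. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct memcg_model {
	pthread_mutex_t lock;
	pthread_cond_t  done;      /* stands in for memcg_shrink_waitq */
	int             shrinking; /* stands in for the atomic counter */
};

/* Called by the limit-setting path before eager reclaim starts. */
static void shrink_begin(struct memcg_model *m)
{
	pthread_mutex_lock(&m->lock);
	m->shrinking++;
	pthread_mutex_unlock(&m->lock);
}

/* Called when the new limit is set (or the attempt gives up):
 * clear the flag and wake all blocked chargers. */
static void shrink_end(struct memcg_model *m)
{
	pthread_mutex_lock(&m->lock);
	m->shrinking--;
	pthread_cond_broadcast(&m->done);
	pthread_mutex_unlock(&m->lock);
}

/* Analogue of the hunk in __mem_cgroup_do_charge(): a charger that may
 * wait (__GFP_WAIT in the patch) blocks until shrinking finishes, then
 * retries its charge; a non-waiting charger proceeds immediately. */
static void charge_wait_if_shrinking(struct memcg_model *m, bool can_wait)
{
	if (!can_wait)
		return;
	pthread_mutex_lock(&m->lock);
	while (m->shrinking > 0)
		pthread_cond_wait(&m->done, &m->lock);
	pthread_mutex_unlock(&m->lock);
}
```

One design point the sketch makes visible: the charger re-checks the flag in a loop after waking, so a spurious wakeup simply re-waits; in the kernel version the equivalent safety comes from returning CHARGE_RETRY and re-entering the charge loop, where the signal check also happens.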
Thanks,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Thread overview: 9+ messages
2010-11-22 16:59 Evgeniy Ivanov
2010-11-24 0:47 ` KAMEZAWA Hiroyuki
2010-11-24 12:17 ` Evgeniy Ivanov
2010-11-25 1:04 ` KAMEZAWA Hiroyuki
2010-11-25 10:51 ` Evgeniy Ivanov
2010-11-29 6:58 ` KAMEZAWA Hiroyuki
2010-11-29 14:02 ` Balbir Singh
2010-11-30 0:03 ` KAMEZAWA Hiroyuki [this message]
2010-11-30 1:23 ` KAMEZAWA Hiroyuki