From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Ying Han <yinghan@google.com>,
Minchan Kim <minchan.kim@gmail.com>,
Rik van Riel <riel@redhat.com>, Mel Gorman <mel@csn.ul.ie>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org
Subject: Re: [PATCH 1/2] check the return value of soft_limit reclaim
Date: Mon, 28 Mar 2011 17:44:21 +0900 [thread overview]
Message-ID: <20110328174421.6ac9ada0.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20110328154033.F068.A69D9226@jp.fujitsu.com>
On Mon, 28 Mar 2011 15:39:59 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
> > In the global background reclaim, we do soft reclaim before scanning the
> > per-zone LRU. However, the return value is ignored. This patch adds logic
> > to skip the per-zone reclaim if the soft reclaim raises the free pages
> > above the zone's high_wmark.
> >
> > I did notice that a similar check exists, but it leaves a "gap" above the
> > high_wmark (the code right after my change in vmscan.c). There are ongoing
> > discussions on whether to remove the "gap", which is intended to balance
> > pressure across zones over time. Without fully understanding the logic
> > behind it, I didn't try to merge the two checks into one; instead I added
> > the condition only for memcg users who care a lot about memory isolation.
> >
> > Signed-off-by: Ying Han <yinghan@google.com>
>
> Looks good to me. But this depends on the "memcg soft limit" spec. To be
> honest, I don't know whether ignoring this return value is intentional or
> not, so I think you need to get an ack from the memcg folks.
>
>
Hi,
> > ---
> > mm/vmscan.c | 16 +++++++++++++++-
> > 1 files changed, 15 insertions(+), 1 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 060e4c1..e4601c5 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2320,6 +2320,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> > int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */
> > unsigned long total_scanned;
> > struct reclaim_state *reclaim_state = current->reclaim_state;
> > + unsigned long nr_soft_reclaimed;
> > struct scan_control sc = {
> > .gfp_mask = GFP_KERNEL,
> > .may_unmap = 1,
> > @@ -2413,7 +2414,20 @@ loop_again:
> > * Call soft limit reclaim before calling shrink_zone.
> > * For now we ignore the return value
> > */
> > - mem_cgroup_soft_limit_reclaim(zone, order, sc.gfp_mask);
> > + nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,
> > + order, sc.gfp_mask);
> > +
> > + /*
> > + * Check the watermark after the soft limit reclaim. If
> > + * the free pages are above the watermark, there is no need to
> > + * proceed with the zone reclaim.
> > + */
> > + if (nr_soft_reclaimed && zone_watermark_ok_safe(zone,
> > + order, high_wmark_pages(zone),
> > + end_zone, 0)) {
> > + __inc_zone_state(zone, NR_SKIP_RECLAIM_GLOBAL);
>
> NR_SKIP_RECLAIM_GLOBAL is defined by patch 2/2. Please don't break bisectability.
>
>
>
> > + continue;
> > + }
Hmm, this "continue" doesn't seem good to me. And, IIUC, this was the reason
the result was ignored. But yes, ignoring the result is bad.
I think you should just do sc.nr_reclaimed += nr_soft_reclaimed,
or mem_cgroup_soft_limit_reclaim() should update sc.
That would still allow kswapd to do its usual jobs, such as:
- call shrink_slab()
- update total_scanned
- update other flags, etc.
If the extra shrink_zone() seems bad, please skip it when
mem_cgroup_soft_limit_reclaim() has already done enough work.
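Something like this untested sketch (reusing the names from your diff) is
what I have in mind:
==
	/*
	 * Untested sketch: instead of skipping the zone, credit the
	 * soft limit reclaim progress to sc, so kswapd still runs
	 * shrink_slab() and the rest of the balance_pgdat() loop.
	 */
	nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,
						order, sc.gfp_mask);
	sc.nr_reclaimed += nr_soft_reclaimed;
==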
IOW, mem_cgroup_soft_limit_reclaim() can't do enough work to satisfy this condition:
==
		balance_gap = min(low_wmark_pages(zone),
			(zone->present_pages +
				KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
			KSWAPD_ZONE_BALANCE_GAP_RATIO);
		if (!zone_watermark_ok_safe(zone, order,
				high_wmark_pages(zone) + balance_gap,
				end_zone, 0))
			shrink_zone(priority, zone, &sc);
==
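(For scale: IIRC KSWAPD_ZONE_BALANCE_GAP_RATIO is 100, so a zone with,
say, 1,000,000 present pages gets balance_gap = min(low_wmark_pages(zone),
10000), i.e. kswapd aims up to 1% of the zone above high_wmark.)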
To satisfy this condition, you should update mem_cgroup_soft_limit_reclaim(),
rather than "continue" here.
I guess this is not easy... So, how about starting by passing 'sc' to
mem_cgroup_soft_limit_reclaim()? Then we can think about the algorithm.
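As a rough, hypothetical sketch of that first step (the memcg side would
need auditing, of course):
==
/*
 * Hypothetical prototype: pass the whole scan_control so the memcg
 * side can update nr_reclaimed, nr_scanned, etc. directly; gfp_mask
 * then comes from sc->gfp_mask inside.
 */
unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
					    struct scan_control *sc);

	/* caller in balance_pgdat() */
	nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, order, &sc);
==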
Thanks,
-Kame
Thread overview: 11+ messages
2011-03-28 6:12 [PATCH 0/2] Reduce reclaim from per-zone LRU in global kswapd Ying Han
2011-03-28 6:12 ` [PATCH 1/2] check the return value of soft_limit reclaim Ying Han
2011-03-28 6:39 ` KOSAKI Motohiro
2011-03-28 8:44 ` KAMEZAWA Hiroyuki [this message]
2011-03-28 15:29 ` Minchan Kim
2011-03-28 17:35 ` Ying Han
2011-03-28 16:44 ` Ying Han
2011-03-28 7:33 ` Daisuke Nishimura
2011-03-29 15:38 ` Balbir Singh
2011-03-29 17:39 ` Ying Han
2011-03-28 6:12 ` [PATCH 2/2] add two stats to monitor " Ying Han