From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: kosaki.motohiro@jp.fujitsu.com, Shaohua Li <shaohua.li@intel.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"cl@linux.com" <cl@linux.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Rientjes <rientjes@google.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Subject: Re: zone state overhead
Date: Thu, 14 Oct 2010 12:07:29 +0900 (JST)
Message-ID: <20101014120804.8B8F.A69D9226@jp.fujitsu.com>
In-Reply-To: <20101013112430.GI30667@csn.ul.ie>
Hi
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index c5dfabf..47ba29e 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -2378,7 +2378,9 @@ static int kswapd(void *p)
> > > */
> > > if (!sleeping_prematurely(pgdat, order, remaining)) {
> > > trace_mm_vmscan_kswapd_sleep(pgdat->node_id);
> > > + enable_pgdat_percpu_threshold(pgdat);
> > > schedule();
> > > + disable_pgdat_percpu_threshold(pgdat);
> >
> > If we have 4096 cpus, max drift = 125x4096x4096 ~= 2GB. It is higher than the zone watermark.
> > Then, such a system can exhaust memory before kswapd calls disable_pgdat_percpu_threshold().
> >
>
> I don't *think* so but let's explore that possibility. For this to occur, all
> CPUs would have to be allocating all of their memory from the one node (4096
> CPUs is not going to be UMA) which is not going to happen. But allocations
> from one node could be falling over to others of course.
>
> Let's take an early condition that has to occur for a 4096 CPU machine to
> get into trouble - node 0 exhausted and moving to node 1 and counter drift
> makes us think everything is fine.
>
> __alloc_pages_nodemask
> -> get_page_from_freelist
> -> zone_watermark_ok == true (because we are drifting)
> -> buffered_rmqueue
> -> __rmqueue (fails eventually, no pages despite watermark_ok)
> -> __alloc_pages_slowpath
> -> wake_all_kswapd()
> ...
>
> kswapd wakes
> -> disable_pgdat_percpu_threshold()
>
> i.e. as each node becomes exhausted in reality, kswapd will wake up, disable
> the thresholds until the high watermark is back and go back to sleep. I'm
> not seeing how we'd get into a situation where all kswapds are asleep at the
> same time while each allocator allocates all of memory without managing to
> wake kswapd. Even GFP_ATOMIC allocations will wakeup kswapd.
>
> Hence, I think the current patch of disabling thresholds while kswapd is
> awake to be sufficient to avoid livelock due to memory exhaustion and
> counter drift.
>
In this case, wakeup_kswapd() doesn't wake kswapd, because of this early return:
---------------------------------------------------------------------------------
void wakeup_kswapd(struct zone *zone, int order)
{
	pg_data_t *pgdat;

	if (!populated_zone(zone))
		return;

	pgdat = zone->zone_pgdat;
	if (zone_watermark_ok(zone, order, low_wmark_pages(zone), 0, 0))
		return;		/* HERE */
---------------------------------------------------------------------------------
So, if we take your approach, we need to know the exact number of free pages here.
But zone_page_state_snapshot() is slow. That's the dilemma.
> > Hmmm....
> > This seems a fundamental problem. Currently our zone watermark and per-cpu stat threshold have completely
> > unbalanced definitions.
> >
> > zone watermark: very few (a few megabytes)
> > proportional to sqrt(mem)
> > not proportional to nr-cpus
> >
> > per-cpu stat threshold: relatively large (desktop: a few megabytes, server ~50MB, SGI 2GB ;-)
> > proportional to log(mem)
> > proportional to log(nr-cpus)
> >
> > It means many CPUs break the watermark assumption.....
> >
>
> They are for different things. Watermarks are meant to prevent livelock
> due to memory exhaustion. Per-cpu thresholds are so that counters have
> acceptable performance. The assumptions of watermarks remain the same,
> but we have to correctly handle when counter drift can break watermarks.
ok.
> > > +void enable_pgdat_percpu_threshold(pg_data_t *pgdat)
> > > +{
> > > + struct zone *zone;
> > > + int cpu;
> > > + int threshold;
> > > +
> > > + for_each_populated_zone(zone) {
> > > + if (!zone->percpu_drift_mark || zone->zone_pgdat != pgdat)
> > > + continue;
> > > +
> > > + threshold = calculate_threshold(zone);
> > > + for_each_online_cpu(cpu)
> > > + per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> > > + = threshold;
> > > + }
> > > +}
> >
> > disable_pgdat_percpu_threshold() and enable_pgdat_percpu_threshold() are
> > almost the same. Can you merge them?
> >
>
> I wondered the same but as thresholds are calculated per-zone, I didn't see
> how that could be handled in a unified function without using a callback
> function pointer. If I used callback functions and an additional boolean, I
> could merge refresh_zone_stat_thresholds(), disable_pgdat_percpu_threshold()
> and enable_pgdat_percpu_threshold() but I worried the end-result would be
> a bit unreadable and hinder review. I could roll a standalone patch that
> merges the three if we end up agreeing on this patch's general approach
> to counter drift.
ok, I think you are right.