From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Simon Kirby <sim@hostway.ca>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Shaohua Li <shaohua.li@intel.com>,
Dave Hansen <dave@linux.vnet.ibm.com>,
linux-mm <linux-mm@kvack.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/5] mm: kswapd: Stop high-order balancing when any suitable zone is balanced
Date: Mon, 6 Dec 2010 11:35:41 +0900 [thread overview]
Message-ID: <20101206113541.dda0a794.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <1291376734-30202-2-git-send-email-mel@csn.ul.ie>

On Fri, 3 Dec 2010 11:45:30 +0000
Mel Gorman <mel@csn.ul.ie> wrote:
> When the allocator enters its slow path, kswapd is woken up to balance the
> node. It continues working until all zones within the node are balanced. For
> order-0 allocations, this makes perfect sense but for higher orders it can
> have unintended side-effects. If the zone sizes are imbalanced, kswapd may
> reclaim heavily within a smaller zone discarding an excessive number of
> pages. The user-visible behaviour is that kswapd is awake and reclaiming
> even though plenty of pages are free from a suitable zone.
>
> This patch alters the "balance" logic for high-order reclaim allowing kswapd
> to stop if any suitable zone becomes balanced to reduce the number of pages
> it reclaims from other zones. kswapd still tries to ensure that order-0
> watermarks for all zones are met before sleeping.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
A nitpick.
> ---
> include/linux/mmzone.h | 3 +-
> mm/page_alloc.c | 8 ++++--
> mm/vmscan.c | 55 +++++++++++++++++++++++++++++++++++++++++-------
> 3 files changed, 54 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 39c24eb..7177f51 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -645,6 +645,7 @@ typedef struct pglist_data {
> wait_queue_head_t kswapd_wait;
> struct task_struct *kswapd;
> int kswapd_max_order;
> + enum zone_type classzone_idx;
> } pg_data_t;
>
> #define node_present_pages(nid) (NODE_DATA(nid)->node_present_pages)
> @@ -660,7 +661,7 @@ typedef struct pglist_data {
>
> extern struct mutex zonelists_mutex;
> void build_all_zonelists(void *data);
> -void wakeup_kswapd(struct zone *zone, int order);
> +void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx);
> int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
> int classzone_idx, int alloc_flags);
> enum memmap_context {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e409270..82e3499 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1915,13 +1915,14 @@ __alloc_pages_high_priority(gfp_t gfp_mask, unsigned int order,
>
> static inline
> void wake_all_kswapd(unsigned int order, struct zonelist *zonelist,
> - enum zone_type high_zoneidx)
> + enum zone_type high_zoneidx,
> + enum zone_type classzone_idx)
> {
> struct zoneref *z;
> struct zone *zone;
>
> for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> - wakeup_kswapd(zone, order);
> + wakeup_kswapd(zone, order, classzone_idx);
> }
>
> static inline int
> @@ -1998,7 +1999,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> goto nopage;
>
> restart:
> - wake_all_kswapd(order, zonelist, high_zoneidx);
> + wake_all_kswapd(order, zonelist, high_zoneidx,
> + zone_idx(preferred_zone));
>
> /*
> * OK, we're below the kswapd watermark and have kicked background
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d31d7ce..d070d19 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2165,11 +2165,14 @@ static int sleeping_prematurely(pg_data_t *pgdat, int order, long remaining)
> * interoperates with the page allocator fallback scheme to ensure that aging
> * of pages is balanced across the zones.
> */
> -static unsigned long balance_pgdat(pg_data_t *pgdat, int order)
> +static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> + int classzone_idx)
> {
> int all_zones_ok;
> + int any_zone_ok;
> int priority;
> int i;
> + int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */
> unsigned long total_scanned;
> struct reclaim_state *reclaim_state = current->reclaim_state;
> struct scan_control sc = {
> @@ -2192,7 +2195,6 @@ loop_again:
> count_vm_event(PAGEOUTRUN);
>
> for (priority = DEF_PRIORITY; priority >= 0; priority--) {
> - int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */
> unsigned long lru_pages = 0;
> int has_under_min_watermark_zone = 0;
>
> @@ -2201,6 +2203,7 @@ loop_again:
> disable_swap_token();
>
> all_zones_ok = 1;
> + any_zone_ok = 0;
>
> /*
> * Scan in the highmem->dma direction for the highest
> @@ -2310,10 +2313,12 @@ loop_again:
> * speculatively avoid congestion waits
> */
> zone_clear_flag(zone, ZONE_CONGESTED);
> + if (i <= classzone_idx)
> + any_zone_ok = 1;
> }
>
> }
> - if (all_zones_ok)
> + if (all_zones_ok || (order && any_zone_ok))
> break; /* kswapd: all done */
> /*
> * OK, kswapd is getting into trouble. Take a nap, then take
> @@ -2336,7 +2341,7 @@ loop_again:
> break;
> }
> out:
> - if (!all_zones_ok) {
> + if (!(all_zones_ok || (order && any_zone_ok))) {
Could you add a comment?
And this means...
	all_zones_ok .... all zones balanced
	any_zone_ok  .... fallback allocation ok
?
Thanks,
-Kame