From: David Rientjes <rientjes@google.com>
To: Miao Xie <miaox@cn.fujitsu.com>
Cc: Nick Piggin <npiggin@suse.de>,
Lee Schermerhorn <lee.schermerhorn@hp.com>,
Paul Menage <menage@google.com>,
Linux-Kernel <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>, Li Zefan <lizf@cn.fujitsu.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH V2 4/4] cpuset,mm: update task's mems_allowed lazily
Date: Wed, 31 Mar 2010 03:34:08 -0700 (PDT)
Message-ID: <alpine.DEB.2.00.1003310324550.17661@chino.kir.corp.google.com>
In-Reply-To: <4BB31BDA.8080203@cn.fujitsu.com>

On Wed, 31 Mar 2010, Miao Xie wrote:

> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index f5b7d17..43ac21b 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -58,6 +58,7 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
>  					nodemask_t *nodes,
>  					struct zone **zone)
>  {
> +	nodemask_t tmp_nodes;
>  	/*
>  	 * Find the next suitable zone to use for the allocation.
>  	 * Only filter based on nodemask if it's set
> @@ -65,10 +66,16 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
>  	if (likely(nodes == NULL))
>  		while (zonelist_zone_idx(z) > highest_zoneidx)
>  			z++;
> -	else
> -		while (zonelist_zone_idx(z) > highest_zoneidx ||
> -				(z->zone && !zref_in_nodemask(z, nodes)))
> -			z++;
> +	else {
> +		tmp_nodes = *nodes;
> +		if (nodes_empty(tmp_nodes))
> +			while (zonelist_zone_idx(z) > highest_zoneidx)
> +				z++;
> +		else
> +			while (zonelist_zone_idx(z) > highest_zoneidx ||
> +				(z->zone && !zref_in_nodemask(z, &tmp_nodes)))
> +				z++;
> +	}
> 
>  	*zone = zonelist_zone(z);
>  	return z;
Unfortunately, you can't allocate a nodemask_t on the stack here: this
function is called from the zonelist iteration in
get_page_from_freelist(), which can already run very deep in the stack,
so a local nodemask_t risks a stack overflow.  Dynamically allocating a
nodemask_t wouldn't scale here either, since it would mean an allocation
on every iteration of a zonelist.
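
To put a number on the stack cost: nodemask_t is a fixed-size bitmap of
MAX_NUMNODES bits (see include/linux/nodemask.h), so the tmp_nodes copy
above costs MAX_NUMNODES / 8 bytes of stack on every call.  A minimal
standalone sketch, assuming CONFIG_NODES_SHIFT=10 purely as an example
value:

	#include <stdio.h>

	/*
	 * Userspace model of the kernel's nodemask_t layout: a bitmap
	 * covering MAX_NUMNODES bits.  NODES_SHIFT=10 is an assumed
	 * example of CONFIG_NODES_SHIFT, not a measured config.
	 */
	#define NODES_SHIFT	10
	#define MAX_NUMNODES	(1UL << NODES_SHIFT)
	#define BITS_PER_LONG	(8 * sizeof(unsigned long))

	typedef struct {
		unsigned long bits[(MAX_NUMNODES + BITS_PER_LONG - 1) /
				   BITS_PER_LONG];
	} nodemask_t;

	int main(void)
	{
		/*
		 * The per-call stack cost of "nodemask_t tmp_nodes" in
		 * the patch above: 128 bytes here on a 64-bit machine.
		 */
		printf("sizeof(nodemask_t) = %zu bytes\n",
		       sizeof(nodemask_t));
		return 0;
	}

That cost is paid on a hot path hit for every zone considered during an
allocation, which is why neither a stack copy nor a dynamic allocation
is acceptable here.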