From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: clameter@sgi.com, linux-mm@kvack.org
Subject: Re: [PATCH 0/8] Review-based updates to grouping pages by mobility
Date: Wed, 16 May 2007 11:33:14 +0900 [thread overview]
Message-ID: <20070516113314.65f442a2.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20070515150311.16348.56826.sendpatchset@skynet.skynet.ie>
On Tue, 15 May 2007 16:03:11 +0100 (IST)
Mel Gorman <mel@csn.ul.ie> wrote:
> Hi Christoph,
>
> The following patches address points brought up by your review of the
> grouping pages by mobility patches. There are quite a number of patches here.
>
May I ask a question?
It is not about this patch set but about 2.6.21-mm2.
In free_hot_cold_page():
==
static void fastcall free_hot_cold_page(struct page *page, int cold)
{
	struct zone *zone = page_zone(page);
	struct per_cpu_pages *pcp;
	unsigned long flags;
<snip>
	set_page_private(page, get_pageblock_migratetype(page));
	pcp->count++;
	if (pcp->count >= pcp->high) {
		free_pages_bulk(zone, pcp->batch, &pcp->list, 0);
		pcp->count -= pcp->batch;
	}
==
get_pageblock_migratetype(page) is called without zone->lock.
Is this safe ? or should we add seqlock(or something) to access
migrate type bitmap ?
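
Something like the following reader is what I have in mind. This is just a
rough sketch: the zone->migratetype_seqlock field and the wrapper name are
made up for illustration, they do not exist in 2.6.21-mm2.
==
/*
 * Sketch only -- "migratetype_seqlock" is a hypothetical per-zone
 * seqlock_t (see <linux/seqlock.h>), not an existing field.
 */
static int get_pageblock_migratetype_safe(struct zone *zone,
					  struct page *page)
{
	unsigned int seq;
	int mt;

	do {
		/* retry if a writer changed the bitmap underneath us */
		seq = read_seqbegin(&zone->migratetype_seqlock);
		mt = get_pageblock_migratetype(page);
	} while (read_seqretry(&zone->migratetype_seqlock, seq));

	return mt;
}
==
Writers (the set_pageblock_migratetype() side) would then take
write_seqlock()/write_sequnlock() on the same lock while updating the bitmap.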
-Kame
Thread overview: 27+ messages
2007-05-15 15:03 Mel Gorman
2007-05-15 15:03 ` [PATCH 1/8] Do not depend on MAX_ORDER when " Mel Gorman
2007-05-15 18:19 ` Christoph Lameter
2007-05-15 19:19 ` Mel Gorman
2007-05-15 15:03 ` [PATCH 2/8] Print out statistics in relation to fragmentation avoidance to /proc/fragavoidance Mel Gorman
2007-05-15 18:25 ` Christoph Lameter
2007-05-15 19:23 ` Mel Gorman
2007-05-16 0:27 ` KAMEZAWA Hiroyuki
2007-05-15 15:04 ` [PATCH 3/8] Print out PAGE_OWNER statistics in relation to fragmentation avoidance Mel Gorman
2007-05-15 15:04 ` [PATCH 4/8] Mark bio_alloc() allocations correctly Mel Gorman
2007-05-15 15:04 ` [PATCH 5/8] Do not annotate shmem allocations explicitly Mel Gorman
2007-05-15 15:05 ` [PATCH 6/8] Add __GFP_TEMPORARY to identify allocations that are short-lived Mel Gorman
2007-05-15 18:29 ` Christoph Lameter
2007-05-16 0:36 ` KAMEZAWA Hiroyuki
2007-05-16 0:52 ` Christoph Lameter
2007-05-16 9:04 ` Mel Gorman
2007-05-15 15:05 ` [PATCH 7/8] Rename GFP_HIGH_MOVABLE to GFP_HIGHUSER_MOVABLE Mel Gorman
2007-05-15 18:29 ` Christoph Lameter
2007-05-15 15:05 ` [PATCH 8/8] Mark page cache pages as __GFP_PAGECACHE instead of __GFP_MOVABLE Mel Gorman
2007-05-15 18:31 ` Christoph Lameter
2007-05-15 19:52 ` Mel Gorman
2007-05-15 20:04 ` Christoph Lameter
2007-05-15 20:20 ` Mel Gorman
2007-05-15 20:36 ` Christoph Lameter
2007-05-15 20:50 ` Mel Gorman
2007-05-16 2:33 ` KAMEZAWA Hiroyuki [this message]
2007-05-16 8:58 ` [PATCH 0/8] Review-based updates to grouping pages by mobility Mel Gorman