From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Chris Metcalf <cmetcalf@tilera.com>, Len Brown <lenb@kernel.org>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Andi Kleen <andi@firstfloor.org>, Julia Lawall <julia@diku.dk>,
David Howells <dhowells@redhat.com>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Kay Sievers <kay.sievers@vrfy.org>, Ingo Molnar <mingo@elte.hu>,
Paul Gortmaker <paul.gortmaker@windriver.com>,
Daniel Kiper <dkiper@net-space.pl>,
Andrew Morton <akpm@linux-foundation.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Michal Hocko <mhocko@suse.cz>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Minchan Kim <minchan@kernel.org>,
Michal Nazarewicz <mina86@mina86.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Rik van Riel <riel@redhat.com>,
Bjorn Helgaas <bhelgaas@google.com>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [RFC PATCH 1/3 V1] mm, page_alloc: use __rmqueue_smallest when borrow memory from MIGRATE_CMA
Date: Thu, 05 Jul 2012 09:36:14 +0800 [thread overview]
Message-ID: <4FF4EF8E.4080706@cn.fujitsu.com> (raw)
In-Reply-To: <20120704111935.GN13141@csn.ul.ie>
On 07/04/2012 07:19 PM, Mel Gorman wrote:
> On Wed, Jul 04, 2012 at 06:43:47PM +0800, Lai Jiangshan wrote:
>> On 07/04/2012 06:17 PM, Mel Gorman wrote:
>>> On Wed, Jul 04, 2012 at 03:26:16PM +0800, Lai Jiangshan wrote:
>>>> MIGRATE_CMA pages cannot be changed to another migrate type,
>>>> nor moved to another free list.
>>>>
>>>> ==>
>>>> So when we use __rmqueue_fallback() to borrow memory from MIGRATE_CMA,
>>>> one of the highest-order pages is borrowed and split,
>>>> but the free pages produced by the split can NOT
>>>> be moved to MIGRATE_MOVABLE.
>>>>
>>>> ==>
>>>> So on the next allocation we NEED to borrow again:
>>>> another of the highest-order pages is borrowed from CMA and split,
>>>> producing yet more split free pages.
>>>>
>>>
>>> Then special case __rmqueue_fallback() to move pages stolen from
>>> MIGRATE_CMA to the MIGRATE_MOVABLE lists but do not change the pageblock
>>> type.
>>
>> Because an unmovable-page request can be satisfied from the
>> MIGRATE_MOVABLE free list, we cannot move MIGRATE_CMA pages
>> to the MIGRATE_MOVABLE free list.
>>
>
> Ok, good point.
>
>> See here:
>>
>> MOVABLE list is empty
>> UNMOVABLE list is empty
>> movable-page request:
>>   borrow from the CMA list,
>>   split it; the leftovers are put on the UNMOVABLE list
>> unmovable-page request:
>>   borrow from the UNMOVABLE list
>> NOW it is a BUG: CMA pages end up serving an unmovable allocation.
>>
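To back up the point above that unmovable requests can be satisfied from
MIGRATE_MOVABLE: the existing fallback table in mm/page_alloc.c (the same one
your diff below touches) already lets unmovable and reclaimable allocations
steal from the MOVABLE free list, so any CMA page parked there can end up
pinned under an unmovable allocation. Trimmed excerpt:

========================================================================
static int fallbacks[MIGRATE_TYPES][4] = {
	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE, MIGRATE_RESERVE },
	...
};
========================================================================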
>
> The patch still looks unnecessarily complex for what you are trying to
> achieve and as a result I'm not reviewing it as carefully as I should.

Which part of my code is complex? __rmqueue_smallest() or __rmqueue_fallback()?

__rmqueue_smallest(): I think it is required.

__rmqueue_fallback(): that part is only a cleanup. Since CMA is taken out of
__rmqueue_fallback(), the fallback path can be simplified. I will remove/split
the cleanup part of the patch in the next round.
> It looks like the entire patch boiled down to this hunk here
>
> +#ifdef CONFIG_CMA
> + if (unlikely(!page) && migratetype == MIGRATE_MOVABLE)
> + page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
> +#endif
> +
>
> With that in place, this would need to change from
>
> [MIGRATE_MOVABLE] = { MIGRATE_CMA, MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
>
> to
>
> [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
>
> because the fallback is already being handled as a special case. Leave
> the other fallback logic as it is.
>
> This is not tested at all and is only meant to illustrate why I think
> your patch looks excessively complex for what you are trying to
> achieve.
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4beb7ae..0063e93 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -895,11 +895,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
> static int fallbacks[MIGRATE_TYPES][4] = {
> [MIGRATE_UNMOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
> [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
> + [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
> #ifdef CONFIG_CMA
> - [MIGRATE_MOVABLE] = { MIGRATE_CMA, MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
> [MIGRATE_CMA] = { MIGRATE_RESERVE }, /* Never used */
> -#else
> - [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
> #endif
> [MIGRATE_RESERVE] = { MIGRATE_RESERVE }, /* Never used */
> [MIGRATE_ISOLATE] = { MIGRATE_RESERVE }, /* Never used */
> @@ -1076,6 +1074,20 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
>
> retry_reserve:
> page = __rmqueue_smallest(zone, order, migratetype);
> +#ifdef CONFIG_CMA
> + if (!unlikely(!page) && migratetype == MIGRATE_MOVABLE) {
> +
> + /*
> + * CMA is a special case where we want to use
> + * the smallest available page instead of splitting
> + * the largest chunks. We still must avoid the pages
> + * moving to MIGRATE_MOVABLE where they might be
> + * used for UNRECLAIMABLE or UNMOVABLE allocations
> + */
> + migratetype = MIGRATE_CMA;
> + goto retry_reserve;
> + }
> +#endif /* CONFIG_CMA */
>
> if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
We need to add

+	if (migratetype == MIGRATE_CMA)
+		migratetype = MIGRATE_MOVABLE;

here to restore the migratetype before the fallback path runs.
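In context, the fallback branch of your hunk would then read roughly as
follows (untested, placement only; the full version is in the "Yours:"
listing below):

========================================================================
	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
		/* restore the caller's type before consulting fallbacks[] */
		if (migratetype == MIGRATE_CMA)
			migratetype = MIGRATE_MOVABLE;

		page = __rmqueue_fallback(zone, order, migratetype);
		...
	}
========================================================================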
With that added, your code is the same as mine from the CPU's point of view:
__rmqueue_smallest(MIGRATE_MOVABLE)
if failed: __rmqueue_smallest(MIGRATE_CMA)
if failed: __rmqueue_fallback()
if failed: __rmqueue_smallest(MIGRATE_RESERVE)
The differences are:
- you use "goto" instead of "if" for flow control;
- your version is longer;
- your version has one more branch than mine.

I think my version is more readable:
Mine:
========================================================================
	page = __rmqueue_smallest(zone, order, migratetype);

#ifdef CONFIG_CMA
	if (unlikely(!page) && migratetype == MIGRATE_MOVABLE)
		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
#endif

	if (unlikely(!page))
		page = __rmqueue_fallback(zone, order, migratetype);

	if (unlikely(!page))
		page = __rmqueue_smallest(zone, order, MIGRATE_RESERVE);

	trace_mm_page_alloc_zone_locked(page, order, migratetype);
	return page;
=====================================================================
Yours:
==================================================================
retry_reserve:
	page = __rmqueue_smallest(zone, order, migratetype);

#ifdef CONFIG_CMA
	if (unlikely(!page) && migratetype == MIGRATE_MOVABLE) {
		/*
		 * CMA is a special case where we want to use
		 * the smallest available page instead of splitting
		 * the largest chunks. We still must avoid the pages
		 * moving to MIGRATE_MOVABLE where they might be
		 * used for UNRECLAIMABLE or UNMOVABLE allocations
		 */
		migratetype = MIGRATE_CMA;
		goto retry_reserve;
	}
#endif /* CONFIG_CMA */

	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
		if (migratetype == MIGRATE_CMA)
			migratetype = MIGRATE_MOVABLE;

		page = __rmqueue_fallback(zone, order, migratetype);

		/*
		 * Use MIGRATE_RESERVE rather than fail an allocation. goto
		 * is used because __rmqueue_smallest is an inline function
		 * and we want just one call site
		 */
		if (!page) {
			migratetype = MIGRATE_RESERVE;
			goto retry_reserve;
		}
	}

	trace_mm_page_alloc_zone_locked(page, order, migratetype);
	return page;
==========================================================================
How about this one? (typed straight into the email client, so untested)
#define RMQUEUE_FALLBACK 1024

static int rmqueue_list[3][4] = {
	[MIGRATE_UNMOVABLE]   = { MIGRATE_UNMOVABLE,   RMQUEUE_FALLBACK, MIGRATE_RESERVE },
	[MIGRATE_RECLAIMABLE] = { MIGRATE_RECLAIMABLE, RMQUEUE_FALLBACK, MIGRATE_RESERVE },
	[MIGRATE_MOVABLE]     = { MIGRATE_MOVABLE, MIGRATE_CMA, RMQUEUE_FALLBACK, MIGRATE_RESERVE },
};

static struct page *__rmqueue(struct zone *zone, unsigned int order,
						int migratetype)
{
	struct page *page;
	int i, mt;

	for (i = 0; ; i++) {
		mt = rmqueue_list[migratetype][i];
		if (likely(mt != RMQUEUE_FALLBACK))
			page = __rmqueue_smallest(zone, order, mt);
		else
			page = __rmqueue_fallback(zone, order, migratetype);

		/* MIGRATE_RESERVE is always the last entry */
		if (likely(page) || mt == MIGRATE_RESERVE)
			break;
	}

	trace_mm_page_alloc_zone_locked(page, order, migratetype);
	return page;
}
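One wrinkle I left out of the sketch above: MIGRATE_CMA only exists when
CONFIG_CMA is set, so the MIGRATE_MOVABLE row would need the same kind of
guard the current fallbacks[] table uses, e.g. (again untested):

==========================================================================
static int rmqueue_list[3][4] = {
	[MIGRATE_UNMOVABLE]   = { MIGRATE_UNMOVABLE,   RMQUEUE_FALLBACK, MIGRATE_RESERVE },
	[MIGRATE_RECLAIMABLE] = { MIGRATE_RECLAIMABLE, RMQUEUE_FALLBACK, MIGRATE_RESERVE },
#ifdef CONFIG_CMA
	[MIGRATE_MOVABLE]     = { MIGRATE_MOVABLE, MIGRATE_CMA, RMQUEUE_FALLBACK, MIGRATE_RESERVE },
#else
	[MIGRATE_MOVABLE]     = { MIGRATE_MOVABLE, RMQUEUE_FALLBACK, MIGRATE_RESERVE },
#endif
};
==========================================================================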
Thanks,
Lai
> page = __rmqueue_fallback(zone, order, migratetype);