* [PATCH 1/4] mm, page_alloc: Only check PageCompound for high-order pages -fix
2016-04-27 12:24 [PATCH 0/4] Optimise page alloc/free fast paths followup v1 Mel Gorman
@ 2016-04-27 12:24 ` Mel Gorman
2016-04-27 12:24 ` [PATCH 2/4] mm, page_alloc: inline the fast path of the zonelist iterator -fix Mel Gorman
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Mel Gorman @ 2016-04-27 12:24 UTC (permalink / raw)
To: Andrew Morton
Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML, Mel Gorman
Vlastimil Babka pointed out that an unlikely() annotation in free_pages_prepare()
shrinks stack usage by letting the compiler move the compound-page handling to
the end of the function.
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-30 (-30)
function                                     old     new   delta
free_pages_prepare                           771     741     -30
It's also consistent with the buffered_rmqueue path.
This is a fix to the mmotm patch
mm-page_alloc-only-check-pagecompound-for-high-order-pages.patch.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1da56779f8fa..d8383750bd43 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1003,7 +1003,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
* Check tail pages before head page information is cleared to
* avoid checking PageCompound for order-0 pages.
*/
- if (order) {
+ if (unlikely(order)) {
bool compound = PageCompound(page);
int i;
--
2.6.4
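For readers outside the kernel tree, the effect of the one-line change can be
sketched in userspace C. The prepare() helper and its counts below are
illustrative stand-ins, not the real free_pages_prepare(); only the
likely()/unlikely() macros mirror the kernel's definitions.

```c
#include <assert.h>

/* Userspace sketch of the kernel's branch-prediction hints (simplified
 * from include/linux/compiler.h). */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Mimics the free_pages_prepare() structure: the order-0 case is the hot
 * path, so the compound handling is marked unlikely and the compiler may
 * move it out of line, shrinking the fast path's stack frame. */
static int prepare(unsigned int order)
{
	int checks = 1;			/* head-page checks, always done */

	if (unlikely(order)) {
		unsigned int i;

		for (i = 1; i < (1u << order); i++)
			checks++;	/* per-tail-page checks */
	}
	return checks;
}
```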
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
* [PATCH 2/4] mm, page_alloc: inline the fast path of the zonelist iterator -fix
2016-04-27 12:24 [PATCH 0/4] Optimise page alloc/free fast paths followup v1 Mel Gorman
2016-04-27 12:24 ` [PATCH 1/4] mm, page_alloc: Only check PageCompound for high-order pages -fix Mel Gorman
@ 2016-04-27 12:24 ` Mel Gorman
2016-04-27 12:30 ` Mel Gorman
2016-04-27 12:24 ` [PATCH 3/4] mm, page_alloc: move might_sleep_if check to the allocator slowpath -revert Mel Gorman
2016-04-27 12:24 ` [PATCH 4/4] mm, page_alloc: Check once if a zone has isolated pageblocks -fix Mel Gorman
3 siblings, 1 reply; 6+ messages in thread
From: Mel Gorman @ 2016-04-27 12:24 UTC (permalink / raw)
To: Andrew Morton
Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML, Mel Gorman
Vlastimil Babka pointed out that the nodes allowed by a cpuset are not
reread if the nodemask changes during an allocation. This potentially
allows an unnecessary page allocation failure. Moving the retry_cpuset
label alone is insufficient; rereading the nodemask before retrying
addresses the problem.
This is a fix to the mmotm patch
mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d8383750bd43..45a36e98b9cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3855,6 +3855,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
*/
if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie))) {
alloc_mask = gfp_mask;
+ ac.nodemask = &cpuset_current_mems_allowed;
goto retry_cpuset;
}
--
2.6.4
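The retry pattern the fix aims for can be sketched in plain C (note that the
follow-up reply below retracts this particular fix). The seqcount-style
helpers and the simulated cpuset update are assumptions for illustration,
not the kernel's read_mems_allowed_begin()/read_mems_allowed_retry()
implementation.

```c
#include <assert.h>

/* Illustrative stand-ins for the cpuset seqcount protocol; the names,
 * types and single-bit "nodemask" are simplified assumptions. */
static unsigned seq;			/* bumped on every cpuset change */
static unsigned long mems_allowed = 0x1;
static int passes;

static unsigned read_begin(void) { return seq; }
static int read_retry(unsigned cookie) { return cookie != seq; }

/* Simulated allocation: on the first pass the cpuset changes underneath
 * us and the attempt fails. */
static int try_alloc(unsigned long mask)
{
	if (passes++ == 0) {
		seq++;
		mems_allowed = 0x2;
		return -1;
	}
	return (mask & mems_allowed) ? 0 : -1;
}

static int alloc_with_retry(void)
{
	unsigned cookie;
	unsigned long mask;
retry:
	cookie = read_begin();
	mask = mems_allowed;	/* re-read on every pass: the intended fix */
	if (try_alloc(mask) == 0)
		return 0;
	if (read_retry(cookie))
		goto retry;
	return -1;
}
```

Without the re-read inside the loop, the retry would run with the stale 0x1
snapshot and fail even though an allowed node exists.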
* Re: [PATCH 2/4] mm, page_alloc: inline the fast path of the zonelist iterator -fix
2016-04-27 12:24 ` [PATCH 2/4] mm, page_alloc: inline the fast path of the zonelist iterator -fix Mel Gorman
@ 2016-04-27 12:30 ` Mel Gorman
0 siblings, 0 replies; 6+ messages in thread
From: Mel Gorman @ 2016-04-27 12:30 UTC (permalink / raw)
To: Andrew Morton; +Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML
On Wed, Apr 27, 2016 at 01:24:43PM +0100, Mel Gorman wrote:
> Vlastimil Babka pointed out that the nodes allowed by a cpuset are not
> reread if the nodemask changes during an allocation. This potentially
> allows an unnecessary page allocation failure. Moving the retry_cpuset
> label is insufficient but rereading the nodemask before retrying addresses
> the problem.
>
> This is a fix to the mmotm patch
> mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch .
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
And this is wrong :( . I'll think again.
--
Mel Gorman
SUSE Labs
* [PATCH 3/4] mm, page_alloc: move might_sleep_if check to the allocator slowpath -revert
2016-04-27 12:24 [PATCH 0/4] Optimise page alloc/free fast paths followup v1 Mel Gorman
2016-04-27 12:24 ` [PATCH 1/4] mm, page_alloc: Only check PageCompound for high-order pages -fix Mel Gorman
2016-04-27 12:24 ` [PATCH 2/4] mm, page_alloc: inline the fast path of the zonelist iterator -fix Mel Gorman
@ 2016-04-27 12:24 ` Mel Gorman
2016-04-27 12:24 ` [PATCH 4/4] mm, page_alloc: Check once if a zone has isolated pageblocks -fix Mel Gorman
3 siblings, 0 replies; 6+ messages in thread
From: Mel Gorman @ 2016-04-27 12:24 UTC (permalink / raw)
To: Andrew Morton
Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML, Mel Gorman
Vlastimil Babka pointed out that a patch weakens a zone_reclaim test which,
while "safe", defeats the purpose of the debugging check. As most
configurations eliminate this check anyway, I thought it was better to
simply revert the patch instead of adding a second check in zone_reclaim.
This is a revert of the mmotm patch
mm-page_alloc-move-might_sleep_if-check-to-the-allocator-slowpath.patch.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 45a36e98b9cb..599bd1a49384 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3606,8 +3606,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
return NULL;
}
- might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
-
/*
* We also sanity check to catch abuse of atomic reserves being used by
* callers that are not in atomic context.
@@ -3806,6 +3804,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
lockdep_trace_alloc(gfp_mask);
+ might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+
if (should_fail_alloc_page(gfp_mask, order))
return NULL;
--
2.6.4
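The placement argument can be sketched in userspace C. The might_sleep_if()
and allocator stand-ins below are assumptions for illustration; only the
placement of the check (always-taken entry path rather than slowpath)
mirrors the revert.

```c
#include <assert.h>

/* Userspace sketch; 'in_atomic' stands in for the preempt state the real
 * might_sleep_if() consults. */
static int in_atomic;
static int sleep_checks;	/* counts how often the check actually ran */

#define GFP_DIRECT_RECLAIM 0x1

static void might_sleep_if(int cond)
{
	if (cond) {
		assert(!in_atomic);	/* would splat in atomic context */
		sleep_checks++;
	}
}

/* With the revert, the check sits on the entry path, so every allocation
 * with __GFP_DIRECT_RECLAIM is covered, not only the ones that miss the
 * fast path and fall into the slowpath. */
static void alloc_pages(unsigned gfp, int fastpath_succeeds)
{
	might_sleep_if(gfp & GFP_DIRECT_RECLAIM);
	if (fastpath_succeeds)
		return;		/* fast path: the check has already run */
	/* ... slowpath work ... */
}
```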
* [PATCH 4/4] mm, page_alloc: Check once if a zone has isolated pageblocks -fix
2016-04-27 12:24 [PATCH 0/4] Optimise page alloc/free fast paths followup v1 Mel Gorman
` (2 preceding siblings ...)
2016-04-27 12:24 ` [PATCH 3/4] mm, page_alloc: move might_sleep_if check to the allocator slowpath -revert Mel Gorman
@ 2016-04-27 12:24 ` Mel Gorman
3 siblings, 0 replies; 6+ messages in thread
From: Mel Gorman @ 2016-04-27 12:24 UTC (permalink / raw)
To: Andrew Morton
Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML, Mel Gorman
Vlastimil Babka pointed out that in the original code this read was
protected by the zone lock, and provided a fix.
This is a fix to the mmotm patch
mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch. Once
applied, the following line should be removed from the changelog:
"Technically this is race-prone but so is the existing code."
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 599bd1a49384..269cdb53297c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1098,9 +1098,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
int migratetype = 0;
int batch_free = 0;
unsigned long nr_scanned;
- bool isolated_pageblocks = has_isolate_pageblock(zone);
+ bool isolated_pageblocks;
spin_lock(&zone->lock);
+ isolated_pageblocks = has_isolate_pageblock(zone);
nr_scanned = zone_page_state(zone, NR_PAGES_SCANNED);
if (nr_scanned)
__mod_zone_page_state(zone, NR_PAGES_SCANNED, -nr_scanned);
--
2.6.4
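The shape of the fix is the standard take-the-lock-then-sample pattern,
sketched below with pthreads; struct zone and its fields here are
illustrative stand-ins for the kernel structures, not the real ones.

```c
#include <pthread.h>
#include <assert.h>

/* Minimal sketch of the fix: sample lock-protected state only after
 * taking the lock that protects it. */
struct zone {
	pthread_mutex_t lock;
	int isolated_pageblocks;   /* only stable while 'lock' is held */
};

static int free_bulk(struct zone *zone)
{
	int isolated;

	pthread_mutex_lock(&zone->lock);
	/* Read under the lock: a racing isolation update can no longer
	 * change the value half-way through the bulk free. */
	isolated = zone->isolated_pageblocks;
	/* ... free the pages, consulting 'isolated' as needed ... */
	pthread_mutex_unlock(&zone->lock);
	return isolated;
}
```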