* [RFC] Try pages allocation from higher to lower orders
From: David Cohen @ 2012-09-08 22:16 UTC
To: linux-mm
Hi,
I work with embedded Linux but am new to the Linux MM community.
I need to improve performance when allocating a large number of
pages. I can't describe the exact scenario, but I need to request more
than 20k pages at a time in a time-sensitive task.
Requesting pages with order > 0 is faster than requesting a single
page 20k times, as long as memory isn't fragmented. But when memory is
fragmented, pages of order > 0 may not be available, and the
allocation goes through a more expensive path, which ends up being
slower than requesting 20k single pages. I'd like a way to choose
the faster option depending on the fragmentation scenario.
Is there currently a reliable solution for this case? I couldn't find one.
If the answer is really "no", how would it sound to implement a
function, e.g. alloc_pages_try_orders(mask, min_order, max_order)? The
idea would be to take a page from the free list (fast path only) with
min_order <= order <= max_order (the higher the better), and allow the
slow path only when min_order is the only option left.
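Something along these lines, as a rough sketch (the function doesn't
exist, and the gfp handling below is only illustrative):

	static struct page *alloc_pages_try_orders(gfp_t gfp_mask,
						   unsigned int min_order,
						   unsigned int max_order)
	{
		struct page *page;
		unsigned int order;

		/*
		 * Fast path only: clear __GFP_WAIT so we never enter
		 * reclaim/compaction while probing the higher orders.
		 */
		for (order = max_order; order > min_order; order--) {
			page = alloc_pages((gfp_mask | __GFP_NORETRY |
					    __GFP_NOWARN) & ~__GFP_WAIT,
					   order);
			if (page)
				return page;
		}

		/* min_order is the only option left: allow the slow path. */
		return alloc_pages(gfp_mask, min_order);
	}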
Thanks for your time,
David Cohen
* Re: [RFC] Try pages allocation from higher to lower orders
From: David Rientjes @ 2012-09-17 9:12 UTC
To: David Cohen; +Cc: linux-mm
On Sun, 9 Sep 2012, David Cohen wrote:
> Requesting pages with order > 0 is faster than requesting a single
> page 20k times, as long as memory isn't fragmented. But when memory is
> fragmented, pages of order > 0 may not be available, and the
> allocation goes through a more expensive path, which ends up being
> slower than requesting 20k single pages. I'd like a way to choose
> the faster option depending on the fragmentation scenario.
> Is there currently a reliable solution for this case? I couldn't find one.
> If the answer is really "no", how would it sound to implement a
> function, e.g. alloc_pages_try_orders(mask, min_order, max_order)?
I don't think that's generally useful, so it would have to be isolated to
the driver you're working on. But what I would suggest is to avoid
doing memory compaction and reclaim at higher orders and rather fall back
to allocating smaller and smaller orders first. Try using
fragmentation_index() to determine the optimal order to allocate
depending on the current state of fragmentation; if that's insufficient,
then you'll have to fall back to using memory compaction. You'll want to
compact much more than a single order-9 page allocation, though, so
perhaps explicitly trigger compact_node() beforehand and incur the
penalty only once.
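Roughly like this (sketch only: fragmentation_index() is in
mm/vmstat.c and compact_node() is static in mm/compaction.c, so a
driver would need some plumbing to reach either of them):

	static unsigned int pick_alloc_order(struct zone *zone,
					     unsigned int max_order)
	{
		unsigned int order;

		for (order = max_order; order > 0; order--) {
			/*
			 * -1000 means a free block of this order already
			 * exists, so the fast path should succeed.
			 */
			if (fragmentation_index(zone, order) == -1000)
				return order;
		}
		return 0;
	}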