From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: "Michał Nazarewicz" <m.nazarewicz@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
minchan.kim@gmail.com, Bob Liu <lliubbo@gmail.com>,
fujita.tomonori@lab.ntt.co.jp, pawel@osciak.com,
andi.kleen@intel.com, felipe.contreras@gmail.com,
"kosaki.motohiro@jp.fujitsu.com" <kosaki.motohiro@jp.fujitsu.com>,
Marek Szyprowski <m.szyprowski@samsung.com>
Subject: Re: [PATCH 0/4] big chunk memory allocator v4
Date: Wed, 24 Nov 2010 09:36:53 +0900 [thread overview]
Message-ID: <20101124093653.bb8692e4.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <op.vmmre1vv7p4s8u@pikus>
On Tue, 23 Nov 2010 16:46:03 +0100
Michał Nazarewicz <m.nazarewicz@samsung.com> wrote:
> A few thoughts, then:
>
> 1. As Felipe mentioned, on ARM it is often desired to have the memory
> mapped as non-cacheable, which most often means that the memory never
> reaches the page allocator. This means that alloc_contig_pages()
> would not be suitable for cases where one needs such memory.
>
> Or could this be overcome by adding the memory back as highmem? But
> then, it would force compiling in highmem support even if the platform
> does not really need it.
>
> 2. Device drivers should not themselves know which ranges of memory to
> allocate from. Moreover, some device drivers could require allocating
> different buffers from different ranges. As such, this would require
> some management code on top of alloc_contig_pages().
>
> 3. When posting hwmem, Johan Mossberg mentioned that he'd like to see
> notion of "pinning" chunks (so that not-pinned chunks can be moved
> around when hardware does not use them to defragment memory). This
> would again require some management code on top of
> alloc_contig_pages().
>
> 4. I might be mistaken here, but the way I understand ZONE_MOVABLE works
> is that it is cut off from the end of memory. Or am I talking nonsense?
> My concern is that at least one chip I'm working with requires
> allocations from different memory banks, which would basically mean that
> there would have to be two movable zones, i.e.:
>
> +-------------------+-------------------+
> | Memory Bank #1 | Memory Bank #2 |
> +---------+---------+---------+---------+
> | normal | movable | normal | movable |
> +---------+---------+---------+---------+
>
yes.
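
For context, mainline already carves ZONE_MOVABLE out of memory at boot
via the kernelcore=/movablecore= parameters: kernelcore= fixes how much
memory stays usable for non-movable allocations (spread across the nodes),
and the remainder of each node becomes ZONE_MOVABLE. So a per-bank split
like the diagram above would follow if each bank is its own node. An
illustrative boot line (the size is made up):

```
kernelcore=1G    # hypothetical: ~512M of non-movable memory per node on a
                 # two-node machine; the rest of each bank is ZONE_MOVABLE
```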
> So even though I'm personally somewhat drawn to alloc_contig_pages()'s
> simplicity (compared to CMA at least), those quick thoughts make me think
> that alloc_contig_pages() would serve rather as a backend (as Kamezawa
> mentioned) for some management code, maybe tiny but still present, which
> would handle "marking memory fragments as ZONE_MOVABLE" (whatever that
> would involve) and decide which memory ranges drivers can allocate from.
>
> I'm also wondering whether alloc_contig_pages()'s first-fit is suitable but
> that probably cannot be judged without some benchmarks.
>
I'll continue to update the patches; you can freely reuse my code and
integrate this set into yours. I'm working on this firstly for embedded,
but I want this to be a _generic_ function for general-purpose
architectures. There may be people who want a 1GB page on a host with
tons of free memory.
Thanks,
-Kame
Thread overview: 22+ messages
2010-11-19 8:10 KAMEZAWA Hiroyuki
2010-11-19 8:12 ` [PATCH 1/4] alloc_contig_pages() move some functions to page_isolation.c KAMEZAWA Hiroyuki
2010-11-21 15:07 ` Minchan Kim
2010-11-19 8:14 ` [PATCH 2/4] alloc_contig_pages() find appropriate physical memory range KAMEZAWA Hiroyuki
2010-11-21 15:21 ` Minchan Kim
2010-11-22 0:11 ` KAMEZAWA Hiroyuki
2010-11-22 11:20 ` Minchan Kim
2010-11-24 0:15 ` KAMEZAWA Hiroyuki
2010-11-19 8:15 ` [PATCH 3/4] alloc_contig_pages() allocate big chunk memory using migration KAMEZAWA Hiroyuki
2010-11-21 15:25 ` Minchan Kim
2010-11-22 0:13 ` KAMEZAWA Hiroyuki
2010-11-22 11:44 ` Minchan Kim
2010-11-24 0:20 ` KAMEZAWA Hiroyuki
2010-11-19 8:16 ` [PATCH 4/4] alloc_contig_pages() use better allocation function for migration KAMEZAWA Hiroyuki
2010-11-22 12:01 ` Minchan Kim
2010-11-19 20:56 ` [PATCH 0/4] big chunk memory allocator v4 Andrew Morton
2010-11-22 0:04 ` KAMEZAWA Hiroyuki
2010-11-23 15:46 ` Michał Nazarewicz
2010-11-24 0:36 ` KAMEZAWA Hiroyuki [this message]
2010-11-22 0:30 ` Felipe Contreras
2010-11-22 8:59 ` Kleen, Andi
2010-11-23 15:44 ` Michał Nazarewicz