From: Johan MOSSBERG <johan.xx.mossberg@stericsson.com>
To: "Michał Nazarewicz" <m.nazarewicz@samsung.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: RE: [PATCH 0/3] hwmem: Hardware memory driver
Date: Tue, 16 Nov 2010 17:16:23 +0100 [thread overview]
Message-ID: <C832F8F5D375BD43BFA11E82E0FE9FE0081BE73A1D@EXDCVYMBSTM005.EQ1STM.local> (raw)
In-Reply-To: <op.vl9r6xld7p4s8u@pikus>
Michał Nazarewicz wrote:
> > I mean the ability to move allocated buffers to free more
> > contiguous space. To support this in CMA the API(s) would have to
> > change.
> > * A buffer's physical address cannot be used to identify it as the
> > physical address can change.
> > * Pin/unpin functions would have to be added so that you can pin a
> > buffer when hardware uses it.
> > * The allocators need to be able to inform CMA when they have
> > moved a buffer, so that CMA can keep track of which memory is
> > free and can lend that free memory to the kernel for temporary
> > use.
>
> I don't think those are fundamentally against CMA and as such I see
> no reason why such calls could not be added to CMA. Allocators that
> do not support defragmentation could just ignore those calls.
Sounds good.
> In particular, a cma_alloc() could return a pointer to an opaque
> struct cma and to get physical address user would have to pin the
> buffer with, say, cma_pin() and then call cma_phys() to obtain
> physical address.
I think cma_phys() is redundant; cma_pin() can return the physical
address directly. That is how we did it in hwmem.
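
To make the shape concrete, something along these lines is what I am
picturing (a rough sketch only; cma_alloc() and the opaque struct cma
come from your proposal, but the exact signatures and the pin/unpin
names here are just illustrative):

  #include <linux/types.h>

  struct cma;  /* opaque buffer handle; may be relocated while unpinned */

  struct cma *cma_alloc(size_t size, unsigned int alignment);
  void cma_free(struct cma *buf);

  /*
   * Pin the buffer for hardware use and return its current physical
   * address, so a separate cma_phys() accessor is not needed.
   */
  phys_addr_t cma_pin(struct cma *buf);

  /* Allow the allocator to relocate the buffer again. */
  void cma_unpin(struct cma *buf);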
> I'm only wondering if treating "unpin" as "free" and pin as another
> "alloc" would not suffice?
I don't understand. Wouldn't you lose all the data in the buffer
when you free it? How would we handle something like the desktop
image, which is blitted to the display all the time but never
changes? We would have to keep a scattered version and copy it
into a temporary contiguous buffer before each blit, which is not
optimal performance-wise. The other alternative would be to keep
the allocation, but then we would get fragmentation problems.
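
To make the desktop image case concrete, the pattern we need is
roughly the following (illustrative only; display_blit() and the
surrounding structure are hypothetical, not hwmem or CMA code):

  /* Allocated once at startup and reused for every frame. */
  static struct cma *desktop_image;

  void blit_desktop(struct display *disp)
  {
          /* Pin only for the duration of the hardware operation. */
          phys_addr_t paddr = cma_pin(desktop_image);

          display_blit(disp, paddr);

          /*
           * Unpin so the allocator may relocate the buffer to
           * defragment; the contents must survive the unpin for
           * this to work, which is why unpin cannot mean free.
           */
          cma_unpin(desktop_image);
  }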
/Johan Mossberg