From: Boaz Harrosh <boaz@plexistor.com>
To: Toshi Kani <toshi.kani@hp.com>, Boaz Harrosh <openosd@gmail.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Matthew Wilcox <willy@linux.intel.com>,
Yigal Korman <yigal@plexistor.com>
Subject: Re: [RFC 9/9] prd: Add support for page struct mapping
Date: Tue, 19 Aug 2014 11:40:17 +0300 [thread overview]
Message-ID: <53F30D71.9010107@plexistor.com> (raw)
In-Reply-To: <1408391280.26567.79.camel@misato.fc.hp.com>
On 08/18/2014 10:48 PM, Toshi Kani wrote:
> On Sun, 2014-08-17 at 12:17 +0300, Boaz Harrosh wrote:
<>
>> "System RAM" it is not.
>
> I think add_memory() can be easily extended (or modified to provide a
> separate interface) for persistent memory, avoiding the new sysfs
> interface and the changes to the firmware_map handling. But I can also
> see your point that persistent memory should not be added to a zone at all.
>
Right
> Anyway, I am a bit concerned about the way direct mappings are created with
> map_vm_area() within the prd driver. Can we use init_memory_mapping()
> as it is used by add_memory(), which supports large page sizes? The size of
> persistent memory will grow quickly.
A bit about large page sizes: the principal reason for my effort here is
that at some stage I need to send pmem blocks to the block layer or the
network. The assumption that PAGE_SIZE == 4K is pasted all over the block
stack. Do you know how those can work together? Will we need some kind of
page_split mechanism, and how would that work?
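
To illustrate where the 4K assumption bites (purely an illustration, not
code from this patchset; pmem_add_to_bio() is a made-up helper name): the
block layer describes I/O as bio_vec segments, each built on a struct page
plus offset/len, so whatever we map with large pages still gets fed to it
one PAGE_SIZE piece at a time, e.g.:

	#include <linux/bio.h>

	/* illustration only: submit one pmem page into a bio */
	static int pmem_add_to_bio(struct bio *bio, struct page *page)
	{
		/* bio_add_page() returns the number of bytes added, 0 on failure */
		if (bio_add_page(bio, page, PAGE_SIZE, 0) != PAGE_SIZE)
			return -EIO;
		return 0;
	}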
> Also, I'd prefer to have an mm
> interface that takes care of page allocations and mappings, and avoid a
> driver to deal with them.
>
This is a great idea. You mean that I define something like:

	int mm_add_page_mapping(phys_addr_t phys_addr, size_t total_size,
				void **o_virt_addr)

at the mm level? OK, it needs a much better name. Actually I know of two
more drivers that will need the same interface, so you are absolutely
right. I didn't dare ask ;-)
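
To spell out what I have in mind (a sketch only, every name here is a
placeholder and the prd field names are guesses): there would also be a
matching teardown, and a driver such as prd would then do no more than:

	/* hypothetical mm-level counterpart to the add above */
	void mm_remove_page_mapping(phys_addr_t phys_addr, size_t total_size);

	/* hypothetical driver usage */
	static int prd_map_region(struct prd_device *prd)
	{
		void *virt;
		int err;

		/* mm allocates the page structs and sets up the mapping */
		err = mm_add_page_mapping(prd->phys_addr, prd->size, &virt);
		if (err)
			return err;

		prd->virt_addr = virt;
		return 0;
	}

That way all the section/page-struct handling stays inside mm and the
driver only ever sees a virtual address.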
>> And also I think that for DDR4 NvDIMMs we will fail with:
>> ret = check_hotplug_memory_range(start, size);
>>
>
> Can you elaborate why DDR4 will fail with the function above?
>
I'm not at all familiar with the details, perhaps the Intel guys who know
better can chime in, but from the little I understood: today, with DDR3,
these chips show up in the e820 table as type-12 memory, and each vendor
ships a driver for its proprietary enablement and persistence.
With DDR4 it will all be standardized, but it will not come up through
e820 at all; it will show up as a separate device on the SMBus/ACPI.
So it is not clear to me that we want to plug this back into the arch's
memory controllers. Is check_hotplug_memory_range() per arch?
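
For reference, my (possibly wrong) reading of mm/memory_hotplug.c is that
the check lives in generic mm code rather than per arch, and only insists
on section alignment; roughly paraphrased:

	/* rough paraphrase of check_hotplug_memory_range() as I read it */
	static int check_hotplug_memory_range(u64 start, u64 size)
	{
		u64 start_pfn = PFN_DOWN(start);
		u64 nr_pages = size >> PAGE_SHIFT;

		/* the whole range must be non-empty and section aligned */
		if ((start_pfn & ~PAGE_SECTION_MASK) ||
		    (nr_pages % PAGES_PER_SECTION) || !nr_pages)
			return -EINVAL;

		return 0;
	}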
> Thanks,
> -Toshi
>
>
I will produce a new patchset that introduces a new API for drivers, and I
will also look into using init_memory_mapping(), as long as it does not
involve zones.

Do you think the new code should sit in mm/memory_hotplug.c?
Thanks
Boaz