From: John Hubbard <jhubbard@nvidia.com>
To: Dan Williams <dan.j.williams@intel.com>,
Jerome Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Linux MM <linux-mm@kvack.org>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
David Nellans <dnellans@nvidia.com>,
Ross Zwisler <ross.zwisler@linux.intel.com>
Subject: Re: [HMM 03/15] mm/unaddressable-memory: new type of ZONE_DEVICE for unaddressable memory
Date: Sun, 23 Apr 2017 17:39:12 -0700 [thread overview]
Message-ID: <f88de491-1cd2-75e1-4304-dc11c96b5d2a@nvidia.com> (raw)
In-Reply-To: <CAPcyv4jr=CNuaGQt80SwR5dpiXy_pDr8aD-w0EtLNE4oGC8WcQ@mail.gmail.com>
On 4/23/17 6:13 AM, Dan Williams wrote:
> On Sat, Apr 22, 2017 at 11:11 AM, Jerome Glisse <jglisse@redhat.com> wrote:
>> On Fri, Apr 21, 2017 at 10:30:01PM -0700, Dan Williams wrote:
>>> On Fri, Apr 21, 2017 at 8:30 PM, Jérôme Glisse <jglisse@redhat.com> wrote:
>>
>> [...]
>>
>>>> +/*
>>>> + * Specialize ZONE_DEVICE memory into multiple types, each having a
>>>> + * different usage.
>>>> + *
>>>> + * MEMORY_DEVICE_PERSISTENT:
>>>> + * Persistent device memory (pmem): struct page might be allocated in different
>>>> + * memory and architecture might want to perform special actions. It is similar
>>>> + * to regular memory, in that the CPU can access it transparently. However,
>>>> + * it is likely to have different bandwidth and latency than regular memory.
>>>> + * See Documentation/nvdimm/nvdimm.txt for more information.
>>>> + *
>>>> + * MEMORY_DEVICE_UNADDRESSABLE:
>>>> + * Device memory that is not directly addressable by the CPU: CPU can neither
>>>> + * read nor write _UNADDRESSABLE memory. In this case, we do still have struct
>>>> + * pages backing the device memory. Doing so simplifies the implementation, but
>>>> + * it is important to remember that there are certain points at which the struct
>>>> + * page must be treated as an opaque object, rather than a "normal" struct page.
>>>> + * A more complete discussion of unaddressable memory may be found in
>>>> + * include/linux/hmm.h and Documentation/vm/hmm.txt.
>>>> + */
>>>> +enum memory_type {
>>>> + MEMORY_DEVICE_PERSISTENT = 0,
>>>> + MEMORY_DEVICE_UNADDRESSABLE,
>>>> +};
>>>
>>> Ok, this is a bikeshed, but I think it is important. I think these
>>> should be called MEMORY_DEVICE_PUBLIC and MEMORY_DEVICE_PRIVATE. The
>>> reason is that persistence has nothing to do with the code paths that
>>> deal with the pmem use case of ZONE_DEVICE. The only property the mm
>>> cares about is that the address range behaves the same as host memory
>>> for dma and cpu accesses. The "unaddressable" designation always
>>> confuses me because a memory range isn't memory if it's
>>> "unaddressable". It is addressable, it's just "private" to the device.
>>
>> I can change the name, but the memory is truly unaddressable: the CPU
>> cannot access it whatsoever (well, it can access a small window, but
>> even that is not guaranteed).
>>
>
> Understood, but that's still "addressable only by certain agents or
> through a proxy" which seems closer to "private" to me.
>
Actually, MEMORY_DEVICE_PRIVATE / _PUBLIC seems like a good choice to
me, because the memory may not remain CPU-unaddressable in the future.
By that, I mean that I know of at least one company (ours) that is
working on products that will support hardware-based memory coherence
(and access counters to go along with it). If someone were to enable
HMM on such a system, then the device memory would in fact be directly
addressable by the CPU, exactly contradicting the "unaddressable" name.

Yes, it is true that we would have to modify HMM anyway in order to
work in that situation, partly because HMM today relies on CPU and
device page faults in order to work. And it is also true that we might
want to take a different approach than HMM to support that kind of
device: for example, making it a NUMA node has been debated here,
recently.

But even so, I think the potential for "unaddressable" memory to
actually become "addressable" someday is a good argument for choosing
a different name.
thanks,
--
John Hubbard
NVIDIA