From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: Michal Hocko <mhocko@kernel.org>,
Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org, vbabka@suse.cz,
minchan@kernel.org, aneesh.kumar@linux.vnet.ibm.com,
bsingharora@gmail.com, srikar@linux.vnet.ibm.com,
haren@linux.vnet.ibm.com, jglisse@redhat.com,
dave.hansen@intel.com, dan.j.williams@intel.com
Subject: Re: [PATCH V3 0/4] Define coherent device memory node
Date: Thu, 23 Feb 2017 12:22:40 +0530
Message-ID: <a69556b2-7273-108b-3ec1-ccbce468cf1c@linux.vnet.ibm.com>
In-Reply-To: <20170222095043.GG5753@dhcp22.suse.cz>
On 02/22/2017 03:20 PM, Michal Hocko wrote:
> On Tue 21-02-17 19:09:18, Anshuman Khandual wrote:
>> On 02/21/2017 04:41 PM, Michal Hocko wrote:
>>> On Fri 17-02-17 17:11:57, Anshuman Khandual wrote:
>>> [...]
>>>> * User space using mbind() to get CDM memory is an additional benefit
>>>> we get by making the CDM plug in as a node and be part of the buddy
>>>> allocator. But the over all idea from the user space point of view
>>>> is that the application can allocate any generic buffer and try to
>>>> use the buffer either from the CPU side or from the device without
>>>> knowing about where the buffer is really mapped physically. That
>>>> gives a seamless and transparent view to the user space where CPU
>>>> compute and possible device based compute can work together. This
>>>> is not possible through a driver allocated buffer.
>>>
>>> But how are you going to define any policy around that. Who is allowed
>>
>> The user space VMA can define the policy with a mbind(MPOL_BIND) call
>> with CDM/CDMs in the nodemask.
>>
>>> to allocate and how much of this "special memory". Is it possible that
>>
>> Any user space application with mbind(MPOL_BIND) call with CDM/CDMs in
>> the nodemask can allocate from the CDM memory. "How much" gets controlled
>> by how we fault from CPU and the default behavior of the buddy allocator.
>
> In other words the policy is implemented by the kernel. Why is this a
> good thing?
It's controlled by the kernel only during the page fault paths, whether the
fault comes from the CPU or from the device. The device driver does the
actual placement afterwards, taking access patterns and relative performance
into account. We don't want the driver to be involved in page-fault-path
memory allocations; those should naturally go through the buddy allocator.
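
To make the user space side concrete, a minimal sketch of the
mbind(MPOL_BIND) usage mentioned above could look like the following
(assuming, purely for illustration, that the CDM comes up as NUMA node 1;
alloc_cdm_buffer() is just a name for the example, not something from the
series):

/* Minimal user space sketch: bind a buffer's VMA to a CDM node with
 * mbind(MPOL_BIND). Assumes the coherent device memory is online as
 * NUMA node 1 -- the node number is purely illustrative.
 * Link with -lnuma. */
#include <numaif.h>
#include <sys/mman.h>
#include <stddef.h>

#define CDM_NODE 1UL

static void *alloc_cdm_buffer(size_t size)
{
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	unsigned long nodemask = 1UL << CDM_NODE;

	if (buf == MAP_FAILED)
		return NULL;

	/* No pages are allocated here; the policy only constrains the
	 * buddy allocator when the CPU or the device faults on the
	 * buffer later on. */
	if (mbind(buf, size, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0)) {
		munmap(buf, size);
		return NULL;
	}
	return buf;
}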
>
>>> we will eventually need some access control mechanism? If yes then mbind
>>
>> No access control mechanism is needed. If an application wants to use
>> CDM memory by specifying in the mbind() it can. Nothing prevents it
>> from using the CDM memory.
>
> What if we find out that an access control _is_ really needed? I can
> easily imagine that some devices will come up with really fast and expensive
> memory. You do not want some random user to steal it from you when you
> want to use it for your workload.
Hmm, it makes sense, but I think it's not something we have to deal with
right away. Later we may have to think about a generic access control
mechanism for mbind() and then accommodate CDM within it.
>
>>> is really not suitable interface to (ab)use. Also what should happen if
>>> the mbind mentions only CDM memory and that is depleted?
>>
>> IIUC *only CDM* cannot be requested from user space as there are no user
>> visible interface which can translate to __GFP_THISNODE.
>
> I do not understand what __GFP_THISNODE has to do with this. This is an
> internal flag.
Right, my bad. I was just referring to the fact that there is nothing in
user space which can make the buddy allocator pick the NOFALLBACK zonelist
instead of the FALLBACK one.
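
For reference, the selection roughly looks like this in include/linux/gfp.h
(paraphrased sketch, not verbatim from any particular release):

/* Paraphrased sketch of the kernel's zonelist selection; only the
 * kernel-internal __GFP_THISNODE flag picks the NOFALLBACK zonelist,
 * and nothing in the user space policy interface maps to it. */
static inline int gfp_zonelist(gfp_t flags)
{
	if (IS_ENABLED(CONFIG_NUMA) && unlikely(flags & __GFP_THISNODE))
		return 1;		/* NOFALLBACK zonelist */
	return 0;			/* FALLBACK zonelist */
}

static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
{
	return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
}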
>
>> MPOL_BIND with
>> CDM in the nodemask will eventually pick a FALLBACK zonelist which will
>> have zones of the system including CDM ones. If the resultant CDM zones
>> run out of memory, we fail the allocation request as usual.
>
> OK, so let's say you mbind to a single node which is CDM. You seem to be
> saying that we will simply break the NUMA affinity in this special case?
Why? It should simply follow what happens today when an application binds
to a single regular NUMA node.
> Currently we invoke the OOM killer if nodes which the application binds
> to are depleted and cannot be reclaimed.
Right, the same should happen here for CDM as well.
>
>>> Could you also explain why the transparent view is really better than
>>> using a device specific mmap (aka CDM awareness)?
>>
>> Okay with a transparent view, we can achieve a control flow of application
>> like the following.
>>
>> (1) Allocate a buffer: alloc_buffer(buf, size)
>> (2) CPU compute on buffer: cpu_compute(buf, size)
>> (3) Device compute on buffer: device_compute(buf, size)
>> (4) CPU compute on buffer: cpu_compute(buf, size)
>> (5) Release the buffer: release_buffer(buf, size)
>>
>> With assistance from a device specific driver, the actual page mapping of
>> the buffer can change between system RAM and device memory depending on
>> which side is accessing at a given point. This will be achieved through
>> driver initiated migrations.
>
> But then you do not need any NUMA affinity, right? The driver can do
> all this automagically. How does the numa policy comes into the game in
> your above example. Sorry for being dense, I might be really missing
> something important here, but I really fail to see why the NUMA is the
> proper interface here.
You are right. The driver can migrate any mapping in user space to anywhere
on the system as long as the cpuset does not prohibit it. But we still want
the driver to conform to the applicable VMA memory policy set from user
space; hence a VMA policy needs to be set there. The NUMA VMA memory policy
also restricts allocations to the applicable nodemask during the page fault
paths (CPU and device) as well.
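
As an illustration of that last point, the application (or a test) can
observe where the pages of such a buffer currently reside, e.g. after the
driver has migrated part of it, using move_pages() with a NULL target node
list (a sketch only, not something from the series; dump_placement() is a
made-up helper name):

/* Sketch: report the current node of every page in a buffer. With a
 * NULL 'nodes' argument move_pages() does not migrate anything; it
 * only fills 'status' with the node of each page (negative values
 * are errors, e.g. -ENOENT for pages not populated yet).
 * Link with -lnuma. */
#include <numaif.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

static void dump_placement(void *buf, size_t size)
{
	long page_size = sysconf(_SC_PAGESIZE);
	unsigned long count = size / page_size;
	void **pages = calloc(count, sizeof(*pages));
	int *status = calloc(count, sizeof(*status));
	unsigned long i;

	for (i = 0; i < count; i++)
		pages[i] = (char *)buf + i * page_size;

	if (move_pages(0, count, pages, NULL, status, 0) == 0)
		for (i = 0; i < count; i++)
			printf("page %lu -> node %d\n", i, status[i]);

	free(pages);
	free(status);
}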
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>