* [PATCH v11 00/15] HMM (Heterogeneous Memory Management)
@ 2015-11-04 21:16 Bing Yu
From: Bing Yu @ 2015-11-04 21:16 UTC
To: akpm; +Cc: torvalds, linux-mm, linux-kernel, j.glisse
Mediatek would like to see HMM upstream as we are interested in using it for future hardware.
Does anyone know when this will happen?
Regards,
Bing
* [PATCH v11 00/15] HMM (Heterogeneous Memory Management)
@ 2015-10-21 20:59 Jérôme Glisse
2015-10-25 10:00 ` Haggai Eran
From: Jérôme Glisse @ 2015-10-21 20:59 UTC
To: akpm, linux-kernel, linux-mm
Cc: Linus Torvalds, joro, Mel Gorman, H. Peter Anvin, Peter Zijlstra,
Andrea Arcangeli, Johannes Weiner, Larry Woodman, Rik van Riel,
Dave Airlie, Brendan Conoboy, Joe Donohue, Christophe Harle,
Duncan Poole, Sherry Cheung, Subhash Gutti, John Hubbard,
Mark Hairgrove, Lucien Dunning, Cameron Buschardt,
Arvind Gopalakrishnan, Haggai Eran, Shachar Raindel, Liran Liss,
Roland Dreier, Ben Sander, Greg Stoner, John Bridgman,
Michael Mantor, Paul Blinzer, Leonid Shamis, Laurent Morichetti,
Alexander Deucher, Linda Wang, Kevin E Martin, Jeff Law,
Or Gerlitz, Sagi Grimberg, Aneesh Kumar K.V
Minor fixes since the last post (1); applies on top of 4.3-rc6.
Please consider applying. Tree with the patchset:
git://people.freedesktop.org/~glisse/linux hmm-v11 branch
HMM (Heterogeneous Memory Management) is a helper layer
for device drivers; its main features are:
- Shadow the CPU page table of a process into a device-specific
format page table and keep both page tables synchronized.
- Handle DMA mapping of system RAM pages on behalf of the device
(for shadowed page table entries).
- Migrate private anonymous memory to private device memory
and handle CPU page faults (which trigger a migration back
to system memory so the CPU can access it).
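To make the DMA mapping item above concrete, here is a minimal
sketch of what mapping a system RAM page on behalf of the device
amounts to; only dma_map_page() and dma_mapping_error() are stock
kernel API, the example_* names are made up for illustration:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/mm.h>

/* Hypothetical helper: map one system RAM page so the device can DMA
 * to/from it; the resulting bus address is what would be written into
 * the shadowed (device) page table entry. */
static int example_map_for_device(struct device *dev, struct page *page,
                                  dma_addr_t *dma_out)
{
        dma_addr_t dma;

        dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, dma))
                return -ENOMEM;

        *dma_out = dma;
        return 0;
}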
Benefits of HMM:
- Avoid the current model where device drivers have to pin pages,
which blocks several kernel features (KSM, migration, ...).
- No impact on existing workloads that do not use HMM (it only
adds a couple more if() checks to common code paths).
- Intended as common infrastructure for several different
hardware vendors, as of today Mellanox and NVidia.
- Allow userspace APIs to move away from the explicit copy code
path where the application programmer has to manually manage
memcpy to and from device memory.
- Transparent to userspace, for instance allowing a library to
use the GPU without involving the application linked against it.
I expect other hardware companies to express interest in HMM and
eventually start using it with their new hardware. I give a more
in-depth motivation after the change log.
Change log:
v11:
- Fix PROT_NONE case.
- Fix missing page table walk callback.
- Add support for hugetlbfs.
v10:
- Minor fixes here and there.
v9:
- Added new device driver helpers.
- Added documentation.
- Improved page table code clarity (minor architectural changes
and better names).
v8:
- Removed currently unused fence code.
- Added DMA mapping on behalf of device.
v7:
- Redone and simplified the page table code to match Linus' suggestion
http://article.gmane.org/gmane.linux.kernel.mm/125257
... Lost in translation ...
Why do this?
Mirroring a process address space is mandatory with OpenCL 2.0 and
with other GPU compute APIs. OpenCL 2.0 allows different levels of
implementation and currently only the lowest two are supported on
Linux. To implement the highest level, where CPU and GPU accesses
can happen concurrently and are cache coherent, HMM is needed, or
something providing the same functionality, for instance through
platform hardware.
Hardware solutions such as PCIe ATS/PASID are limited to mirroring
system memory and do not provide a way to migrate memory to device
memory (which offers significantly more bandwidth, up to 10 times
faster than regular system memory with a discrete GPU, and also has
lower latency than PCIe transactions).
Current CPUs with a GPU on the same die (AMD or Intel) use ATS/PASID,
and for Intel a special level of cache (backed by a large pool of
fast memory).
For the foreseeable future, discrete GPUs will remain relevant as
they can have a larger quantity of faster memory than integrated
GPUs. Thus we believe HMM will allow us to leverage discrete GPU
memory in a fashion transparent to the application, with minimal
disruption to the Linux kernel mm code. Also, HMM can work alongside
hardware solutions such as PCIe ATS/PASID (leaving the regular case
to ATS/PASID while HMM handles the migrated memory case).
Design:
Patches 1, 2, 3 and 4 augment the mmu notifier API with new
information to more efficiently mirror CPU page table updates.
The first side of HMM, process address space mirroring, is
implemented in patches 5 through 14. This uses a secondary page
table, in which HMM mirrors the memory actively used by the device.
HMM does not take a reference on any of the pages; it uses the
mmu notifier API to track changes to the CPU page table and to
update the mirror page table, all while providing a simple API to
the device driver, roughly along the lines sketched below.
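A minimal sketch of that tracking, using the stock mmu_notifier API
as it exists around 4.3 rather than the augmented callbacks this
series adds; only mmu_notifier_register() and the
invalidate_range_start() callback are real kernel API (assuming
CONFIG_MMU_NOTIFIER), while the example_* names and the
example_mirror structure are hypothetical:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct example_mirror {
        struct mmu_notifier notifier;
        /* device page table / shadow state would live here */
};

/* Hypothetical helper: drop device page table entries covering
 * [start, end) and stop the device from using those translations. */
static void example_mirror_invalidate(struct example_mirror *mirror,
                                      unsigned long start,
                                      unsigned long end)
{
        /* ... device-specific invalidation would go here ... */
}

static void example_invalidate_range_start(struct mmu_notifier *mn,
                                           struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end)
{
        struct example_mirror *mirror =
                container_of(mn, struct example_mirror, notifier);

        /* The CPU page table entries for [start, end) are about to
         * change or go away; invalidate the shadow entries first so
         * the device cannot keep using stale translations. */
        example_mirror_invalidate(mirror, start, end);
}

static const struct mmu_notifier_ops example_mirror_ops = {
        .invalidate_range_start = example_invalidate_range_start,
};

/* Hypothetical registration: ties the mirror to a process mm so the
 * callback above fires on CPU page table updates. */
static int example_mirror_register(struct example_mirror *mirror,
                                   struct mm_struct *mm)
{
        mirror->notifier.ops = &example_mirror_ops;
        return mmu_notifier_register(&mirror->notifier, mm);
}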
To implement this we use a "generic" page table and not a radix
tree, because we need to store more flags than the radix tree allows
and we need to store DMA addresses (sizeof(dma_addr_t) > sizeof(long)
on some platforms). The sketch below illustrates the kind of entry
that has to be stored.
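A hedged sketch of such an entry (a hypothetical layout for
illustration, not the patchset's actual one): a radix tree slot
holds a single unsigned long per index, which cannot hold a
dma_addr_t plus flag bits when dma_addr_t is wider than long
(e.g. 64-bit DMA addresses on a 32-bit kernel):

#include <linux/types.h>

/* Hypothetical flag bits for a mirror page table entry. */
#define EXAMPLE_PTE_VALID       (1UL << 0)
#define EXAMPLE_PTE_WRITE       (1UL << 1)
#define EXAMPLE_PTE_DIRTY       (1UL << 2)
#define EXAMPLE_PTE_DEVICE      (1UL << 3)      /* backed by device memory */

/* One mirror entry: the DMA/bus address the device uses plus flags.
 * This does not fit in the single unsigned long a radix tree slot
 * provides when sizeof(dma_addr_t) > sizeof(long). */
struct example_mirror_pte {
        dma_addr_t      dma;
        unsigned long   flags;
};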
(1) Previous patchset postings:
v1 http://lwn.net/Articles/597289/
v2 https://lkml.org/lkml/2014/6/12/559
v3 https://lkml.org/lkml/2014/6/13/633
v4 https://lkml.org/lkml/2014/8/29/423
v5 https://lkml.org/lkml/2014/11/3/759
v6 http://lwn.net/Articles/619737/
v7 http://lwn.net/Articles/627316/
v8 https://lwn.net/Articles/645515/
v9 https://lwn.net/Articles/651553/
v10 https://lwn.net/Articles/654430/
Cheers,
Jérôme
* Re: [PATCH v11 00/15] HMM (Heterogeneous Memory Management)
2015-10-21 20:59 Jérôme Glisse
@ 2015-10-25 10:00 ` Haggai Eran
From: Haggai Eran @ 2015-10-25 10:00 UTC
To: Jérôme Glisse, akpm, linux-kernel, linux-mm
Cc: Linus Torvalds, joro, Mel Gorman, H. Peter Anvin, Peter Zijlstra,
Andrea Arcangeli, Johannes Weiner, Larry Woodman, Rik van Riel,
Dave Airlie, Brendan Conoboy, Joe Donohue, Christophe Harle,
Duncan Poole, Sherry Cheung, Subhash Gutti, John Hubbard,
Mark Hairgrove, Lucien Dunning, Cameron Buschardt,
Arvind Gopalakrishnan, Shachar Raindel, Liran Liss,
Roland Dreier, Ben Sander, Greg Stoner, John Bridgman,
Michael Mantor, Paul Blinzer, Leonid Shamis, Laurent Morichetti,
Alexander Deucher, Linda Wang, Kevin E Martin, Jeff Law,
Or Gerlitz, Sagi Grimberg, Aneesh Kumar K.V
On 21/10/2015 23:59, Jérôme Glisse wrote:
> HMM (Heterogeneous Memory Management) is a helper layer
> for device drivers; its main features are:
> - Shadow the CPU page table of a process into a device-specific
> format page table and keep both page tables synchronized.
> - Handle DMA mapping of system RAM pages on behalf of the device
> (for shadowed page table entries).
> - Migrate private anonymous memory to private device memory
> and handle CPU page faults (which trigger a migration back
> to system memory so the CPU can access it).
>
> Benefits of HMM:
> - Avoid the current model where device drivers have to pin pages,
> which blocks several kernel features (KSM, migration, ...).
> - No impact on existing workloads that do not use HMM (it only
> adds a couple more if() checks to common code paths).
> - Intended as common infrastructure for several different
> hardware vendors, as of today Mellanox and NVidia.
> - Allow userspace APIs to move away from the explicit copy code
> path where the application programmer has to manually manage
> memcpy to and from device memory.
> - Transparent to userspace, for instance allowing a library to
> use the GPU without involving the application linked against it.
>
> I expect other hardware companies to express interest in HMM and
> eventually start using it with their new hardware. I give a more
> in-depth motivation after the change log.
The RDMA stack has had IO paging support since kernel v4.0, using the
mmu_notifier APIs to interface with the mm subsystem. As one may expect,
it allows RDMA applications to decrease the amount of memory that needs
to be pinned, and allows the kernel to better allocate physical memory.
HMM looks like a better API than mmu_notifiers for that purpose, as it
allows sharing more code. It handles internally things that any similar
driver or subsystem would need to do, such as synchronization between
page fault events and invalidations, and DMA-mapping pages for device
use. It looks like it can be extended to also assist in device peer to
peer memory mapping, to allow capable devices to transfer data directly
without CPU intervention.
Regards,
Haggai