* [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
@ 2024-07-10 14:09 Lei Liu
2024-07-10 14:09 ` [PATCH 1/2] mm: dmabuf_direct_io: Support direct_io " Lei Liu
2024-07-10 14:09 ` [PATCH 2/2] mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated memory with direct_io support Lei Liu
0 siblings, 2 replies; 12+ messages in thread
From: Lei Liu @ 2024-07-10 14:09 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
Andrei Vagin, Ryan Roberts, Peter Xu, Kefeng Wang, linux-media,
dri-devel, linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm
Cc: opensource.kernel, Lei Liu
Use vm_insert_page to establish a mapping for the memory allocated
by dmabuf, thus supporting direct I/O read and write; and fix the
issue of incorrect memory statistics after mapping dmabuf memory.
Lei Liu (2):
mm: dmabuf_direct_io: Support direct_io for memory allocated by dmabuf
mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated
memory with direct_io support
drivers/dma-buf/heaps/system_heap.c | 5 +++--
fs/proc/task_mmu.c | 8 +++++++-
include/linux/mm.h | 1 +
mm/memory.c | 15 ++++++++++-----
mm/rmap.c | 9 +++++----
5 files changed, 26 insertions(+), 12 deletions(-)
--
2.34.1
^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] mm: dmabuf_direct_io: Support direct_io for memory allocated by dmabuf
  2024-07-10 14:09 [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf Lei Liu
@ 2024-07-10 14:09 ` Lei Liu
  2024-07-10 14:09 ` [PATCH 2/2] mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated memory with direct_io support Lei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Lei Liu @ 2024-07-10 14:09 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
	T.J. Mercier, Christian König, Andrew Morton,
	David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
	Andrei Vagin, Ryan Roberts, Peter Xu, Kefeng Wang, linux-media,
	dri-devel, linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm
Cc: opensource.kernel, Lei Liu

1. Effect of the missing support, and the reason for it:

Memory allocated by dmabuf currently cannot be read from files with
direct I/O. With the growing use of on-device AI models in mobile
applications, there is an increasing need to load model files of up to
3-4GB into memory. At present the only option is buffered I/O, which
limits performance. In low-memory scenarios on a 12GB smartphone,
buffered I/O needs additional page-cache memory, leading to a 3-4x
degradation in read performance with significant fluctuations.

Direct I/O reads fail because the kernel currently maps memory
allocated by dmabuf with remap_pfn_range, which sets the VM_PFNMAP
flag on the VMA. When a direct I/O read is attempted, the
get_user_pages path rejects VMAs carrying VM_PFNMAP, so no pages are
returned and the read fails.

2. Proposed solution:

(1) Establish the mmap mapping for memory allocated by dmabuf with
    vm_insert_page instead, so that direct I/O reads and writes work.

3. Advantages and benefits:

(1) Faster and more stable read speed.
(2) Reduced page-cache memory usage.
(3) Less CPU data copying and unnecessary power consumption.

4. Benchmark: time to read a 3.2GB AI model file on a phone with 16GB
   of memory using buffered I/O vs. direct I/O, once on a clean system
   and once with stressapptest consuming 4GB of memory:

   Memstress  Rounds   DIO-Time/ms   BIO-Time/ms
              01       1432          2034
   Clean      02       1406          2225
              03       1476          2097
   average             1438          2118

   Memstress  Rounds   DIO-Time/ms   BIO-Time/ms
              01       1585          4821
   Eat 4GB    02       1560          4957
              03       1519          4936
   average             1554          4905

Signed-off-by: Lei Liu <liulei.rjpt@vivo.com>
---
 drivers/dma-buf/heaps/system_heap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 9076d47ed2ef..87547791f9e1 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -203,8 +203,7 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
 
-		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
-				      vma->vm_page_prot);
+		ret = vm_insert_page(vma, addr, page);
 		if (ret)
 			return ret;
 		addr += PAGE_SIZE;
-- 
2.34.1

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 2/2] mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated memory with direct_io support
  2024-07-10 14:09 [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf Lei Liu
  2024-07-10 14:09 ` [PATCH 1/2] mm: dmabuf_direct_io: Support direct_io " Lei Liu
@ 2024-07-10 14:09 ` Lei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Lei Liu @ 2024-07-10 14:09 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
	T.J. Mercier, Christian König, Andrew Morton,
	David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
	Andrei Vagin, Ryan Roberts, Peter Xu, Kefeng Wang, linux-media,
	dri-devel, linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm
Cc: opensource.kernel, Lei Liu

Establishing the mmap mapping for memory allocated by dmabuf with
vm_insert_page changes how that memory is accounted for, in three
ways:

(1) dmabuf memory usage is counted in mm->rss.
(2) /proc/self/smaps accounts for the dmabuf memory usage.
(3) dmabuf memory mapped with mmap is counted as Mapped in
    /proc/meminfo.

Add a VM_DMABUF_DIO vma flag and skip the affected counters for
mappings that carry it, so that memory allocated by dmabuf with
direct_io support keeps its previous accounting behaviour.

Signed-off-by: Lei Liu <liulei.rjpt@vivo.com>
---
 drivers/dma-buf/heaps/system_heap.c |  2 ++
 fs/proc/task_mmu.c                  |  8 +++++++-
 include/linux/mm.h                  |  1 +
 mm/memory.c                         | 15 ++++++++++-----
 mm/rmap.c                           |  9 +++++----
 5 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 87547791f9e1..1d6f08b1dc5b 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -200,6 +200,8 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	struct sg_page_iter piter;
 	int ret;
 
+	vm_flags_set(vma, VM_DMABUF_DIO);
+
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
 
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 71e5039d940d..8070fdd4ac7b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -784,7 +784,13 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 	/* Invalid start */
 	if (start >= vma->vm_end)
 		return;
-
+	/*
+	 * DMABUF memory is mmapped with vm_insert_page in order to support
+	 * direct_io, so it does not carry the VM_PFNMAP flag; without this
+	 * check, VM_DMABUF_DIO memory would be counted in the process's RSS.
+	 */
+	if (vma->vm_flags & VM_DMABUF_DIO)
+		return;
 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
 		/*
 		 * For shared or readonly shmem mappings we know that all
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb7c96d24ac0..86d23f1a9717 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -283,6 +283,7 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_UFFD_MISSING	0
 #endif /* CONFIG_MMU */
 #define VM_PFNMAP	0x00000400	/* Page-ranges managed without "struct page", just pure PFN */
+#define VM_DMABUF_DIO	0x00000800	/* Memory accounting for dmabuf supporting direct_io */
 #define VM_UFFD_WP	0x00001000	/* wrprotect pages tracking */
 
 #define VM_LOCKED	0x00002000
diff --git a/mm/memory.c b/mm/memory.c
index d10e616d7389..8b126ce0f788 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1003,7 +1003,8 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
 	} else {
 		folio_dup_file_rmap_ptes(folio, page, nr);
-		rss[mm_counter_file(folio)] += nr;
+		if (likely(!(src_vma->vm_flags & VM_DMABUF_DIO)))
+			rss[mm_counter_file(folio)] += nr;
 	}
 	if (any_writable)
 		pte = pte_mkwrite(pte, src_vma);
@@ -1031,7 +1032,8 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
 	} else {
 		folio_dup_file_rmap_pte(folio, page);
-		rss[mm_counter_file(folio)]++;
+		if (likely(!(src_vma->vm_flags & VM_DMABUF_DIO)))
+			rss[mm_counter_file(folio)]++;
 	}
 
 copy_pte:
@@ -1488,7 +1490,8 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
 		}
 		if (pte_young(ptent) && likely(vma_has_recency(vma)))
 			folio_mark_accessed(folio);
-		rss[mm_counter(folio)] -= nr;
+		if (likely(!(vma->vm_flags & VM_DMABUF_DIO)))
+			rss[mm_counter(folio)] -= nr;
 	} else {
 		/* We don't need up-to-date accessed/dirty bits. */
 		clear_full_ptes(mm, addr, pte, nr, tlb->fullmm);
@@ -1997,7 +2000,8 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
 		return -EBUSY;
 	/* Ok, finally just insert the thing.. */
 	folio_get(folio);
-	inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
+	if (likely(!(vma->vm_flags & VM_DMABUF_DIO)))
+		inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
 	folio_add_file_rmap_pte(folio, page, vma);
 	set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
 	return 0;
@@ -4641,7 +4645,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
-	add_mm_counter(vma->vm_mm, mm_counter_file(folio), HPAGE_PMD_NR);
+	if (likely(!(vma->vm_flags & VM_DMABUF_DIO)))
+		add_mm_counter(vma->vm_mm, mm_counter_file(folio), HPAGE_PMD_NR);
 	folio_add_file_rmap_pmd(folio, page, vma);
 
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..17cab358acc1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1441,10 +1441,10 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
 
 	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
-	if (nr_pmdmapped)
+	if (nr_pmdmapped && !(vma->vm_flags & VM_DMABUF_DIO))
 		__mod_node_page_state(pgdat, folio_test_swapbacked(folio) ?
 			NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
-	if (nr)
+	if (nr && !(vma->vm_flags & VM_DMABUF_DIO))
 		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
 
 	/* See comments in folio_add_anon_rmap_*() */
@@ -1545,7 +1545,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		/* NR_{FILE/SHMEM}_PMDMAPPED are not maintained per-memcg */
 		if (folio_test_anon(folio))
 			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, -nr_pmdmapped);
-		else
+		else if (likely(!(vma->vm_flags & VM_DMABUF_DIO)))
 			__mod_node_page_state(pgdat,
 				folio_test_swapbacked(folio) ?
 					NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED,
@@ -1553,7 +1553,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 	}
 	if (nr) {
 		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
-		__lruvec_stat_mod_folio(folio, idx, -nr);
+		if (likely(!(vma->vm_flags & VM_DMABUF_DIO)))
+			__lruvec_stat_mod_folio(folio, idx, -nr);
 
 		/*
 		 * Queue anon large folio for deferred split if at least one
-- 
2.34.1

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
@ 2024-07-10 13:57 Lei Liu
2024-07-10 14:14 ` Christian König
0 siblings, 1 reply; 12+ messages in thread
From: Lei Liu @ 2024-07-10 13:57 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
Andrei Vagin, Ryan Roberts, Kefeng Wang, linux-media, dri-devel,
linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm
Cc: opensource.kernel, Lei Liu
Use vm_insert_page to establish a mapping for the memory allocated
by dmabuf, thus supporting direct I/O read and write; and fix the
issue of incorrect memory statistics after mapping dmabuf memory.
Lei Liu (2):
mm: dmabuf_direct_io: Support direct_io for memory allocated by dmabuf
mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated
memory with direct_io support
drivers/dma-buf/heaps/system_heap.c | 5 +++--
fs/proc/task_mmu.c | 8 +++++++-
include/linux/mm.h | 1 +
mm/memory.c | 15 ++++++++++-----
mm/rmap.c | 9 +++++----
5 files changed, 26 insertions(+), 12 deletions(-)
--
2.34.1
^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 13:57 [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf Lei Liu
@ 2024-07-10 14:14 ` Christian König
  2024-07-10 14:35   ` Lei Liu
  2024-07-15  8:52   ` Daniel Vetter
  0 siblings, 2 replies; 12+ messages in thread
From: Christian König @ 2024-07-10 14:14 UTC (permalink / raw)
To: Lei Liu, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
	John Stultz, T.J. Mercier, Andrew Morton, David Hildenbrand,
	Matthew Wilcox, Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts,
	Kefeng Wang, linux-media, dri-devel, linaro-mm-sig, linux-kernel,
	linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel
Cc: opensource.kernel

On 10.07.24 at 15:57, Lei Liu wrote:
> Use vm_insert_page to establish a mapping for the memory allocated
> by dmabuf, thus supporting direct I/O read and write; and fix the
> issue of incorrect memory statistics after mapping dmabuf memory.

Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.

We already discussed enforcing that in the DMA-buf framework and this
patch probably means that we should really do that.

Regards,
Christian.

>
> Lei Liu (2):
>   mm: dmabuf_direct_io: Support direct_io for memory allocated by dmabuf
>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated
>     memory with direct_io support
>
>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
>  fs/proc/task_mmu.c                  |  8 +++++++-
>  include/linux/mm.h                  |  1 +
>  mm/memory.c                         | 15 ++++++++++-----
>  mm/rmap.c                           |  9 +++++----
>  5 files changed, 26 insertions(+), 12 deletions(-)
>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 14:14 ` Christian König
@ 2024-07-10 14:35   ` Lei Liu
  2024-07-10 14:48     ` Christian König
  1 sibling, 1 reply; 12+ messages in thread
From: Lei Liu @ 2024-07-10 14:35 UTC (permalink / raw)
To: Christian König, Sumit Semwal, Benjamin Gaignard,
	Brian Starkey, John Stultz, T.J. Mercier, Andrew Morton,
	David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
	Andrei Vagin, Ryan Roberts, Kefeng Wang, linux-media, dri-devel,
	linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm,
	Daniel Vetter, Vetter, Daniel
Cc: opensource.kernel

On 2024/7/10 22:14, Christian König wrote:
> On 10.07.24 at 15:57, Lei Liu wrote:
>> Use vm_insert_page to establish a mapping for the memory allocated
>> by dmabuf, thus supporting direct I/O read and write; and fix the
>> issue of incorrect memory statistics after mapping dmabuf memory.
>
> Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.

Hello! Could you explain why direct_io is disabled on DMABUF? Is there
any historical reason for this?

>
> We already discussed enforcing that in the DMA-buf framework and this
> patch probably means that we should really do that.
>
> Regards,
> Christian.

Thank you for your response. With the application of AI large model
edgeification, we urgently need support for direct_io on DMABUF to
read some very large files. Do you have any new solutions or plans
for this?

Regards,
Lei Liu.

>
>>
>> Lei Liu (2):
>>   mm: dmabuf_direct_io: Support direct_io for memory allocated by
>> dmabuf
>>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf
>> allocated
>>     memory with direct_io support
>>
>>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
>>  fs/proc/task_mmu.c                  |  8 +++++++-
>>  include/linux/mm.h                  |  1 +
>>  mm/memory.c                         | 15 ++++++++++-----
>>  mm/rmap.c                           |  9 +++++----
>>  5 files changed, 26 insertions(+), 12 deletions(-)
>>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 14:35   ` Lei Liu
@ 2024-07-10 14:48     ` Christian König
  2024-07-10 15:06       ` Lei Liu
  0 siblings, 1 reply; 12+ messages in thread
From: Christian König @ 2024-07-10 14:48 UTC (permalink / raw)
To: Lei Liu, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
	John Stultz, T.J. Mercier, Andrew Morton, David Hildenbrand,
	Matthew Wilcox, Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts,
	Kefeng Wang, linux-media, dri-devel, linaro-mm-sig, linux-kernel,
	linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel
Cc: opensource.kernel

On 10.07.24 at 16:35, Lei Liu wrote:
>
> On 2024/7/10 22:14, Christian König wrote:
>> On 10.07.24 at 15:57, Lei Liu wrote:
>>> Use vm_insert_page to establish a mapping for the memory allocated
>>> by dmabuf, thus supporting direct I/O read and write; and fix the
>>> issue of incorrect memory statistics after mapping dmabuf memory.
>>
>> Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.
>
> Hello! Could you explain why direct_io is disabled on DMABUF? Is there
> any historical reason for this?

It's basically one of the most fundamental design decision of DMA-Buf.
The attachment/map/fence model DMA-buf uses is not really compatible
with direct I/O on the underlying pages.

>>
>> We already discussed enforcing that in the DMA-buf framework and
>> this patch probably means that we should really do that.
>>
>> Regards,
>> Christian.
>
> Thank you for your response. With the application of AI large model
> edgeification, we urgently need support for direct_io on DMABUF to
> read some very large files. Do you have any new solutions or plans for
> this?

We have seen similar projects over the years and all of those turned
out to be complete shipwrecks.

There is currently a patch set under discussion to give the network
subsystem DMA-buf support. If you are interest in network direct I/O
that could help.

Additional to that a lot of GPU drivers support userptr usages, e.g.
to import malloced memory into the GPU driver. You can then also do
direct I/O on that malloced memory and the kernel will enforce correct
handling with the GPU driver through MMU notifiers.

But as far as I know a general DMA-buf based solution isn't possible.

Regards,
Christian.

>
> Regards,
> Lei Liu.
>
>>
>>>
>>> Lei Liu (2):
>>>   mm: dmabuf_direct_io: Support direct_io for memory allocated by
>>> dmabuf
>>>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf
>>> allocated
>>>     memory with direct_io support
>>>
>>>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
>>>  fs/proc/task_mmu.c                  |  8 +++++++-
>>>  include/linux/mm.h                  |  1 +
>>>  mm/memory.c                         | 15 ++++++++++-----
>>>  mm/rmap.c                           |  9 +++++----
>>>  5 files changed, 26 insertions(+), 12 deletions(-)
>>>
>>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 14:48     ` Christian König
@ 2024-07-10 15:06       ` Lei Liu
  2024-07-10 16:34         ` T.J. Mercier
  0 siblings, 1 reply; 12+ messages in thread
From: Lei Liu @ 2024-07-10 15:06 UTC (permalink / raw)
To: Christian König, Sumit Semwal, Benjamin Gaignard,
	Brian Starkey, John Stultz, T.J. Mercier, Andrew Morton,
	David Hildenbrand, Matthew Wilcox, Muhammad Usama Anjum,
	Andrei Vagin, Ryan Roberts, Kefeng Wang, linux-media, dri-devel,
	linaro-mm-sig, linux-kernel, linux-fsdevel, linux-mm,
	Daniel Vetter, Vetter, Daniel
Cc: opensource.kernel

On 2024/7/10 22:48, Christian König wrote:
> On 10.07.24 at 16:35, Lei Liu wrote:
>>
>> On 2024/7/10 22:14, Christian König wrote:
>>> On 10.07.24 at 15:57, Lei Liu wrote:
>>>> Use vm_insert_page to establish a mapping for the memory allocated
>>>> by dmabuf, thus supporting direct I/O read and write; and fix the
>>>> issue of incorrect memory statistics after mapping dmabuf memory.
>>>
>>> Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.
>>
>> Hello! Could you explain why direct_io is disabled on DMABUF? Is
>> there any historical reason for this?
>
> It's basically one of the most fundamental design decision of DMA-Buf.
> The attachment/map/fence model DMA-buf uses is not really compatible
> with direct I/O on the underlying pages.

Thank you! Is there any related documentation on this? I would like to
understand and learn more about the fundamental reasons for the lack of
support.

>
>>>
>>> We already discussed enforcing that in the DMA-buf framework and
>>> this patch probably means that we should really do that.
>>>
>>> Regards,
>>> Christian.
>>
>> Thank you for your response. With the application of AI large model
>> edgeification, we urgently need support for direct_io on DMABUF to
>> read some very large files. Do you have any new solutions or plans
>> for this?
>
> We have seen similar projects over the years and all of those turned
> out to be complete shipwrecks.
>
> There is currently a patch set under discussion to give the network
> subsystem DMA-buf support. If you are interest in network direct I/O
> that could help.

Is there a related introduction link for this patch?

>
> Additional to that a lot of GPU drivers support userptr usages, e.g.
> to import malloced memory into the GPU driver. You can then also do
> direct I/O on that malloced memory and the kernel will enforce correct
> handling with the GPU driver through MMU notifiers.
>
> But as far as I know a general DMA-buf based solution isn't possible.

1. The reason we need to use DMABUF memory here is that we need to share
memory between the CPU and APU. Currently, only DMABUF memory is
suitable for this purpose. Additionally, we need to read very large files.

2. Are there any other solutions for this? Also, do you have any plans
to support direct_io for DMABUF memory in the future?

>
> Regards,
> Christian.
>
>>
>> Regards,
>> Lei Liu.
>>
>>>
>>>>
>>>> Lei Liu (2):
>>>>   mm: dmabuf_direct_io: Support direct_io for memory allocated by
>>>> dmabuf
>>>>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf
>>>> allocated
>>>>     memory with direct_io support
>>>>
>>>>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
>>>>  fs/proc/task_mmu.c                  |  8 +++++++-
>>>>  include/linux/mm.h                  |  1 +
>>>>  mm/memory.c                         | 15 ++++++++++-----
>>>>  mm/rmap.c                           |  9 +++++----
>>>>  5 files changed, 26 insertions(+), 12 deletions(-)
>>>>
>>>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 15:06       ` Lei Liu
@ 2024-07-10 16:34         ` T.J. Mercier
  2024-07-11 14:25           ` Christian König
  0 siblings, 1 reply; 12+ messages in thread
From: T.J. Mercier @ 2024-07-10 16:34 UTC (permalink / raw)
To: Lei Liu
Cc: Christian König, Sumit Semwal, Benjamin Gaignard,
	Brian Starkey, John Stultz, Andrew Morton, David Hildenbrand,
	Matthew Wilcox, Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts,
	Kefeng Wang, linux-media, dri-devel, linaro-mm-sig, linux-kernel,
	linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel,
	opensource.kernel, quic_sukadev, quic_cgoldswo, Akilesh Kailash

On Wed, Jul 10, 2024 at 8:08 AM Lei Liu <liulei.rjpt@vivo.com> wrote:
>
> On 2024/7/10 22:48, Christian König wrote:
> > On 10.07.24 at 16:35, Lei Liu wrote:
> >>
> >> On 2024/7/10 22:14, Christian König wrote:
> >>> On 10.07.24 at 15:57, Lei Liu wrote:
> >>>> Use vm_insert_page to establish a mapping for the memory allocated
> >>>> by dmabuf, thus supporting direct I/O read and write; and fix the
> >>>> issue of incorrect memory statistics after mapping dmabuf memory.
> >>>
> >>> Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.
> >>
> >> Hello! Could you explain why direct_io is disabled on DMABUF? Is
> >> there any historical reason for this?
> >
> > It's basically one of the most fundamental design decision of DMA-Buf.
> > The attachment/map/fence model DMA-buf uses is not really compatible
> > with direct I/O on the underlying pages.
>
> Thank you! Is there any related documentation on this? I would like to
> understand and learn more about the fundamental reasons for the lack of
> support.

Hi Lei and Christian,

This is now the third request I've seen from three different companies
who are interested in this, but the others are not for reasons of read
performance that you mention in the commit message on your first
patch. Someone else at Google ran a comparison between a normal read()
and a direct I/O read() into a preallocated user buffer and found that
with large readahead (16 MB) the throughput can actually be slightly
higher than direct I/O. If you have concerns about read performance,
have you tried increasing the readahead size?

The other motivation is to load a gajillion byte file from disk into a
dmabuf without evicting the entire contents of pagecache while doing
so. Something like this (which does not currently work because read()
tries to GUP on the dmabuf memory as you mention):

static int dmabuf_heap_alloc(int heap_fd, size_t len)
{
    struct dma_heap_allocation_data data = {
        .len = len,
        .fd = 0,
        .fd_flags = O_RDWR | O_CLOEXEC,
        .heap_flags = 0,
    };
    int ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
    if (ret < 0)
        return ret;
    return data.fd;
}

int main(int, char **argv)
{
    const char *file_path = argv[1];
    printf("File: %s\n", file_path);
    int file_fd = open(file_path, O_RDONLY | O_DIRECT);

    struct stat st;
    stat(file_path, &st);
    ssize_t file_size = st.st_size;
    ssize_t aligned_size = (file_size + 4095) & ~4095;
    printf("File size: %zd Aligned size: %zd\n", file_size, aligned_size);

    int heap_fd = open("/dev/dma_heap/system", O_RDONLY);
    int dmabuf_fd = dmabuf_heap_alloc(heap_fd, aligned_size);

    void *vm = mmap(nullptr, aligned_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, dmabuf_fd, 0);
    printf("VM at 0x%lx\n", (unsigned long)vm);

    dma_buf_sync sync_flags { DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ |
                              DMA_BUF_SYNC_WRITE };
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);

    ssize_t rc = read(file_fd, vm, file_size);
    printf("Read: %zd %s\n", rc, rc < 0 ? strerror(errno) : "");

    sync_flags.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ |
                       DMA_BUF_SYNC_WRITE;
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);
}

Or replace the mmap() + read() with sendfile().

So I would also like to see the above code (or something else similar)
be able to work and I understand some of the reasons why it currently
does not, but I don't understand why we should actively prevent this
type of behavior entirely.

Best,
T.J.

> >
> >>>
> >>> We already discussed enforcing that in the DMA-buf framework and
> >>> this patch probably means that we should really do that.
> >>>
> >>> Regards,
> >>> Christian.
> >>
> >> Thank you for your response. With the application of AI large model
> >> edgeification, we urgently need support for direct_io on DMABUF to
> >> read some very large files. Do you have any new solutions or plans
> >> for this?
> >
> > We have seen similar projects over the years and all of those turned
> > out to be complete shipwrecks.
> >
> > There is currently a patch set under discussion to give the network
> > subsystem DMA-buf support. If you are interest in network direct I/O
> > that could help.
>
> Is there a related introduction link for this patch?
>
> >
> > Additional to that a lot of GPU drivers support userptr usages, e.g.
> > to import malloced memory into the GPU driver. You can then also do
> > direct I/O on that malloced memory and the kernel will enforce correct
> > handling with the GPU driver through MMU notifiers.
> >
> > But as far as I know a general DMA-buf based solution isn't possible.
>
> 1.The reason we need to use DMABUF memory here is that we need to share
> memory between the CPU and APU. Currently, only DMABUF memory is
> suitable for this purpose. Additionally, we need to read very large files.
>
> 2. Are there any other solutions for this? Also, do you have any plans
> to support direct_io for DMABUF memory in the future?
>
> >
> > Regards,
> > Christian.
> >
> >>
> >> Regards,
> >> Lei Liu.
> >>
> >>>
> >>>>
> >>>> Lei Liu (2):
> >>>>   mm: dmabuf_direct_io: Support direct_io for memory allocated by
> >>>> dmabuf
> >>>>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf
> >>>> allocated
> >>>>     memory with direct_io support
> >>>>
> >>>>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
> >>>>  fs/proc/task_mmu.c                  |  8 +++++++-
> >>>>  include/linux/mm.h                  |  1 +
> >>>>  mm/memory.c                         | 15 ++++++++++-----
> >>>>  mm/rmap.c                           |  9 +++++----
> >>>>  5 files changed, 26 insertions(+), 12 deletions(-)
> >>>>
> >>>
> >

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 16:34         ` T.J. Mercier
@ 2024-07-11 14:25           ` Christian König
  2024-07-15  9:07             ` Lei Liu
  0 siblings, 1 reply; 12+ messages in thread
From: Christian König @ 2024-07-11 14:25 UTC (permalink / raw)
To: T.J. Mercier, Lei Liu
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
	Andrew Morton, David Hildenbrand, Matthew Wilcox,
	Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts, Kefeng Wang,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel,
	linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel,
	opensource.kernel, quic_sukadev, quic_cgoldswo, Akilesh Kailash

On 10.07.24 at 18:34, T.J. Mercier wrote:
> On Wed, Jul 10, 2024 at 8:08 AM Lei Liu <liulei.rjpt@vivo.com> wrote:
>> On 2024/7/10 22:48, Christian König wrote:
>>> On 10.07.24 at 16:35, Lei Liu wrote:
>>>> On 2024/7/10 22:14, Christian König wrote:
>>>>> On 10.07.24 at 15:57, Lei Liu wrote:
>>>>>> Use vm_insert_page to establish a mapping for the memory allocated
>>>>>> by dmabuf, thus supporting direct I/O read and write; and fix the
>>>>>> issue of incorrect memory statistics after mapping dmabuf memory.
>>>>> Well big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.
>>>> Hello! Could you explain why direct_io is disabled on DMABUF? Is
>>>> there any historical reason for this?
>>> It's basically one of the most fundamental design decision of DMA-Buf.
>>> The attachment/map/fence model DMA-buf uses is not really compatible
>>> with direct I/O on the underlying pages.
>> Thank you! Is there any related documentation on this? I would like to
>> understand and learn more about the fundamental reasons for the lack of
>> support.
> Hi Lei and Christian,
>
> This is now the third request I've seen from three different companies
> who are interested in this,

Yeah, completely agree. This is a re-occurring pattern :)

Maybe we should document the preferred solution for that.

> but the others are not for reasons of read
> performance that you mention in the commit message on your first
> patch. Someone else at Google ran a comparison between a normal read()
> and a direct I/O read() into a preallocated user buffer and found that
> with large readahead (16 MB) the throughput can actually be slightly
> higher than direct I/O. If you have concerns about read performance,
> have you tried increasing the readahead size?
>
> The other motivation is to load a gajillion byte file from disk into a
> dmabuf without evicting the entire contents of pagecache while doing
> so. Something like this (which does not currently work because read()
> tries to GUP on the dmabuf memory as you mention):
>
> static int dmabuf_heap_alloc(int heap_fd, size_t len)
> {
>     struct dma_heap_allocation_data data = {
>         .len = len,
>         .fd = 0,
>         .fd_flags = O_RDWR | O_CLOEXEC,
>         .heap_flags = 0,
>     };
>     int ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
>     if (ret < 0)
>         return ret;
>     return data.fd;
> }
>
> int main(int, char **argv)
> {
>     const char *file_path = argv[1];
>     printf("File: %s\n", file_path);
>     int file_fd = open(file_path, O_RDONLY | O_DIRECT);
>
>     struct stat st;
>     stat(file_path, &st);
>     ssize_t file_size = st.st_size;
>     ssize_t aligned_size = (file_size + 4095) & ~4095;
>     printf("File size: %zd Aligned size: %zd\n", file_size, aligned_size);
>
>     int heap_fd = open("/dev/dma_heap/system", O_RDONLY);
>     int dmabuf_fd = dmabuf_heap_alloc(heap_fd, aligned_size);
>
>     void *vm = mmap(nullptr, aligned_size, PROT_READ | PROT_WRITE,
>                     MAP_SHARED, dmabuf_fd, 0);
>     printf("VM at 0x%lx\n", (unsigned long)vm);
>
>     dma_buf_sync sync_flags { DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ |
>                               DMA_BUF_SYNC_WRITE };
>     ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);
>
>     ssize_t rc = read(file_fd, vm, file_size);
>     printf("Read: %zd %s\n", rc, rc < 0 ? strerror(errno) : "");
>
>     sync_flags.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ |
>                        DMA_BUF_SYNC_WRITE;
>     ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);
> }
>
> Or replace the mmap() + read() with sendfile().

Or copy_file_range(). That's pretty much exactly what I suggested on
the other mail thread around that topic as well.

> So I would also like to see the above code (or something else similar)
> be able to work and I understand some of the reasons why it currently
> does not, but I don't understand why we should actively prevent this
> type of behavior entirely.

+1

Regards,
Christian.

>
> Best,
> T.J.
>
>>>>> We already discussed enforcing that in the DMA-buf framework and
>>>>> this patch probably means that we should really do that.
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>> Thank you for your response. With the application of AI large model
>>>> edgeification, we urgently need support for direct_io on DMABUF to
>>>> read some very large files. Do you have any new solutions or plans
>>>> for this?
>>> We have seen similar projects over the years and all of those turned
>>> out to be complete shipwrecks.
>>>
>>> There is currently a patch set under discussion to give the network
>>> subsystem DMA-buf support. If you are interest in network direct I/O
>>> that could help.
>> Is there a related introduction link for this patch?
>>
>>> Additional to that a lot of GPU drivers support userptr usages, e.g.
>>> to import malloced memory into the GPU driver. You can then also do
>>> direct I/O on that malloced memory and the kernel will enforce correct
>>> handling with the GPU driver through MMU notifiers.
>>>
>>> But as far as I know a general DMA-buf based solution isn't possible.
>> 1.The reason we need to use DMABUF memory here is that we need to share
>> memory between the CPU and APU. Currently, only DMABUF memory is
>> suitable for this purpose. Additionally, we need to read very large files.
>>
>> 2. Are there any other solutions for this?
Also, do you have any plans >> to support direct_io for DMABUF memory in the future? >> >>> Regards, >>> Christian. >>> >>>> Regards, >>>> Lei Liu. >>>> >>>>>> Lei Liu (2): >>>>>> mm: dmabuf_direct_io: Support direct_io for memory allocated by >>>>>> dmabuf >>>>>> mm: dmabuf_direct_io: Fix memory statistics error for dmabuf >>>>>> allocated >>>>>> memory with direct_io support >>>>>> >>>>>> drivers/dma-buf/heaps/system_heap.c | 5 +++-- >>>>>> fs/proc/task_mmu.c | 8 +++++++- >>>>>> include/linux/mm.h | 1 + >>>>>> mm/memory.c | 15 ++++++++++----- >>>>>> mm/rmap.c | 9 +++++---- >>>>>> 5 files changed, 26 insertions(+), 12 deletions(-) >>>>>> ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-11 14:25             ` Christian König
@ 2024-07-15  9:07               ` Lei Liu
  0 siblings, 0 replies; 12+ messages in thread
From: Lei Liu @ 2024-07-15 9:07 UTC (permalink / raw)
To: Christian König, T.J. Mercier
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
    Andrew Morton, David Hildenbrand, Matthew Wilcox,
    Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts, Kefeng Wang,
    linux-media, dri-devel, linaro-mm-sig, linux-kernel,
    linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel,
    opensource.kernel, quic_sukadev, quic_cgoldswo, Akilesh Kailash

On 2024/7/11 22:25, Christian König wrote:
> Am 10.07.24 um 18:34 schrieb T.J. Mercier:
>> On Wed, Jul 10, 2024 at 8:08 AM Lei Liu <liulei.rjpt@vivo.com> wrote:
>>> on 2024/7/10 22:48, Christian König wrote:
>>>> Am 10.07.24 um 16:35 schrieb Lei Liu:
>>>>> on 2024/7/10 22:14, Christian König wrote:
>>>>>> Am 10.07.24 um 15:57 schrieb Lei Liu:
>>>>>>> Use vm_insert_page to establish a mapping for the memory allocated
>>>>>>> by dmabuf, thus supporting direct I/O read and write; and fix the
>>>>>>> issue of incorrect memory statistics after mapping dmabuf memory.
>>>>>> Well, big NAK to that! Direct I/O is intentionally disabled on
>>>>>> DMA-bufs.
>>>>> Hello! Could you explain why direct_io is disabled on DMABUF? Is
>>>>> there any historical reason for this?
>>>> It's basically one of the most fundamental design decisions of DMA-buf.
>>>> The attachment/map/fence model DMA-buf uses is not really compatible
>>>> with direct I/O on the underlying pages.
>>> Thank you! Is there any related documentation on this? I would like to
>>> understand and learn more about the fundamental reasons for the lack of
>>> support.
>> Hi Lei and Christian,
>>
>> This is now the third request I've seen from three different companies
>> who are interested in this,
>
> Yeah, completely agree. This is a recurring pattern :)
>
> Maybe we should document the preferred solution for that.
>
>> but the others are not for reasons of read
>> performance that you mention in the commit message on your first
>> patch. Someone else at Google ran a comparison between a normal read()
>> and a direct I/O read() into a preallocated user buffer and found that
>> with large readahead (16 MB) the throughput can actually be slightly
>> higher than direct I/O. If you have concerns about read performance,
>> have you tried increasing the readahead size?
>>
>> The other motivation is to load a gajillion byte file from disk into a
>> dmabuf without evicting the entire contents of pagecache while doing
>> so. Something like this (which does not currently work because read()
>> tries to GUP on the dmabuf memory as you mention):
>>
>> static int dmabuf_heap_alloc(int heap_fd, size_t len)
>> {
>>     struct dma_heap_allocation_data data = {
>>         .len = len,
>>         .fd = 0,
>>         .fd_flags = O_RDWR | O_CLOEXEC,
>>         .heap_flags = 0,
>>     };
>>     int ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
>>     if (ret < 0)
>>         return ret;
>>     return data.fd;
>> }
>>
>> int main(int, char **argv)
>> {
>>     const char *file_path = argv[1];
>>     printf("File: %s\n", file_path);
>>     int file_fd = open(file_path, O_RDONLY | O_DIRECT);
>>
>>     struct stat st;
>>     stat(file_path, &st);
>>     ssize_t file_size = st.st_size;
>>     ssize_t aligned_size = (file_size + 4095) & ~4095;
>>
>>     printf("File size: %zd Aligned size: %zd\n", file_size,
>>            aligned_size);
>>     int heap_fd = open("/dev/dma_heap/system", O_RDONLY);
>>     int dmabuf_fd = dmabuf_heap_alloc(heap_fd, aligned_size);
>>
>>     void *vm = mmap(nullptr, aligned_size, PROT_READ | PROT_WRITE,
>>                     MAP_SHARED, dmabuf_fd, 0);
>>     printf("VM at 0x%lx\n", (unsigned long)vm);
>>
>>     dma_buf_sync sync_flags { DMA_BUF_SYNC_START |
>>                               DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE };
>>     ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);
>>
>>     ssize_t rc = read(file_fd, vm, file_size);
>>     printf("Read: %zd %s\n", rc, rc < 0 ? strerror(errno) : "");
>>
>>     sync_flags.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ |
>>                        DMA_BUF_SYNC_WRITE;
>>     ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync_flags);
>> }
>>
>> Or replace the mmap() + read() with sendfile().
>
> Or copy_file_range(). That's pretty much exactly what I suggested on
> the other mail thread around that topic as well.

Thank you for your suggestion. I will study the method you suggested
with Yang. Using copy_file_range() might be a good solution approach.

Regards,
Lei Liu.

>
>> So I would also like to see the above code (or something else similar)
>> be able to work, and I understand some of the reasons why it currently
>> does not, but I don't understand why we should actively prevent this
>> type of behavior entirely.
>
> +1
>
> Regards,
> Christian.
>
>> Best,
>> T.J.
>>
>>>>>>> We already discussed enforcing that in the DMA-buf framework and
>>>>>>> this patch probably means that we should really do that.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>> Thank you for your response. With the application of AI large model
>>>>>> edgeification, we urgently need support for direct_io on DMABUF to
>>>>>> read some very large files. Do you have any new solutions or plans
>>>>>> for this?
>>>>> We have seen similar projects over the years and all of those turned
>>>>> out to be complete shipwrecks.
>>>>>
>>>>> There is currently a patch set under discussion to give the network
>>>>> subsystem DMA-buf support. If you are interested in network direct I/O
>>>>> that could help.
>>>> Is there a related introduction link for this patch?
>>>>
>>>>> In addition to that, a lot of GPU drivers support userptr usage, e.g.
>>>>> to import malloced memory into the GPU driver. You can then also do
>>>>> direct I/O on that malloced memory and the kernel will enforce correct
>>>>> handling with the GPU driver through MMU notifiers.
>>>>>
>>>>> But as far as I know a general DMA-buf based solution isn't possible.
>>>> 1. The reason we need to use DMABUF memory here is that we need to share
>>>> memory between the CPU and APU. Currently, only DMABUF memory is
>>>> suitable for this purpose. Additionally, we need to read very large
>>>> files.
>>>>
>>>> 2. Are there any other solutions for this? Also, do you have any plans
>>>> to support direct_io for DMABUF memory in the future?
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>> Regards,
>>>>>> Lei Liu.
>>>>>>
>>>>>>>> Lei Liu (2):
>>>>>>>>   mm: dmabuf_direct_io: Support direct_io for memory allocated by
>>>>>>>>     dmabuf
>>>>>>>>   mm: dmabuf_direct_io: Fix memory statistics error for dmabuf
>>>>>>>>     allocated memory with direct_io support
>>>>>>>>
>>>>>>>>  drivers/dma-buf/heaps/system_heap.c |  5 +++--
>>>>>>>>  fs/proc/task_mmu.c                  |  8 +++++++-
>>>>>>>>  include/linux/mm.h                  |  1 +
>>>>>>>>  mm/memory.c                         | 15 ++++++++++-----
>>>>>>>>  mm/rmap.c                           |  9 +++++----
>>>>>>>>  5 files changed, 26 insertions(+), 12 deletions(-)

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf
  2024-07-10 14:14 ` Christian König
  2024-07-10 14:35   ` Lei Liu
@ 2024-07-15  8:52 ` Daniel Vetter
  1 sibling, 0 replies; 12+ messages in thread
From: Daniel Vetter @ 2024-07-15 8:52 UTC (permalink / raw)
To: Christian König
Cc: Lei Liu, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
    John Stultz, T.J. Mercier, Andrew Morton, David Hildenbrand,
    Matthew Wilcox, Muhammad Usama Anjum, Andrei Vagin, Ryan Roberts,
    Kefeng Wang, linux-media, dri-devel, linaro-mm-sig, linux-kernel,
    linux-fsdevel, linux-mm, Daniel Vetter, Vetter, Daniel,
    opensource.kernel

On Wed, Jul 10, 2024 at 04:14:18PM +0200, Christian König wrote:
> Am 10.07.24 um 15:57 schrieb Lei Liu:
> > Use vm_insert_page to establish a mapping for the memory allocated
> > by dmabuf, thus supporting direct I/O read and write; and fix the
> > issue of incorrect memory statistics after mapping dmabuf memory.
>
> Well, big NAK to that! Direct I/O is intentionally disabled on DMA-bufs.
>
> We already discussed enforcing that in the DMA-buf framework and this
> patch probably means that we should really do that.

Last time I looked, dma_mmap doesn't guarantee that the vma ends up with
VM_SPECIAL, and that's pretty much the only reason why we can't enforce
this.

But we might be able to enforce this at least on some architectures, I
didn't check for that ... if at least x86-64 and arm64 could have the
check, that would be great. So it might be worth it to re-audit this
all. I think all other dma-buf exporters/allocators only create
VM_SPECIAL vmas.

-Sima
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 12+ messages in thread
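[Editor's note] Daniel's enforcement idea can be made concrete. The fragment below is kernel-flavoured pseudocode, not the actual patch under discussion: it sketches one conceivable place for such a check, after the exporter's mmap callback in the DMA-buf core. get_user_pages() refuses vmas carrying VM_IO or VM_PFNMAP, so a mapping with one of those flags can never be handed to direct I/O; an exporter that instead inserts normal pages (as the vm_insert_page approach in this series does) would be rejected.

```
/*
 * Pseudocode sketch only -- not the real dma-buf code or the discussed
 * patch.  After asking the exporter to set up the mapping, verify that it
 * produced a "special" vma that GUP (and therefore direct I/O) will refuse.
 */
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
        struct dma_buf *dmabuf = file->private_data;
        int ret;

        /* ... existing size and offset checks ... */

        ret = dmabuf->ops->mmap(dmabuf, vma);
        if (ret)
                return ret;

        if (WARN_ON_ONCE(!(vma->vm_flags & (VM_IO | VM_PFNMAP))))
                return -EINVAL;   /* exporter inserted normal pages */

        return 0;
}
```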
end of thread, other threads:[~2024-07-15  9:07 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-10 14:09 [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf Lei Liu
2024-07-10 14:09 ` [PATCH 1/2] mm: dmabuf_direct_io: Support direct_io " Lei Liu
2024-07-10 14:09 ` [PATCH 2/2] mm: dmabuf_direct_io: Fix memory statistics error for dmabuf allocated memory with direct_io support Lei Liu
  -- strict thread matches above, loose matches on Subject: below --
2024-07-10 13:57 [PATCH 0/2] Support direct I/O read and write for memory allocated by dmabuf Lei Liu
2024-07-10 14:14 ` Christian König
2024-07-10 14:35   ` Lei Liu
2024-07-10 14:48     ` Christian König
2024-07-10 15:06       ` Lei Liu
2024-07-10 16:34         ` T.J. Mercier
2024-07-11 14:25           ` Christian König
2024-07-15  9:07             ` Lei Liu
2024-07-15  8:52 ` Daniel Vetter