On Tue, Feb 21, 2023 at 01:20:50PM -0500, Liam R. Howlett wrote:
> * Danilo Krummrich [230217 08:45]:
> > Add infrastructure to keep track of GPU virtual address (VA) mappings
> > with a dedicated VA space manager implementation.
> > 
> > New UAPIs, motivated by Vulkan sparse memory bindings graphics drivers
> > start implementing, allow userspace applications to request multiple and
> > arbitrary GPU VA mappings of buffer objects. The DRM GPU VA manager is
> > intended to serve the following purposes in this context.
> > 
> > 1) Provide infrastructure to track GPU VA allocations and mappings,
> >    making use of the maple_tree.
> > 
> > 2) Generically connect GPU VA mappings to their backing buffers, in
> >    particular DRM GEM objects.
> > 
> > 3) Provide a common implementation to perform more complex mapping
> >    operations on the GPU VA space. In particular splitting and merging
> >    of GPU VA mappings, e.g. for intersecting mapping requests or partial
> >    unmap requests.
> > 
> > Suggested-by: Dave Airlie
> > Signed-off-by: Danilo Krummrich
> > ---
> >  Documentation/gpu/drm-mm.rst    |   31 +
> >  drivers/gpu/drm/Makefile        |    1 +
> >  drivers/gpu/drm/drm_gem.c       |    3 +
> >  drivers/gpu/drm/drm_gpuva_mgr.c | 1704 +++++++++++++++++++++++++++++++
> >  include/drm/drm_drv.h           |    6 +
> >  include/drm/drm_gem.h           |   75 ++
> >  include/drm/drm_gpuva_mgr.h     |  714 +++++++++++++
> >  7 files changed, 2534 insertions(+)
> >  create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
> >  create mode 100644 include/drm/drm_gpuva_mgr.h
> > 
> > diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
> > index a52e6f4117d6..c9f120cfe730 100644
> > --- a/Documentation/gpu/drm-mm.rst
> > +++ b/Documentation/gpu/drm-mm.rst
> > @@ -466,6 +466,37 @@ DRM MM Range Allocator Function References
> >  .. kernel-doc:: drivers/gpu/drm/drm_mm.c
> >     :export:
> > 
> ...
> 
> > +
> > +/**
> > + * drm_gpuva_iter_remove - removes the iterator's current element
> > + * @it: the &drm_gpuva_iterator
> > + *
> > + * This removes the element the iterator currently points to.
> > + */
> > +void
> > +drm_gpuva_iter_remove(struct drm_gpuva_iterator *it)
> > +{
> > +	mas_erase(&it->mas);
> > +}
> > +EXPORT_SYMBOL(drm_gpuva_iter_remove);
> > +
> > +/**
> > + * drm_gpuva_insert - insert a &drm_gpuva
> > + * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
> > + * @va: the &drm_gpuva to insert
> > + * @addr: the start address of the GPU VA
> > + * @range: the range of the GPU VA
> > + *
> > + * Insert a &drm_gpuva with a given address and range into a
> > + * &drm_gpuva_manager.
> > + *
> > + * Returns: 0 on success, negative error code on failure.
> > + */
> > +int
> > +drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> > +		 struct drm_gpuva *va)
> > +{
> > +	u64 addr = va->va.addr;
> > +	u64 range = va->va.range;
> > +	MA_STATE(mas, &mgr->va_mt, addr, addr + range - 1);
> > +	struct drm_gpuva_region *reg = NULL;
> > +	int ret;
> > +
> > +	if (unlikely(!drm_gpuva_in_mm_range(mgr, addr, range)))
> > +		return -EINVAL;
> > +
> > +	if (unlikely(drm_gpuva_in_kernel_region(mgr, addr, range)))
> > +		return -EINVAL;
> > +
> > +	if (mgr->flags & DRM_GPUVA_MANAGER_REGIONS) {
> > +		reg = drm_gpuva_in_region(mgr, addr, range);
> > +		if (unlikely(!reg))
> > +			return -EINVAL;
> > +	}
> > +
> -----
> 
> > +	if (unlikely(drm_gpuva_find_first(mgr, addr, range)))
> > +		return -EEXIST;
> > +
> > +	ret = mas_store_gfp(&mas, va, GFP_KERNEL);
> 
> mas_walk() will set the internal maple state to the limits to what it
> finds.
> So, instead of an iterator, you can use the walk function and
> ensure there is a large enough area in the existing NULL:
> 
> /*
>  * Nothing at addr, mas now points to the location where the store would
>  * happen
>  */
> if (mas_walk(&mas))
> 	return -EEXIST;
> 

For some reason mas_walk() finds an entry and hence this function returns
-EEXIST for the following sequence of insertions:

A = [0xc0000 - 0xfffff]
B = [0x0 - 0xbffff]

Interestingly, inserting B before A works fine.

I attached a test module that reproduces the issue. I hope it's just a
stupid mistake I just can't spot, though.

> /* The NULL entry ends at mas.last, make sure there is room */
> if (mas.last < (addr + range - 1))
> 	return -EEXIST;
> 
> /* Limit the store size to the correct end address, and store */
> mas.last = addr + range - 1;
> ret = mas_store_gfp(&mas, va, GFP_KERNEL);
> 
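For reference, here is a minimal sketch of the kind of reproducer meant above
(this is not the attached module; the test_insert() helper, the dummy entries
and the module boilerplate are made up for illustration). It applies the
suggested mas_walk() pattern to a bare maple tree and performs the two
insertions in the order that fails:

/*
 * Hypothetical reproducer sketch - not the attached test module.
 * Inserts A = [0xc0000 - 0xfffff] first and B = [0x0 - 0xbffff] second,
 * using the mas_walk() based insertion pattern suggested above.
 */
#include <linux/maple_tree.h>
#include <linux/module.h>

static DEFINE_MTREE(test_mt);

static int test_insert(void *entry, u64 addr, u64 range)
{
	MA_STATE(mas, &test_mt, addr, addr + range - 1);
	int ret;

	mtree_lock(&test_mt);

	/* Nothing at addr, mas now points to where the store would happen. */
	if (mas_walk(&mas)) {
		ret = -EEXIST;
		goto out_unlock;
	}

	/* The NULL entry ends at mas.last, make sure there is room. */
	if (mas.last < addr + range - 1) {
		ret = -EEXIST;
		goto out_unlock;
	}

	/* Limit the store size to the correct end address, and store. */
	mas.last = addr + range - 1;
	ret = mas_store_gfp(&mas, entry, GFP_KERNEL);

out_unlock:
	mtree_unlock(&test_mt);
	return ret;
}

static int __init gpuva_mt_test_init(void)
{
	static int entry_a, entry_b;
	int ret;

	/* A = [0xc0000 - 0xfffff]: succeeds */
	ret = test_insert(&entry_a, 0xc0000, 0x40000);
	pr_info("insert A: %d\n", ret);

	/* B = [0x0 - 0xbffff]: unexpectedly returns -EEXIST */
	ret = test_insert(&entry_b, 0x0, 0xc0000);
	pr_info("insert B: %d\n", ret);

	return 0;
}

static void __exit gpuva_mt_test_exit(void)
{
	mtree_destroy(&test_mt);
}

module_init(gpuva_mt_test_init);
module_exit(gpuva_mt_test_exit);
MODULE_LICENSE("GPL");

Swapping the two test_insert() calls makes both stores succeed, matching the
behavior described above.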