From mboxrd@z Thu Jan 1 00:00:00 1970
From: Loïc Molinari <loic.molinari@collabora.com>
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
 Simona Vetter, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
 Boris Brezillon, Rob Herring, Steven Price, Liviu Dudau, Melissa Wen,
 Maíra Canal, Hugh Dickins, Baolin Wang, Andrew Morton, Loïc Molinari,
 Al Viro, Mikołaj Wasiak, Christian Brauner, Nitin Gote, Andi Shyti
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, linux-mm@kvack.org, kernel@collabora.com
Subject: [PATCH 2/8] drm/gem: Introduce drm_gem_get_unmapped_area() fop
Date: Mon, 29 Sep 2025 22:03:10 +0200
Message-ID: <20250929200316.18417-3-loic.molinari@collabora.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20250929200316.18417-1-loic.molinari@collabora.com>
References: <20250929200316.18417-1-loic.molinari@collabora.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

mmap() calls on the drm file pointer currently always end up using
mm_get_unmapped_area() to get a free mapping region. On builds with
CONFIG_TRANSPARENT_HUGEPAGE enabled, this isn't ideal for GEM objects
backed by shmem buffers on mount points setting the 'huge=' option
because it can't correctly figure out the potentially huge address
alignment required.

This commit introduces the drm_gem_get_unmapped_area() function, which
is meant to be used as a get_unmapped_area file operation on the drm
file pointer to look up GEM objects based on their fake offsets and get
a properly aligned region by calling shmem_get_unmapped_area() with the
right file pointer. If a GEM object isn't available at the given offset
or if the caller isn't granted access to it, the function falls back to
mm_get_unmapped_area().

This also makes drm_gem_get_unmapped_area() part of the default GEM file
operations so that all the drm drivers can benefit from more efficient
mappings thanks to the huge page fault handler introduced in the
previous commit 'drm/shmem-helper: Add huge page fault handler'.

The shmem_get_unmapped_area() function needs to be exported so that it
can be used from the drm subsystem.

Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
---
 drivers/gpu/drm/drm_gem.c | 110 ++++++++++++++++++++++++++++++--------
 include/drm/drm_gem.h     |   4 ++
 mm/shmem.c                |   1 +
 3 files changed, 93 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index cbeb76b2124f..d027db462c2d 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1187,36 +1187,27 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 }
 EXPORT_SYMBOL(drm_gem_mmap_obj);
 
-/**
- * drm_gem_mmap - memory map routine for GEM objects
- * @filp: DRM file pointer
- * @vma: VMA for the area to be mapped
- *
- * If a driver supports GEM object mapping, mmap calls on the DRM file
- * descriptor will end up here.
- *
- * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
- * contain the fake offset we created when the GTT map ioctl was called on
- * the object) and map it with a call to drm_gem_mmap_obj().
- *
- * If the caller is not granted access to the buffer object, the mmap will fail
- * with EACCES. Please see the vma manager for more information.
+/*
+ * Look up a GEM object in offset space based on the exact start address. The
+ * caller must be granted access to the object. Returns a GEM object on success
+ * or a negative error code on failure. The returned GEM object needs to be
+ * released with drm_gem_object_put().
  */
-int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+static struct drm_gem_object *
+drm_gem_object_lookup_from_offset(struct file *filp, unsigned long start,
+				  unsigned long pages)
 {
 	struct drm_file *priv = filp->private_data;
 	struct drm_device *dev = priv->minor->dev;
 	struct drm_gem_object *obj = NULL;
 	struct drm_vma_offset_node *node;
-	int ret;
 
 	if (drm_dev_is_unplugged(dev))
-		return -ENODEV;
+		return ERR_PTR(-ENODEV);
 
 	drm_vma_offset_lock_lookup(dev->vma_offset_manager);
 	node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
-						  vma->vm_pgoff,
-						  vma_pages(vma));
+						  start, pages);
 	if (likely(node)) {
 		obj = container_of(node, struct drm_gem_object, vma_node);
 		/*
@@ -1235,14 +1226,89 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
 
 	if (!obj)
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
 
 	if (!drm_vma_node_is_allowed(node, priv)) {
 		drm_gem_object_put(obj);
-		return -EACCES;
+		return ERR_PTR(-EACCES);
 	}
 
-	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT,
+	return obj;
+}
+
+/**
+ * drm_gem_get_unmapped_area - get memory mapping region routine for GEM objects
+ * @filp: DRM file pointer
+ * @uaddr: User address hint
+ * @len: Mapping length
+ * @pgoff: Offset (in pages)
+ * @flags: Mapping flags
+ *
+ * If a driver supports GEM object mapping, before ending up in drm_gem_mmap(),
+ * mmap calls on the DRM file descriptor will first try to find a free linear
+ * address space large enough for a mapping. Since GEM objects are backed by
+ * shmem buffers, this should preferably be handled by the shmem virtual memory
+ * filesystem which can appropriately align addresses to huge page sizes when
+ * needed.
+ *
+ * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
+ * contain the fake offset we created) and call shmem_get_unmapped_area() with
+ * the right file pointer.
+ *
+ * If a GEM object is not available at the given offset or if the caller is not
+ * granted access to it, fall back to mm_get_unmapped_area().
+ */
+unsigned long drm_gem_get_unmapped_area(struct file *filp, unsigned long uaddr,
+					unsigned long len, unsigned long pgoff,
+					unsigned long flags)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	struct drm_gem_object *obj;
+	unsigned long ret;
+
+	obj = drm_gem_object_lookup_from_offset(filp, pgoff, len >> PAGE_SHIFT);
+	if (IS_ERR(obj))
+		return mm_get_unmapped_area(current->mm, filp, uaddr, len, 0,
+					    flags);
+
+	ret = shmem_get_unmapped_area(obj->filp, uaddr, len, 0, flags);
+
+	drm_gem_object_put(obj);
+
+	return ret;
+#else
+	return mm_get_unmapped_area(current->mm, filp, uaddr, len, 0, flags);
+#endif
+}
+EXPORT_SYMBOL(drm_gem_get_unmapped_area);
+
+/**
+ * drm_gem_mmap - memory map routine for GEM objects
+ * @filp: DRM file pointer
+ * @vma: VMA for the area to be mapped
+ *
+ * If a driver supports GEM object mapping, mmap calls on the DRM file
+ * descriptor will end up here.
+ *
+ * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
+ * contain the fake offset we created) and map it with a call to
+ * drm_gem_mmap_obj().
+ *
+ * If the caller is not granted access to the buffer object, the mmap will fail
+ * with EACCES. Please see the vma manager for more information.
+ */
+int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct drm_gem_object *obj;
+	int ret;
+
+	obj = drm_gem_object_lookup_from_offset(filp, vma->vm_pgoff,
+						vma_pages(vma));
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	ret = drm_gem_mmap_obj(obj,
+			       drm_vma_node_size(&obj->vma_node) << PAGE_SHIFT,
 			       vma);
 
 	drm_gem_object_put(obj);

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 8d48d2af2649..7c8bd67d087c 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -469,6 +469,7 @@ struct drm_gem_object {
	.poll		= drm_poll,\
	.read		= drm_read,\
	.llseek		= noop_llseek,\
+	.get_unmapped_area	= drm_gem_get_unmapped_area,\
	.mmap		= drm_gem_mmap, \
	.fop_flags	= FOP_UNSIGNED_OFFSET
 
@@ -506,6 +507,9 @@ void drm_gem_vm_close(struct vm_area_struct *vma);
 int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
		     struct vm_area_struct *vma);
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+unsigned long drm_gem_get_unmapped_area(struct file *filp, unsigned long uaddr,
+					unsigned long len, unsigned long pgoff,
+					unsigned long flags);
 
 /**
  * drm_gem_object_get - acquire a GEM buffer object reference

diff --git a/mm/shmem.c b/mm/shmem.c
index e2c76a30802b..b2f41b430daa 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2915,6 +2915,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
		return addr;
	return inflated_addr;
 }
+EXPORT_SYMBOL_GPL(shmem_get_unmapped_area);
 
 #ifdef CONFIG_NUMA
 static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
-- 
2.47.3