From: Ryosuke Yasuoka <ryasuoka@redhat.com>
To: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
 airlied@gmail.com, simona@ffwll.ch, kraxel@redhat.com, gurchetansingh@chromium.org, olvaffe@gmail.com, akpm@linux-foundation.org, urezki@gmail.com, hch@infradead.org, dmitry.osipenko@collabora.com, jfalempe@redhat.com
Cc: Ryosuke Yasuoka <ryasuoka@redhat.com>, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux.dev, linux-mm@kvack.org
Subject: [PATCH drm-next 2/2] drm/virtio: Use atomic_vmap to work drm_panic in GUI
Date: Thu, 6 Mar 2025 00:25:54 +0900
Message-ID: <20250305152555.318159-3-ryasuoka@redhat.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250305152555.318159-1-ryasuoka@redhat.com>
References: <20250305152555.318159-1-ryasuoka@redhat.com>
MIME-Version: 1.0

virtio drm_panic supports only vmapped shmem BOs because there was no atomic vmap feature. Now that atomic_vmap is supported, drm_panic tries to vmap the buffer address if it is not already mapped.
Signed-off-by: Ryosuke Yasuoka <ryasuoka@redhat.com>
---
 drivers/gpu/drm/drm_gem.c              | 51 ++++++++++++++++++++++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 51 ++++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 14 +++++--
 include/drm/drm_gem.h                  |  1 +
 include/drm/drm_gem_shmem_helper.h     |  2 +
 5 files changed, 116 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee811764c3df..eebfaef3a52e 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -535,6 +535,57 @@ static void drm_gem_check_release_batch(struct folio_batch *fbatch)
 	cond_resched();
 }
 
+struct page **drm_gem_atomic_get_pages(struct drm_gem_object *obj)
+{
+	struct address_space *mapping;
+	struct page **pages;
+	struct folio *folio;
+	long i, j, npages;
+
+	if (WARN_ON(!obj->filp))
+		return ERR_PTR(-EINVAL);
+
+	/* This is the shared memory object that backs the GEM resource */
+	mapping = obj->filp->f_mapping;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = kmalloc_array(npages, sizeof(struct page *), GFP_ATOMIC);
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	mapping_set_unevictable(mapping);
+
+	i = 0;
+	while (i < npages) {
+		long nr;
+
+		folio = shmem_read_folio_gfp(mapping, i,
+					     GFP_ATOMIC);
+		if (IS_ERR(folio))
+			return ERR_PTR(-ENOMEM);
+		nr = min(npages - i, folio_nr_pages(folio));
+		for (j = 0; j < nr; j++, i++)
+			pages[i] = folio_file_page(folio, i);
+
+		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
+		 * correct region during swapin. Note that this requires
+		 * __GFP_DMA32 to be set in mapping_gfp_mask(inode->i_mapping)
+		 * so shmem can relocate pages during swapin if required.
+		 */
+		BUG_ON(mapping_gfp_constraint(mapping, __GFP_DMA32) &&
+		       (folio_pfn(folio) >= 0x00100000UL));
+	}
+
+	return pages;
+}
+
 /**
  * drm_gem_get_pages - helper to allocate backing pages for a GEM object
  * from shmem
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5ab351409312..789dfd726a36 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -186,6 +186,34 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
+static int drm_gem_shmem_atomic_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct page **pages;
+
+	pages = drm_gem_atomic_get_pages(obj);
+	if (IS_ERR(pages)) {
+		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
+			    PTR_ERR(pages));
+		shmem->pages_use_count = 0;
+		return PTR_ERR(pages);
+	}
+
+	/*
+	 * TODO: Allocating WC pages which are correctly flushed is only
+	 * supported on x86. Ideal solution would be a GFP_WC flag, which also
+	 * ttm_pool.c could use.
+	 */
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	shmem->pages = pages;
+
+	return 0;
+}
+
 static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
@@ -317,6 +345,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
+int drm_gem_shmem_atomic_vmap(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	int ret = 0;
+
+	pgprot_t prot = PAGE_KERNEL;
+
+	ret = drm_gem_shmem_atomic_get_pages(shmem);
+	if (ret)
+		return -ENOMEM;
+
+	if (shmem->map_wc)
+		prot = pgprot_writecombine(prot);
+	shmem->vaddr = atomic_vmap(shmem->pages, obj->size >> PAGE_SHIFT,
+				   VM_MAP, prot);
+	if (!shmem->vaddr)
+		return -ENOMEM;
+	iosys_map_set_vaddr(map, shmem->vaddr);
+
+	return 0;
+}
+
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a6f5a78f436a..2a977c5cf42a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -500,11 +500,19 @@ static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
 
 	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
 
-	/* Only support mapped shmem bo */
-	if (virtio_gpu_is_vram(bo) || bo->base.base.import_attach || !bo->base.vaddr)
+	if (virtio_gpu_is_vram(bo) || bo->base.base.import_attach)
 		return -ENODEV;
 
-	iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	/* try to vmap it if possible */
+	if (!bo->base.vaddr) {
+		int ret;
+
+		ret = drm_gem_shmem_atomic_vmap(&bo->base, &sb->map[0]);
+		if (ret)
+			return ret;
+	} else {
+		iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	}
 
 	sb->format = plane->state->fb->format;
 	sb->height = plane->state->fb->height;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..cfed66bc12ef 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -529,6 +529,7 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
+struct page **drm_gem_atomic_get_pages(struct drm_gem_object *obj);
 struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		       bool dirty, bool accessed);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d22e3fb53631..86a357945f42 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -105,6 +105,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_atomic_vmap(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map);
 int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		       struct iosys_map *map);
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-- 
2.48.1