Subject: Re: [PATCH 5/7] nouveau/dmem: Refactor nouveau_dmem_fault_copy_one()
From: Lyude Paul
To: Alistair Popple, linux-mm@kvack.org, Andrew Morton
Cc: Michael Ellerman, Nicholas Piggin, Felix Kuehling, Alex Deucher, Christian König, "Pan, Xinhui", David Airlie, Daniel Vetter, Ben Skeggs, Karol Herbst, Ralph Campbell, "Matthew Wilcox (Oracle)", Alex Sierra, John Hubbard, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Jason Gunthorpe, Dan Williams
Date: Mon, 26 Sep 2022 17:29:55 -0400
Organization: Red Hat Inc.
On Mon, 2022-09-26 at 16:03 +1000, Alistair Popple wrote:
> nouveau_dmem_fault_copy_one() is used during handling of CPU faults via
> the migrate_to_ram() callback and is used to copy data from GPU to CPU
> memory. It is currently specific to fault handling, however a future
> patch implementing eviction of data during teardown needs similar
> functionality.
>
> Refactor out the core functionality so that it is not specific to fault
> handling.
>
> Signed-off-by: Alistair Popple
> ---
>  drivers/gpu/drm/nouveau/nouveau_dmem.c | 59 +++++++++++++--------------
>  1 file changed, 29 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index f9234ed..66ebbd4 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -139,44 +139,25 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
>      }
>  }
>
> -static vm_fault_t nouveau_dmem_fault_copy_one(struct nouveau_drm *drm,
> -        struct vm_fault *vmf, struct migrate_vma *args,
> -        dma_addr_t *dma_addr)
> +static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
> +        struct page *dpage, dma_addr_t *dma_addr)
>  {
>      struct device *dev = drm->dev->dev;
> -    struct page *dpage, *spage;
> -    struct nouveau_svmm *svmm;
> -
> -    spage = migrate_pfn_to_page(args->src[0]);
> -    if (!spage || !(args->src[0] & MIGRATE_PFN_MIGRATE))
> -        return 0;
>
> -    dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
> -    if (!dpage)
> -        return VM_FAULT_SIGBUS;
>      lock_page(dpage);
>
>      *dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
>      if (dma_mapping_error(dev, *dma_addr))
> -        goto error_free_page;
> +        return -EIO;
>
> -    svmm = spage->zone_device_data;
> -    mutex_lock(&svmm->mutex);
> -    nouveau_svmm_invalidate(svmm, args->start, args->end);
>      if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
> -            NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage)))
> -        goto error_dma_unmap;
> -    mutex_unlock(&svmm->mutex);
> +            NOUVEAU_APER_VRAM,
> +            nouveau_dmem_page_addr(spage))) {
> +        dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
> +        return -EIO;
> +    }

Feel free to just align this with the opening '('; as long as the line doesn't go over 100 characters it doesn't really matter IMHO, and it would look nicer that way.
Otherwise:

Reviewed-by: Lyude Paul

Will look at the other patch in a moment.

>
> -    args->dst[0] = migrate_pfn(page_to_pfn(dpage));
>      return 0;
> -
> -error_dma_unmap:
> -    mutex_unlock(&svmm->mutex);
> -    dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
> -error_free_page:
> -    __free_page(dpage);
> -    return VM_FAULT_SIGBUS;
>  }
>
>  static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
> @@ -184,9 +165,11 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
>      struct nouveau_drm *drm = page_to_drm(vmf->page);
>      struct nouveau_dmem *dmem = drm->dmem;
>      struct nouveau_fence *fence;
> +    struct nouveau_svmm *svmm;
> +    struct page *spage, *dpage;
>      unsigned long src = 0, dst = 0;
>      dma_addr_t dma_addr = 0;
> -    vm_fault_t ret;
> +    vm_fault_t ret = 0;
>      struct migrate_vma args = {
>          .vma = vmf->vma,
>          .start = vmf->address,
> @@ -207,9 +190,25 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
>      if (!args.cpages)
>          return 0;
>
> -    ret = nouveau_dmem_fault_copy_one(drm, vmf, &args, &dma_addr);
> -    if (ret || dst == 0)
> +    spage = migrate_pfn_to_page(src);
> +    if (!spage || !(src & MIGRATE_PFN_MIGRATE))
> +        goto done;
> +
> +    dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
> +    if (!dpage)
> +        goto done;
> +
> +    dst = migrate_pfn(page_to_pfn(dpage));
> +
> +    svmm = spage->zone_device_data;
> +    mutex_lock(&svmm->mutex);
> +    nouveau_svmm_invalidate(svmm, args.start, args.end);
> +    ret = nouveau_dmem_copy_one(drm, spage, dpage, &dma_addr);
> +    mutex_unlock(&svmm->mutex);
> +    if (ret) {
> +        ret = VM_FAULT_SIGBUS;
>          goto done;
> +    }
>
>      nouveau_fence_new(dmem->migrate.chan, false, &fence);
>      migrate_vma_pages(&args);

-- 
Cheers,
 Lyude Paul (she/her)
 Software Engineer at Red Hat