Date: Fri, 23 Jun 2023 12:35:45 -0400
From: Peter Xu <peterx@redhat.com>
To: "Kasireddy, Vivek"
Cc: David Hildenbrand, dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
 Mike Kravetz, Gerd Hoffmann, "Kim, Dongwon", Andrew Morton,
 James Houghton, Jerome Marchand, "Chang, Junxiao", "Kirill A. Shutemov",
 "Hocko, Michal", Muchun Song, Jason Gunthorpe, John Hubbard
Subject: Re: [PATCH v1 0/2] udmabuf: Add back support for mapping hugetlb pages
References: <20230622072710.3707315-1-vivek.kasireddy@intel.com>
 <6e429fbc-e0e6-53c0-c545-2e2cbbe757de@redhat.com>

On Fri, Jun 23, 2023 at 06:13:02AM +0000, Kasireddy, Vivek wrote:
> Hi David,
> 
> > > The first patch ensures that the mappings needed for handling mmap
> > > operation would be managed by using the pfn instead of struct page.
> > > The second patch restores support for mapping hugetlb pages where
> > > subpages of a hugepage are not directly used anymore (main reason
> > > for revert) and instead the hugetlb pages and the relevant offsets
> > > are used to populate the scatterlist for dma-buf export and for
> > > mmap operation.
> > >
> > > Testcase: default_hugepagesz=2M hugepagesz=2M hugepages=2500 options
> > > were passed to the Host kernel and Qemu was launched with these
> > > relevant options: qemu-system-x86_64 -m 4096m....
> > > -device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
> > > -display gtk,gl=on
> > > -object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
> > > -machine memory-backend=mem1
> > >
> > > Replacing -display gtk,gl=on with -display gtk,gl=off above would
> > > exercise the mmap handler.
> > 
> > While I think the VM_PFNMAP approach is much better and should fix that
> > issue at hand, I thought more about missing memlock support and realized
> > that we might have to fix something else. So I'm going to raise the
> > issue here.
> > 
> > I think udmabuf chose the wrong interface to do what it's doing, and that
> > makes it harder to fix it eventually.
> > 
> > Instead of accepting a range in a memfd, it should just have accepted a
> > user space address range and then used
> > pin_user_pages(FOLL_WRITE|FOLL_LONGTERM) to longterm-pin the pages
> > "officially".
> Udmabuf indeed started off by using a user space address range and GUP, but
> the dma-buf subsystem maintainer had concerns with that approach in v2.
> It also had support for mlock in that version. Here is v2 and the relevant
> conversation:
> https://patchwork.freedesktop.org/patch/210992/?series=39879&rev=2
> 
> > 
> > So what's the issue? udmabuf effectively pins pages longterm ("possibly
> > forever") simply by grabbing a reference on them. These pages might
> > easily reside in ZONE_MOVABLE or in MIGRATE_CMA pageblocks.
> > 
> > So what udmabuf does is break memory hotunplug and CMA, because it turns
> > pages that have to remain movable unmovable.
> > 
> > In the pin_user_pages(FOLL_LONGTERM) case we make sure to migrate these
> > pages. See mm/gup.c:check_and_migrate_movable_pages() and especially
> > folio_is_longterm_pinnable(). We'd probably have to implement something
> > similar for udmabuf, where we detect such unpinnable pages and migrate
> > them.
> The pages udmabuf pins are only those associated with Guest (GPU driver/virtio-gpu)
> resources (or buffers allocated and pinned from shmem via drm GEM). Some
> resources are short-lived, some are long-lived, and whenever a resource
> gets destroyed, the pages are unpinned. Also, not all resources have their pages
> pinned. The resource that is pinned for the longest duration is the FB, and that's
> because it is updated every ~16ms (assuming 1920x1080@60) by the Guest
> GPU driver. We can certainly pin/unpin the FB after it is accessed on the Host
> as a workaround, but I guess that may not be very efficient given the amount
> of churn it would create.
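[For context, the FOLL_LONGTERM pattern referred to above looks roughly like the following inside a kernel driver. This is a non-runnable sketch, not code from the patches under discussion; exact GUP signatures vary between kernel versions.]

```c
/* Sketch of a longterm pin in a kernel driver (signatures vary by version). */
struct page **pages;
long pinned;

pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
if (!pages)
	return -ENOMEM;

/*
 * FOLL_LONGTERM tells GUP the pin may be held "forever". GUP then
 * migrates any page sitting in ZONE_MOVABLE or a CMA pageblock to a
 * pinnable zone first (mm/gup.c:check_and_migrate_movable_pages(),
 * folio_is_longterm_pinnable()) before taking the pin, so the pin
 * cannot block memory hotunplug or CMA allocations.
 */
pinned = pin_user_pages_fast(user_addr, nr_pages,
			     FOLL_WRITE | FOLL_LONGTERM, pages);

/* ... populate the sg_table for dma-buf export from pages ... */

unpin_user_pages(pages, pinned);
kvfree(pages);
```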
> 
> Also, as far as migration or S3/S4 is concerned, my understanding is that all
> the Guest resources are destroyed and recreated again. So, wouldn't something
> similar happen during memory hotunplug?
> 
> > 
> > For example, pairing udmabuf with vfio (which pins pages using
> > pin_user_pages(FOLL_LONGTERM)) in QEMU will most probably not work in
> > all cases: if udmabuf longterm-pinned the pages "the wrong way", vfio
> > will fail to migrate them during FOLL_LONGTERM and consequently fail
> > pin_user_pages(). As long as udmabuf holds a reference on these pages,
> > that will never succeed.
> Dma-buf rules (for exporters) indicate that the pages only need to be pinned
> during the map_attachment phase (and until unmap_attachment happens).
> In other words, only when the sg_table is created by udmabuf. I guess one
> option would be to not hold any references during UDMABUF_CREATE and
> only grab references to the pages (as and when they get used) during this step.
> Would this help?

IIUC the refcount is needed; otherwise I don't see what protects the page
from being freed, and even reused elsewhere, before map_attachment().

It seems the previous concern with using GUP was mainly fork(); if this is it:

https://patchwork.freedesktop.org/patch/210992/?series=39879&rev=2#comment_414213

Could it also be guarded by just making sure the pages are MAP_SHARED when
creating the udmabuf, if fork() is a requirement of the feature?

I have a feeling that userspace still needs to always do the right thing to
make it work, even using pure PFN mappings.

For instance, what if the user app just punches a hole in the shmem/hugetlbfs
file after creating the udmabuf (I see that F_SEAL_SHRINK is required, but at
least not F_SEAL_WRITE with the current impl), and faults a new page into the
page cache?
Thanks,

> > 
> > There are *probably* more issues on the QEMU side when udmabuf is paired
> > with things like MADV_DONTNEED/FALLOC_FL_PUNCH_HOLE used for
> > virtio-balloon, virtio-mem, postcopy live migration, ... for example, in
> > the vfio/vdpa case we make sure that we disallow most of these, because
> > otherwise there can be an accidental "disconnect" between the pages
> > mapped into the VM (guest view) and the pages mapped into the IOMMU
> > (device view), for example, after a reboot.
> Ok; I am not sure if I can figure out if there is any acceptable way to address
> these issues, but given the current constraints associated with udmabuf, what
> do you suggest is the most reasonable way to deal with these problems you
> have identified?
> 
> Thanks,
> Vivek
> 
> > -- 
> > Cheers,
> > 
> > David / dhildenb
> 

-- 
Peter Xu