From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 13 Jun 2023 20:40:58 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: David Hildenbrand
Cc: "Kasireddy, Vivek", Mike Kravetz, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, dri-devel@lists.freedesktop.org,
	qemu-devel@nongnu.org, Hugh Dickins, Gerd Hoffmann,
	"Kim, Dongwon", "Chang, Junxiao", kirill.shutemov@linux.intel.com,
	"Hocko, Michal", jmarchan@redhat.com, muchun.song@linux.dev,
	James Houghton, Greg Kroah-Hartman, Andrew Morton,
	stable@vger.kernel.org
Subject: Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'
In-Reply-To: <676ee47d-8ca0-94c4-7454-46e9915ea36a@redhat.com>
Message-ID: <5dd5b94c-7bf-4de-40db-aeea8aa7b45e@google.com>
References: <20230608204927.88711-1-mike.kravetz@oracle.com> <281caf4f-25da-3a73-554b-4fb252963035@redhat.com> <676ee47d-8ca0-94c4-7454-46e9915ea36a@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 13 Jun 2023, David Hildenbrand wrote:

> On 13.06.23 10:26, Kasireddy, Vivek wrote:
> >> On 12.06.23 09:10, Kasireddy, Vivek wrote:
> >>> Sorry for the late reply; I just got back from
> >>> vacation. If it is unsafe to directly use the subpages of a hugetlb
> >>> page, then reverting this patch seems like the only option for
> >>> addressing this issue immediately. So, this patch is
> >>> Acked-by: Vivek Kasireddy
> >>>
> >>> As far as the use-case is concerned, there are two main users of the
> >>> udmabuf driver: Qemu and CrosVM VMMs. However, it appears Qemu is the
> >>> only one that uses hugetlb pages (when hugetlb=on is set) as the
> >>> backing store for Guest (Linux, Android and Windows) system memory.
> >>> The main goal is to share the pages associated with the Guest
> >>> allocated framebuffer (FB) with the Host GPU driver and other
> >>> components in a zero-copy way. To that end, the guest GPU driver
> >>> (virtio-gpu) allocates 4k size pages (associated with the FB) and
> >>> pins them before sharing the (guest) physical (or dma) addresses
> >>> (and lengths) with Qemu. Qemu then translates the addresses into
> >>> file offsets and shares these offsets with udmabuf.
> >>
> >> Is my understanding correct, that we can effectively long-term pin
> >> (worse than mlock) 64 MiB per UDMABUF_CREATE, allowing eventually !root
> > The 64 MiB limit is the theoretical upper bound that we have not seen
> > hit in practice. Typically, for a 1920x1080 resolution (commonly used
> > in Guests), the size of the FB is ~8 MB (1920x1080x4). And, most
> > modern Graphics compositors flip between two FBs.
> >
>
> Okay, but users with privileges to open that file can just create as
> many as they want? I think I'll have to play with it.
>
> >> users
> >>
> >> ll /dev/udmabuf
> >> crw-rw---- 1 root kvm 10, 125 12. Jun 08:12 /dev/udmabuf
> >>
> >> to bypass their effective MEMLOCK limit, fragmenting physical memory
> >> and breaking swap?
> > Right, it does not look like the mlock limits are honored.
> >
>
> That should be added.

Agreed.
> >>
> >> Regarding the udmabuf_vm_fault(), I assume we're mapping pages we
> >> obtained from the memfd ourselves into a special VMA (mmap() of the
> > mmap operation is really needed only if any component on the Host
> > needs CPU access to the buffer. But in most scenarios, we try to
> > ensure direct GPU access (h/w acceleration via gl) to these pages.
> >
> >> udmabuf). I'm not sure how well shmem pages are prepared for getting
> >> mapped by someone else into an arbitrary VMA (page->index?).
> > Most drm/gpu drivers use shmem pages as the backing store for FBs and
> > other buffers and also provide mmap capability. What concerns do you
> > see with this approach?
>
> Are these mmaping the pages the way udmabuf maps these pages (IOW,
> on-demand fault where we core-mm will adjust the mapcount etc)?
>
> Skimming over at shmem_read_mapping_page() users, I assume most of them
> use a VM_PFNMAP mapping (or don't mmap them at all), where we won't be
> messing with the struct page at all.
>
> (That might even allow you to mmap hugetlb sub-pages, because the
> struct page -- and mapcount -- will be ignored completely and not
> touched.)

You're well ahead of me: I didn't reach an understanding of whether or
not mapcount would get manipulated here - though if Junxiao's original
patch did fix the immediate hugetlb symptoms, presumably it is (and
without much point, since udmabuf holds on to that extra reference
which pins each page for the duration).

> >>
> >> ... also, just imagine someone doing FALLOC_FL_PUNCH_HOLE /
> >> ftruncate() on the memfd. What's mapped into the memfd no longer
> >> corresponds to what's pinned / mapped into the VMA.
> > IIUC, making use of the DMA_BUF_IOCTL_SYNC ioctl would help with any
> > coherency issues:
> > https://www.kernel.org/doc/html/v6.2/driver-api/dma-buf.html#c.dma_buf_sync
> >
>
> Would it as of now?
> udmabuf_create() pulls the shmem pages out of the memfd, not sure how
> DMA_BUF_IOCTL_SYNC would help to update that whenever the pages inside
> the memfd would change (e.g., FALLOC_FL_PUNCH_HOLE + realloc).
>
> But that's most probably simply "not supported".

Yes, the pages which udmabuf is holding would be the originals: they
will then be detached from the hole-punched file, and subsequent faults
or writes to that backing file (through shmem, rather than through
udmabuf) can fill in the holes with new, different pages. So long as
that's well understood, then it's not necessarily a disaster. I see
udmabuf asks for SEAL_SHRINK (I guess to keep away from SIGBUS), but
refuses SEAL_WRITE - so hole-punching remains permitted.

> >>
> >>
> >> Was linux-mm (and especially shmem maintainers, ccing Hugh) involved
> >> in the upstreaming of udmabuf?

Thanks for the Cc, David. No, I wasn't involved at all; but I probably
would not have understood their needs much better then than now. I
don't see anything obviously wrong with its use of shmem, aside from
the unlimited pinning of pages which you pointed out; and I'll tend to
assume that it's okay, from its five years of use. But certainly the
more recent addition of hugetlb was mistaken, and needs to be reverted.

> > It does not appear so from the link below although other key lists
> > were cc'd:
> > https://patchwork.freedesktop.org/patch/246100/?series=39879&rev=7

The i915 folks (looks like Daniel Vetter was involved there) have been
using shmem_read_mapping_page() for a very long time: but they take
care to register a shrinker and swap out under pressure, rather than
holding pins indefinitely.

I wonder, if we're taking MFD_HUGETLB away from them, whether this
would be another call for MFD_HUGEPAGE (shmem memfd using THPs):
https://lore.kernel.org/linux-mm/c140f56a-1aa3-f7ae-b7d1-93da7d5a3572@google.com/

And that series did also support F_MEM_LOCK, which could be used to
help with the accounting of the locked pages.
(But IIRC the necessary way of accounting locked pages changed just
afterwards - or was it just before? - so that old series may not show
what's needed today.)

I was happy with using fcntls in that series; but could not decide the
right restrictions for F_MEM_UNLOCK (how assured is a memlock if anyone
can unlock it?) - maybe F_MEM_UNLOCK should be refused while pins are
outstanding. But I digress.

Yes, please do revert that hugetlb usage from udmabuf.

Hugh

> That's unfortunate :(
>
> --
> Cheers,
>
> David / dhildenb