From: Fuad Tabba
Date: Wed, 19 Jun 2024 10:11:35 +0100
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
To: David Hildenbrand
Cc: John Hubbard, Elliot Berman, Andrew Morton, Shuah Khan, Matthew Wilcox, maz@kernel.org, kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, pbonzini@redhat.com, Jason Gunthorpe
In-Reply-To: <14bd145a-039f-4fb9-8598-384d6a051737@redhat.com>
References: <20240618-exclusive-gup-v1-0-30472a19c5d1@quicinc.com> <7fb8cc2c-916a-43e1-9edf-23ed35e42f51@nvidia.com> <14bd145a-039f-4fb9-8598-384d6a051737@redhat.com>
Hi John and David,

Thank you for your comments.

On Wed, Jun 19, 2024 at 8:38 AM David Hildenbrand wrote:
>
> Hi,
>
> On 19.06.24 04:44, John Hubbard wrote:
> > On 6/18/24 5:05 PM, Elliot Berman wrote:
> >> In arm64 pKVM and QuIC's Gunyah protected VM model, we want to support
> >> grabbing shmem user pages instead of using KVM's guestmemfd. These
> >> hypervisors provide a different isolation model than the CoCo
> >> implementations from x86. KVM's guest_memfd is focused on providing
> >> memory that is more isolated than AVF requires. Some specific examples
> >> include ability to pre-load data onto guest-private pages, dynamically
> >> sharing/isolating guest pages without copy, and (future) migrating
> >> guest-private pages. In sum of those differences after a discussion in
> >> [1] and at PUCK, we want to try to stick with existing shmem and extend
> >> GUP to support the isolation needs for arm64 pKVM and Gunyah.
>
> The main question really is, into which direction we want and can
> develop guest_memfd. At this point (after talking to Jason at LSF/MM), I
> wonder if guest_memfd should be our new target for guest memory, both
> shared and private. There are a bunch of issues to be sorted out though ...
>
> As there is interest from Red Hat into supporting hugetlb-style huge
> pages in confidential VMs for real-time workloads, and wasting memory is
> not really desired, I'm going to think some more about some of the
> challenges (shared+private in guest_memfd, mmap support, migration of
> !shared folios, hugetlb-like support, in-place shared<->private
> conversion, interaction with page pinning). Tricky.
>
> Ideally, we'd have one way to back guest memory for confidential VMs in
> the future.

As you know, initially we went down the guest_memfd() route and invested
a lot of time on it, including presenting our proposal at LPC last year.
But there was resistance to expanding it to support more than what was
initially envisioned, e.g., sharing guest memory in place, migration, and
maybe even huge pages, and its implications, such as being able to
conditionally mmap guest memory.

To be honest, personally (speaking only for myself, not necessarily for
Elliot and not for anyone else in the pKVM team), I still would prefer to
use guest_memfd(). I think that having one solution for confidential
computing that rules them all would be best.

But we do need to be able to share memory in place, have a plan for
supporting huge pages in the near future, and migration in the
not-too-distant future.

We are currently shipping pKVM in Android as it is, warts and all. We're
also working on upstreaming the rest of it. Currently, this is the main
blocker for us to be able to upstream the rest (same probably applies to
Gunyah).

> Can you comment on the bigger design goal here? In particular:

At a high level: We want to prevent a misbehaving host process from
crashing the system when attempting to access (deliberately or
accidentally) protected guest memory.

As it currently stands in pKVM and Gunyah, the hypervisor does prevent
the host from accessing (private) guest memory. In certain cases though,
if the host attempts to access that memory and is prevented by the
hypervisor (either out of ignorance or out of malice), the host kernel
wouldn't be able to recover, causing the whole system to crash.
guest_memfd() prevents such accesses by not allowing confidential memory
to be mapped at the host to begin with. This works fine for us, but
there's the issue of being able to share memory in place, which implies
mapping it conditionally (among others that I've mentioned).

The approach we're taking with this proposal is to instead restrict the
pinning of protected memory. If the host kernel can't pin the memory,
then a misbehaving process can't trick the host into accessing it.

> 1) Who would get the exclusive PIN and for which reason? When would we
>    pin, when would we unpin?

The exclusive pin would be acquired for private guest pages, in addition
to a normal pin. It would be released when the private memory is
released, or if the guest shares that memory.

> 2) What would happen if there is already another PIN? Can we deal with
>    speculative short-term PINs from GUP-fast that could introduce
>    errors?

The exclusive pin would be rejected if there's any other pin (exclusive
or normal). Normal pins would be rejected if there's an exclusive pin.

> 3) How can we be sure we don't need other long-term pins (IOMMUs?) in
>    the future?

I can't :)

> 4) Why are GUP pins special? How would one deal with other folio
>    references (e.g., simply mmap the shmem file into a different
>    process)?

Other references would crash the userspace process, but the host kernel
can handle them, and they shouldn't cause the system to crash. The way
things are now in Android/pKVM, a userspace process can crash the system
as a whole.

> 5) Why do you have to bother about anonymous pages at all (skimming over
>    some patches), when you really want to handle shmem differently only?

I'm not sure I understand the question. We use anonymous memory for pKVM.

> >> To that end, we introduce the concept of "exclusive GUP pinning",
> >> which enforces that only one pin of any kind is allowed when the
> >> FOLL_EXCLUSIVE flag is set. This behavior doesn't affect FOLL_GET or
> >> any other folio refcount operations that don't go through the
> >> FOLL_PIN path.
>
> So, FOLL_EXCLUSIVE would fail if there already is a PIN, but
> !FOLL_EXCLUSIVE would succeed even if there is a single PIN via
> FOLL_EXCLUSIVE? Or would the single FOLL_EXCLUSIVE pin make other pins
> that don't have FOLL_EXCLUSIVE set fail as well?

A FOLL_EXCLUSIVE pin would fail if there's any other pin. A normal pin
(!FOLL_EXCLUSIVE) would fail if there's a FOLL_EXCLUSIVE pin. It's the
PIN to end all pins!

> >>
> >> [1]: https://lore.kernel.org/all/20240319143119.GA2736@willie-the-truck/
> >>
>
> > Hi!
> >
> > Looking through this, I feel that some intangible threshold of "this is
> > too much overloading of page->_refcount" has been crossed. This is a very
> > specific feature, and it is using approximately one more bit than is
> > really actually "available"...
>
> Agreed.

We are gating it behind a CONFIG flag :)

Also, since pinning already overloads the refcount, having the exclusive
pin there helps in ensuring atomic accesses and avoiding races.

> > If we need a bit in struct page/folio, is this really the only way? Willy
> > is working towards getting us an entirely separate folio->pincount, I
> > suppose that might take too long? Or not?
>
> Before talking about how to implement it, I think we first have to learn
> whether that approach is what we want at all, and how it fits into the
> bigger picture of that use case.
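For reference, the pin-compatibility rules described above boil down to
something like the following toy model (plain userspace C with made-up
names; this is not the code from the RFC patches):

/*
 * Toy model of the intended semantics: a normal pin fails while an
 * exclusive pin is held, and an exclusive pin fails if any other pin
 * (normal or exclusive) already exists.
 */
#include <stdbool.h>
#include <stdio.h>

struct folio_state {
        int pincount;           /* normal (FOLL_PIN) pins held */
        bool exclusive;         /* an exclusive pin is held */
};

static bool try_pin(struct folio_state *f)
{
        if (f->exclusive)
                return false;   /* exclusive pin blocks normal pins */
        f->pincount++;
        return true;
}

static bool try_pin_exclusive(struct folio_state *f)
{
        if (f->exclusive || f->pincount)
                return false;   /* any existing pin blocks an exclusive pin */
        f->exclusive = true;
        return true;
}

static void unpin_exclusive(struct folio_state *f)
{
        f->exclusive = false;   /* e.g. guest released or shared the page */
}

int main(void)
{
        struct folio_state f = { 0 };

        printf("exclusive pin: %d\n", try_pin_exclusive(&f)); /* 1: no pins yet */
        printf("normal pin:    %d\n", try_pin(&f));           /* 0: exclusive held */
        unpin_exclusive(&f);
        printf("normal pin:    %d\n", try_pin(&f));           /* 1: exclusive gone */
        printf("exclusive pin: %d\n", try_pin_exclusive(&f)); /* 0: normal pin held */
        return 0;
}

In the actual series this state is tracked in the folio refcount
alongside the existing pin accounting, which is where John's "one more
bit" concern comes from.
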
> >
> > This feels like force-fitting a very specific feature (KVM/CoCo handling
> > of shmem pages) into a more general mechanism that is running low on
> > bits (gup/pup).
>
> Agreed.
>
> >
> > Maybe a good topic for LPC!
>
> The KVM track has plenty of guest_memfd topics, might be a good fit
> there. (or in the MM track, of course)

We are planning on submitting a proposal for LPC (see you in Vienna!) :)

Thanks again!
/fuad (and elliot*)

* Mistakes, errors, and unclear statements in this email are mine alone
though.

> --
> Cheers,
>
> David / dhildenb
>