From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sagi Shahar <sagis@google.com>
Date: Tue, 23 Jul 2024 14:55:37 -0500
Subject: Re: [RFC PATCH v5 01/29] KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
In-Reply-To: <516247d2-7ba8-4b3e-8325-8c6dd89b929e@linux.intel.com>
References: <20231212204647.2170650-1-sagis@google.com>
 <20231212204647.2170650-2-sagis@google.com>
 <516247d2-7ba8-4b3e-8325-8c6dd89b929e@linux.intel.com>
To: Binbin Wu
Cc: linux-kselftest@vger.kernel.org, Ackerley Tng, Erdem Aktas,
 Isaku Yamahata, Ryan Afranji, Sean Christopherson, Paolo Bonzini,
 Shuah Khan, Peter Gonda, Haibo Xu, Chao Peng, Vishal Annapurve,
 Roger Wang, Vipin Sharma, jmattson@google.com, dmatlack@google.com,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Tue, Feb 20, 2024 at 7:43 PM Binbin Wu wrote:
>
>
>
> On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> > From: Ackerley Tng
> >
> > One-to-one GVA to GPA mappings can be used in the guest to set up boot
> > sequences during which paging is enabled, hence requiring a transition
> > from using physical to virtual addresses in consecutive instructions.
> >
> > Signed-off-by: Ackerley Tng
> > Signed-off-by: Ryan Afranji
> > Signed-off-by: Sagi Shahar
> > ---
> >   .../selftests/kvm/include/kvm_util_base.h |  2 +
> >   tools/testing/selftests/kvm/lib/kvm_util.c | 63 ++++++++++++++++---
> >   2 files changed, 55 insertions(+), 10 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > index 1426e88ebdc7..c2e5c5f25dfc 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > @@ -564,6 +564,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> >   vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> >                               enum kvm_mem_region_type type);
> >   vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> > +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> > +                               vm_vaddr_t vaddr_min, uint32_t data_memslot);
> >   vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
> >   vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
> >                                    enum kvm_mem_region_type type);
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index febc63d7a46b..4f1ae0f1eef0 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -1388,17 +1388,37 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
> >       return pgidx_start * vm->page_size;
> >   }
> >
> > +/*
> > + * VM Virtual Address Allocate Shared/Encrypted
> > + *
> > + * Input Args:
> > + *   vm - Virtual Machine
> > + *   sz - Size in bytes
> > + *   vaddr_min - Minimum starting virtual address
> > + *   paddr_min - Minimum starting physical address
> > + *   data_memslot - memslot number to allocate in
> > + *   encrypt - Whether the region should be handled as encrypted
> > + *
> > + * Output Args: None
> > + *
> > + * Return:
> > + *   Starting guest virtual address
> > + *
> > + * Allocates at least sz bytes within the virtual address space of the vm
> > + * given by vm. The allocated bytes are mapped to a virtual address >=
> > + * the address given by vaddr_min. Note that each allocation uses a
> > + * a unique set of pages, with the minimum real allocation being at least
> > + * a page.
> > + */
> >   static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> > -                                      vm_vaddr_t vaddr_min,
> > -                                      enum kvm_mem_region_type type,
> > -                                      bool encrypt)
> > +                                      vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> > +                                      uint32_t data_memslot, bool encrypt)
> >   {
> >       uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
> >
> >       virt_pgd_alloc(vm);
> > -     vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
> > -                                            KVM_UTIL_MIN_PFN * vm->page_size,
> > -                                            vm->memslots[type], encrypt);
> > +     vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages, paddr_min,
> > +                                            data_memslot, encrypt);
> >
> >       /*
> >        * Find an unused range of virtual page addresses of at least
> > @@ -1408,8 +1428,7 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> >
> >       /* Map the virtual pages. */
> >       for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
> > -         pages--, vaddr += vm->page_size, paddr += vm->page_size) {
> > -
> > +         pages--, vaddr += vm->page_size, paddr += vm->page_size) {
> >           virt_pg_map(vm, vaddr, paddr);
> >
> >           sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
> > @@ -1421,12 +1440,16 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> >   vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> >                               enum kvm_mem_region_type type)
> >   {
> > -     return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, vm->protected);
> > +     return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> > +                               KVM_UTIL_MIN_PFN * vm->page_size,
> > +                               vm->memslots[type], vm->protected);
> >   }
> >
> >   vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
> >   {
> > -     return ____vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA, false);
> > +     return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> > +                               KVM_UTIL_MIN_PFN * vm->page_size,
> > +                               vm->memslots[MEM_REGION_TEST_DATA], false);
> >   }
> >
> >   /*
> > @@ -1453,6 +1476,26 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
> >       return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
> >   }
> >
> > +/**
> > + * Allocate memory in @vm of size @sz in memslot with id @data_memslot,
> > + * beginning with the desired address of @vaddr_min.
> > + *
> > + * If there isn't enough memory at @vaddr_min, find the next possible address
> > + * that can meet the requested size in the given memslot.
> > + *
> > + * Return the address where the memory is allocated.
> > + */
> > +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> > +                              vm_vaddr_t vaddr_min, uint32_t data_memslot)
> > +{
> > +     vm_vaddr_t gva = ____vm_vaddr_alloc(vm, sz, vaddr_min,
> > +                                         (vm_paddr_t)vaddr_min, data_memslot,
> > +                                         vm->protected);
> > +     TEST_ASSERT_EQ(gva, addr_gva2gpa(vm, gva));
>
> How can this be guaranteed?
> For ____vm_vaddr_alloc(), generally there is no enforcement of the
> identity of virtual and physical addresses.

The problem is that if the allocation isn't 1-to-1, the tests won't work,
so we figured it's better to fail early. In practice the way this is used
generally guarantees that the mapping can be 1-to-1, since we create these
mappings at an early stage, while the matching physical range is still
free; see the sketch at the end of this mail.

>
> > +
> > +     return gva;
> > +}
> > +
> >   /*
> >    * VM Virtual Address Allocate Pages
> >    *
> >
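As a concrete, purely hypothetical illustration of the "early stage" point:
the sketch below allocates an identity-mapped region immediately after VM
creation, before any other allocation can claim the matching GPA range. The
header choice, the constants, the memslot argument, and the helper name
alloc_identity_region() are assumptions made for illustration, not code
from this series; only vm_vaddr_alloc_1to1() comes from the patch.

#include "kvm_util.h"

/* Assumed values, chosen only for illustration. */
#define IDENTITY_GVA	0x400000UL	/* desired GVA, also used as paddr_min */
#define IDENTITY_SIZE	0x8000UL	/* size of the region in bytes */

static vm_vaddr_t alloc_identity_region(struct kvm_vm *vm, uint32_t memslot)
{
	/*
	 * Called before any other vaddr/paddr allocation, so the physical
	 * pages starting at IDENTITY_GVA are still free and the allocator
	 * can satisfy paddr_min == vaddr_min; the TEST_ASSERT_EQ(gva,
	 * addr_gva2gpa(vm, gva)) inside vm_vaddr_alloc_1to1() then holds.
	 */
	return vm_vaddr_alloc_1to1(vm, IDENTITY_SIZE, IDENTITY_GVA, memslot);
}

If a test instead calls this after other regions have been allocated, the
requested physical range may already be taken, the allocator falls back to
a higher paddr, and the assertion fails early rather than letting the boot
sequence silently run on a non-identity mapping.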