From: "Zhang, Dongsheng X" <dongsheng.x.zhang@intel.com>
Date: Thu, 21 Mar 2024 15:29:37 -0700
Subject: Re: [RFC PATCH v5 01/29] KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
To: Sagi Shahar, linux-kselftest@vger.kernel.org, Ackerley Tng, Ryan Afranji, Erdem Aktas, Isaku Yamahata
Cc: Sean Christopherson, Paolo Bonzini, Shuah Khan, Peter Gonda, Haibo Xu, Chao Peng, Vishal Annapurve, Roger Wang, Vipin Sharma, jmattson@google.com, dmatlack@google.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Message-ID: <797bfae3-6419-4a7a-991a-1d203691d2cb@intel.com>
In-Reply-To: <20231212204647.2170650-2-sagis@google.com>
References: <20231212204647.2170650-1-sagis@google.com> <20231212204647.2170650-2-sagis@google.com>

On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> From: Ackerley Tng
>
> One-to-one GVA to GPA mappings can be used in the guest to set up boot
> sequences during which paging is enabled, hence requiring a transition
> from using physical to virtual addresses in consecutive instructions.
>
> Signed-off-by: Ackerley Tng
> Signed-off-by: Ryan Afranji
> Signed-off-by: Sagi Shahar
> ---
>  .../selftests/kvm/include/kvm_util_base.h |  2 +
>  tools/testing/selftests/kvm/lib/kvm_util.c | 63 ++++++++++++++++---
>  2 files changed, 55 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 1426e88ebdc7..c2e5c5f25dfc 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -564,6 +564,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
>  vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>  			    enum kvm_mem_region_type type);
>  vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> +			       vm_vaddr_t vaddr_min, uint32_t data_memslot);
>  vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
>  vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
>  				 enum kvm_mem_region_type type);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index febc63d7a46b..4f1ae0f1eef0 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1388,17 +1388,37 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
>  	return pgidx_start * vm->page_size;
>  }
>
> +/*
> + * VM Virtual Address Allocate Shared/Encrypted
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   sz - Size in bytes
> + *   vaddr_min - Minimum starting virtual address
> + *   paddr_min - Minimum starting physical address
> + *   data_memslot - memslot number to allocate in
> + *   encrypt - Whether the region should be handled as encrypted
> + *
> + * Output Args: None
> + *
> + * Return:
> + *   Starting guest virtual address
> + *
> + * Allocates at least sz bytes within the virtual address space of the vm
> + * given by vm. The allocated bytes are mapped to a virtual address >=
> + * the address given by vaddr_min. Note that each allocation uses a
> + * unique set of pages, with the minimum real allocation being at least
> + * a page.
> + */
>  static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> -				     vm_vaddr_t vaddr_min,
> -				     enum kvm_mem_region_type type,
> -				     bool encrypt)
> +				     vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> +				     uint32_t data_memslot, bool encrypt)
>  {
>  	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
>
>  	virt_pgd_alloc(vm);
> -	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
> -					       KVM_UTIL_MIN_PFN * vm->page_size,
> -					       vm->memslots[type], encrypt);
> +	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages, paddr_min,
> +					       data_memslot, encrypt);
>
>  	/*
>  	 * Find an unused range of virtual page addresses of at least
> @@ -1408,8 +1428,7 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
>
>  	/* Map the virtual pages. */
>  	for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
> -	     pages--, vaddr += vm->page_size, paddr += vm->page_size) {
> -
> +	     pages--, vaddr += vm->page_size, paddr += vm->page_size) {
>  		virt_pg_map(vm, vaddr, paddr);
>
>  		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
> @@ -1421,12 +1440,16 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
>  vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>  			    enum kvm_mem_region_type type)
>  {
> -	return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, vm->protected);
> +	return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +				  KVM_UTIL_MIN_PFN * vm->page_size,
> +				  vm->memslots[type], vm->protected);
>  }
>
>  vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
>  {
> -	return ____vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA, false);
> +	return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +				  KVM_UTIL_MIN_PFN * vm->page_size,
> +				  vm->memslots[MEM_REGION_TEST_DATA], false);
>  }
>
>  /*
> @@ -1453,6 +1476,26 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
>  	return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
>  }
>
> +/**
> + * Allocate memory in @vm of size @sz in memslot with id @data_memslot,
> + * beginning with the desired address of @vaddr_min.
> + *
> + * If there isn't enough memory at @vaddr_min, find the next possible address
> + * that can meet the requested size in the given memslot.
> + *
> + * Return the address where the memory is allocated.
> + */
> +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> +			       vm_vaddr_t vaddr_min, uint32_t data_memslot)
> +{
> +	vm_vaddr_t gva = ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +					    (vm_paddr_t)vaddr_min, data_memslot,
> +					    vm->protected);
> +	TEST_ASSERT_EQ(gva, addr_gva2gpa(vm, gva));

By "1to1", do you mean virtual address == physical address? The community tends to call this "identity mapping".
Examples (function names):
  create_identity_mapping_pagetables()
  hellcreek_setup_tc_identity_mapping()
  identity_mapping_add()

> +
> +	return gva;
> +}
> +
>  /*
>   * VM Virtual Address Allocate Pages
>   *