Date: Mon, 8 Jan 2024 16:07:57 +0300
From: kirill.shutemov@linux.intel.com
To: mhklinux@outlook.com
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com, hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com, ardb@kernel.org, jroedel@suse.de, seanjc@google.com, rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev, linux-hyperv@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 1/3] x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback
Message-ID: <20240108130757.xryzva4fkmgeekhu@box.shutemov.name>
References: <20240105183025.225972-1-mhklinux@outlook.com> <20240105183025.225972-2-mhklinux@outlook.com>
In-Reply-To: <20240105183025.225972-2-mhklinux@outlook.com>
On Fri, Jan 05, 2024 at 10:30:23AM -0800, mhkelley58@gmail.com wrote:
> From: Michael Kelley
> 
> In preparation for temporarily marking pages not present during a
> transition between encrypted and decrypted, use slow_virt_to_phys()
> in the hypervisor callback. As long as the PFN is correct,
> slow_virt_to_phys() works even if the leaf PTE is not present.
> The existing functions that depend on vmalloc_to_page() all
> require that the leaf PTE be marked present, so they don't work.
> 
> Update the comments for slow_virt_to_phys() to note this broader usage
> and the requirement to work even if the PTE is not marked present.
> 
> Signed-off-by: Michael Kelley
> ---
>  arch/x86/hyperv/ivm.c        |  9 ++++++++-
>  arch/x86/mm/pat/set_memory.c | 13 +++++++++----
>  2 files changed, 17 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
> index 02e55237d919..8ba18635e338 100644
> --- a/arch/x86/hyperv/ivm.c
> +++ b/arch/x86/hyperv/ivm.c
> @@ -524,7 +524,14 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
>  		return false;
>  
>  	for (i = 0, pfn = 0; i < pagecount; i++) {
> -		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
> +		/*
> +		 * Use slow_virt_to_phys() because the PRESENT bit has been
> +		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
> +		 * without the PRESENT bit while virt_to_hvpfn() or similar
> +		 * does not.
> +		 */
> +		pfn_array[pfn] = slow_virt_to_phys((void *)kbuffer +
> +				i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;

I think you can make it much more readable by introducing a few variables:

	virt = (void *)kbuffer + i * HV_HYP_PAGE_SIZE;
	phys = slow_virt_to_phys(virt);
	pfn_array[pfn] = phys >> HV_HYP_PAGE_SHIFT;

>  		pfn++;
>  
>  		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bda9f129835e..8e19796e7ce5 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -755,10 +755,15 @@ pmd_t *lookup_pmd_address(unsigned long address)
>   * areas on 32-bit NUMA systems. The percpu areas can
>   * end up in this kind of memory, for instance.
>   *
> - * This could be optimized, but it is only intended to be
> - * used at initialization time, and keeping it
> - * unoptimized should increase the testing coverage for
> - * the more obscure platforms.
> + * It is also used in callbacks for CoCo VM page transitions between private
> + * and shared because it works when the PRESENT bit is not set in the leaf
> + * PTE. In such cases, the state of the PTEs, including the PFN, is otherwise
> + * known to be valid, so the returned physical address is correct. The similar
> + * function vmalloc_to_pfn() can't be used because it requires the PRESENT bit.
> + *
> + * This could be optimized, but it is only used in paths that are not perf
> + * sensitive, and keeping it unoptimized should increase the testing coverage
> + * for the more obscure platforms.
>   */
>  phys_addr_t slow_virt_to_phys(void *__virt_addr)
>  {
> -- 
> 2.25.1
> 

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
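[Editor's note: to make the suggested refactor concrete, here is a minimal userspace sketch of the loop with the named intermediate variables. `slow_virt_to_phys()` is stubbed with a hypothetical fixed virt-to-phys offset purely for illustration; in the kernel it walks the page tables and, per this patch, works even when the leaf PTE's PRESENT bit is clear. `HV_HYP_PAGE_SHIFT`/`HV_HYP_PAGE_SIZE` mirror the 4 KiB Hyper-V page size.]

```c
#include <stdint.h>

#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)

typedef uint64_t phys_addr_t;

/*
 * Stub for illustration only: pretend virt-to-phys is a fixed offset.
 * The real kernel function resolves the mapping via the page tables.
 */
static phys_addr_t slow_virt_to_phys(void *virt)
{
	return (phys_addr_t)(uintptr_t)virt + 0x100000000ULL;
}

/*
 * The loop body restructured as the review suggests: name the
 * intermediate virtual and physical addresses instead of nesting
 * the call and the shift into one expression.
 */
static void fill_pfn_array(unsigned long kbuffer, int pagecount,
			   uint64_t *pfn_array)
{
	int i, pfn;

	for (i = 0, pfn = 0; i < pagecount; i++) {
		void *virt = (void *)(kbuffer + i * HV_HYP_PAGE_SIZE);
		phys_addr_t phys = slow_virt_to_phys(virt);

		pfn_array[pfn] = phys >> HV_HYP_PAGE_SHIFT;
		pfn++;
	}
}
```

With page-aligned `kbuffer` and an offset-style stub, consecutive iterations yield consecutive PFNs, which makes the `>> HV_HYP_PAGE_SHIFT` conversion easy to sanity-check in isolation.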