Date: Fri, 8 Apr 2022 22:07:07 +0800
From: Chao Peng
To: Sean Christopherson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
    Jeff Layton, "J. Bruce Fields", Andrew Morton, Mike Rapoport,
    Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
    Yu Zhang,
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: Re: [PATCH v5 08/13] KVM: Use memfile_pfn_ops to obtain pfn for private pages Message-ID: <20220408140707.GG57095@chaop.bj.intel.com> Reply-To: Chao Peng References: <20220310140911.50924-1-chao.p.peng@linux.intel.com> <20220310140911.50924-9-chao.p.peng@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.9.4 (2018-02-28) X-Stat-Signature: zhk3p66sbjw81dxw3kkufikmotp6shrn X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 0A473A0004 Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=PFpE7Jnv; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf25.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.20) smtp.mailfrom=chao.p.peng@linux.intel.com X-Rspam-User: X-HE-Tag: 1649426847-348465 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Mar 28, 2022 at 11:56:06PM +0000, Sean Christopherson wrote: > On Thu, Mar 10, 2022, Chao Peng wrote: > > @@ -2217,4 +2220,34 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) > > /* Max number of entries allowed for each kvm dirty ring */ > > #define KVM_DIRTY_RING_MAX_ENTRIES 65536 > > > > +#ifdef CONFIG_MEMFILE_NOTIFIER > > +static inline long kvm_memfile_get_pfn(struct kvm_memory_slot *slot, gfn_t gfn, > > + int *order) > > +{ > > + pgoff_t index = gfn - slot->base_gfn + > > + (slot->private_offset >> PAGE_SHIFT); > > This is broken for 32-bit kernels, where gfn_t is a 64-bit value but pgoff_t is a > 32-bit value. There's no reason to support this for 32-bit kernels, so... > > The easiest fix, and likely most maintainable for other code too, would be to > add a dedicated CONFIG for private memory, and then have KVM check that for all > the memfile stuff. x86 can then select it only for 64-bit kernels, and in turn > select MEMFILE_NOTIFIER iff private memory is supported. Looks good. > > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig > index ca7b2a6a452a..ee9c8c155300 100644 > --- a/arch/x86/kvm/Kconfig > +++ b/arch/x86/kvm/Kconfig > @@ -48,7 +48,9 @@ config KVM > select SRCU > select INTERVAL_TREE > select HAVE_KVM_PM_NOTIFIER if PM > - select MEMFILE_NOTIFIER > + select HAVE_KVM_PRIVATE_MEM if X86_64 > + select MEMFILE_NOTIFIER if HAVE_KVM_PRIVATE_MEM > + > help > Support hosting fully virtualized guest machines using hardware > virtualization extensions. 
>
> And in addition to replacing checks on CONFIG_MEMFILE_NOTIFIER, the probing of
> whether or not KVM_MEM_PRIVATE is allowed can be:
>
> @@ -1499,23 +1499,19 @@ static void kvm_replace_memslot(struct kvm *kvm,
>         }
>  }
>
> -bool __weak kvm_arch_private_memory_supported(struct kvm *kvm)
> -{
> -	return false;
> -}
> -
>  static int check_memory_region_flags(struct kvm *kvm,
>                                       const struct kvm_userspace_memory_region *mem)
>  {
>         u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
>
> -       if (kvm_arch_private_memory_supported(kvm))
> -               valid_flags |= KVM_MEM_PRIVATE;
> -
>  #ifdef __KVM_HAVE_READONLY_MEM
>         valid_flags |= KVM_MEM_READONLY;
>  #endif
>
> +#ifdef CONFIG_KVM_HAVE_PRIVATE_MEM
> +       valid_flags |= KVM_MEM_PRIVATE;
> +#endif
> +
>         if (mem->flags & ~valid_flags)
>                 return -EINVAL;
>
> > +
> > +	return slot->pfn_ops->get_lock_pfn(file_inode(slot->private_file),
> > +					   index, order);
>
> In a similar vein, get_lock_pfn() shouldn't return a "long".  KVM likely won't use
> these APIs on 32-bit kernels, but that may not hold true for other subsystems, and
> this code is confusing and technically wrong.  The pfns for struct page squeeze
> into an unsigned long because PAE support is capped at 64gb, but casting to a
> signed long could result in a pfn with bit 31 set being misinterpreted as an error.
>
> Even returning an "unsigned long" for the pfn is wrong.  It "works" for the shmem
> code because shmem deals only with struct page, but it's technically wrong, especially
> since one of the selling points of this approach is that it can work without struct
> page.

Hmmm, that's correct.

>
> OUT params suck, but I don't see a better option than having the return value be
> 0/-errno, with "pfn_t *pfn" for the resolved pfn.
>
> > +}
> > +
> > +static inline void kvm_memfile_put_pfn(struct kvm_memory_slot *slot,
> > +				       kvm_pfn_t pfn)
> > +{
> > +	slot->pfn_ops->put_unlock_pfn(pfn);
> > +}
> > +
> > +#else
> > +static inline long kvm_memfile_get_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
> > +				       int *order)
> > +{
>
> This should be a WARN_ON() as its usage should be guarded by a KVM_PRIVATE_MEM
> check, and private memslots should be disallowed in this case.
>
> Alternatively, it might be a good idea to #ifdef these out entirely and not provide
> stubs.  That'd likely require a stub or two in arch code, but overall it might be
> less painful in the long run, e.g. would force us to more carefully consider the
> touch points for private memory.  Definitely not a requirement, just an idea.

Makes sense, let me try.
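
To make the direction concrete, below is the rough shape I currently have in
mind for the kvm_host.h helpers. It is only a sketch on top of this series, not
compile tested; it assumes the symbol ends up being HAVE_KVM_PRIVATE_MEM (your
examples above use both spellings) and that memfile_pfn_ops->get_lock_pfn is
reworked to return 0/-errno and fill a "pfn_t *pfn" OUT param as you suggest:

#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
static inline int kvm_memfile_get_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
				      pfn_t *pfn, int *order)
{
	pgoff_t index = gfn - slot->base_gfn +
			(slot->private_offset >> PAGE_SHIFT);

	/* get_lock_pfn() reworked to return 0/-errno and fill *pfn. */
	return slot->pfn_ops->get_lock_pfn(file_inode(slot->private_file),
					   index, pfn, order);
}

static inline void kvm_memfile_put_pfn(struct kvm_memory_slot *slot,
				       kvm_pfn_t pfn)
{
	slot->pfn_ops->put_unlock_pfn(pfn);
}
#else
static inline int kvm_memfile_get_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
				      pfn_t *pfn, int *order)
{
	/*
	 * Private memslots are rejected when the config is off, so this
	 * should never be reached.
	 */
	WARN_ON_ONCE(1);
	return -EOPNOTSUPP;
}

static inline void kvm_memfile_put_pfn(struct kvm_memory_slot *slot,
				       kvm_pfn_t pfn)
{
	WARN_ON_ONCE(1);
}
#endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */

If #ifdef'ing the call sites out entirely turns out to be cleaner I will drop
the stubs as you suggest.

Thanks,
Chao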