From: Fuad Tabba <tabba@google.com>
Date: Tue, 6 Dec 2022 15:48:50 +0000
Subject: Re: [PATCH v10 5/9] KVM: Use gfn instead of hva for mmu_notifier_retry
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
    qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Arnd Bergmann, Naoya Horiguchi, Miaohe Lin, x86@kernel.org,
    "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
    Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price,
    "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang,
    "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
    dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com,
    aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
    Quentin Perret, Michael Roth, mhocko@suse.com, wei.w.wang@intel.com
In-Reply-To: <20221206115623.GB1216605@chaop.bj.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
    <20221202061347.1070246-6-chao.p.peng@linux.intel.com>
    <20221206115623.GB1216605@chaop.bj.intel.com>

Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Content-Type: text/plain; charset="UTF-8" X-Spamd-Result: default: False [-1.90 / 9.00]; BAYES_HAM(-6.00)[100.00%]; SORBS_IRL_BL(3.00)[209.85.208.177:from]; SUBJECT_HAS_UNDERSCORES(1.00)[]; RCVD_NO_TLS_LAST(0.10)[]; MIME_GOOD(-0.10)[text/plain]; BAD_REP_POLICIES(0.10)[]; TO_DN_SOME(0.00)[]; R_DKIM_ALLOW(0.00)[google.com:s=20210112]; RCPT_COUNT_TWELVE(0.00)[48]; FROM_EQ_ENVFROM(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; MIME_TRACE(0.00)[0:+]; R_SPF_ALLOW(0.00)[+ip4:209.85.128.0/17]; DMARC_POLICY_ALLOW(0.00)[google.com,reject]; TO_MATCH_ENVRCPT_SOME(0.00)[]; PREVIOUSLY_DELIVERED(0.00)[linux-mm@kvack.org]; DKIM_TRACE(0.00)[google.com:+]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; FROM_HAS_DN(0.00)[]; ARC_NA(0.00)[] X-Rspam-User: X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 9BB1A40008 X-Stat-Signature: 39pa554pk5ymdx4ik7zqw4kj5tn34ddq X-HE-Tag: 1670341768-589205 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Hi, On Tue, Dec 6, 2022 at 12:01 PM Chao Peng wrote: > > On Mon, Dec 05, 2022 at 09:23:49AM +0000, Fuad Tabba wrote: > > Hi Chao, > > > > On Fri, Dec 2, 2022 at 6:19 AM Chao Peng wrote: > > > > > > Currently in mmu_notifier invalidate path, hva range is recorded and > > > then checked against by mmu_notifier_retry_hva() in the page fault > > > handling path. However, for the to be introduced private memory, a page > > > fault may not have a hva associated, checking gfn(gpa) makes more sense. > > > > > > For existing hva based shared memory, gfn is expected to also work. The > > > only downside is when aliasing multiple gfns to a single hva, the > > > current algorithm of checking multiple ranges could result in a much > > > larger range being rejected. Such aliasing should be uncommon, so the > > > impact is expected small. 
> > >
> > > Suggested-by: Sean Christopherson
> > > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c   |  8 +++++---
> > >  include/linux/kvm_host.h | 33 +++++++++++++++++++++------------
> > >  virt/kvm/kvm_main.c      | 32 +++++++++++++++++++++++---------
> > >  3 files changed, 49 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 4736d7849c60..e2c70b5afa3e 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -4259,7 +4259,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
> > >                 return true;
> > >
> > >         return fault->slot &&
> > > -              mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
> > > +              mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
> > >  }
> > >
> > >  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > > @@ -6098,7 +6098,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
> > >
> > >         write_lock(&kvm->mmu_lock);
> > >
> > > -       kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
> > > +       kvm_mmu_invalidate_begin(kvm);
> > > +
> > > +       kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);
> > >
> > >         flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
> > >
> > > @@ -6112,7 +6114,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
> > >                 kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
> > >                                                    gfn_end - gfn_start);
> > >
> > > -       kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
> > > +       kvm_mmu_invalidate_end(kvm);
> > >
> > >         write_unlock(&kvm->mmu_lock);
> > >  }
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 02347e386ea2..3d69484d2704 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -787,8 +787,8 @@ struct kvm {
> > >         struct mmu_notifier mmu_notifier;
> > >         unsigned long mmu_invalidate_seq;
> > >         long mmu_invalidate_in_progress;
> > > -       unsigned long mmu_invalidate_range_start;
> > > -       unsigned long mmu_invalidate_range_end;
> > > +       gfn_t mmu_invalidate_range_start;
> > > +       gfn_t mmu_invalidate_range_end;
> > >  #endif
> > >         struct list_head devices;
> > >         u64 manual_dirty_log_protect;
> > > @@ -1389,10 +1389,9 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
> > >  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> > >  #endif
> > >
> > > -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> > > -                             unsigned long end);
> > > -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
> > > -                           unsigned long end);
> > > +void kvm_mmu_invalidate_begin(struct kvm *kvm);
> > > +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end);
> > > +void kvm_mmu_invalidate_end(struct kvm *kvm);
> > >
> > >  long kvm_arch_dev_ioctl(struct file *filp,
> > >                         unsigned int ioctl, unsigned long arg);
> > > @@ -1963,9 +1962,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
> > >         return 0;
> > >  }
> > >
> > > -static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
> > > +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm,
> > >                                            unsigned long mmu_seq,
> > > -                                          unsigned long hva)
> > > +                                          gfn_t gfn)
> > >  {
> > >         lockdep_assert_held(&kvm->mmu_lock);
> > >         /*
> > > @@ -1974,10 +1973,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
> > >          * that might be being invalidated. Note that it may include some false
> >
> > nit: "might be" (or) "is being"
> >
> > >          * positives, due to shortcuts when handing concurrent invalidations.
> >
> > nit: handling
>
> Both are existing code, but I can fix them either way.

That was just a nit, so please feel free to ignore it, especially if
fixing it might cause headaches in the future with merges.

> > >          */
> > > -       if (unlikely(kvm->mmu_invalidate_in_progress) &&
> > > -           hva >= kvm->mmu_invalidate_range_start &&
> > > -           hva < kvm->mmu_invalidate_range_end)
> > > -               return 1;
> > > +       if (unlikely(kvm->mmu_invalidate_in_progress)) {
> > > +               /*
> > > +                * Dropping mmu_lock after bumping mmu_invalidate_in_progress
> > > +                * but before updating the range is a KVM bug.
> > > +                */
> > > +               if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
> > > +                                kvm->mmu_invalidate_range_end == INVALID_GPA))
> >
> > INVALID_GPA is an x86-specific define in
> > arch/x86/include/asm/kvm_host.h, so this doesn't build on other
> > architectures. The obvious fix is to move it to
> > include/linux/kvm_host.h.
>
> Hmm, INVALID_GPA is defined as ZERO for x86. I'm not 100% confident this
> is the correct choice for other architectures, but after searching, it
> has not been used by any other architecture, so it should be safe to
> make it common.

With this fixed,

Reviewed-by: Fuad Tabba <tabba@google.com>

And, with the necessary work to port it to arm64 (tested on qemu/arm64):

Tested-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> Thanks,
> Chao
>
> > Cheers,
> > /fuad
> >
> > > +                       return 1;
> > > +
> > > +               if (gfn >= kvm->mmu_invalidate_range_start &&
> > > +                   gfn < kvm->mmu_invalidate_range_end)
> > > +                       return 1;
> > > +       }
> > > +
> > >         if (kvm->mmu_invalidate_seq != mmu_seq)
> > >                 return 1;
> > >         return 0;
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index b882eb2c76a2..ad55dfbc75d7 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -540,9 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
> > >
> > >  typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
> > >
> > > -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
> > > -                            unsigned long end);
> > > -
> > > +typedef void (*on_lock_fn_t)(struct kvm *kvm);
> > >  typedef void (*on_unlock_fn_t)(struct kvm *kvm);
> > >
> > >  struct kvm_hva_range {
> > > @@ -628,7 +626,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
> > >                         locked = true;
> > >                         KVM_MMU_LOCK(kvm);
> > >                         if (!IS_KVM_NULL_FN(range->on_lock))
> > > -                               range->on_lock(kvm, range->start, range->end);
> > > +                               range->on_lock(kvm);
> > > +
> > >                         if (IS_KVM_NULL_FN(range->handler))
> > >                                 break;
> > >                 }
> > > @@ -715,8 +714,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> > >         kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> > >  }
> > >
> > > -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> > > -                             unsigned long end)
> > > +void kvm_mmu_invalidate_begin(struct kvm *kvm)
> > >  {
> > >         /*
> > >          * The count increase must become visible at unlock time as no
> > > @@ -724,6 +722,17 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> > >          * count is also read inside the mmu_lock critical section.
> > >          */
> > >         kvm->mmu_invalidate_in_progress++;
> > > +
> > > +       if (likely(kvm->mmu_invalidate_in_progress == 1)) {
> > > +               kvm->mmu_invalidate_range_start = INVALID_GPA;
> > > +               kvm->mmu_invalidate_range_end = INVALID_GPA;
> > > +       }
> > > +}
> > > +
> > > +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
> > > +{
> > > +       WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
> > > +
> > >         if (likely(kvm->mmu_invalidate_in_progress == 1)) {
> > >                 kvm->mmu_invalidate_range_start = start;
> > >                 kvm->mmu_invalidate_range_end = end;
> > > @@ -744,6 +753,12 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> > >         }
> > >  }
> > >
> > > +static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> > > +{
> > > +       kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
> > > +       return kvm_unmap_gfn_range(kvm, range);
> > > +}
> > > +
> > >  static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > >                                                     const struct mmu_notifier_range *range)
> > >  {
> > > @@ -752,7 +767,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > >                 .start          = range->start,
> > >                 .end            = range->end,
> > >                 .pte            = __pte(0),
> > > -               .handler        = kvm_unmap_gfn_range,
> > > +               .handler        = kvm_mmu_unmap_gfn_range,
> > >                 .on_lock        = kvm_mmu_invalidate_begin,
> > >                 .on_unlock      = kvm_arch_guest_memory_reclaimed,
> > >                 .flush_on_ret   = true,
> > > @@ -791,8 +806,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > >         return 0;
> > >  }
> > >
> > > -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
> > > -                           unsigned long end)
> > > +void kvm_mmu_invalidate_end(struct kvm *kvm)
> > >  {
> > >         /*
> > >          * This sequence increase will notify the kvm page fault that
> > > --
> > > 2.25.1
> > >
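P.S. On the INVALID_GPA build issue discussed above, the fix could be as
small as lifting the define into the common header. A sketch of what I
mean; the sentinel value shown is an assumption on my part and would
need checking against what arch/x86 actually defines:

/* include/linux/kvm_host.h -- hypothetical common definition */
#ifndef INVALID_GPA
#define INVALID_GPA     (~(gpa_t)0)     /* assumed value; verify vs x86 */
#endif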
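And as a summary for anyone porting callers: after this patch the
invalidation bookkeeping becomes a three-step protocol paired with a
gfn-based retry check. A rough sketch of both sides, with locking and
arch details simplified (not a literal copy of any one caller; the
function names come from the hunks above):

/* Invalidation side, as in kvm_zap_gfn_range() above: */
static void example_invalidate(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
{
        write_lock(&kvm->mmu_lock);

        kvm_mmu_invalidate_begin(kvm);          /* in_progress++, range reset */
        kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);

        /* ... zap mappings and flush TLBs for [gfn_start, gfn_end) ... */

        kvm_mmu_invalidate_end(kvm);            /* seq++, in_progress-- */
        write_unlock(&kvm->mmu_lock);
}

/* Page-fault side, mirroring is_page_fault_stale() above; mmu_seq must
 * have been sampled from kvm->mmu_invalidate_seq before resolving the
 * fault, and mmu_lock must be held again when checking: */
static bool example_is_stale(struct kvm_vcpu *vcpu, gfn_t gfn,
                             unsigned long mmu_seq)
{
        return mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, gfn);
}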