From: Fuad Tabba <tabba@google.com>
Date: Thu, 27 Oct 2022 11:29:14 +0100
Subject: Re: [PATCH v9 4/8] KVM: Use gfn instead of hva for mmu_notifier_retry
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> <20221025151344.3784230-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20221025151344.3784230-5-chao.p.peng@linux.intel.com>
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com

Hi,

On Tue, Oct 25, 2022 at 4:19 PM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> Currently in mmu_notifier validate path, hva range is recorded and then
> checked against in the mmu_notifier_retry_hva() of the page fault path.
> However, for the to be introduced private memory, a page fault may not
> have a hva associated, checking gfn(gpa) makes more sense.
>
> For existing non private memory case, gfn is expected to continue to
> work. The only downside is when aliasing multiple gfns to a single hva,
> the current algorithm of checking multiple ranges could result in a much
> larger range being rejected. Such aliasing should be uncommon, so the
> impact is expected small.
>
> It also fixes a bug in kvm_zap_gfn_range() which has already been using

nit: Now it's kvm_unmap_gfn_range().

> gfn when calling kvm_mmu_invalidate_begin/end() while these functions
> accept hva in current code.
>
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> ---

Based on reading this code and my limited knowledge of the x86 MMU code:

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>  arch/x86/kvm/mmu/mmu.c   |  2 +-
>  include/linux/kvm_host.h | 18 +++++++---------
>  virt/kvm/kvm_main.c      | 45 ++++++++++++++++++++++++++--------------
>  3 files changed, 39 insertions(+), 26 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f81539061d6..33b1aec44fb8 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4217,7 +4217,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
>  		return true;
>
>  	return fault->slot &&
> -	       mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
> +	       mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
>  }
>
>  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 739a7562a1f3..79e5cbc35fcf 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -775,8 +775,8 @@ struct kvm {
>  	struct mmu_notifier mmu_notifier;
>  	unsigned long mmu_invalidate_seq;
>  	long mmu_invalidate_in_progress;
> -	unsigned long mmu_invalidate_range_start;
> -	unsigned long mmu_invalidate_range_end;
> +	gfn_t mmu_invalidate_range_start;
> +	gfn_t mmu_invalidate_range_end;
>  #endif
>  	struct list_head devices;
>  	u64 manual_dirty_log_protect;
> @@ -1365,10 +1365,8 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  #endif
>
> -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> -			      unsigned long end);
> -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
> -			    unsigned long end);
> +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end);
> +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end);
>
>  long kvm_arch_dev_ioctl(struct file *filp,
>  			unsigned int ioctl, unsigned long arg);
> @@ -1937,9 +1935,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
>  	return 0;
>  }
>
> -static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
> +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm,
>  					   unsigned long mmu_seq,
> -					   unsigned long hva)
> +					   gfn_t gfn)
>  {
>  	lockdep_assert_held(&kvm->mmu_lock);
>  	/*
> @@ -1949,8 +1947,8 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
>  	 * positives, due to shortcuts when handing concurrent invalidations.
>  	 */
>  	if (unlikely(kvm->mmu_invalidate_in_progress) &&
> -	    hva >= kvm->mmu_invalidate_range_start &&
> -	    hva < kvm->mmu_invalidate_range_end)
> +	    gfn >= kvm->mmu_invalidate_range_start &&
> +	    gfn < kvm->mmu_invalidate_range_end)
>  		return 1;
>  	if (kvm->mmu_invalidate_seq != mmu_seq)
>  		return 1;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8dace78a0278..09c9cdeb773c 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -540,8 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
>
>  typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
>
> -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
> -			     unsigned long end);
> +typedef void (*on_lock_fn_t)(struct kvm *kvm, gfn_t start, gfn_t end);
>
>  typedef void (*on_unlock_fn_t)(struct kvm *kvm);
>
> @@ -628,7 +627,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
>  			locked = true;
>  			KVM_MMU_LOCK(kvm);
>  			if (!IS_KVM_NULL_FN(range->on_lock))
> -				range->on_lock(kvm, range->start, range->end);
> +				range->on_lock(kvm, gfn_range.start,
> +					       gfn_range.end);
>  			if (IS_KVM_NULL_FN(range->handler))
>  				break;
>  		}
> @@ -715,15 +715,9 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>  	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
>  }
>
> -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
> -			      unsigned long end)
> +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start,
> +					   gfn_t end)
>  {
> -	/*
> -	 * The count increase must become visible at unlock time as no
> -	 * spte can be established without taking the mmu_lock and
> -	 * count is also read inside the mmu_lock critical section.
> -	 */
> -	kvm->mmu_invalidate_in_progress++;
>  	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
>  		kvm->mmu_invalidate_range_start = start;
>  		kvm->mmu_invalidate_range_end = end;
> @@ -744,6 +738,28 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
>  	}
>  }
>
> +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end)
> +{
> +	/*
> +	 * The count increase must become visible at unlock time as no
> +	 * spte can be established without taking the mmu_lock and
> +	 * count is also read inside the mmu_lock critical section.
> +	 */
> +	kvm->mmu_invalidate_in_progress++;
> +}
> +
> +static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	update_invalidate_range(kvm, range->start, range->end);
> +	return kvm_unmap_gfn_range(kvm, range);
> +}
> +
> +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end)
> +{
> +	mark_invalidate_in_progress(kvm, start, end);
> +	update_invalidate_range(kvm, start, end);
> +}
> +
>  static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  					const struct mmu_notifier_range *range)
>  {
> @@ -752,8 +768,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  		.start		= range->start,
>  		.end		= range->end,
>  		.pte		= __pte(0),
> -		.handler	= kvm_unmap_gfn_range,
> -		.on_lock	= kvm_mmu_invalidate_begin,
> +		.handler	= kvm_mmu_handle_gfn_range,
> +		.on_lock	= mark_invalidate_in_progress,
>  		.on_unlock	= kvm_arch_guest_memory_reclaimed,
>  		.flush_on_ret	= true,
>  		.may_block	= mmu_notifier_range_blockable(range),
> @@ -791,8 +807,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  	return 0;
>  }
>
> -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
> -			    unsigned long end)
> +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end)
>  {
>  	/*
>  	 * This sequence increase will notify the kvm page fault that
> --
> 2.25.1
>
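
For anyone skimming the thread, the handshake that mmu_invalidate_retry_gfn()
takes part in can be summarized with the short, self-contained sketch below.
It is distilled from the diff above purely for illustration; the fields and
the checks mirror the patch, but the names carry a _sketch suffix because
this is not the kernel source (gfn_t, for instance, is just stubbed as an
unsigned 64-bit integer here).

/*
 * Illustrative sketch of the gfn-based retry check introduced by this
 * patch.  Simplified from the diff above; not the actual kernel code.
 */
typedef unsigned long long gfn_t;

struct kvm_sketch {
	unsigned long mmu_invalidate_seq;	/* bumped when an invalidation completes */
	long mmu_invalidate_in_progress;	/* non-zero while one is running */
	gfn_t mmu_invalidate_range_start;	/* now a gfn range, not an hva range */
	gfn_t mmu_invalidate_range_end;
};

/*
 * The fault path snapshots mmu_invalidate_seq before resolving the
 * fault, then calls this under mmu_lock to decide whether to redo it.
 */
static int mmu_invalidate_retry_gfn_sketch(struct kvm_sketch *kvm,
					   unsigned long mmu_seq, gfn_t gfn)
{
	/* An in-flight invalidation covers this gfn: retry the fault. */
	if (kvm->mmu_invalidate_in_progress &&
	    gfn >= kvm->mmu_invalidate_range_start &&
	    gfn < kvm->mmu_invalidate_range_end)
		return 1;
	/* An invalidation finished after the snapshot: retry as well. */
	if (kvm->mmu_invalidate_seq != mmu_seq)
		return 1;
	return 0;
}

The aliasing caveat in the commit message falls out of the single
[start, end) window: when several gfns alias one hva, the recorded gfn
range has to be widened to cover all of them, so unrelated faults that
land inside the widened window retry as well.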