From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Nov 2021 22:24:20 +0800
From: Chao Peng
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org, Jonathan Corbet,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton,
	J. Bruce Fields, Andrew Morton, Yu Zhang, Kirill A. Shutemov,
	luto@kernel.org, john.ji@intel.com, susie.li@intel.com,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	david@redhat.com
Subject: Re: [RFC v2 PATCH 09/13] KVM: Introduce kvm_memfd_invalidate_range
Message-ID: <20211123142420.GB32088@chaop.bj.intel.com>
Reply-To: Chao Peng
References: <20211119134739.20218-1-chao.p.peng@linux.intel.com>
 <20211119134739.20218-10-chao.p.peng@linux.intel.com>
 <4041d98a-23df-e9ed-b245-5edd7151fec5@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4041d98a-23df-e9ed-b245-5edd7151fec5@redhat.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Tue, Nov 23, 2021 at 09:46:34AM +0100, Paolo Bonzini wrote:
> On 11/19/21 14:47, Chao Peng wrote:
> > +
> > +	/* Prevent memslot modification */
> > +	spin_lock(&kvm->mn_invalidate_lock);
> > +	kvm->mn_active_invalidate_count++;
> > +	spin_unlock(&kvm->mn_invalidate_lock);
> > +
> > +	ret = __kvm_handle_useraddr_range(kvm, &useraddr_range);
> > +
> > +	spin_lock(&kvm->mn_invalidate_lock);
> > +	kvm->mn_active_invalidate_count--;
> > +	spin_unlock(&kvm->mn_invalidate_lock);
> > +
>
> You need to follow this with a rcuwait_wake_up as in
> kvm_mmu_notifier_invalidate_range_end.

Oh right.
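Something like the below then (untested sketch, mirroring the tail of
kvm_mmu_notifier_invalidate_range_end and assuming the same
kvm->mn_memslots_update_rcuwait waiter it uses):

	bool wake;

	spin_lock(&kvm->mn_invalidate_lock);
	kvm->mn_active_invalidate_count--;
	/* Wake only when the last outstanding invalidation has finished. */
	wake = !kvm->mn_active_invalidate_count;
	spin_unlock(&kvm->mn_invalidate_lock);

	/*
	 * At most one waiter: the memslot update side waits under
	 * slots_lock, so a plain rcuwait_wake_up() is enough.
	 */
	if (wake)
		rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);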
>
> It's probably best if you move the manipulations of
> mn_active_invalidate_count from kvm_mmu_notifier_invalidate_range_* to two
> separate functions.

Will do.

>
> Paolo
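For the two helpers mentioned above, roughly this shape is what I have in
mind (helper names below are placeholders, not the final API; the callers
would be kvm_mmu_notifier_invalidate_range_start/end and
kvm_memfd_invalidate_range):

/* Placeholder names; bumps the count to block memslot updates. */
static void kvm_mn_invalidate_begin(struct kvm *kvm)
{
	spin_lock(&kvm->mn_invalidate_lock);
	kvm->mn_active_invalidate_count++;
	spin_unlock(&kvm->mn_invalidate_lock);
}

/* Drops the count and wakes the memslot update waiter when it hits zero. */
static void kvm_mn_invalidate_end(struct kvm *kvm)
{
	bool wake;

	spin_lock(&kvm->mn_invalidate_lock);
	kvm->mn_active_invalidate_count--;
	wake = !kvm->mn_active_invalidate_count;
	spin_unlock(&kvm->mn_invalidate_lock);

	/* Pairs with the rcuwait on the memslot update path. */
	if (wake)
		rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);
}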