Date: Wed, 16 Aug 2023 11:44:29 +0800
Subject: Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed protected for NUMA migration
From: bibo mao <maobibo@loongson.cn>
To: Sean Christopherson, Yan Zhao
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 pbonzini@redhat.com, mike.kravetz@oracle.com, apopple@nvidia.com,
 jgg@nvidia.com, rppt@kernel.org, akpm@linux-foundation.org,
 kevin.tian@intel.com, david@redhat.com
References: <20230810085636.25914-1-yan.y.zhao@intel.com>
 <20230810090218.26244-1-yan.y.zhao@intel.com>
 <277ee023-dc94-6c23-20b2-7deba641f1b1@loongson.cn>
 <107cdaaf-237f-16b9-ebe2-7eefd2b21f8f@loongson.cn>
In-Reply-To: <107cdaaf-237f-16b9-ebe2-7eefd2b21f8f@loongson.cn>

On 2023/8/16 10:43, bibo mao wrote:
>
>
> On 2023/8/15 22:50, Sean Christopherson wrote:
>> On Tue, Aug 15, 2023, Yan Zhao wrote:
>>> On Mon, Aug 14, 2023 at 09:40:44AM -0700, Sean Christopherson wrote:
>>>>>> Note, I'm assuming secondary MMUs aren't allowed to map swap entries...
>>>>>>
>>>>>> Compile tested only.
>>>>>
>>>>> I don't find a matching end to each
>>>>> mmu_notifier_invalidate_range_start_nonblock().
>>>>
>>>> It pairs with the existing call to mmu_notifier_invalidate_range_end() in
>>>> change_pmd_range():
>>>>
>>>> 	if (range.start)
>>>> 		mmu_notifier_invalidate_range_end(&range);
>>> No, it doesn't work for the mmu_notifier_invalidate_range_start() sent in
>>> change_pte_range(), if we only want the range to include pages successfully
>>> set to PROT_NONE.
>>
>> Precise invalidation was a non-goal for my hack-a-patch.  The intent was purely
>> to defer invalidation until it was actually needed, but still perform only a
>> single notification so as to batch the TLB flushes, e.g. the start() call still
>> used the original @end.
>>
>> The idea was to play nice with the scenario where nothing in a VMA could be
>> migrated.  It was completely untested though, so it may not have actually done
>> anything to reduce the number of pointless invalidations.
> For the NUMA-balance scenario, can the original page still be used by the
> application even if its PTE is changed to PROT_NONE?  If it can be used, maybe
> we can zap the shadow MMU and flush the TLB
Since there is a kvm_mmu_notifier_change_pte notification when the NUMA page is
replaced with a new page, what I meant is: can the original page still be used
by the application after its PTE is changed to PROT_NONE but before it is
replaced with the new page?

And for the primary MMU, the TLB is flushed after the PTE is changed to
PROT_NONE and after the mmu_notifier_invalidate_range_end notification for the
secondary MMU.

Regards
Bibo Mao

> in the mmu_notifier_invalidate_range_end notification with a precise range;
> the range can be the intersection of the mmu_gather range and the
> mmu_notifier_range.
>
> Regards
> Bibo Mao
>>
>>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>>> index 9e4cd8b4a202..f29718a16211 100644
>>>> --- a/arch/x86/kvm/mmu/mmu.c
>>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>>> @@ -4345,6 +4345,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>>>>  	if (unlikely(!fault->slot))
>>>>  		return kvm_handle_noslot_fault(vcpu, fault, access);
>>>>  
>>>> +	if (mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
>>>> +		return RET_PF_RETRY;
>>>> +
>>> This can effectively reduce the remote flush IPIs a lot!
>>> One nit is that maybe rmb() or READ_ONCE() is required for
>>> kvm->mmu_invalidate_range_start and kvm->mmu_invalidate_range_end.
>>> Otherwise, I'm somewhat worried about constant false positives and retries.
>>
>> If anything, this needs a READ_ONCE() on mmu_invalidate_in_progress.  The
>> ranges aren't touched when mmu_invalidate_in_progress goes to zero, so
>> ensuring they are reloaded wouldn't do anything.  The key to making forward
>> progress is seeing that there is no in-progress invalidation.
>>
>> I did consider adding said READ_ONCE(), but practically speaking, constant
>> false positives are impossible.  KVM will re-enter the guest when retrying,
>> and there is zero chance of the compiler avoiding reloads across
>> VM-Enter+VM-Exit.
>>
>> I suppose in theory we might someday differentiate between "retry because a
>> different vCPU may have fixed the fault" and "retry because there's an
>> in-progress invalidation", and not bother re-entering the guest for the
>> latter, e.g. have it try to yield instead.
>>
>> All that said, READ_ONCE() on mmu_invalidate_in_progress should effectively
>> be a nop, so it wouldn't hurt to be paranoid in this case.
>>
>> Hmm, at that point, it probably makes sense to add a READ_ONCE() for
>> mmu_invalidate_seq too, e.g. so that a sufficiently clever compiler doesn't
>> completely optimize away the check.
>> Losing the check wouldn't be problematic (false negatives are fine,
>> especially on that particular check), but the generated code would *look*
>> buggy.
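
For concreteness, below is a minimal sketch of what the retry helper could look
like with the READ_ONCE()s discussed above.  It is illustrative only, modeled
on the shape of mmu_invalidate_retry_hva() in include/linux/kvm_host.h (whose
signature matches the call in the diff); the exact body in the tree may differ,
and this is not part of any posted patch:

	static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
						   unsigned long mmu_seq,
						   unsigned long hva)
	{
		lockdep_assert_held(&kvm->mmu_lock);

		/*
		 * Reload the in-progress count on every check; the range
		 * fields are only written while an invalidation is in
		 * progress, so they don't need the same treatment.
		 */
		if (unlikely(READ_ONCE(kvm->mmu_invalidate_in_progress)) &&
		    hva >= kvm->mmu_invalidate_range_start &&
		    hva < kvm->mmu_invalidate_range_end)
			return 1;

		/*
		 * Force a reload of the sequence count as well, so the
		 * compiler can't optimize the comparison away entirely.
		 */
		if (READ_ONCE(kvm->mmu_invalidate_seq) != mmu_seq)
			return 1;

		return 0;
	}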