Date: Thu, 29 Sep 2022 09:52:06 -0700
From: Isaku Yamahata <isaku.yamahata@gmail.com>
To: Chao Peng
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo
	Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan,
	Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
	dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com,
	Muchun Song, wei.w.wang@intel.com, isaku.yamahata@gmail.com
Subject: Re: [PATCH v8 6/8] KVM: Update lpage info when private/shared memory are mixed
Message-ID: <20220929165206.GA1963093@ls.amr.corp.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
	<20220915142913.2213336-7-chao.p.peng@linux.intel.com>
In-Reply-To: <20220915142913.2213336-7-chao.p.peng@linux.intel.com>

On Thu, Sep 15, 2022 at 10:29:11PM +0800, Chao Peng wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 08abad4f3e6f..a0f198cede3d 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
...
> @@ -6894,3 +6899,115 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
>  	if (kvm->arch.nx_lpage_recovery_thread)
>  		kthread_stop(kvm->arch.nx_lpage_recovery_thread);
>  }
> +
> +static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr,
> +			      gfn_t start, gfn_t end)
> +{
> +	XA_STATE(xas, &kvm->mem_attr_array, start);
> +	gfn_t gfn = start;
> +	void *entry;
> +	bool shared, private;
> +	bool mixed = false;
> +
> +	if (attr == KVM_MEM_ATTR_SHARED) {
> +		shared = true;
> +		private = false;
> +	} else {
> +		shared = false;
> +		private = true;
> +	}

We don't have to care whether the target is shared or private; we only
need to check whether all the entries in the range are the same.

> +
> +	rcu_read_lock();
> +	entry = xas_load(&xas);
> +	while (gfn < end) {
> +		if (xas_retry(&xas, entry))
> +			continue;
> +
> +		KVM_BUG_ON(gfn != xas.xa_index, kvm);
> +
> +		if (entry)
> +			private = true;
> +		else
> +			shared = true;
> +
> +		if (private && shared) {
> +			mixed = true;
> +			goto out;
> +		}
> +
> +		entry = xas_next(&xas);
> +		gfn++;
> +	}
> +out:
> +	rcu_read_unlock();
> +	return mixed;
> +}
> +
> +static inline void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
> +{
> +	if (mixed)
> +		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
> +	else
> +		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
> +}
> +
> +static void update_mem_lpage_info(struct kvm *kvm,
> +				  struct kvm_memory_slot *slot,
> +				  unsigned int attr,
> +				  gfn_t start, gfn_t end)
> +{
> +	unsigned long lpage_start, lpage_end;
> +	unsigned long gfn, pages, mask;
> +	int level;
> +
> +	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> +		pages = KVM_PAGES_PER_HPAGE(level);
> +		mask = ~(pages - 1);
> +		lpage_start = start & mask;
> +		lpage_end = (end - 1) & mask;
> +
> +		/*
> +		 * We only need to scan the head and tail page, for middle pages
> +		 * we know they are not mixed.
> +		 */
> +		update_mixed(lpage_info_slot(lpage_start, slot, level),
> +			     mem_attr_is_mixed(kvm, attr, lpage_start,
> +					       lpage_start + pages));
> +
> +		if (lpage_start == lpage_end)
> +			return;
> +
> +		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages)
> +			update_mixed(lpage_info_slot(gfn, slot, level), false);

For the >2M case, we don't have to check every 4K entry; it is enough to
check the already-updated lower-level (level - 1) information.

> +
> +		update_mixed(lpage_info_slot(lpage_end, slot, level),
> +			     mem_attr_is_mixed(kvm, attr, lpage_end,
> +					       lpage_end + pages));
> +	}
> +}
> +
> +void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
> +			      gfn_t start, gfn_t end)
> +{
> +	struct kvm_memory_slot *slot;
> +	struct kvm_memslots *slots;
> +	struct kvm_memslot_iter iter;
> +	int i;
> +
> +	WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
> +		  "Unsupported mem attribute.\n");
> +
> +	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> +		slots = __kvm_memslots(kvm, i);
> +
> +		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
> +			slot = iter.slot;
> +			start = max(start, slot->base_gfn);
> +			end = min(end, slot->base_gfn + slot->npages);
> +			if (WARN_ON_ONCE(start >= end))
> +				continue;
> +
> +			update_mem_lpage_info(kvm, slot, attr, start, end);
> +		}
> +	}
> +}
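To illustrate the "same or not" point with a minimal stand-alone sketch
(user-space C, not kernel code; attrs[] and range_is_mixed() are made-up
stand-ins for kvm->mem_attr_array and the xarray walk, with 0/1 standing
in for NULL/non-NULL entries):

#include <stdbool.h>
#include <stdio.h>

/*
 * A range is "mixed" iff any entry differs from the first one;
 * we never need to know which value means private and which shared.
 */
static bool range_is_mixed(const int *attrs, unsigned long start,
			   unsigned long end)
{
	unsigned long gfn;

	for (gfn = start + 1; gfn < end; gfn++) {
		if (attrs[gfn] != attrs[start])
			return true;
	}
	return false;
}

int main(void)
{
	/* gfns 0-3 private (1), gfns 4-7 shared (0). */
	int attrs[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };

	printf("%d\n", range_is_mixed(attrs, 0, 4));	/* 0: uniform */
	printf("%d\n", range_is_mixed(attrs, 0, 8));	/* 1: crosses boundary */
	return 0;
}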
Here is my updated version.

bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
{
	gfn_t pages = KVM_PAGES_PER_HPAGE(level);
	gfn_t mask = ~(pages - 1);
	struct kvm_lpage_info *linfo = lpage_info_slot(gfn & mask, slot, level);

	WARN_ON_ONCE(level == PG_LEVEL_4K);
	return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
}

#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_ATTR
static void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
{
	if (mixed)
		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
	else
		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
}

static bool __mem_attr_is_mixed(struct kvm *kvm, gfn_t start, gfn_t end)
{
	XA_STATE(xas, &kvm->mem_attr_array, start);
	bool mixed = false;
	gfn_t gfn = start;
	void *s_entry;
	void *entry;

	rcu_read_lock();
	s_entry = xas_load(&xas);
	entry = s_entry;
	while (gfn < end) {
		if (xas_retry(&xas, entry))
			continue;

		KVM_BUG_ON(gfn != xas.xa_index, kvm);

		entry = xas_next(&xas);
		if (entry != s_entry) {
			mixed = true;
			break;
		}
		gfn++;
	}
	rcu_read_unlock();
	return mixed;
}

static bool mem_attr_is_mixed(struct kvm *kvm,
			      struct kvm_memory_slot *slot, int level,
			      gfn_t start, gfn_t end)
{
	struct kvm_lpage_info *child_linfo;
	unsigned long child_pages;
	bool mixed = false;
	unsigned long gfn;
	void *entry;

	if (WARN_ON_ONCE(level == PG_LEVEL_4K))
		return false;

	if (level == PG_LEVEL_2M)
		return __mem_attr_is_mixed(kvm, start, end);

	/* This assumes that level - 1 is already updated. */
	rcu_read_lock();
	child_pages = KVM_PAGES_PER_HPAGE(level - 1);
	entry = xa_load(&kvm->mem_attr_array, start);
	for (gfn = start; gfn < end; gfn += child_pages) {
		child_linfo = lpage_info_slot(gfn, slot, level - 1);
		if (child_linfo->disallow_lpage &
		    KVM_LPAGE_PRIVATE_SHARED_MIXED) {
			mixed = true;
			break;
		}
		if (xa_load(&kvm->mem_attr_array, gfn) != entry) {
			mixed = true;
			break;
		}
	}
	rcu_read_unlock();
	return mixed;
}

static void update_mem_lpage_info(struct kvm *kvm,
				  struct kvm_memory_slot *slot,
				  unsigned int attr,
				  gfn_t start, gfn_t end)
{
	unsigned long lpage_start, lpage_end;
	unsigned long gfn, pages, mask;
	int level;

	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
		pages = KVM_PAGES_PER_HPAGE(level);
		mask = ~(pages - 1);
		lpage_start = start & mask;
		lpage_end = (end - 1) & mask;

		/*
		 * We only need to scan the head and tail page, for middle pages
		 * we know they are not mixed.
		 */
		update_mixed(lpage_info_slot(lpage_start, slot, level),
			     mem_attr_is_mixed(kvm, slot, level, lpage_start,
					       lpage_start + pages));

		if (lpage_start == lpage_end)
			return;

		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages)
			update_mixed(lpage_info_slot(gfn, slot, level), false);

		update_mixed(lpage_info_slot(lpage_end, slot, level),
			     mem_attr_is_mixed(kvm, slot, level, lpage_end,
					       lpage_end + pages));
	}
}

void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
			      gfn_t start, gfn_t end)
{
	struct kvm_memory_slot *slot;
	struct kvm_memslots *slots;
	struct kvm_memslot_iter iter;
	int idx;
	int i;

	WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
		  "Unsupported mem attribute.\n");

	idx = srcu_read_lock(&kvm->srcu);
	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
		slots = __kvm_memslots(kvm, i);

		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
			slot = iter.slot;
			start = max(start, slot->base_gfn);
			end = min(end, slot->base_gfn + slot->npages);
			if (WARN_ON_ONCE(start >= end))
				continue;

			update_mem_lpage_info(kvm, slot, attr, start, end);
		}
	}
	srcu_read_unlock(&kvm->srcu, idx);
}
#endif

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>