Date: Tue, 10 Aug 2021 20:31:24 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Dave Hansen
Cc: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
	Joerg Roedel, Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Varad Gautam, Dario Faggioli,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/5] x86: Implement support for unaccepted memory
Message-ID: <20210810173124.vzxpluaepdfe5aum@black.fi.intel.com>
References: <20210810062626.1012-1-kirill.shutemov@linux.intel.com>
	<4b80289a-07a4-bf92-9946-b0a8afb27326@intel.com>
	<20210810151548.4exag5uj73bummsr@black.fi.intel.com>
	<82b8836f-a467-e5ff-08f3-704a85b9faa0@intel.com>
In-Reply-To: <82b8836f-a467-e5ff-08f3-704a85b9faa0@intel.com>
On Tue, Aug 10, 2021 at 08:51:01AM -0700, Dave Hansen wrote:
> In other words, I buy the boot speed argument. But, I don't buy the
> "this saves memory long term" argument at all.

Okay, that's fair enough. I guess there are *some* workloads that may
have their memory footprint reduced, but I agree that's a minority.

> >> I had expected this series, but I also expected it to be connected to
> >> CONFIG_DEFERRED_STRUCT_PAGE_INIT somehow. Could you explain a bit how
> >> this problem is different and demands a totally orthogonal solution?
> >>
> >> For instance, what prevents us from declaring: "Memory is accepted at
> >> the time that its 'struct page' is initialized"? Then, we use all the
> >> infrastructure we already have for DEFERRED_STRUCT_PAGE_INIT.
> >
> > That was my first thought too, and I tried it just to realize that it
> > is not what we want. If we accepted pages at page struct init, the
> > host would have to allocate all memory assigned to the guest at boot,
> > even if the guest actually uses only a small portion of it.
> >
> > Also, deferred page init only allows memory acceptance to scale across
> > multiple CPUs; it doesn't allow us to get to userspace before we are
> > done with it. See wait_for_completion(&pgdat_init_all_done_comp).
>
> That's good information. It's a refinement of the "I want to boot
> faster" requirement. What you want is not just going _faster_, but
> being able to run userspace before full acceptance has completed.
>
> Would you be able to quantify how fast TDX page acceptance is? Are we
> talking about MB/s, GB/s, TB/s? This series is rather bereft of numbers
> for a feature which is making a performance claim.
>
> Let's say we have a 128GB VM. How much faster does this approach
> reach userspace than if all memory was accepted up front? How much
> memory _could_ have been accepted at the point userspace starts running?
The acceptance code is not optimized yet: we accept memory in 4k chunks,
which is very slow because hypercall overhead dominates the picture.

As of now, kernel boot of a 1-vCPU, 64TiB VM with upfront memory
acceptance is >20 times slower than with this lazy memory acceptance
approach. The difference is going to be substantially lower once we get
it properly optimized.

-- 
 Kirill A. Shutemov