From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <43234108-fa4f-7583-e3b4-2daa2de89fb0@suse.cz>
Date: Mon, 3 Apr 2023 11:26:53 +0200
From: Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCHv9 02/14] mm: Add support for unaccepted memory
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Dario Faggioli, Dave Hansen, Mike Rapoport, David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> <20230330114956.20342-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230330114956.20342-3-kirill.shutemov@linux.intel.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
On 3/30/23 13:49, Kirill A. Shutemov wrote:
> UEFI Specification version 2.9 introduces the concept of memory
> acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD
> SEV-SNP, require memory to be accepted before it can be used by the
> guest. Acceptance happens via a protocol specific to the Virtual
> Machine platform.
>
> There are several ways the kernel can deal with unaccepted memory:
>
> 1. Accept all the memory during boot. It is easy to implement and has
>    no runtime cost once the system is booted. The downside is a very
>    long boot time.
>
>    Acceptance can be parallelized across multiple CPUs to keep it
>    manageable (e.g. via DEFERRED_STRUCT_PAGE_INIT), but it tends to
>    saturate memory bandwidth and does not scale beyond a point.
>
> 2. Accept a block of memory on first use. It requires more
>    infrastructure and changes in the page allocator to make it work,
>    but it provides good boot time.
>
>    On-demand acceptance means latency spikes every time the kernel
>    steps onto a new memory block. The spikes go away once the
>    workload's data set size stabilizes or all memory gets accepted.
>
> 3. Accept all memory in the background. Introduce a thread (or
>    several) that accepts memory proactively. It minimizes the time the
>    system experiences latency spikes on memory allocation while
>    keeping boot time low.
>
>    This approach cannot function on its own. It is an extension of #2:
>    background memory acceptance requires a functional scheduler, but
>    the page allocator may need to tap into unaccepted memory before
>    that.
>
>    The downside of this approach is that these threads also steal CPU
>    cycles and memory bandwidth from the user's workload and may hurt
>    the user experience.
>
> The patch implements #1 and #2 for now. #2 is the default. Some
> workloads may want to use #1 with accept_memory=eager on the kernel
> command line. #3 can be implemented later based on users' demands.
>
> Support of unaccepted memory requires a few changes in core-mm code:
>
>  - memblock has to accept memory on allocation;
>
>  - the page allocator has to accept memory on the first allocation of
>    a page.
>
> The memblock change is trivial.
>
> The page allocator is modified to accept pages. New memory gets
> accepted before putting pages on the free lists. It is done lazily:
> new pages are only accepted when we run out of already accepted
> memory. The memory gets accepted until the high watermark is reached.

Great.
> An architecture has to provide two helpers if it wants to support
> unaccepted memory:
>
>  - accept_memory() makes a range of physical addresses accepted.
>
>  - range_contains_unaccepted_memory() checks whether anything within
>    the range of physical addresses requires acceptance.
>
> Signed-off-by: Kirill A. Shutemov
> Acked-by: Mike Rapoport	# memblock

Reviewed-by: Vlastimil Babka

Just a small suggestion below:

> +static bool try_to_accept_memory(struct zone *zone, unsigned int order)
> +{
> +	long to_accept;
> +	int ret = false;
> +
> +	if (!static_branch_unlikely(&zones_with_unaccepted_pages))
> +		return false;

This potentially (depending on what the compiler decides) means we call
this function just to skip the static branch. OTOH, forcing it inline
would be wasteful too. So I'd split that check away and have the callers
do the static branch test inline, just as deferred_pages_enabled() is
used.

> +	/* How much to accept to get to high watermark? */
> +	to_accept = high_wmark_pages(zone) -
> +		    (zone_page_state(zone, NR_FREE_PAGES) -
> +		    __zone_watermark_unusable_free(zone, order, 0));
> +
> +	/* Accept at least one page */
> +	do {
> +		if (!try_to_accept_memory_one(zone))
> +			break;
> +		ret = true;
> +		to_accept -= MAX_ORDER_NR_PAGES;
> +	} while (to_accept > 0);
> +
> +	return ret;
> +}