Date: Fri, 13 Oct 2023 22:53:00 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Tom Lendacky
Cc: Michael Roth, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel,
	Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini,
	Ingo Molnar, Dario Faggioli, Mike Rapoport, David Hildenbrand,
	Mel Gorman, marcelo.cerri@canonical.com, tim.gardner@canonical.com,
	khalid.elmously@canonical.com, philip.cox@canonical.com,
	aarcange@redhat.com, peterx@redhat.com, x86@kernel.org,
	linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv14 5/9] efi: Add unaccepted memory support
Message-ID: <20231013195300.cqv6dfdprr3givdr@box>
References: <20230606142637.5171-1-kirill.shutemov@linux.intel.com>
 <20230606142637.5171-6-kirill.shutemov@linux.intel.com>
 <20231010210518.jguawj7bscwgvszv@amd.com>
 <20231013123358.y4pcdp5fgtt4ax6g@box.shutemov.name>
 <20231013162210.bqepgz6wnh7uohqq@box>
 <3577c8a5-3f88-45b8-9b41-2fb5cb6dc53a@amd.com>
In-Reply-To: <3577c8a5-3f88-45b8-9b41-2fb5cb6dc53a@amd.com>

On Fri, Oct 13, 2023 at 12:45:20PM -0500, Tom Lendacky wrote:
> On 10/13/23 11:22, Kirill A. Shutemov wrote:
> > On Fri, Oct 13, 2023 at 03:33:58PM +0300, Kirill A. Shutemov wrote:
> > > > While testing SNP guests running today's tip/master (ef19bc9dddc3) I ran
> > > > into what seems to be fairly significant lock contention due to the
> > > > unaccepted_memory_lock spinlock above, which results in a constant stream
> > > > of soft-lockups until the workload gets all its memory accepted/faulted
> > > > in if the guest has around 16+ vCPUs.
> > > >
> > > > I've included the guest dmesg traces I was seeing below.
> > > >
> > > > In this case I was running a 32 vCPU guest with 200GB of memory running on
> > > > a 256 thread EPYC (Milan) system, and can trigger the above situation fairly
> > > > reliably by running the following workload in a freshly-booted guest:
> > > >
> > > >   stress --vm 32 --vm-bytes 5G --vm-keep
> > > >
> > > > Scaling up the number of stress threads and vCPUs should make it easier
> > > > to reproduce.
> > > >
> > > > Other than unresponsiveness/lockup messages until the memory is accepted,
> > > > the guest seems to continue running fine, but for large guests where
> > > > unaccepted memory is more likely to be useful, it seems like it could be
> > > > an issue, especially when considering 100+ vCPU guests.
> > >
> > > Okay, sorry for the delay. It took time to reproduce it with TDX.
> > >
> > > I will look at what can be done.
> >
> > Could you check if the patch below helps?
> >
> > diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
> > index 853f7dc3c21d..591da3f368fa 100644
> > --- a/drivers/firmware/efi/unaccepted_memory.c
> > +++ b/drivers/firmware/efi/unaccepted_memory.c
> > @@ -8,6 +8,14 @@
> >  /* Protects unaccepted memory bitmap */
> >  static DEFINE_SPINLOCK(unaccepted_memory_lock);
> >  
> > +struct accept_range {
> > +	struct list_head list;
> > +	unsigned long start;
> > +	unsigned long end;
> > +};
> > +
> > +static LIST_HEAD(accepting_list);
> > +
> >  /*
> >   * accept_memory() -- Consult bitmap and accept the memory if needed.
> >   *
> > @@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  {
> >  	struct efi_unaccepted_memory *unaccepted;
> >  	unsigned long range_start, range_end;
> > +	struct accept_range range, *entry;
> >  	unsigned long flags;
> >  	u64 unit_size;
> >  
> > @@ -80,7 +89,25 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  	range_start = start / unit_size;
> >  
> > +	range.start = start;
> > +	range.end = end;
> > +retry:
> >  	spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > +
> > +	list_for_each_entry(entry, &accepting_list, list) {
> > +		if (entry->end < start)
> > +			continue;
> > +		if (entry->start > end)
> 
> Should this be a >= check since start and end are page aligned values?

Right. Good catch.

> > +			continue;
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> > +
> > +		/* Somebody else accepting the range */
> > +		cpu_relax();
> > +		goto retry;
> 
> Could you set some kind of flag here so that ...
> 
> > +	}
> > +
> 
> ... once you get here, that means that area was accepted and removed from
> the list, so I think you could just drop the lock and exit now, right?
> Because at that point the bitmap will have been updated and you wouldn't be
> accepting any memory anyway?

No. Consider the case where someone else accepts only part of the range you
are accepting. I guess we could check whether the range on the list fully
covers what we are accepting, but that is extra complication. Checking the
bitmap at this point is cheap enough: we already hold the lock.

-- 
 Kiryl Shutsemau / Kirill A. Shutemov
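
[A minimal standalone sketch of the overlap test discussed above, not the
kernel code; the helper name and the example addresses are made up. It
assumes half-open, page-aligned ranges [start, end), where adjacent ranges
do not overlap — which is why the skip condition in the list walk should
use >= rather than >.]

#include <assert.h>
#include <stdbool.h>

struct range {
	unsigned long start;	/* inclusive */
	unsigned long end;	/* exclusive */
};

/*
 * Two half-open ranges overlap only if each one starts before the
 * other ends. Equivalently, an entry can be skipped when
 * entry->end <= start or entry->start >= end.
 */
static bool ranges_overlap(const struct range *a, const struct range *b)
{
	return a->start < b->end && b->start < a->end;
}

int main(void)
{
	struct range being_accepted = { .start = 0x1000, .end = 0x3000 };
	struct range adjacent       = { .start = 0x3000, .end = 0x5000 };
	struct range overlapping    = { .start = 0x2000, .end = 0x4000 };

	/* Adjacent range: no conflict, no need to retry. */
	assert(!ranges_overlap(&being_accepted, &adjacent));

	/* Genuine overlap: another CPU is accepting part of our range. */
	assert(ranges_overlap(&being_accepted, &overlapping));

	return 0;
}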