References: <20230513220418.19357-1-kirill.shutemov@linux.intel.com>
	<20230513220418.19357-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230513220418.19357-7-kirill.shutemov@linux.intel.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Tue, 16 May 2023 20:08:37 +0200
Subject: Re: [PATCHv11 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
	Andrew Morton, Joerg Roedel, Andi Kleen,
	Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
	Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini,
	Ingo Molnar, Dario Faggioli, Mike Rapoport, David Hildenbrand,
	Mel Gorman, marcelo.cerri@canonical.com, tim.gardner@canonical.com,
	khalid.elmously@canonical.com, philip.cox@canonical.com,
	aarcange@redhat.com, peterx@redhat.com, x86@kernel.org,
	linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	Dave Hansen

On Sun, 14 May 2023 at 00:04, Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> The unwanted loads are typically harmless. But, they might be made to
> totally unrelated or even unmapped memory. load_unaligned_zeropad()
> relies on exception fixup (#PF, #GP and now #VE) to recover from these
> unwanted loads.
>
> But, this approach does not work for unaccepted memory. For TDX, a load
> from unaccepted memory will not lead to a recoverable exception within
> the guest. The guest will exit to the VMM where the only recourse is to
> terminate the guest.
>

Does this mean that the kernel maps memory before accepting it?
Otherwise, I would assume that such an access would page fault inside
the guest before triggering an exception related to the unaccepted
state.

> There are two parts to fix this issue and comprehensively avoid access
> to unaccepted memory. Together these ensure that an extra "guard" page
> is accepted in addition to the memory that needs to be used.
>
> 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
>    checks up to end+unit_size if 'end' is aligned on a unit_size
>    boundary.
>
> 2. Implicitly extend accept_memory(start, end) to end+unit_size if 'end'
>    is aligned on a unit_size boundary.
>
> Side note: This leads to something strange. Pages which were accepted
>            at boot, marked by the firmware as accepted and will never
>            _need_ to be accepted, might be on the unaccepted_pages
>            list. This is a cue to ensure that the next page is
>            accepted before 'page' can be used.
>
> This is an actual, real-world problem which was discovered during TDX
> testing.
>
> Signed-off-by: Kirill A. Shutemov
> Reviewed-by: Dave Hansen
> ---
>  drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
>
> diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
> index bb91c41f76fb..3d1ca60916dd 100644
> --- a/drivers/firmware/efi/unaccepted_memory.c
> +++ b/drivers/firmware/efi/unaccepted_memory.c
> @@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
>  	start -= unaccepted->phys_base;
>  	end -= unaccepted->phys_base;
>
> +	/*
> +	 * load_unaligned_zeropad() can lead to unwanted loads across page
> +	 * boundaries. The unwanted loads are typically harmless. But, they
> +	 * might be made to totally unrelated or even unmapped memory.
> +	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
> +	 * #VE) to recover from these unwanted loads.
> +	 *
> +	 * But, this approach does not work for unaccepted memory. For TDX, a
> +	 * load from unaccepted memory will not lead to a recoverable exception
> +	 * within the guest. The guest will exit to the VMM where the only
> +	 * recourse is to terminate the guest.
> +	 *
> +	 * There are two parts to fix this issue and comprehensively avoid
> +	 * access to unaccepted memory. Together these ensure that an extra
> +	 * "guard" page is accepted in addition to the memory that needs to be
> +	 * used:
> +	 *
> +	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
> +	 *    checks up to end+unit_size if 'end' is aligned on a unit_size
> +	 *    boundary.
> +	 *
> +	 * 2. Implicitly extend accept_memory(start, end) to end+unit_size if
> +	 *    'end' is aligned on a unit_size boundary. (immediately following
> +	 *    this comment)
> +	 */
> +	if (!(end % unit_size))
> +		end += unit_size;
> +
>  	/* Make sure not to overrun the bitmap */
>  	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
>  		end = unaccepted->size * unit_size * BITS_PER_BYTE;
> @@ -84,6 +112,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
>  	start -= unaccepted->phys_base;
>  	end -= unaccepted->phys_base;
>
> +	/*
> +	 * Also consider the unaccepted state of the *next* page. See fix #1 in
> +	 * the comment on load_unaligned_zeropad() in accept_memory().
> +	 */
> +	if (!(end % unit_size))
> +		end += unit_size;
> +
>  	/* Make sure not to overrun the bitmap */
>  	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
>  		end = unaccepted->size * unit_size * BITS_PER_BYTE;
> --
> 2.39.3
>
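
To make the guard-extension arithmetic above concrete, here is a
minimal standalone C sketch of the same extend-and-clamp logic, kept
outside the kernel. The unit_size and bitmap size below are
hypothetical placeholders (in the kernel they come from the EFI
unaccepted-memory table as unaccepted->unit_size and unaccepted->size);
only the arithmetic mirrors the patch.

#include <stdio.h>

int main(void)
{
	/*
	 * Hypothetical values; one bitmap bit tracks the acceptance
	 * state of unit_size bytes of physical memory.
	 */
	unsigned long unit_size = 4096;   /* bytes per bitmap bit */
	unsigned long bitmap_bytes = 16;  /* bitmap size in bytes */
	unsigned long limit = bitmap_bytes * unit_size * 8; /* == 524288 */

	/* 'end' offsets relative to the bitmap's phys_base. */
	unsigned long ends[] = { 4096, 6000, 524288 };

	for (unsigned i = 0; i < sizeof(ends) / sizeof(ends[0]); i++) {
		unsigned long end = ends[i];

		/*
		 * If 'end' falls exactly on a unit boundary, a
		 * load_unaligned_zeropad() near the tail of the range
		 * can read into the next unit, so also cover one extra
		 * "guard" unit. An unaligned 'end' needs no extension:
		 * its containing unit already reaches past it.
		 */
		if (!(end % unit_size))
			end += unit_size;

		/* Make sure not to overrun the bitmap. */
		if (end > limit)
			end = limit;

		printf("end=%lu -> effective end=%lu\n", ends[i], end);
	}
	return 0;
}

Running this prints end=4096 -> 8192 (aligned end, one guard unit
added), end=6000 -> 6000 (unaligned, no extension needed), and
end=524288 -> 524288 (the guard unit would overrun the bitmap, so the
clamp undoes it, matching the behavior at the end of the tracked
range).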