From: Erdem Aktas <erdemaktas@google.com>
Date: Mon, 26 Jul 2021 16:02:56 -0700
Subject: Re: Runtime Memory Validation in Intel-TDX and AMD-SNP
To: "Kirill A.
Shutemov"
Cc: Joerg Roedel, Andi Kleen, David Rientjes, Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton, Vlastimil Babka, "Kirill A. Shutemov", Brijesh Singh, Tom Lendacky, Jon Grimm, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar, "Kaplan, David", Varad Gautam, Dario Faggioli, x86, linux-mm@kvack.org, linux-coco@lists.linux.dev

On Thu, Jul 22, 2021 at 12:51 PM Kirill A. Shutemov wrote:

> +void mark_unaccepted(phys_addr_t start, phys_addr_t end)
> +{
> +	unsigned int npages;
> +
> +	if (start & ~PMD_MASK) {
> +		npages = (round_up(start, PMD_SIZE) - start) / PAGE_SIZE;
> +		tdx_hcall_gpa_intent(start, npages, TDX_MAP_PRIVATE);
> +		start = round_up(start, PMD_SIZE);
> +	}
> +
> +	if (end & ~PMD_MASK) {
> +		npages = (end - round_down(end, PMD_SIZE)) / PAGE_SIZE;
> +		end = round_down(end, PMD_SIZE);
> +		tdx_hcall_gpa_intent(end, npages, TDX_MAP_PRIVATE);
> +	}

Won't the code above accept pages that have already been accepted? It accepts the pages in the same 2MB region that lie before start and after end, and we do not know what code/data is stored on those pages, right? This might cause security issues depending on what is stored there.
> +static void __accept_pages(phys_addr_t start, phys_addr_t end)
> +{
> +	unsigned int rs, re;
> +
> +	bitmap_for_each_set_region(unaccepted_memory, rs, re,
> +				   start / PMD_SIZE, end / PMD_SIZE) {
> +		tdx_hcall_gpa_intent(rs * PMD_SIZE, (re - rs) * PMD_NR,
> +				     TDX_MAP_PRIVATE);

This assumes that the granularity of the unaccepted pages is always PMD_SIZE. I have seen the answer above saying that mark_unaccepted() makes sure we only have 2MB unaccepted regions in the bitmap, but that is not enough IMO. This function, as it is, will do a double TDACCEPT for 4KB pages in the same 2MB region that have already been accepted.

> +void maybe_set_page_offline(struct page *page, unsigned int order)
> +{
> +	phys_addr_t addr = page_to_phys(page);
> +	bool unaccepted = true;
> +	unsigned int i;
> +
> +	spin_lock(&unaccepted_memory_lock);
> +	if (order < PMD_ORDER) {
> +		BUG_ON(test_bit(addr / PMD_SIZE, unaccepted_memory));
> +		goto out;
> +	}

Don't we need to hit the BUG_ON whenever order < PMD_ORDER, independent of what test_bit() says? Whether the page has been accepted or not, there is a possibility of double-accepting pages.

> +	for (i = 0; i < (1 << (order - PMD_ORDER)); i++) {

And if order < PMD_ORDER, this will be an invalid shift operation, right?

> +	if (unaccepted)
> +		__SetPageOffline(page);
> +	else
> +		__accept_pages(addr, addr + (PAGE_SIZE << order));

So all the pages that were already accepted will be re-accepted?