Date: Thu, 12 Aug 2021 23:49:24 +0300
From: "Kirill A. Shutemov"
To: Dave Hansen
Cc: Joerg Roedel, Andi Kleen, Borislav Petkov, Andy Lutomirski,
	Sean Christopherson, Andrew Morton, Kuppuswamy Sathyanarayanan,
	David Rientjes, Vlastimil Babka, Tom Lendacky, Thomas Gleixner,
	Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Varad Gautam,
	Dario Faggioli, x86@kernel.org, linux-mm@kvack.org,
	linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov"
Subject: Re: [PATCH 1/5] mm: Add support for unaccepted memory
Message-ID: <20210812204924.haneuxapkmluli6t@box.shutemov.name>
References: <20210810062626.1012-1-kirill.shutemov@linux.intel.com>
 <20210810062626.1012-2-kirill.shutemov@linux.intel.com>
 <9748c07c-4e59-89d0-f425-c57f778d1b42@linux.intel.com>
 <17b6a3a3-bd7d-f57e-8762-96258b16247a@intel.com>
 <796a4b20-7fa3-3086-efa0-2f728f35ae06@linux.intel.com>
 <3caf5e73-c104-0057-680c-7851476e67ac@linux.intel.com>
 <25312492-5d67-e5b0-1a51-b6880f45a550@intel.com>
In-Reply-To: <25312492-5d67-e5b0-1a51-b6880f45a550@intel.com>

On Thu, Aug 12, 2021 at 07:14:20AM -0700, Dave Hansen wrote:
> On 8/12/21 1:19 AM, Joerg Roedel wrote:
> > On Tue, Aug 10, 2021 at 02:20:08PM -0700, Andi Kleen wrote:
> >> Also I agree with your suggestion that we should get the slow path out of
> >> the zone locks/interrupt disable region. That should be easy enough and is
> >> an obvious improvement.
> >
> > I also agree that the slow-path needs to be outside of the memory
> > allocator locks. But I think this conflicts with the concept of
> > accepting memory in 2MB chunks even if allocation size is smaller.
> >
> > Given some kernel code allocated 2 pages and the allocator path starts
> > to validate the whole 2MB page the memory is on, then there are
> > potential races to take into account.
>
> Yeah, the PageOffline()+PageBuddy() trick breaks down as soon as
> PageBuddy() gets cleared.
>
> I'm not 100% sure we need a page flag, though. Imagine if we just did a
> static key check in prep_new_page():
>
> 	if (static_key_whatever(tdx_accept_ongoing))
> 		maybe_accept_page(page, order);
>
> maybe_accept_page() could just check the acceptance bitmap and see if
> the 2MB page has been accepted. If so, just return. If not, take the
> bitmap lock, accept the 2MB page, then mark the bitmap.
>
> maybe_accept_page()
> {
> 	unsigned long huge_pfn = page_to_phys(page) / PMD_SIZE;
>
> 	/* Test the bit before taking any locks: */
> 	if (test_bit(huge_pfn, &accepted_bitmap))
> 		return;
>
> 	spin_lock_irq();
> 	/* Retest inside the lock: */
> 	if (test_bit(huge_pfn, &accepted_bitmap))
> 		return;
> 	tdx_accept_page(page, PMD_SIZE);
> 	set_bit(huge_pfn, &accepted_bitmap));
> 	spin_unlock_irq();
> }
>
> That's still not great. It's still a global lock and the lock is still
> held for quite a while because that accept isn't going to be lightning
> fast. But, at least it's not holding any *other* locks. It also
> doesn't take any locks in the fast path where the 2MB page was already
> accepted.

I expect the cache line with the bitmap to bounce around during warm-up.
One cache line covers a gig of RAM.

It's also not clear at all at what point the static key has to be
switched. We don't have any obvious point where we can say we are done
with accepting (unless you advocate for proactive acceptance).

I like PageOffline() because we only need to consult the struct page the
allocator already has in hand (and in cache) before looking into the
bitmap.

> The locking could be more fine-grained, for sure. The bitmap could, for
> instance, have a lock bit too. Or we could just have an array of locks
> and hash the huge_pfn to find a lock given a huge_pfn. But, for now, I
> think it's fine to just keep the global lock.
>
> > Either some other code path allocates memory from that page and returns
> > it before validation is finished or we end up with double validation.
> > Returning unvalidated memory is a guest-problem and double validation
> > will cause security issues for SNP guests.
>
> Yeah, I think the *canonical* source of information for accepts is the
> bitmap. The page flags and any static keys or whatever are
> less-canonical sources that tell you when you _might_ need to consult
> the bitmap.

Right.

-- 
 Kirill A. Shutemov
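For readers who want to play with the double-checked bitmap idea quoted
above, here is a minimal, self-contained user-space model of the scheme.
It is only a sketch under assumed names: accepted_bitmap, chunk_accepted(),
mark_accepted() and accept_2m_chunk() are stand-ins invented for
illustration, not kernel APIs; the in-kernel version would use
test_bit()/set_bit() and a spinlock inside prep_new_page(), as in the
quoted sketch. Unlike the quoted sketch, the lock is released on the
"already accepted" retest path as well.

	/* Build with: cc -pthread model.c */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PMD_SIZE	(2UL << 20)	/* 2MB acceptance granularity */
	#define MAX_HUGE_PFNS	(4UL << 10)	/* model 8GB of RAM: 4096 chunks */

	static uint64_t accepted_bitmap[MAX_HUGE_PFNS / 64];
	static pthread_mutex_t accept_lock = PTHREAD_MUTEX_INITIALIZER;

	static bool chunk_accepted(unsigned long huge_pfn)
	{
		return accepted_bitmap[huge_pfn / 64] & (1ULL << (huge_pfn % 64));
	}

	static void mark_accepted(unsigned long huge_pfn)
	{
		accepted_bitmap[huge_pfn / 64] |= 1ULL << (huge_pfn % 64);
	}

	/* Stand-in for the hypervisor/TDX call that actually accepts memory. */
	static void accept_2m_chunk(unsigned long huge_pfn)
	{
		printf("accepting 2MB chunk at phys %#lx\n", huge_pfn * PMD_SIZE);
	}

	static void maybe_accept_page(unsigned long phys_addr)
	{
		unsigned long huge_pfn = phys_addr / PMD_SIZE;

		/* Fast path: already accepted, no lock taken. */
		if (chunk_accepted(huge_pfn))
			return;

		pthread_mutex_lock(&accept_lock);
		/* Retest under the lock so two threads don't accept twice. */
		if (!chunk_accepted(huge_pfn)) {
			accept_2m_chunk(huge_pfn);
			mark_accepted(huge_pfn);
		}
		pthread_mutex_unlock(&accept_lock);	/* unlock on both paths */
	}

	int main(void)
	{
		maybe_accept_page(0x200000);	/* slow path: accepts the chunk */
		maybe_accept_page(0x201000);	/* fast path: same 2MB chunk    */
		return 0;
	}

Note that the fast path still reads the shared bitmap word, which is
exactly the cache-line-bouncing concern raised in the reply above; the
per-chunk lock array suggested later in the thread would shorten the
critical section but not reduce that bitmap traffic.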