Date: Fri, 13 Aug 2021 00:23:14 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Dave Hansen
Cc: Joerg Roedel, Andi Kleen, Borislav Petkov, Andy Lutomirski,
	Sean Christopherson, Andrew Morton, Kuppuswamy Sathyanarayanan,
	David Rientjes, Vlastimil Babka, Tom Lendacky, Thomas Gleixner,
	Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Varad Gautam,
	Dario Faggioli, x86@kernel.org, linux-mm@kvack.org,
	linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/5] mm: Add support for unaccepted memory
Message-ID: <20210812212314.4fkrmzebluyl3umo@box.shutemov.name>
In-Reply-To: <708c758f-2305-4fe6-ddcd-6881794402a5@intel.com>

On Thu, Aug 12, 2021 at 01:59:01PM -0700, Dave Hansen wrote:
> On 8/12/21 1:49 PM, Kirill A.
> Shutemov wrote:
> > On Thu, Aug 12, 2021 at 07:14:20AM -0700, Dave Hansen wrote:
> >> On 8/12/21 1:19 AM, Joerg Roedel wrote:
> >>
> >> maybe_accept_page()
> >> {
> >> 	unsigned long huge_pfn = page_to_phys(page) / PMD_SIZE;
> >>
> >> 	/* Test the bit before taking any locks: */
> >> 	if (test_bit(huge_pfn, &accepted_bitmap))
> >> 		return;
> >>
> >> 	spin_lock_irq();
> >> 	/* Retest inside the lock: */
> >> 	if (!test_bit(huge_pfn, &accepted_bitmap)) {
> >> 		tdx_accept_page(page, PMD_SIZE);
> >> 		set_bit(huge_pfn, &accepted_bitmap);
> >> 	}
> >> 	spin_unlock_irq();
> >> }
> >>
> >> That's still not great.  It's still a global lock and the lock is
> >> still held for quite a while because that accept isn't going to be
> >> lightning fast.  But, at least it's not holding any *other* locks.
> >> It also doesn't take any locks in the fast path where the 2MB page
> >> was already accepted.
> >
> > I expect a cache line with the bitmap to bounce around during warm up.
> > One cache line covers a gig of RAM.
>
> The bitmap bouncing around isn't going to really matter when you have a
> global lock protecting it from writes.

The idea with a static key would not work if we mark shared memory as
unaccepted there.

> > It's also not clear at all at what point the static key has to be
> > switched. We don't have any obvious point where we can say we are done
> > with accepting (unless you advocate for proactive accepting).
>
> Two easy options:
> 1. Run over the bitmap and count the bits left.  That can be done
>    outside the lock even.
> 2. Keep a counter of the number of bits set in the bitmap.
>
> > I like PageOffline() because we only need to consult the cache the
> > page allocator already has in hand before looking into the bitmap.
>
> I like it too.  But, it's really nasty if the value is only valid deep
> in the allocator.
>
> We could keep the PageOffline() thing and then move it to some other
> field in 'struct page' that's only valid between ClearPageOffline() and
> prep_new_page().
> Some magic value that says: "This page has not yet been accepted, you
> better check the bitmap."  Something like:
>
> 	if (TestClearPageOffline(page))
> 		page->private = 0xdeadbeef;
>
> and then check page->private in prep_new_page().  There should be
> plenty of 'struct page' space to hijack in that small window.

PageOffline() is encoded in mapcount, and check_new_page_bad() would
complain if mapcount is not -1.

> BTW, I was going to actually try and hack something up, but I got
> annoyed that your patches don't apply upstream and gave up.  A git tree
> with all of the dependencies would be nice.

Okay.

-- 
 Kirill A. Shutemov