From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
	Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Varad Gautam, Dario Faggioli,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv2 1/7] mm: Add support for unaccepted memory
Date: Tue, 11 Jan 2022 14:33:08 +0300
Message-Id: <20220111113314.27173-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220111113314.27173-1-kirill.shutemov@linux.intel.com>
References: <20220111113314.27173-1-kirill.shutemov@linux.intel.com>
UEFI Specification version 2.9 introduces the concept of memory
acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD
SEV-SNP, require memory to be accepted before it can be used by the
guest. Acceptance happens via a protocol specific to the Virtual
Machine platform.

Accepting memory is costly and it makes the VMM allocate memory for the
accepted guest physical address range. It is better to postpone memory
acceptance until the memory is needed: it lowers boot time and reduces
memory overhead.

Supporting such memory requires a few changes in core-mm code:

  - memblock has to accept memory on allocation;

  - page allocator has to accept memory on the first allocation of the
    page.

The memblock change is trivial. The page allocator is modified to
accept pages on the first allocation. PageOffline() is used to indicate
that a page requires acceptance. The flag is currently used by hotplug
and ballooning; such pages are not available to the page allocator.

An architecture has to provide three helpers if it wants to support
unaccepted memory:

 - accept_memory() makes a range of physical addresses accepted.

 - maybe_set_page_offline() marks a page PageOffline() if it requires
   acceptance. Used during boot to put pages on free lists.

 - accept_and_clear_page_offline() makes a page accepted and clears
   PageOffline().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/page-flags.h |  4 ++++
 mm/internal.h              | 15 +++++++++++++++
 mm/memblock.c              |  1 +
 mm/page_alloc.c            | 21 ++++++++++++++++++++-
 4 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 52ec4b5e5615..281f70da329c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -887,6 +887,10 @@ PAGE_TYPE_OPS(Buddy, buddy)
  * any further access to page content. PFN walkers that read content of random
  * pages should check PageOffline() and synchronize with such drivers using
  * page_offline_freeze()/page_offline_thaw().
+ *
+ * If a PageOffline() page is encountered on a buddy allocator's free list, it
+ * has to be "accepted" before it can be used.
+ * See accept_and_clear_page_offline() and CONFIG_UNACCEPTED_MEMORY.
  */
 PAGE_TYPE_OPS(Offline, offline)
 
diff --git a/mm/internal.h b/mm/internal.h
index 3b79a5c9427a..1738a4e2a27e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -713,4 +713,19 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		      unsigned long addr, int page_nid, int *flags);
 
+#ifndef CONFIG_UNACCEPTED_MEMORY
+static inline void maybe_set_page_offline(struct page *page, unsigned int order)
+{
+}
+
+static inline void accept_and_clear_page_offline(struct page *page,
+						 unsigned int order)
+{
+}
+
+static inline void accept_memory(phys_addr_t start, phys_addr_t end)
+{
+}
+#endif
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index 1018e50566f3..6dfa594192de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1400,6 +1400,7 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	 */
 	kmemleak_alloc_phys(found, size, 0, 0);
 
+	accept_memory(found, found + size);
 	return found;
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..5707b4b5f774 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1064,6 +1064,7 @@ static inline void __free_one_page(struct page *page,
 	unsigned int max_order;
 	struct page *buddy;
 	bool to_tail;
+	bool offline = PageOffline(page);
 
 	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
 
@@ -1097,6 +1098,10 @@ static inline void __free_one_page(struct page *page,
 			clear_page_guard(zone, buddy, order, migratetype);
 		else
 			del_page_from_free_list(buddy, zone, order);
+
+		if (PageOffline(buddy))
+			offline = true;
+
 		combined_pfn = buddy_pfn & pfn;
 		page = page + (combined_pfn - pfn);
 		pfn = combined_pfn;
@@ -1130,6 +1135,9 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_buddy_order(page, order);
 
+	if (offline)
+		__SetPageOffline(page);
+
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
 	else if (is_shuffle_order(order))
@@ -1155,7 +1163,8 @@ static inline void __free_one_page(struct page *page,
 static inline bool page_expected_state(struct page *page,
 					unsigned long check_flags)
 {
-	if (unlikely(atomic_read(&page->_mapcount) != -1))
+	if (unlikely(atomic_read(&page->_mapcount) != -1) &&
+	    !PageOffline(page))
 		return false;
 
 	if (unlikely((unsigned long)page->mapping |
@@ -1734,6 +1743,8 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
+
+	maybe_set_page_offline(page, order);
 	__free_pages_core(page, order);
 }
 
@@ -1823,10 +1834,12 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == pageblock_nr_pages &&
 	    (pfn & (pageblock_nr_pages - 1)) == 0) {
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+		maybe_set_page_offline(page, pageblock_order);
 		__free_pages_core(page, pageblock_order);
 		return;
 	}
 
+	accept_memory(pfn << PAGE_SHIFT, (pfn + nr_pages) << PAGE_SHIFT);
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if ((pfn & (pageblock_nr_pages - 1)) == 0)
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
@@ -2297,6 +2310,9 @@ static inline void expand(struct zone *zone, struct page *page,
 		if (set_page_guard(zone, &page[size], high, migratetype))
 			continue;
 
+		if (PageOffline(page))
+			__SetPageOffline(&page[size]);
+
 		add_to_free_list(&page[size], zone, high, migratetype);
 		set_buddy_order(&page[size], high);
 	}
@@ -2393,6 +2409,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 */
 	kernel_unpoison_pages(page, 1 << order);
 
+	if (PageOffline(page))
+		accept_and_clear_page_offline(page, order);
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_alloc_pages and kernel_init_free_pages must be
-- 
2.34.1