Date: Thu, 26 Aug 2021 11:02:57 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Dave Hansen
Cc: linux-mm@kvack.org, Andrew Morton, Andy Lutomirski, Dave Hansen,
 Ira Weiny, Kees Cook, Mike Rapoport, Peter Zijlstra, Rick Edgecombe,
 Vlastimil Babka, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables
References: <20210823132513.15836-1-rppt@kernel.org>
 <20210823132513.15836-5-rppt@kernel.org>
 <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
In-Reply-To: <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
On Mon, Aug 23, 2021 at 04:50:10PM -0700, Dave Hansen wrote:
> On 8/23/21 6:25 AM, Mike Rapoport wrote:
> >  void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
> >  {
> > +	enable_pgtable_write(page_address(pte));
> >  	pgtable_pte_page_dtor(pte);
> >  	paravirt_release_pte(page_to_pfn(pte));
> >  	paravirt_tlb_remove_table(tlb, pte);
> > @@ -69,6 +73,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
> >  #ifdef CONFIG_X86_PAE
> >  	tlb->need_flush_all = 1;
> >  #endif
> > +	enable_pgtable_write(pmd);
> >  	pgtable_pmd_page_dtor(page);
> >  	paravirt_tlb_remove_table(tlb, page);
> >  }
> 
> I'm also cringing a bit at hacking this into the page allocator.  A
> *lot* of what you're trying to do with getting large allocations out and
> splitting them up is done very well today by the slab allocators.  It
> might take some rearrangement of 'struct page' metadata to be more slab
> friendly, but it does seem like a close enough fit to warrant
> investigating.

I thought more about using slab, but it seems to me the least suitable
option. The use cases at hand (page tables, secretmem, SEV/TDX) allocate
at page granularity and some of them use struct page metadata, so even
rearranging that metadata won't help. Adding support for 2M slabs to SLUB
would also be quite intrusive.

I think the better options are moving such a cache deeper into the buddy
allocator, or using e.g. genalloc instead of a free list to deal with the
higher-order allocations (a rough sketch follows below the sign-off). The
choice between these two will mostly depend on the API selection, i.e. a
GFP flag or a dedicated alloc/free API.

-- 
Sincerely yours,
Mike.
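[Editorial illustration, not part of the thread or the patch series: a
minimal sketch of the second option mentioned above, assuming a cache that
takes one PMD-sized (2M) block from the buddy allocator and carves 4K
page-table pages out of it with genalloc rather than tracking the split
pages on a free list. All pgtable_cache_* names are hypothetical; only the
genalloc and page-allocator calls are real kernel APIs.]

/*
 * Hypothetical sketch: back a page-table page cache with genalloc.
 * The gen_pool tracks free 4K chunks inside 2M blocks handed over
 * from the buddy allocator.
 */
#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/pgtable.h>

static struct gen_pool *pgtable_pool;

static int pgtable_cache_refill(int nid)
{
	struct page *page;

	/* One PMD-sized block straight from the buddy allocator. */
	page = alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO,
				get_order(PMD_SIZE));
	if (!page)
		return -ENOMEM;

	/* Hand the whole block to genalloc for page-granular carving. */
	return gen_pool_add(pgtable_pool,
			    (unsigned long)page_address(page),
			    PMD_SIZE, nid);
}

static unsigned long pgtable_cache_alloc(int nid)
{
	unsigned long addr = gen_pool_alloc(pgtable_pool, PAGE_SIZE);

	/* Refill from the buddy allocator when the pool runs dry. */
	if (!addr && !pgtable_cache_refill(nid))
		addr = gen_pool_alloc(pgtable_pool, PAGE_SIZE);
	return addr;
}

static void pgtable_cache_free(unsigned long addr)
{
	gen_pool_free(pgtable_pool, addr, PAGE_SIZE);
}

static int __init pgtable_cache_init(void)
{
	/* Minimum allocation granularity: one page. */
	pgtable_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
	return pgtable_pool ? 0 : -ENOMEM;
}

[Whether the cache lives here or inside the buddy allocator itself is
exactly the open design question in the mail; the sketch only shows that
genalloc can do the bookkeeping for higher-order blocks without extra
struct page metadata.]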