Message-Id: <8b50991d-2ab5-4577-83e9-a2d74135c5f5@www.fastmail.com>
In-Reply-To: <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
References: <20210823132513.15836-1-rppt@kernel.org> <20210823132513.15836-5-rppt@kernel.org> <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
Date: Mon, 23 Aug 2021 20:34:05 -0700
From: "Andy Lutomirski"
To: "Dave Hansen", "Mike Rapoport", linux-mm@kvack.org
Cc: "Andrew Morton", "Dave Hansen", "Ira Weiny", "Kees Cook", "Mike Rapoport", "Peter Zijlstra (Intel)", "Rick P Edgecombe", "Vlastimil Babka", "the arch/x86 maintainers", "Linux Kernel Mailing List"
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables

On Mon, Aug 23, 2021, at 4:50 PM, Dave Hansen wrote:
> On 8/23/21 6:25 AM, Mike Rapoport wrote:
> >  void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
> >  {
> > +	enable_pgtable_write(page_address(pte));
> >  	pgtable_pte_page_dtor(pte);
> >  	paravirt_release_pte(page_to_pfn(pte));
> >  	paravirt_tlb_remove_table(tlb, pte);
> > @@ -69,6 +73,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
> >  #ifdef CONFIG_X86_PAE
> >  	tlb->need_flush_all = 1;
> >  #endif
> > +	enable_pgtable_write(pmd);
> >  	pgtable_pmd_page_dtor(page);
> >  	paravirt_tlb_remove_table(tlb, page);
> >  }
>
> I would have expected this to leverage the pte_offset_map/unmap() code
> to enable/disable write access.  Granted, it would enable write access
> even when only a read is needed, but that could be trivially fixed with
> a variant like:
>
> 	pte_offset_map_write()
> 	pte_offset_unmap_write()

I would also like to see a discussion of how races are handled when
multiple threads or CPUs access ptes in the same page at the same time.

> in addition to the existing (presumably read-only) versions:
>
> 	pte_offset_map()
> 	pte_offset_unmap()
>
> Although those only work for the leaf levels, it seems a shame not to
> use them.
>
> I'm also cringing a bit at hacking this into the page allocator.  A
> *lot* of what you're trying to do with getting large allocations out
> and splitting them up is done very well today by the slab allocators.
> It might take some rearrangement of 'struct page' metadata to be more
> slab-friendly, but it does seem like a close enough fit to warrant
> investigating.
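
For concreteness, here is a minimal sketch of what the variants Dave
suggests could look like.  It is hypothetical: it assumes the series'
enable_pgtable_write()/disable_pgtable_write() helpers accept any
kernel address inside the page-table page (as the quoted hunk
suggests), and it ignores CONFIG_HIGHPTE, where pte_offset_map()
returns a kmap address that virt_to_page()-based helpers could not
handle:

#include <linux/mm.h>
#include <linux/pgtable.h>

static inline pte_t *pte_offset_map_write(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte = pte_offset_map(pmd, addr);

	/* Make the pte page writable for the duration of the update. */
	enable_pgtable_write(pte);
	return pte;
}

static inline void pte_offset_unmap_write(pte_t *pte)
{
	/* Write-protect the pte page again before unmapping it. */
	disable_pgtable_write(pte);
	pte_unmap(pte);
}

An update site would then pair them around the write:

	pte = pte_offset_map_write(pmd, addr);
	set_pte(pte, entry);
	pte_offset_unmap_write(pte);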
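
To make the race concrete: if the helpers flip protection for the whole
page-table page, one CPU's disable can write-protect the page while
another CPU is mid-update on a neighboring pte.  One illustrative (not
definitive) way out is to refcount write-enables per page.  Everything
below is assumed rather than taken from the series: 'pt_write_count' is
a hypothetical new struct page field, set_memory_rw()/set_memory_ro()
stand in for whatever primitive the series uses, and the spinlock is
only safe if flipping protection on an already-4K-mapped page does not
sleep:

#include <linux/mm.h>
#include <linux/spinlock.h>
#include <asm/set_memory.h>

static DEFINE_SPINLOCK(pgtable_prot_lock);

/* Hypothetical refcounting variant of enable_pgtable_write(). */
static void enable_pgtable_write_ref(void *ptr)
{
	struct page *page = virt_to_page(ptr);

	spin_lock(&pgtable_prot_lock);
	/* First enabler makes the page writable; later ones just count. */
	if (page->pt_write_count++ == 0)
		set_memory_rw((unsigned long)page_address(page), 1);
	spin_unlock(&pgtable_prot_lock);
}

/* Hypothetical refcounting variant of disable_pgtable_write(). */
static void disable_pgtable_write_ref(void *ptr)
{
	struct page *page = virt_to_page(ptr);

	spin_lock(&pgtable_prot_lock);
	/* Only the last disabler write-protects the page again. */
	if (--page->pt_write_count == 0)
		set_memory_ro((unsigned long)page_address(page), 1);
	spin_unlock(&pgtable_prot_lock);
}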
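
And for the slab idea, the allocation side might look roughly like the
sketch below.  This deliberately sets aside the 'struct page' metadata
conflict Dave mentions (slab and pgtable_pte_page_dtor() both want the
page's struct page fields), which is the part that would actually need
rearranging; the function names are made up for illustration:

#include <linux/mm.h>
#include <linux/slab.h>

static struct kmem_cache *pgtable_cache;

static int __init pgtable_cache_setup(void)
{
	/*
	 * Page-sized, page-aligned objects.  The point of going through
	 * slab would be that the backing slab can be a higher-order
	 * allocation whose direct-map permission change is paid once and
	 * then amortized over every page table carved out of it.
	 */
	pgtable_cache = kmem_cache_create("pgtable_cache", PAGE_SIZE,
					  PAGE_SIZE, 0, NULL);
	return pgtable_cache ? 0 : -ENOMEM;
}

static pte_t *pte_alloc_one_sketch(void)
{
	return kmem_cache_alloc(pgtable_cache, GFP_KERNEL | __GFP_ZERO);
}

static void pte_free_sketch(pte_t *pte)
{
	kmem_cache_free(pgtable_cache, pte);
}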