Date: Tue, 24 Aug 2021 16:32:58 +0300
From: Mike Rapoport
To: Dave Hansen
Cc: linux-mm@kvack.org, Andrew Morton, Andy Lutomirski, Dave Hansen,
 Ira Weiny, Kees Cook, Mike Rapoport, Peter Zijlstra, Rick Edgecombe,
 Vlastimil Babka, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables
References: <20210823132513.15836-1-rppt@kernel.org>
 <20210823132513.15836-5-rppt@kernel.org>
 <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
In-Reply-To: <1cccc2b6-8b5b-4aee-483d-f10e64a248a5@intel.com>
On Mon, Aug 23, 2021 at 04:50:10PM -0700, Dave Hansen wrote:
> On 8/23/21 6:25 AM, Mike Rapoport wrote:
> 
> I'm also cringing a bit at hacking this into the page allocator.  A
> *lot* of what you're trying to do with getting large allocations out and
> splitting them up is done very well today by the slab allocators.  It
> might take some rearrangement of 'struct page' metadata to be more slab
> friendly, but it does seem like a close enough fit to warrant investigating.

I did this at the page allocator level in the hope that (1) it would be
possible to use such a cache for allocations of different orders and (2)
having a global cache of unmapped pages would utilize memory more
efficiently and reduce direct map fragmentation.  (A toy sketch of the
per-order cache I have in mind for (1) appears at the end of this mail.)

And the slab allocators could then be users of that cache at the page
allocator level.

For the single use case of page tables slab may work, but in the more
general case I don't see it as a good fit.

I'll take a closer look at using slab anyway; maybe it'll work out.

-- 
Sincerely yours,
Mike.
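To make point (1) a bit more concrete, below is a small userspace toy model of
the idea: one global pool of blocks kept per order, where a single high-order
refill can be split to serve requests of several different orders.  All names
here (cache_get, cache_put, MAX_ORDER in this snippet) are made up for
illustration and are not the API or the implementation of this patch set.

/*
 * Toy model of a global, per-order cache.  Blocks of different orders are
 * served from one pool; a larger block is split on demand, so one refill
 * of the cache can satisfy many smaller requests.  Error handling and
 * freeing are omitted for brevity.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_ORDER 4			/* orders 0..3 for the toy model */

struct block {
	struct block *next;
	unsigned int order;		/* block covers 1 << order pages */
};

/* One free list per order, mimicking a per-order cache of unmapped pages. */
static struct block *cache[MAX_ORDER];

static void cache_put(struct block *b, unsigned int order)
{
	b->order = order;
	b->next = cache[order];
	cache[order] = b;
}

/*
 * Take a block of the requested order from the cache.  If only a larger
 * block is available, split it and return the unused halves to the cache.
 */
static struct block *cache_get(unsigned int order)
{
	unsigned int o;

	for (o = order; o < MAX_ORDER; o++) {
		if (!cache[o])
			continue;

		struct block *b = cache[o];
		cache[o] = b->next;

		while (o > order) {
			o--;
			/* Put back the unused "buddy" half of the block. */
			struct block *half = malloc(sizeof(*half));
			cache_put(half, o);
		}
		b->order = order;
		return b;
	}
	return NULL;			/* cache empty: caller must refill */
}

int main(void)
{
	/* Refill the cache once with a single order-3 (8 page) block ... */
	cache_put(malloc(sizeof(struct block)), 3);

	/* ... and serve requests of different orders from that one refill. */
	struct block *a = cache_get(0);
	struct block *b = cache_get(1);
	struct block *c = cache_get(2);

	printf("got orders %u, %u, %u from one refill\n",
	       a->order, b->order, c->order);
	return 0;
}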