Date: Tue, 26 Apr 2022 18:21:57 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton, Andy Lutomirski, Dave Hansen,
 Ira Weiny, Kees Cook, Mike Rapoport, Peter Zijlstra, Rick Edgecombe,
 Vlastimil Babka, linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [RFC PATCH 0/3] Prototype for direct map awareness in page allocator
References: <20220127085608.306306-1-rppt@kernel.org>

Hello Hyeonggon,

On Tue, Apr 26, 2022 at 05:54:49PM +0900, Hyeonggon Yoo wrote:
> On Thu, Jan 27, 2022 at 10:56:05AM +0200, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@kernel.org>
> >
> > Hi,
> >
> > This is a second attempt to make the page allocator aware of the
> > direct map layout and allow grouping of the pages that must be mapped
> > at PTE level in the direct map.
>
> Hello Mike, this may be a silly question...
>
> Looking at the implementation of set_memory_*(), they only split
> PMD/PUD-sized entries. But why not _merge_ them when all entries
> have the same permissions after changing the permission of an entry?
>
> I think grouping __GFP_UNMAPPED allocations would help reduce
> direct map fragmentation, but IMHO merging split entries seems better
> done in those helpers than in the page allocator.

Maybe. I didn't get as far as trying to merge split entries in the direct
map. IIRC, Kirill sent a patch for collapsing huge pages in the direct map
some time ago, but there still was something that had to initiate the
collapse.

> For example:
> 1) set_memory_ro() splits 1 RW PMD entry into 511 RW PTE
>    entries and 1 RO PTE entry.
>
> 2) before freeing the pages, we call set_memory_rw() and we have
>    512 RW PTE entries. Then we can merge them into 1 RW PMD entry.

For this we need to check the permissions of all 512 pages to make sure we
can use a PMD entry to map them. I'm not sure that doing this scan in every
set_memory call won't cause an overall slowdown.

> 3) after 2) we can do the same thing with PMD-sized entries
>    and merge them into 1 PUD entry if 512 PMD entries have
>    the same permissions.
>
> [...]
>
> > Mike Rapoport (3):
> >   mm/page_alloc: introduce __GFP_UNMAPPED and MIGRATE_UNMAPPED
> >   mm/secretmem: use __GFP_UNMAPPED to allocate pages
> >   EXPERIMENTAL: x86/module: use __GFP_UNMAPPED in module_alloc
> >
> >  arch/Kconfig                   |   7 ++
> >  arch/x86/Kconfig               |   1 +
> >  arch/x86/kernel/module.c       |   2 +-
> >  include/linux/gfp.h            |  13 +++-
> >  include/linux/mmzone.h         |  11 +++
> >  include/trace/events/mmflags.h |   3 +-
> >  mm/internal.h                  |   2 +-
> >  mm/page_alloc.c                | 129 ++++++++++++++++++++++++++++++++-
> >  mm/secretmem.c                 |   8 +-
> >  9 files changed, 162 insertions(+), 14 deletions(-)
> >
> > base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
> > --
> > 2.34.1
>
> --
> Thanks,
> Hyeonggon

--
Sincerely yours,
Mike.