Date: Mon, 27 Mar 2023 17:27:11 +0300
From: Mike Rapoport <rppt@kernel.org>
To: linux-mm@kvack.org
Cc: Mel Gorman, Andrew Morton, Dave Hansen, Peter Zijlstra,
	Rick Edgecombe, Song Liu, Thomas Gleixner, Vlastimil Babka,
	linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [RFC PATCH 0/5] Prototype for direct map awareness in page allocator
References: <20230308094106.227365-1-rppt@kernel.org>
In-Reply-To: <20230308094106.227365-1-rppt@kernel.org>

(adding Mel)

On Wed, Mar 08, 2023 at 11:41:01AM +0200, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> 
> Hi,
> 
> This is a third attempt to make the page allocator aware of the direct
> map layout and to allow grouping of the pages that must be unmapped from
> the direct map.
> 
> This is a new implementation of __GFP_UNMAPPED, kind of a follow-up to
> this set:
> 
> https://lore.kernel.org/all/20220127085608.306306-1-rppt@kernel.org
> 
> but instead of using a migrate type to cache the unmapped pages, the
> current implementation adds a dedicated cache to serve __GFP_UNMAPPED
> allocations.
> 
> The last two patches in the series demonstrate how __GFP_UNMAPPED can be
> used in two in-tree use cases.
> 
> The first one is to switch secretmem to the new mechanism, which is a
> straightforward optimization.
> 
> The second use case is to enable __GFP_UNMAPPED in x86::module_alloc(),
> which is essentially used as a method to allocate code pages and thus
> requires permission changes for base pages in the direct map.
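
For readers who want to see the shape of that change: roughly, it boils
down to OR-ing __GFP_UNMAPPED into the gfp mask that module_alloc()
already passes down to vmalloc. A sketch based on the current x86
module_alloc() (not the exact hunk from patch 4; the flag plumbing
through __vmalloc_node_range() is assumed here):

	/*
	 * Sketch only: request the pages backing module text from the
	 * unmapped cache instead of taking them straight from the page
	 * allocator.
	 */
	void *module_alloc(unsigned long size)
	{
		return __vmalloc_node_range(size, MODULE_ALIGN,
					    MODULES_VADDR, MODULES_END,
					    GFP_KERNEL | __GFP_UNMAPPED,
					    PAGE_KERNEL, VM_DEFER_KMEMLEAK,
					    NUMA_NO_NODE,
					    __builtin_return_address(0));
	}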
> 
> This set is x86 specific at the moment because other architectures
> either do not support set_memory APIs that split the direct^w linear map
> (e.g. PowerPC) or only enable set_memory APIs when the linear map uses
> base page size (like arm64).
> 
> The patches are only lightly tested.
> 
> == Motivation ==
> 
> There are use cases that need to remove pages from the direct map or at
> least map them with 4K granularity. Whenever this is done, e.g. with the
> set_memory/set_direct_map APIs, the PUD- and PMD-sized mappings in the
> direct map are split into smaller pages.
> 
> To reduce the performance hit caused by this fragmentation of the direct
> map, it makes sense to group and/or cache the pages removed from the
> direct map so that the split large pages won't be scattered all over the
> place.
> 
> There were RFCs for grouped page allocations for vmalloc permissions [1]
> and for using PKS to protect page tables [2], as well as an attempt to
> use a pool of large pages in secretmem [3], but these suggestions
> address each use case separately, while a common mechanism at the core
> mm level could serve all of them.
> 
> == Implementation overview ==
> 
> The pages that need to be removed from the direct map are grouped in a
> dedicated cache. When there is a page allocation request with
> __GFP_UNMAPPED set, it is redirected from __alloc_pages() to that cache
> using a new unmapped_alloc() function.
> 
> The cache is implemented as a buddy allocator, so it can handle
> high-order requests.
> 
> The cache starts out empty, and whenever it does not have enough pages
> to satisfy an allocation request it attempts to allocate a PMD_SIZE page
> to replenish itself. If a PMD_SIZE page cannot be allocated, the cache
> is replenished with a page of the highest order available. That page is
> removed from the direct map and added to the cache's local buddy
> allocator.
> 
> There is also a shrinker that releases pages from the unmapped cache
> when there is memory pressure in the system. When the shrinker releases
> a page, it is mapped back into the direct map.
> 
> [1] https://lore.kernel.org/lkml/20210405203711.1095940-1-rick.p.edgecombe@intel.com
> [2] https://lore.kernel.org/lkml/20210505003032.489164-1-rick.p.edgecombe@intel.com
> [3] https://lore.kernel.org/lkml/20210121122723.3446-8-rppt@kernel.org
> 
> Mike Rapoport (IBM) (5):
>   mm: introduce __GFP_UNMAPPED and unmapped_alloc()
>   mm/unmapped_alloc: add debugfs file similar to /proc/pagetypeinfo
>   mm/unmapped_alloc: add shrinker
>   EXPERIMENTAL: x86: use __GFP_UNMAPPED for module_alloc()
>   EXPERIMENTAL: mm/secretmem: use __GFP_UNMAPPED
> 
>  arch/x86/Kconfig                |   3 +
>  arch/x86/kernel/module.c        |   2 +-
>  include/linux/gfp_types.h       |  11 +-
>  include/linux/page-flags.h      |   6 +
>  include/linux/pageblock-flags.h |  28 +++
>  include/trace/events/mmflags.h  |  10 +-
>  mm/Kconfig                      |   4 +
>  mm/Makefile                     |   1 +
>  mm/internal.h                   |  24 +++
>  mm/page_alloc.c                 |  39 +++-
>  mm/secretmem.c                  |  26 +--
>  mm/unmapped-alloc.c             | 334 ++++++++++++++++++++++++++++++++
>  mm/vmalloc.c                    |   2 +-
>  13 files changed, 459 insertions(+), 31 deletions(-)
>  create mode 100644 mm/unmapped-alloc.c
> 
> 
> base-commit: fe15c26ee26efa11741a7b632e9f23b01aca4cc6
> -- 
> 2.35.1
> 
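To make the __alloc_pages() redirection from the "Implementation
overview" above a bit more concrete, it is essentially the following (a
sketch of the assumed shape, not the actual mm/page_alloc.c diff):

	/*
	 * Sketch: requests with __GFP_UNMAPPED set are diverted to the
	 * dedicated cache via the new unmapped_alloc() entry point. The
	 * exact placement of the check is an assumption.
	 */
	struct page *__alloc_pages(gfp_t gfp, unsigned int order,
				   int preferred_nid, nodemask_t *nodemask)
	{
		if (gfp & __GFP_UNMAPPED)
			return unmapped_alloc(gfp, order);

		/* ... the existing allocation paths are unchanged ... */
	}

-- 
Sincerely yours,
Mike.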