Date: Sun, 29 Aug 2021 10:06:10 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Vlastimil Babka
Cc: linux-mm@kvack.org, Andrew Morton, Andy Lutomirski, Dave Hansen,
    Ira Weiny, Kees Cook, Mike Rapoport, Peter Zijlstra, Rick Edgecombe,
    x86@kernel.org, linux-kernel@vger.kernel.org, Brijesh Singh
Subject: Re: [RFC PATCH 0/4] mm/page_alloc: cache pte-mapped allocations
References: <20210823132513.15836-1-rppt@kernel.org>
 <9d61b4f7-82d0-5caf-88fa-ff1b78704eea@suse.cz>
In-Reply-To: <9d61b4f7-82d0-5caf-88fa-ff1b78704eea@suse.cz>

On Tue, Aug 24, 2021 at 06:09:44PM +0200, Vlastimil Babka wrote:
> On 8/23/21 15:25, Mike Rapoport wrote:
> >
> > The idea is to use a gfp flag that will instruct the page allocator to use
> > the cache of pte-mapped pages because the caller needs to remove them from
> > the direct map or change their attributes.
>
> Like Dave, I don't like much the idea of a new GFP flag that all page
> allocations now have to check, and freeing that has to check a new pageblock
> flag, although I can see some of the benefits this brings...
>
> > When the cache is empty there is an attempt to refill it using PMD-sized
> > allocation so that once the direct map is split we'll be able to use all 4K
> > pages made available by the split.
> >
> > If the high order allocation fails, we fall back to order-0 and mark the
>
> Yeah, this fallback is where we benefit from the page allocator implementation,
> because of the page freeing hook that will recognize page from such fallback
> blocks and free them to the cache. But does that prevent so much fragmentation
> to be worth it? I'd see first if we can do without it.

I've run 'stress-ng --mmapfork 20 -t 30' in a VM with 4G of RAM and then
checked the splits reported in /proc/vmstat to get some idea of what the
benefit may be. I've compared Rick's implementation of grouped alloc
(rebased on v5.14-rc6) with this set. For that simple test there were ~30%
fewer splits:

                      | grouped alloc | pte-mapped
----------------------+---------------+------------
PMD splits after boot |      16       |     14
PMD splits after test |      49       |     34

(There were no PUD splits at all.)

I think the closer we have such a cache to the buddy allocator, the better
the memory utilization would be. The downside is that it will be harder to
reclaim 2M blocks than with separate caches, because at the page allocator
level we don't have enough information to make the pages allocated from the
cache movable.

-- 
Sincerely yours,
Mike.
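The numbers in the table are simple before/after deltas of the direct-map
split counters in /proc/vmstat (on x86 these are named
`direct_map_level2_splits` for PMD and `direct_map_level3_splits` for PUD
splits). As a sketch of how such a measurement can be computed, assuming
two text snapshots in the /proc/vmstat "name value" format (the helper name
and the sample values are illustrative, not taken from either patch set):

```python
def split_delta(before: str, after: str,
                counter: str = "direct_map_level2_splits") -> int:
    """Return how much a /proc/vmstat-style counter grew between snapshots."""
    def value(snapshot: str) -> int:
        # /proc/vmstat is one "name value" pair per line.
        for line in snapshot.splitlines():
            name, _, val = line.partition(" ")
            if name == counter:
                return int(val)
        raise KeyError(counter)
    return value(after) - value(before)

# Made-up snapshots shaped like /proc/vmstat output, using the
# grouped-alloc column from the table above (16 at boot, 49 after the test):
before = "nr_free_pages 12345\ndirect_map_level2_splits 16\n"
after = "nr_free_pages 11111\ndirect_map_level2_splits 49\n"
print(split_delta(before, after))  # -> 33 splits caused by the test run
```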