From: Michal Hocko <mhocko@suse.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Song Liu <song@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
Vlastimil Babka <vbabka@suse.cz>,
linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [RFC PATCH 1/5] mm: intorduce __GFP_UNMAPPED and unmapped_alloc()
Date: Thu, 30 Mar 2023 10:11:06 +0200
Message-ID: <ZCVEGt95iORWg6xB@dhcp22.suse.cz>
In-Reply-To: <ZCUajG1uTniQcmlN@kernel.org>

On Thu 30-03-23 08:13:48, Mike Rapoport wrote:
> On Wed, Mar 29, 2023 at 10:13:23AM +0200, Michal Hocko wrote:
> > On Wed 29-03-23 10:28:02, Mike Rapoport wrote:
> > > On Tue, Mar 28, 2023 at 05:24:49PM +0200, Michal Hocko wrote:
> > > > On Tue 28-03-23 18:11:20, Mike Rapoport wrote:
> > > > > On Tue, Mar 28, 2023 at 09:39:37AM +0200, Michal Hocko wrote:
> > > > [...]
> > > > > > OK, so you want to reduce that direct map fragmentation?
> > > > >
> > > > > Yes.
> > > > >
> > > > > > Is that a real problem?
> > > > >
> > > > > A while ago Intel folks published a report [1] that showed better
> > > > > performance with large pages in the direct map for the majority of
> > > > > benchmarks.
> > > > >
> > > > > > My impression is that modules are mostly a static thing. BPF
> > > > > > might be a different thing though. I have a recollection that BPF guys
> > > > > > were dealing with direct map fragmentation as well.
> > > > >
> > > > > Modules are indeed static, but module_alloc() is used by anything that
> > > > > allocates code pages, e.g. kprobes, ftrace and BPF. Besides, Thomas
> > > > > mentioned that having code in 2M pages reduces iTLB pressure [2], but
> > > > > that's not only about avoiding the splits in the direct map but also about
> > > > > using large mappings in the modules address space.
> > > > >
> > > > > The BPF guys suggested an allocator for executable memory [3] mainly
> > > > > because they've seen a performance improvement of 0.6% - 0.9% in their
> > > > > setups [4].
> > > >
> > > > These are fair arguments and it would have been better to have them in
> > > > the RFC. Also it is not really clear to me what the actual benefit of the
> > > > unmapping is for those use cases. I do get that they want to benefit from
> > > > caching with the same permission setup, but do they need the unmapping as
> > > > well?
> > >
> > > The pages allocated with module_alloc() get different permissions depending
> > > on whether they belong to text, rodata, data etc. The permissions are
> > > updated both in the vmalloc address space and in the direct map, and the
> > > updates to the direct map cause splits of the large pages.
> >
> > That much is clear (wouldn't hurt to mention that in the changelog
> > though).
> >
> > > If we cache large pages as unmapped, we take the entire 2M page out of
> > > the direct map, and then if/when it becomes free it can be returned to
> > > the direct map as a 2M page.
> > >
> > > Generally, the unmapped allocations are intended for use cases that map the
> > > memory elsewhere than the direct map anyway and need to modify the direct
> > > mappings of that memory, be it module_alloc(), secretmem, PKS page tables or
> > > maybe even some of the encrypted VM memory.
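
For my own understanding, the cache refill would then be something along
these lines (very much a sketch: unmapped_cache_refill() is a made up name
and I am assuming x86's set_memory_np() here; the point is only that the
unmapping covers the whole large page, so CPA does not have to split it):

static struct page *unmapped_cache_refill(gfp_t gfp)
{
	struct page *page = alloc_pages(gfp, pageblock_order);

	if (!page)
		return NULL;

	/*
	 * The range covers a whole, naturally aligned pageblock, so the
	 * direct map PMD can be made not-present without being split.
	 * Once the block is completely free again it can be mapped back
	 * as a single 2M page.
	 */
	set_memory_np((unsigned long)page_address(page), 1 << pageblock_order);

	return page;
}
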
> >
> > I believe we are still not on the same page here. I do understand that
> > you want to re-use the caching capability of the unmapped_pages_alloc
> > for module allocations as well. My question is whether module_alloc
> > really benefits from the fact that the memory is unmapped.
> >
> > I guess you want to say that it does because it wouldn't have to change
> > the permissions for the direct map, but I do not see that anywhere in the
> > patch...
>
> This happens automagically outside the patch :)
>
> Currently, to change memory permissions the modules code calls the set_memory
> APIs and passes the vmalloc'ed address to them. The set_memory functions look
> up the direct map alias and update the permissions there as well.
> If the memory allocated with module_alloc() is unmapped in the direct map,
> there won't be an alias for the set_memory APIs to update.
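
OK, so the flow we are talking about is something like the below, right?
(Not the real module loader code; load_text() is just a made up helper to
illustrate where the direct map alias gets touched today.)

static void *load_text(const void *image, unsigned long size)
{
	unsigned long npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	void *text = module_alloc(size);

	if (!text)
		return NULL;

	memcpy(text, image, size);
	/*
	 * These operate on the vmalloc address, but today they also walk
	 * and update the direct map alias, and a 4k granular permission
	 * change there is what splits the 2M pages. With the memory
	 * unmapped there is no alias left to touch.
	 */
	set_memory_ro((unsigned long)text, npages);
	set_memory_x((unsigned long)text, npages);

	return text;
}
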
>
> > Also consider the slow path where module_alloc needs to
> > allocate a fresh (huge) page and unmap it. Does the overhead of the
> > unmapping suit the needs of all module_alloc users? The module loader is
> > likely not interesting as it tends to be rather static, but what about
> > BPF programs?
>
> The overhead of unmapping pages in the direct map on the allocation path will
> be offset by the reduced overhead of updating permissions in the direct map
> after the allocation. Both use the same APIs, and where today a permission
> update causes a split of a large page, unmapping a large page won't.
>
> Of course in a loaded system unmapped_alloc() won't always be able to
> allocate large pages to replenish the cache, but there will still be fewer
> updates to the direct map.

OK, all of that is changelog material. I would recommend going this way:
start with the simple unmapped_pages_alloc interface and use it for
secretmem. There shouldn't be anything controversial there. In a follow-up
patch add support for vmalloc, which would add the new gfp flag, with the
justification that this is the simplest way to support the module_alloc use
case, and do not be shy about providing as much context as you can. Ideally
with some numbers for the best/worst/avg cases.
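
Roughly something like this (the names are taken from this RFC where they
exist; unmapped_pages_free() is just my placeholder for the counterpart and
I am guessing at the exact signatures, the vmalloc plumbing for the flag is
hand waved):

	/* Patch 1: a plain page allocator level interface, used by secretmem. */
	struct page *page = unmapped_pages_alloc(GFP_KERNEL, 0);

	if (page)
		unmapped_pages_free(page, 0);

	/*
	 * Follow-up: the gfp flag, so that vmalloc based users such as
	 * module_alloc() can share the same cache of unmapped pages.
	 */
	void *p = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_UNMAPPED);
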
Thanks
--
Michal Hocko
SUSE Labs