From: Kent Overstreet <kent.overstreet@linux.dev>
To: Song Liu <song@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Vlastimil Babka <vbabka@suse.cz>,
linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [RFC PATCH 1/5] mm: introduce __GFP_UNMAPPED and unmapped_alloc()
Date: Thu, 18 May 2023 15:15:10 -0400 [thread overview]
Message-ID: <ZGZ5PuQxDnjHlxAY@moria.home.lan> (raw)
In-Reply-To: <CAPhsuW4Mm8z4kbVo8-sPU=QL2B1Sb32ZO7teWT8qienGNuxaeQ@mail.gmail.com>
On Thu, May 18, 2023 at 12:03:03PM -0700, Song Liu wrote:
> On Thu, May 18, 2023 at 11:47 AM Song Liu <song@kernel.org> wrote:
> >
> > On Thu, May 18, 2023 at 10:24 AM Kent Overstreet
> > <kent.overstreet@linux.dev> wrote:
> > >
> > > On Thu, May 18, 2023 at 10:00:39AM -0700, Song Liu wrote:
> > > > On Thu, May 18, 2023 at 9:48 AM Kent Overstreet
> > > > <kent.overstreet@linux.dev> wrote:
> > > > >
> > > > > On Thu, May 18, 2023 at 09:33:20AM -0700, Song Liu wrote:
> > > > > > I am working on patches based on the discussion in [1]. I am planning to
> > > > > > send v1 for review in a week or so.
> > > > >
> > > > > Hey Song, I was reviewing that thread too,
> > > > >
> > > > > Are you taking a different approach based on Thomas's feedback? I think
> > > > > he had some fair points in that thread.
> > > >
> > > > Yes, the API is based on Thomas's suggestion - roughly 90% of it came
> > > > out of those discussions.
> > > >
> > > > >
> > > > > My own feeling is that the buddy allocator is our tool for allocating
> > > > > larger variable sized physically contiguous allocations, so I'd like to
> > > > > see something based on that - I think we could do a hybrid buddy/slab
> > > > > allocator approach, like we have for regular memory allocations.
> > > >
> > > > I am planning to implement the allocator based on this (reuse
> > > > vmap_area logic):
> > >
> > > Ah, you're still taking the vmap_area approach.
> > >
> > > Mike's approach looks like it'll be _much_ lighter weight and higher
> > > performance to me. vmalloc is known to be slow compared to the buddy
> > > allocator, and with Mike's approach we're only modifying mappings once
> > > per 2 MB chunk.
> > >
> > > I don't see anything in your code for sub-page sized allocations either,
> > > so perhaps I should keep going with my slab allocator.
> >
> > The vmap_area approach handles sub-page allocations. In patch 5/5 of
> > set [2], we showed that multiple BPF programs can share the same page
> > with some kernel text (_etext).
> >
> > > Could you share your thoughts on your approach vs. Mike's? I'm newer to
> > > this area of the code than you two so maybe there's an angle I've missed
> > > :)
> >
> > AFAICT, a tree based solution (vmap_area) is more efficient than a
> > bitmap based solution.
A tree based approach carries quite a bit of overhead for the rbtree
pointers, plus the additional vmap_area structs.
With a buddy allocator based approach, no additional state needs to be
allocated, since it all fits in struct page.
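
To make that concrete, here's a rough sketch of what "it all fits in
struct page" means - illustrative only, not the actual mm internals:

#include <linux/mm.h>

/*
 * For a free buddy block, the order is stashed in the struct page
 * itself (the real mm uses page->private plus the PageBuddy page
 * type), so the allocator carries no external per-block metadata.
 */
static inline void mark_free_block(struct page *page, unsigned int order)
{
	__SetPageBuddy(page);
	set_page_private(page, order);
}

static inline unsigned int free_block_order(struct page *page)
{
	return page_private(page);
}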
> > First, for a 2MiB page with a 64B chunk size, we need a bitmap of
> > 2MiB / 64B = 32k bits = 4k bytes,
> > while a tree based solution can adapt to the number of allocations within
> > this 2MiB page. Also, searching for a free range in a 4kB bitmap may
> > actually be slower than searching the tree.
> >
> > Second, a bitmap based solution cannot handle > 2MiB allocations cleanly,
> > while a tree based solution can. For example, if a big driver needs 3MiB,
> > the tree based allocator can allocate 4MiB for it and use the remaining
> > 1MiB for smaller allocations.
We're not talking about a bitmap based solution for >= PAGE_SIZE
allocations; the alternative is a buddy allocator - so no searching,
just per power-of-two freelists.
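
As a sketch of what I mean (illustrative names, not the real mm
internals): allocation pops the first non-empty freelist at or above
the requested order and splits the remainder back down, so there's no
scan proportional to the region size:

#include <linux/list.h>
#include <linux/mm_types.h>

#define SKETCH_MAX_ORDER 9	/* 2MiB blocks with 4K pages */

static struct list_head free_list[SKETCH_MAX_ORDER + 1];

static struct page *buddy_alloc(unsigned int order)
{
	unsigned int o;

	for (o = order; o <= SKETCH_MAX_ORDER; o++) {
		struct page *page = list_first_entry_or_null(&free_list[o],
							     struct page, lru);
		if (!page)
			continue;
		list_del(&page->lru);
		/* hand back the unused upper halves, largest first */
		while (o > order) {
			o--;
			list_add(&page[1 << o].lru, &free_list[o]);
		}
		return page;
	}
	return NULL;	/* nothing free at any usable order */
}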
>
> Missed one:
>
> Third, a bitmap based solution requires a "size" parameter in free(),
> which is an overhead for the user. A tree based solution doesn't have
> this issue.
No, we can recover the size of the allocation via compound_order() -
this hasn't historically been done for alloc_pages() allocations, to
avoid the cost of setting up compound head/tail state in each page, but
perhaps it should be (and it is done with folios, which we've generally
been switching to).
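
E.g., a sketch using the existing compound page machinery (the order 10
example and the helper names here are just for illustration):

#include <linux/gfp.h>
#include <linux/mm.h>

static void *alloc_4mb(void)
{
	/* __GFP_COMP records the order in the compound head */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 10);

	return page ? page_address(page) : NULL;
}

static void free_no_size(void *p)
{
	struct page *page = virt_to_page(p);

	/* the order comes back via compound_order() - no size argument */
	__free_pages(page, compound_order(page));
}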