linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>, Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Vlastimil Babka <vbabka@suse.cz>, John Dias <joaodias@google.com>,
	Suren Baghdasaryan <surenb@google.com>,
	pullip.cho@samsung.com,
	Chris Goldsworthy <cgoldswo@codeaurora.org>,
	Nicholas Piggin <npiggin@gmail.com>
Subject: Re: [RFC 0/7] Support high-order page bulk allocation
Date: Tue, 18 Aug 2020 18:22:10 +0200	[thread overview]
Message-ID: <2d835b0c-da48-52d1-1792-255bbad3425d@redhat.com> (raw)
In-Reply-To: <20200818155825.GS17456@casper.infradead.org>

On 18.08.20 17:58, Matthew Wilcox wrote:
> On Tue, Aug 18, 2020 at 08:15:43AM -0700, Minchan Kim wrote:
>> I understand the pfn stuff in the API is not pretty, but the concept of
>> the idea makes sense to me in that it goes through the *migratable area*
>> and tries hard to get pages of the requested order. It looks like a
>> GFP_NORETRY version of kmem_cache_alloc_bulk.
>>
>> How about this?
>>
>>     int cma_alloc(struct cma *cma, int order, unsigned int nr_elem, struct page **pages);
> 
> I think that makes a lot more sense as an API.  Although I think you want
> 
> int cma_bulk_alloc(struct cma *cma, unsigned order, unsigned nr_elem,
> 		struct page **pages);
> 

Right, and I would start with a very simple implementation that does not
mess with (i.e., modify) alloc_contig_range().
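
To make that concrete, here is a rough userspace toy of such a "very
simple implementation": bulk allocation as nothing more than a loop of
independent single high-order allocations over a free-page bitmap, with
rollback on failure. All names (model_*, CMA_NR_PAGES) are made up for
illustration; this is not the kernel API, just a sketch of the idea.

```c
#include <stddef.h>
#include <string.h>

/* Toy CMA area: one byte per order-0 page, 0 = free, 1 = allocated. */
#define CMA_NR_PAGES 64
static unsigned char cma_bitmap[CMA_NR_PAGES];

/* Reserve (1 << order) naturally aligned free pages; return the start
 * pfn, or -1 if no such run exists. */
static long model_alloc_one(unsigned int order)
{
	size_t run = (size_t)1 << order, start, i;

	for (start = 0; start + run <= CMA_NR_PAGES; start += run) {
		for (i = 0; i < run; i++)
			if (cma_bitmap[start + i])
				break;
		if (i == run) {
			memset(&cma_bitmap[start], 1, run);
			return (long)start;
		}
	}
	return -1;
}

/* The "very simple implementation": nr_elem independent order-sized
 * allocations, rolled back as a unit if any one of them fails. */
static int model_cma_bulk_alloc(unsigned int order, unsigned int nr_elem,
				long *pfns)
{
	unsigned int i, j;

	for (i = 0; i < nr_elem; i++) {
		pfns[i] = model_alloc_one(order);
		if (pfns[i] < 0) {
			/* undo partial progress */
			for (j = 0; j < i; j++)
				memset(&cma_bitmap[pfns[j]], 0,
				       (size_t)1 << order);
			return -1;
		}
	}
	return 0;
}
```

The point of the sketch: everything lives in the wrapper, and the
underlying single-allocation path stays completely untouched.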

I'd then much rather see simple tweaks to alloc_contig_range() to
improve the situation. E.g., some kind of "fail fast" flag that lets the
caller skip some of the draining (or do it manually in CMA before a bulk
allocation) and fail fast rather than trying to allocate the range
whatever the cost.
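
As a toy illustration of what such a "fail fast" flag could mean (this
is purely hypothetical, not an existing kernel flag): some pages sit on
per-CPU lists and only become allocatable after a drain, and the flag
trades the drain latency for an immediate failure.

```c
#include <stdbool.h>
#include <stddef.h>

#define RANGE_LEN 8

/* Toy page states: free, parked on a per-CPU list, or allocated. */
enum toy_state { TOY_FREE, TOY_PCP, TOY_USED };

static enum toy_state toy_range[RANGE_LEN] = {
	TOY_FREE, TOY_FREE, TOY_PCP, TOY_FREE,
	TOY_FREE, TOY_FREE, TOY_FREE, TOY_FREE,
};

static int toy_drains; /* counts how often we paid for a drain */

static void toy_drain(void)
{
	size_t i;

	toy_drains++;
	for (i = 0; i < RANGE_LEN; i++)
		if (toy_range[i] == TOY_PCP)
			toy_range[i] = TOY_FREE;
}

/* Claim the whole range. With fail_fast, a PCP page fails the call
 * outright; otherwise we drain and retry, paying the latency cost. */
static int toy_alloc_range(bool fail_fast)
{
	size_t i;

	for (i = 0; i < RANGE_LEN; i++) {
		if (toy_range[i] == TOY_USED)
			return -1;
		if (toy_range[i] == TOY_PCP) {
			if (fail_fast)
				return -1;
			toy_drain();
			if (toy_range[i] != TOY_FREE)
				return -1;
		}
	}
	for (i = 0; i < RANGE_LEN; i++)
		toy_range[i] = TOY_USED;
	return 0;
}
```

The caller (e.g., a bulk-allocation wrapper) could then drain once up
front and issue many fail-fast calls, instead of paying for a drain
inside every single range allocation.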

There are multiple optimizations you can play with then (start with big
granularity and split, move to smaller granularity on demand, etc., all
nicely wrapped in cma_bulk_alloc()).
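
A sketch of that granularity fallback, again with made-up names and a
toy bitmap allocator. For brevity it assumes nr_elem is a power of two,
so one big block splits exactly into the requested chunks; a real
implementation would free any leftover chunks.

```c
#include <stddef.h>
#include <string.h>

#define GRAN_NR_PAGES 32
static unsigned char gran_map[GRAN_NR_PAGES]; /* 0 = free, 1 = used */

/* Reserve (1 << order) naturally aligned pages; start pfn or -1. */
static long gran_alloc(unsigned int order)
{
	size_t run = (size_t)1 << order, start, i;

	for (start = 0; start + run <= GRAN_NR_PAGES; start += run) {
		for (i = 0; i < run; i++)
			if (gran_map[start + i])
				break;
		if (i == run) {
			memset(&gran_map[start], 1, run);
			return (long)start;
		}
	}
	return -1;
}

/* Try the whole request as one big allocation and split it into
 * order-sized chunks; on failure, move to smaller granularity on
 * demand (here simply one allocation per element). */
static int gran_bulk_alloc(unsigned int order, unsigned int nr_elem,
			   long *pfns)
{
	unsigned int i, big = order;
	long start;

	/* smallest order whose block covers the whole request */
	while ((1u << (big - order)) < nr_elem)
		big++;

	start = gran_alloc(big);
	if (start >= 0) {
		for (i = 0; i < nr_elem; i++) /* "split" the big block */
			pfns[i] = start + (long)i * (1 << order);
		return 0;
	}

	for (i = 0; i < nr_elem; i++) { /* smaller granularity on demand */
		pfns[i] = gran_alloc(order);
		if (pfns[i] < 0)
			return -1;
	}
	return 0;
}
```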

Yes, it might not end up as fast as this big hack (sorry), but as
Nicholas correctly said, we have no motivation to implement and maintain
such complexity just to squeeze the last few milliseconds out of an
allocation path for "broken devices".

I absolutely dislike pushing this very specific allocation policy down
into the core range allocator. It already makes my head spin every time
I look at it in detail.

-- 
Thanks,

David / dhildenb




Thread overview: 27+ messages
2020-08-14 17:31 Minchan Kim
2020-08-14 17:31 ` [RFC 1/7] mm: page_owner: split page by order Minchan Kim
2020-08-14 17:31 ` [RFC 2/7] mm: introduce split_page_by_order Minchan Kim
2020-08-14 17:31 ` [RFC 3/7] mm: compaction: deal with upcoming high-order page splitting Minchan Kim
2020-08-14 17:31 ` [RFC 4/7] mm: factor __alloc_contig_range out Minchan Kim
2020-08-14 17:31 ` [RFC 5/7] mm: introduce alloc_pages_bulk API Minchan Kim
2020-08-17 17:40   ` David Hildenbrand
2020-08-14 17:31 ` [RFC 6/7] mm: make alloc_pages_bulk best effort Minchan Kim
2020-08-14 17:31 ` [RFC 7/7] mm/page_isolation: avoid drain_all_pages for alloc_pages_bulk Minchan Kim
2020-08-14 17:40 ` [RFC 0/7] Support high-order page bulk allocation Matthew Wilcox
2020-08-14 20:55   ` Minchan Kim
2020-08-18  2:16     ` Cho KyongHo
2020-08-18  9:22     ` Cho KyongHo
2020-08-16 12:31 ` David Hildenbrand
2020-08-17 15:27   ` Minchan Kim
2020-08-17 15:45     ` David Hildenbrand
2020-08-17 16:30       ` Minchan Kim
2020-08-17 16:44         ` David Hildenbrand
2020-08-17 17:03           ` David Hildenbrand
2020-08-17 23:34           ` Minchan Kim
2020-08-18  7:42             ` Nicholas Piggin
2020-08-18  7:49             ` David Hildenbrand
2020-08-18 15:15               ` Minchan Kim
2020-08-18 15:58                 ` Matthew Wilcox
2020-08-18 16:22                   ` David Hildenbrand [this message]
2020-08-18 16:49                     ` Minchan Kim
2020-08-19  0:27                     ` Yang Shi
