From: Jens Axboe <axboe@kernel.dk>
To: Pekka Enberg <penberg@iki.fi>, Jann Horn <jannh@google.com>
Cc: io-uring <io-uring@vger.kernel.org>,
Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
joseph qi <joseph.qi@linux.alibaba.com>,
Jiufei Xue <jiufei.xue@linux.alibaba.com>,
Pavel Begunkov <asml.silence@gmail.com>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH RFC} io_uring: io_kiocb alloc cache
Date: Wed, 13 May 2020 14:44:16 -0600
Message-ID: <672c5a33-fcdf-86f8-e529-6341dcbdadca@kernel.dk>
In-Reply-To: <d3ff604d-2955-f8f6-dcbd-25ae90569dc3@iki.fi>

On 5/13/20 2:31 PM, Pekka Enberg wrote:
> Hi Jens,
>
> On 5/13/20 1:20 PM, Pekka Enberg wrote:
>>> So I assume if someone does "perf record", they will see significant
>>> reduction in page allocator activity with Jens' patch. One possible way
>>> around that is forcing the page allocation order to be much higher. IOW,
>>> something like the following completely untested patch:
>
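Just to spell out why a bigger slab order helps here (the patch itself
isn't quoted above): every trip to the page allocator hands SLUB a
larger slab, so far more objects get carved out per call. A quick
userspace sketch of that arithmetic, using a ~256-byte object size
purely as an assumption, not the real sizeof(struct io_kiocb):

/*
 * Back-of-the-envelope: objects handed out per page-allocator call as
 * the slab page order grows. The 256-byte object size is an assumption
 * for illustration, not the real size of struct io_kiocb.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	const unsigned long obj_size = 256;	/* assumed, for illustration */

	for (unsigned int order = 0; order <= 3; order++) {
		unsigned long slab_bytes = PAGE_SIZE << order;

		printf("order %u: %2lu KiB slab -> %3lu objects per page allocator call\n",
		       order, slab_bytes / 1024, slab_bytes / obj_size);
	}
	return 0;
}
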
> On 5/13/20 11:09 PM, Jens Axboe wrote:
>> Now tested, I gave it a shot. This seems to bring performance to
>> basically what the io_uring patch does, so that's great! Again, just in
>> the microbenchmark test case, so freshly booted and just running the
>> case.
>
> Great, thanks for testing!
>
> On 5/13/20 11:09 PM, Jens Axboe wrote:
>> Will this patch introduce latencies or non-deterministic behavior for a
>> fragmented system?
>
> You have to talk to someone who is more up-to-date with how the page
> allocator operates today. But yeah, I assume people still want to avoid
> higher-order allocations as much as possible, because they make
> allocation harder when memory is fragmented.

That was my thinking... I don't want a random io_kiocb allocation to
take a long time because of high-order allocations.
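One quick way to gauge that risk on a given box is /proc/buddyinfo,
which lists how many free blocks each zone has at every order; if the
high-order columns are nearly empty, an order-3 slab allocation is
going to lean on reclaim/compaction. A rough sketch that just totals
the free blocks at order >= 3 (illustrative only, not part of any patch
in this thread):

/*
 * Rough fragmentation check: sum the free blocks at order >= 3 from
 * /proc/buddyinfo. If that number is tiny, higher-order slab pages
 * will push the page allocator toward reclaim/compaction, which is
 * where the latency worry comes from.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/buddyinfo", "r");
	char line[512];
	unsigned long high_order = 0;

	if (!f) {
		perror("fopen /proc/buddyinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char *p = strstr(line, "zone");
		char *end;
		int order = 0;

		if (!p)
			continue;
		p += strlen("zone");
		while (*p == ' ')		/* spaces before the zone name */
			p++;
		while (*p && *p != ' ')		/* the zone name itself */
			p++;
		for (;;) {			/* per-order free block counts */
			unsigned long count = strtoul(p, &end, 10);

			if (end == p)
				break;
			if (order >= 3)		/* order 3 = 8 contiguous pages */
				high_order += count;
			order++;
			p = end;
		}
	}
	fclose(f);
	printf("free blocks at order >= 3: %lu\n", high_order);
	return 0;
}
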
> That said, perhaps it's not going to the page allocator as much as I
> thought, but the problem is that the per-CPU cache size is just too small
> for these allocations, forcing do_slab_free() to take the slow path
> often. Would be interesting to know if CONFIG_SLAB does better here
> because the per-CPU cache size is much larger IIRC.

Just tried with SLAB, and it's roughly 4-5% down from the baseline
(unmodified) SLUB. So not faster, at least for this case.
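For reference, the shape of the alloc cache idea, reduced to a
userspace analogue: keep recently freed requests on a short free list
and only fall back to the real allocator when the list runs dry. The
names and sizes below are made up for illustration; this is not the
actual io_uring code:

/*
 * Minimal sketch of an allocation cache: recently freed objects sit on
 * a singly linked free list and are handed out again before the
 * underlying allocator is touched. Userspace analogue only; struct and
 * names are invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

struct request {
	struct request *next;	/* free-list linkage while cached */
	char payload[224];	/* stand-in for real request state */
};

static struct request *free_list;
static unsigned int cached;
static const unsigned int max_cached = 64;

static struct request *req_alloc(void)
{
	struct request *req = free_list;

	if (req) {				/* fast path: reuse */
		free_list = req->next;
		cached--;
		return req;
	}
	return malloc(sizeof(*req));		/* slow path: real allocator */
}

static void req_free(struct request *req)
{
	if (cached < max_cached) {		/* fast path: stash for reuse */
		req->next = free_list;
		free_list = req;
		cached++;
		return;
	}
	free(req);				/* cache full: hand it back */
}

int main(void)
{
	struct request *a = req_alloc();
	struct request *b = req_alloc();

	req_free(a);
	req_free(b);
	/* Both of these are served straight off the free list. */
	printf("%p %p\n", (void *)req_alloc(), (void *)req_alloc());
	return 0;
}

Whether the list is per ring or per CPU is a separate question; the
point is just that the fast path never leaves the caller.
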
--
Jens Axboe