Subject: Re: [PATCH RFC] io_uring: io_kiocb alloc cache
From: Jens Axboe
To: Jann Horn
Cc: io-uring, Xiaoguang Wang, Joseph Qi, Jiufei Xue, Pavel Begunkov,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Linux-MM
Date: Wed, 13 May 2020 12:34:12 -0600

On 5/13/20 11:42 AM, Jann Horn wrote:
> +slab allocator people
>
> On Wed, May 13, 2020 at 6:30 PM Jens Axboe wrote:
>> I turned the quick'n dirty from the other day into something a bit
>> more done. It would be great if someone else could run some
>> performance testing with this; I get about a 10% boost on the pure
>> NOP benchmark with it. But that's just on my laptop in qemu, so some
>> real iron testing would be awesome.
>
> 10% boost compared to which allocator? Are you using CONFIG_SLUB?

SLUB, yes.

>> The idea here is to have a percpu alloc cache. There are two sets of
>> state:
>>
>> 1) Requests that have IRQ completion. Disabling preemption is not
>>    enough there, we need to disable local irqs. This is a lot slower
>>    in certain setups, so we keep it separate.
>>
>> 2) No IRQ completion, where we can get by with just disabling
>>    preemption.
>
> The SLUB allocator has percpu caching, too, and as long as you don't
> enable any SLUB debugging or ASAN or such, and you're not hitting any
> slowpath processing, it doesn't even have to disable interrupts; it
> gets away with cmpxchg_double.
>
> Have you profiled what the actual problem is when using SLUB? Have
> you tested with CONFIG_SLAB_FREELIST_HARDENED turned off,
> CONFIG_SLUB_DEBUG turned off, CONFIG_TRACING turned off,
> CONFIG_FAILSLAB turned off, and so on? As far as I know, if you
> disable all hardening and debugging infrastructure, SLUB's
> kmem_cache_alloc()/kmem_cache_free() fastpaths should be really
> straightforward. And if you don't turn those off, the comparison is
> kinda unfair, because your custom freelist won't respect those flags.

But that's sort of the point. I don't have any nasty SLUB options
enabled, just the defaults, and that includes CONFIG_SLUB_DEBUG, which
all the distros have enabled, I believe. So yes, I could compare
against a bare-bones SLUB, and I'll definitely do that because I'm
curious. It could also be an artifact of qemu, which sometimes behaves
differently than a real host (locks/irqs are more expensive, for
example). I'm not sure how much that applies to SLUB in particular; I
haven't done targeted benchmarking of that.
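To make the two classes concrete, here is a rough sketch of what the
alloc side could look like. This is reconstructed from the description
above rather than lifted from the patch, so all names (struct
io_req_cache, req_cache_irq, req_cachep, the ->list member) are
illustrative, and per-CPU list initialization is omitted:

#include <linux/irqflags.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/slab.h>

/* Stand-in for the real struct in fs/io_uring.c. */
struct io_kiocb {
	struct list_head list;		/* cache linkage */
};

struct io_req_cache {
	struct list_head alloc_list;	/* hot: checked first on alloc */
	struct list_head free_list;	/* overflow; visible to a shrinker */
	unsigned int nr;		/* entries on alloc_list */
};

/* One cache per class; both lists need INIT_LIST_HEAD() per CPU at boot. */
static DEFINE_PER_CPU(struct io_req_cache, req_cache_irq);
static DEFINE_PER_CPU(struct io_req_cache, req_cache);

extern struct kmem_cache *req_cachep;	/* the io_kiocb slab cache */

static struct io_kiocb *io_req_cache_alloc(bool irq_comp)
{
	struct io_req_cache *cache;
	struct io_kiocb *req;
	unsigned long flags = 0;

	if (irq_comp) {
		/* the free side can run from IRQ context on this CPU */
		local_irq_save(flags);
		cache = this_cpu_ptr(&req_cache_irq);
	} else {
		/* free side runs from task context only; preempt off suffices */
		preempt_disable();
		cache = this_cpu_ptr(&req_cache);
	}

	req = list_first_entry_or_null(&cache->alloc_list,
				       struct io_kiocb, list);
	if (req) {
		list_del(&req->list);
		cache->nr--;
	}

	if (irq_comp)
		local_irq_restore(flags);
	else
		preempt_enable();

	/* cache miss: fall back to the regular slab allocator */
	if (!req)
		req = kmem_cache_alloc(req_cachep, GFP_KERNEL);
	return req;
}

The point of the split is that the common no-IRQ class only pays for
preempt_disable()/preempt_enable(), while only requests that can
complete from IRQ context eat the cost of local_irq_save().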
The patch is just tossed out there for experimentation purposes, in
case that wasn't clear. It's not like I'm proposing this for
inclusion. But if the wins are big enough over a _normal_
configuration, then it's definitely tempting.

> When you build custom allocators like this, it interferes with
> infrastructure meant to catch memory safety issues and such (both
> pure debugging code and safety checks meant for production use) -
> for example, ASAN and memory tagging will no longer be able to
> detect use-after-free issues in objects managed by your custom
> allocator cache.
>
> So please, don't implement custom one-off allocators in random
> subsystems. And if you do see a way to actually improve the
> performance of memory allocation, add that to the generic SLUB
> infrastructure.

I hear you. This isn't unique, fwiw; networking has a page pool
allocator, for example, which I did consider tapping into. Anyway,
I/we will be a lot wiser once this experiment progresses!

>> Outside of that, any freed request goes to the ce->alloc_list.
>> Attempting to alloc a request will check there first. When freeing
>> a request, if we're over some threshold, move requests to the
>> ce->free_list. That list can be browsed by the shrinker to free up
>> memory. If a CPU goes offline, all requests are reaped.
>>
>> That's about it. If we go further with this, it'll be split into a
>> few separate patches. For now, I'm just throwing it out there for
>> testing. The patch is against my for-5.8/io_uring branch.
>
> That branch doesn't seem to exist on
> ...

Oh oops, guess I never pushed that out. Will do so.

-- 
Jens Axboe
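A similarly rough sketch of the free side quoted above, reusing the
io_req_cache definitions from the sketch earlier in this mail; the
threshold is an arbitrary illustrative value, and the shrinker walk,
free_list locking, and CPU-hotplug reaping are all left out:

#define IO_REQ_CACHE_MAX	256	/* made-up overflow threshold */

static void io_req_cache_free(struct io_kiocb *req, bool irq_comp)
{
	struct io_req_cache *cache;
	unsigned long flags = 0;

	if (irq_comp) {
		local_irq_save(flags);
		cache = this_cpu_ptr(&req_cache_irq);
	} else {
		preempt_disable();
		cache = this_cpu_ptr(&req_cache);
	}

	/* freed requests go back onto the percpu alloc_list... */
	list_add(&req->list, &cache->alloc_list);

	/* ...and past the threshold, overflow to the shrinker's list */
	if (++cache->nr > IO_REQ_CACHE_MAX) {
		struct io_kiocb *surplus;

		surplus = list_last_entry(&cache->alloc_list,
					  struct io_kiocb, list);
		list_move(&surplus->list, &cache->free_list);
		cache->nr--;
	}

	if (irq_comp)
		local_irq_restore(flags);
	else
		preempt_enable();
}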