Date: Mon, 20 Sep 2021 02:53:34 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Christoph Lameter, David Rientjes, Joonsoo Kim, Andrew Morton,
	Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Jens Axboe
Subject: Re: [RFC PATCH] Introducing lockless cache built on top of slab allocator
References: <20210919164239.49905-1-42.hyeyoo@gmail.com>
	<20210920010938.GA3108@kvm.asia-northeast3-a.c.our-ratio-313919.internal>
In-Reply-To: <20210920010938.GA3108@kvm.asia-northeast3-a.c.our-ratio-313919.internal>

On Mon, Sep 20, 2021 at 01:09:38AM +0000, Hyeonggon Yoo wrote:
> Hello Matthew, thanks for giving me a comment! I appreciate it.
>
> On Sun, Sep 19, 2021 at 08:17:44PM +0100, Matthew Wilcox wrote:
> > On Sun, Sep 19, 2021 at 04:42:39PM +0000, Hyeonggon Yoo wrote:
> > > It is just a simple proof of concept, and not ready for submission
> > > yet. There may be wrong code (like wrong gfp flags, or wrong error
> > > handling, etc.); it is just a simple proof of concept. I would like
> > > comments from you.
> >
> > Have you read:
> >
> > https://www.usenix.org/legacy/event/usenix01/full_papers/bonwick/bonwick_html/
> >
> > The relevant part of that paper is section 3, magazines. We should have
> > low and high water marks for the number of objects
>
> I hadn't read that before, but after reading it, it does not seem
> different from SLAB's percpu queueing.
>
> > and we should allocate
> > from / free to the slab allocator in batches. Slab has bulk alloc/free
> > APIs already.
>
> There are kmem_cache_alloc_{bulk,free}() functions for bulk allocation,
> but they are designed for large numbers of allocations to reduce locking
> cost, not for percpu lockless allocation.

What I'm saying is that rather than a linked list of objects, we should
have an array of, say, 15 pointers per CPU (and a count of how many
objects we currently have). If we are trying to allocate and have no
objects, call kmem_cache_alloc_bulk() for 8 objects. If we are trying to
free and already have 15 objects, call kmem_cache_free_bulk() for the
last 8 objects and set the count of cached objects to 7. (Maybe 8 and 15
are the wrong numbers; this is just an example.)

> Yeah, we can implement a lockless cache using kmem_cache_alloc_{bulk,free},
> but kmem_cache_alloc_{bulk,free} alone is not enough.
>
> > I'd rather see this be part of the slab allocator than a separate API.
>
> And I disagree on this, because in most situations we cannot allocate
> without a lock; it is a special case for IO polling.
>
> To make it part of the slab allocator, we would need to modify the
> existing data structures. But making it part of the slab allocator would
> be a waste of memory, because most users do not use this.

Oh, it would have to be an option. Maybe as a new slab_flags_t flag.
Or maybe a kmem_cache_alloc_percpu_lockless().
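
A minimal sketch of the per-CPU array scheme described above, built on the
existing bulk APIs, might look like the following. The names here (struct
lockless_cache, lc_alloc()/lc_free(), the 8/15 constants) are made up for
illustration rather than an existing interface, and the structure is
assumed to be per-CPU (e.g. allocated with alloc_percpu()) and only
touched with preemption disabled, as in the IO polling path:

#include <linux/slab.h>

#define LC_HIGH_WATER	15	/* flush back to the slab allocator at this point */
#define LC_BATCH	8	/* refill / flush this many objects at a time */

struct lockless_cache {
	unsigned int nr;		/* objects currently held in the array */
	void *objects[LC_HIGH_WATER];	/* per-CPU array instead of a linked list */
};

/* Assumed to run with preemption disabled on the owning CPU. */
static void *lc_alloc(struct kmem_cache *s, struct lockless_cache *lc,
		      gfp_t gfp)
{
	if (unlikely(!lc->nr)) {
		/* Empty: refill one batch from the slab allocator. */
		lc->nr = kmem_cache_alloc_bulk(s, gfp, LC_BATCH, lc->objects);
		if (!lc->nr)
			return NULL;
	}
	return lc->objects[--lc->nr];
}

static void lc_free(struct kmem_cache *s, struct lockless_cache *lc, void *obj)
{
	if (unlikely(lc->nr == LC_HIGH_WATER)) {
		/* Full: hand the most recently freed batch back, leaving 7 behind. */
		lc->nr -= LC_BATCH;
		kmem_cache_free_bulk(s, LC_BATCH, &lc->objects[lc->nr]);
	}
	lc->objects[lc->nr++] = obj;
}

The high water mark bounds how much memory each CPU holds back from the
slab allocator, while the batch size amortises the locking cost of the
bulk calls; the common alloc/free path never takes a lock at all.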