From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 20 Sep 2021 23:01:16 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org,
	Jens Axboe, John Garry, linux-block@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [RFC v2 PATCH] mm, sl[au]b: Introduce lockless cache
References: <20210920154816.31832-1-42.hyeyoo@gmail.com>
In-Reply-To: <20210920154816.31832-1-42.hyeyoo@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Sep 20, 2021 at 03:48:16PM +0000, Hyeonggon Yoo wrote:
> +#define KMEM_LOCKLESS_CACHE_QUEUE_SIZE	64

I would suggest that, to be nice to the percpu allocator, this be
one less than 2^n.

> +struct kmem_lockless_cache {
> +	void *queue[KMEM_LOCKLESS_CACHE_QUEUE_SIZE];
> +	unsigned int size;
> +};

I would also suggest that 'size' be first, as it is going to be accessed
every time, and then there's a reasonable chance that queue[size - 1]
will be in the same cacheline.  CPUs tend to handle that better.

> +/**
> + * kmem_cache_alloc_cached - try to allocate from cache without lock
> + * @s: slab cache
> + * @flags: SLAB flags
> + *
> + * Try to allocate from cache without lock. If fails, fill the lockless cache
> + * using bulk alloc API
> + *
> + * Be sure that there's no race condition.
> + * Must create slab cache with SLAB_LOCKLESS_CACHE flag to use this function.
> + *
> + * Return: a pointer to free object on allocation success, NULL on failure.
> + */
> +void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags)
> +{
> +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> +
> +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
> +
> +	if (cache->size) /* fastpath without lock */
> +		return cache->queue[--cache->size];
> +
> +	/* slowpath */
> +	cache->size = kmem_cache_alloc_bulk(s, gfpflags,
> +			KMEM_LOCKLESS_CACHE_QUEUE_SIZE, cache->queue);

Go back to the Bonwick paper and look at the magazine section again.
You have to allocate _half_ the size of the queue; otherwise you get
into pathological situations where you start to free and allocate
every time.

> +void kmem_cache_free_cached(struct kmem_cache *s, void *p)
> +{
> +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> +
> +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
> +
> +	/* Is there better way to do this? */
> +	if (cache->size == KMEM_LOCKLESS_CACHE_QUEUE_SIZE)
> +		kmem_cache_free(s, cache->queue[--cache->size]);

Yes.
	if (cache->size == KMEM_LOCKLESS_CACHE_QUEUE_SIZE) {
		kmem_cache_free_bulk(s, KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2,
				&cache->queue[KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2]);
		cache->size = KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2;
	}

(check the maths on that; it might have an off-by-one)
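The refill/flush policy above can be sketched as a user-space model of the
magazine: refill half the queue when an allocation finds it empty, flush half
when a free finds it full.  All names here (mag_alloc, mag_free, MAG_SIZE) are
illustrative rather than the patch's API, and malloc/free stand in for
kmem_cache_alloc_bulk/kmem_cache_free_bulk:

```c
/* User-space model of the half-refill / half-flush magazine policy.
 * Illustrative names only; malloc/free stand in for the bulk slab API. */
#include <assert.h>
#include <stdlib.h>

#define MAG_SIZE 63			/* one less than 2^n, per the suggestion above */

struct magazine {
	unsigned int size;		/* first: size and queue[size - 1] may share a cacheline */
	void *queue[MAG_SIZE];
};

static void *mag_alloc(struct magazine *m)
{
	if (m->size)			/* fast path, no lock */
		return m->queue[--m->size];
	/* Slow path: refill only half the queue.  Refilling it completely
	 * would force the very next free to flush again. */
	for (unsigned int i = 0; i < MAG_SIZE / 2; i++)
		m->queue[i] = malloc(64);
	m->size = MAG_SIZE / 2;
	return m->queue[--m->size];
}

static void mag_free(struct magazine *m, void *p)
{
	if (m->size == MAG_SIZE) {
		/* Flush half, keep half: the hysteresis that avoids the
		 * free/alloc ping-pong at the boundary. */
		for (unsigned int i = MAG_SIZE / 2; i < MAG_SIZE; i++)
			free(m->queue[i]);
		m->size = MAG_SIZE / 2;
	}
	m->queue[m->size++] = p;
}
```

After an allocation from an empty magazine, size sits at MAG_SIZE / 2 - 1;
after a free into a full one, at MAG_SIZE / 2 + 1.  Either way, a run of
alternating alloc/free calls stays on the fast path instead of hitting the
slab layer every time.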