Date: Wed, 22 Sep 2021 08:19:06 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Jens Axboe
Cc: linux-mm@kvack.org, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org,
	Matthew Wilcox, John Garry, linux-block@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [RFC v2 PATCH] mm, sl[au]b: Introduce lockless cache
Message-ID: <20210922081906.GA78305@kvm.asia-northeast3-a.c.our-ratio-313919.internal>
References: <20210920154816.31832-1-42.hyeyoo@gmail.com>

On Tue, Sep 21, 2021 at 09:37:40AM -0600, Jens Axboe wrote:
> > @@ -424,6 +431,57 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
> > }
> > EXPORT_SYMBOL(kmem_cache_create);
> >
> > +/**
> > + * kmem_cache_alloc_cached - try to allocate from cache without lock
> > + * @s: slab cache
> > + * @flags: SLAB flags
> > + *
> > + * Try to allocate from cache without lock. If fails, fill the lockless cache
> > + * using bulk alloc API
> > + *
> > + * Be sure that there's no race condition.
> > + * Must create slab cache with SLAB_LOCKLESS_CACHE flag to use this function.
> > + *
> > + * Return: a pointer to free object on allocation success, NULL on failure.
> > + */
> > +void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags)
> > +{
> > +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> > +
> > +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
> > +
> > +	if (cache->size) /* fastpath without lock */
> > +		return cache->queue[--cache->size];
> > +
> > +	/* slowpath */
> > +	cache->size = kmem_cache_alloc_bulk(s, gfpflags,
> > +			KMEM_LOCKLESS_CACHE_QUEUE_SIZE, cache->queue);
> > +	if (cache->size)
> > +		return cache->queue[--cache->size];
> > +	else
> > +		return NULL;
> > +}
> > +EXPORT_SYMBOL(kmem_cache_alloc_cached);

Hello Jens, thank you for your comments.

> What I implemented for IOPOLL doesn't need to care about interrupts,
> hence preemption disable is enough. But we do need that, at least.

To be honest, that was my mistake: I was misusing the percpu API.
Thank you for pointing that out. It is fixed in v3 (work in progress).

> There are basically two types of use cases for this:
>
> 1) Freeing can happen from interrupts
> 2) Freeing cannot happen from interrupts

I considered only case 2) when writing the code. To support 1), I think
there are two options:

  a) internally call kmem_cache_free() when in_interrupt() is true
  b) require the caller to disable interrupts when freeing

I think a) is fine; what do you think? Note that b) can be problematic
with kmem_cache_free_bulk(), as its documentation says interrupts must
be enabled.

> How does this work for preempt? You seem to assume that the function is
> invoked with preempt disabled, but then it could only be used with
> GFP_ATOMIC.

I gave it the same prototype as kmem_cache_alloc(), but as you said, the
gfpflags parameter is unnecessary. Let's remove it in v3.

> And if you don't care about users that free from irq/softirq, then that
> should be mentioned.
> Locking context should be mentioned, too. The above may be just fine IFF
> both alloc and free are protected by a lock higher up. If not, both need
> preemption disabled and GFP_ATOMIC. I'd suggest making the get/put cpu
> part of the API internally.

Honestly, I did not put much effort into the documentation, especially on
what context is expected when calling these functions. The comments will
be updated in v3 with your feedback in mind.

> > +/**
> > + * kmem_cache_free_cached - return object to cache
> > + * @s: slab cache
> > + * @p: pointer to free
> > + */
> > +void kmem_cache_free_cached(struct kmem_cache *s, void *p)
> > +{
> > +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> > +
> > +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
>
> Don't use BUG_ON, just do:
>
> 	if (WARN_ON_ONCE(!(s->flags & SLAB_LOCKLESS_CACHE))) {
> 		kmem_cache_free(s, p);
> 		return;
> 	}

OK. I agree that WARN is better than BUG.

Thanks,
Hyeonggon Yoo

> --
> Jens Axboe
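For readers following along, the fastpath/slowpath split being discussed can be
modeled in plain user-space C. This is only an illustrative sketch, not the
kernel code: `bulk_alloc`, `QUEUE_SIZE`, and `struct lockless_cache` are
stand-ins I made up here, `malloc()` replaces `kmem_cache_alloc_bulk()`, and
the model is single-threaded, so it deliberately ignores the per-CPU,
preemption, and interrupt issues that the thread above is about.

```c
#include <stddef.h>
#include <stdlib.h>

#define QUEUE_SIZE 64

/* User-space stand-in for the patch's struct kmem_lockless_cache:
 * a fixed-size array of cached object pointers plus a fill count. */
struct lockless_cache {
	void *queue[QUEUE_SIZE];
	unsigned int size;
};

/* Stand-in for kmem_cache_alloc_bulk(): fill @p with up to @nr objects
 * of @objsize bytes and return how many were actually allocated. */
static unsigned int bulk_alloc(size_t objsize, unsigned int nr, void **p)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		p[i] = malloc(objsize);
		if (!p[i])
			break;
	}
	return i;
}

/* Mirrors kmem_cache_alloc_cached(): pop from the array on the fastpath,
 * refill the whole queue with one bulk call on the slowpath. */
static void *cache_alloc(struct lockless_cache *cache, size_t objsize)
{
	if (cache->size)			/* fastpath: pop last object */
		return cache->queue[--cache->size];

	/* slowpath: refill, then retry the pop */
	cache->size = bulk_alloc(objsize, QUEUE_SIZE, cache->queue);
	if (cache->size)
		return cache->queue[--cache->size];
	return NULL;
}

/* Mirrors kmem_cache_free_cached(): push back into the queue,
 * or free directly when the queue is full. */
static void cache_free(struct lockless_cache *cache, void *p)
{
	if (cache->size < QUEUE_SIZE)
		cache->queue[cache->size++] = p;
	else
		free(p);
}
```

In the actual patch the same pop/refill logic operates on per-CPU data via
this_cpu_ptr(), which is exactly why the preemption and interrupt context
questions in this thread matter.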