Date: Wed, 24 Aug 2022 22:04:06 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
	Pekka Enberg, Roman Gushchin, linux-mm@kvack.org,
	Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith
Subject: Re: [PATCH v2 5/5] mm/slub: simplify __cmpxchg_double_slab() and slab_[un]lock()
In-Reply-To: <20220823170400.26546-6-vbabka@suse.cz>
References: <20220823170400.26546-1-vbabka@suse.cz> <20220823170400.26546-6-vbabka@suse.cz>

On Tue, Aug 23, 2022 at 07:04:00PM +0200, Vlastimil Babka wrote:
> The PREEMPT_RT specific disabling of irqs in __cmpxchg_double_slab()
> (through slab_[un]lock()) is unnecessary as bit_spin_lock() disables
> preemption and that's sufficient on RT where interrupts are threaded.
>
> That means we no longer need the slab_[un]lock() wrappers, so delete
> them and rename the current __slab_[un]lock() to slab_[un]lock().
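
(For reference -- IIRC the generic bit_spin_lock() in
include/linux/bit_spinlock.h relies on preempt_disable(). A paraphrased
sketch from memory, not a verbatim copy:

static inline void bit_spin_lock(int bitnum, unsigned long *addr)
{
	/* only preemption is disabled here, not interrupts */
	preempt_disable();
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
		/* contended: drop the preempt count while busy-waiting */
		preempt_enable();
		do {
			cpu_relax();
		} while (test_bit(bitnum, addr));
		preempt_disable();
	}
#endif
	__acquire(bitlock);
}

so what the lock side disables is preemption, not interrupts.)
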
>
> Signed-off-by: Vlastimil Babka
> Acked-by: David Rientjes
> ---
>  mm/slub.c | 39 ++++++++++++---------------------------
>  1 file changed, 12 insertions(+), 27 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0444a2ba4f12..bb8c1292d7e8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -446,7 +446,7 @@ slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
>  /*
>   * Per slab locking using the pagelock
>   */
> -static __always_inline void __slab_lock(struct slab *slab)
> +static __always_inline void slab_lock(struct slab *slab)
>  {
>  	struct page *page = slab_page(slab);
>
> @@ -454,7 +454,7 @@ static __always_inline void __slab_lock(struct slab *slab)
>  	bit_spin_lock(PG_locked, &page->flags);
>  }
>
> -static __always_inline void __slab_unlock(struct slab *slab)
> +static __always_inline void slab_unlock(struct slab *slab)
>  {
>  	struct page *page = slab_page(slab);
>
> @@ -462,24 +462,12 @@ static __always_inline void __slab_unlock(struct slab *slab)
>  	__bit_spin_unlock(PG_locked, &page->flags);
>  }
>
> -static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
> -{
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_save(*flags);
> -	__slab_lock(slab);
> -}
> -
> -static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
> -{
> -	__slab_unlock(slab);
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_restore(*flags);
> -}
> -
>  /*
>   * Interrupts must be disabled (for the fallback code to work right), typically
> - * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
> - * so we disable interrupts as part of slab_[un]lock().
> + * by an _irqsave() lock variant. Except on PREEMPT_RT where these variants do
> + * not actually disable interrupts. On the other hand the migrate_disable()

You mean preempt_disable()? migrate_disable() will not be enough.

> + * done by bit_spin_lock() is sufficient on PREEMPT_RT thanks to its threaded
> + * interrupts.
>   */
>  static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
>  		void *freelist_old, unsigned long counters_old,
> @@ -498,18 +486,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
>  	} else
>  #endif
>  	{
> -		/* init to 0 to prevent spurious warnings */
> -		unsigned long flags = 0;
> -
> -		slab_lock(slab, &flags);
> +		slab_lock(slab);
>  		if (slab->freelist == freelist_old &&
>  		    slab->counters == counters_old) {
>  			slab->freelist = freelist_new;
>  			slab->counters = counters_new;
> -			slab_unlock(slab, &flags);
> +			slab_unlock(slab);
>  			return true;
>  		}
> -		slab_unlock(slab, &flags);
> +		slab_unlock(slab);
>  	}
>
>  	cpu_relax();
> @@ -540,16 +525,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
>  	unsigned long flags;
>
>  	local_irq_save(flags);
> -	__slab_lock(slab);
> +	slab_lock(slab);
>  	if (slab->freelist == freelist_old &&
>  	    slab->counters == counters_old) {
>  		slab->freelist = freelist_new;
>  		slab->counters = counters_new;
> -		__slab_unlock(slab);
> +		slab_unlock(slab);
>  		local_irq_restore(flags);
>  		return true;
>  	}
> -	__slab_unlock(slab);
> +	slab_unlock(slab);
>  	local_irq_restore(flags);
>  }
>
> --
> 2.37.2

Otherwise looks good to me.

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

--
Thanks,
Hyeonggon