Date: Wed, 24 Aug 2022 19:24:20 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
    Pekka Enberg, Roman Gushchin, linux-mm@kvack.org,
    Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith
Subject: Re: [PATCH v2 5/5] mm/slub: simplify __cmpxchg_double_slab() and
 slab_[un]lock()
In-Reply-To: <20220823170400.26546-6-vbabka@suse.cz>
On Tue, Aug 23, 2022 at 07:04:00PM +0200, Vlastimil Babka wrote:
> The PREEMPT_RT specific disabling of irqs in __cmpxchg_double_slab()
> (through slab_[un]lock()) is unnecessary as bit_spin_lock() disables
> preemption and that's sufficient on RT where interrupts are threaded.
>
> That means we no longer need the slab_[un]lock() wrappers, so delete
> them and rename the current __slab_[un]lock() to slab_[un]lock().
>

I'm not familiar with the PREEMPT_RT preemption model, so I'm not sure
I'm following.

1) Does "interrupts are threaded on RT" mean that _most_ interrupts
   (all handlers that did not specify IRQF_NO_THREAD) are deferred to
   irq threads and handled later in process context, and that the
   kernel therefore *never* takes spinlock_t or local_lock_t (which on
   RT do not disable interrupts, and may sleep) from hardware/software
   interrupt context?

2) Do we still need to disable irqs in cmpxchg_double_slab() on RT?

BTW, is there good documentation or a paper on the PREEMPT_RT
preemption model? I tried to find some but only found
Documentation/locking/locktypes.rst :(

Thanks!
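P.S. To make question 1) concrete, my (possibly wrong) mental model of
bit_spin_lock() is the simplified sketch below, based on my reading of
include/linux/bit_spinlock.h -- it disables preemption before taking
the bit lock, so even on RT no irq thread can run on that CPU while
the lock is held:

	/*
	 * Simplified sketch; the real code wraps the spin loop in
	 * CONFIG_SMP/CONFIG_DEBUG_SPINLOCK guards and sparse annotations.
	 */
	static inline void bit_spin_lock(int bitnum, unsigned long *addr)
	{
		preempt_disable();
		while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
			/* re-enable preemption while spinning */
			preempt_enable();
			do {
				cpu_relax();
			} while (test_bit(bitnum, addr));
			preempt_disable();
		}
	}

If that reading is right, I can see why the local_irq_save() in the
old slab_lock() wrapper was redundant on RT.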
> Signed-off-by: Vlastimil Babka
> Acked-by: David Rientjes
> ---
>  mm/slub.c | 39 ++++++++++++---------------------------
>  1 file changed, 12 insertions(+), 27 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0444a2ba4f12..bb8c1292d7e8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -446,7 +446,7 @@ slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
>  /*
>   * Per slab locking using the pagelock
>   */
> -static __always_inline void __slab_lock(struct slab *slab)
> +static __always_inline void slab_lock(struct slab *slab)
>  {
>  	struct page *page = slab_page(slab);
>
> @@ -454,7 +454,7 @@ static __always_inline void __slab_lock(struct slab *slab)
>  	bit_spin_lock(PG_locked, &page->flags);
>  }
>
> -static __always_inline void __slab_unlock(struct slab *slab)
> +static __always_inline void slab_unlock(struct slab *slab)
>  {
>  	struct page *page = slab_page(slab);
>
> @@ -462,24 +462,12 @@ static __always_inline void __slab_unlock(struct slab *slab)
>  	__bit_spin_unlock(PG_locked, &page->flags);
>  }
>
> -static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
> -{
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_save(*flags);
> -	__slab_lock(slab);
> -}
> -
> -static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
> -{
> -	__slab_unlock(slab);
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_restore(*flags);
> -}
> -
>  /*
>   * Interrupts must be disabled (for the fallback code to work right), typically
> - * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
> - * so we disable interrupts as part of slab_[un]lock().
> + * by an _irqsave() lock variant. Except on PREEMPT_RT where these variants do
> + * not actually disable interrupts. On the other hand the migrate_disable()
> + * done by bit_spin_lock() is sufficient on PREEMPT_RT thanks to its threaded
> + * interrupts.
>   */
>  static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
>  		void *freelist_old, unsigned long counters_old,
> @@ -498,18 +486,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
>  	} else
>  #endif
>  	{
> -		/* init to 0 to prevent spurious warnings */
> -		unsigned long flags = 0;
> -
> -		slab_lock(slab, &flags);
> +		slab_lock(slab);
>  		if (slab->freelist == freelist_old &&
>  		    slab->counters == counters_old) {
>  			slab->freelist = freelist_new;
>  			slab->counters = counters_new;
> -			slab_unlock(slab, &flags);
> +			slab_unlock(slab);
>  			return true;
>  		}
> -		slab_unlock(slab, &flags);
> +		slab_unlock(slab);
>  	}
>
>  	cpu_relax();
> @@ -540,16 +525,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
>  	unsigned long flags;
>
>  	local_irq_save(flags);
> -	__slab_lock(slab);
> +	slab_lock(slab);
>  	if (slab->freelist == freelist_old &&
>  	    slab->counters == counters_old) {
>  		slab->freelist = freelist_new;
>  		slab->counters = counters_new;
> -		__slab_unlock(slab);
> +		slab_unlock(slab);
>  		local_irq_restore(flags);
>  		return true;
>  	}
> -	__slab_unlock(slab);
> +	slab_unlock(slab);
>  	local_irq_restore(flags);
>  }
>
> --
> 2.37.2
>

--
Thanks,
Hyeonggon