Subject: Re: [PATCH v5 2/4] MCS Lock: optimizations and extra comments
From: Tim Chen
To: paulmck@linux.vnet.ibm.com
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, linux-kernel@vger.kernel.org,
	linux-mm, linux-arch@vger.kernel.org, Linus Torvalds, Waiman Long,
	Andrea Arcangeli, Alex Shi, Andi Kleen, Michel Lespinasse,
	Davidlohr Bueso, Matthew R Wilcox, Dave Hansen, Peter Zijlstra,
	Rik van Riel, Peter Hurley, Raghavendra K T, George Spelvin,
	"H. Peter Anvin", Arnd Bergmann, Aswin Chandramouleeswaran,
	Scott J Norton, Will Deacon, "Figo.zhang"
Date: Tue, 19 Nov 2013 11:42:35 -0800
Message-ID: <1384890155.11046.435.camel@schen9-DESK>
In-Reply-To: <20131119191310.GO4138@linux.vnet.ibm.com>
References: <1383940325.11046.415.camel@schen9-DESK>
	 <20131119191310.GO4138@linux.vnet.ibm.com>

On Tue, 2013-11-19 at 11:13 -0800, Paul E. McKenney wrote:
> On Fri, Nov 08, 2013 at 11:52:05AM -0800, Tim Chen wrote:
> > From: Jason Low
> >
> > Remove an unnecessary operation, and mark the cmpxchg(lock, node, NULL) == node
> > check in mcs_spin_unlock() likely(), as most of the time a race does not occur.
> >
> > Also add more comments describing how the local node is used in MCS locks.
> >
> > Reviewed-by: Tim Chen
> > Signed-off-by: Jason Low
> > ---
> >  include/linux/mcs_spinlock.h | 13 +++++++++++--
> >  1 file changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
> > index b5de3b0..96f14299 100644
> > --- a/include/linux/mcs_spinlock.h
> > +++ b/include/linux/mcs_spinlock.h
> > @@ -18,6 +18,12 @@ struct mcs_spinlock {
> >  };
> >
> >  /*
> > + * In order to acquire the lock, the caller should declare a local node and
> > + * pass a reference to the node to this function in addition to the lock.
> > + * If the lock has already been acquired, then this will proceed to spin
> > + * on node->locked until the previous lock holder sets node->locked
> > + * in mcs_spin_unlock().
> > + *
> >   * We don't inline mcs_spin_lock() so that perf can correctly account for the
> >   * time spent in this lock function.
> >   */
> > @@ -33,7 +39,6 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	prev = xchg(lock, node);
> >  	if (likely(prev == NULL)) {
> >  		/* Lock acquired */
> > -		node->locked = 1;
>
> Agreed, no one looks at this field in this case, so no need to initialize
> it, unless for debug purposes.
>
> >  		return;
> >  	}
> >  	ACCESS_ONCE(prev->next) = node;
> > @@ -43,6 +48,10 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	arch_mutex_cpu_relax();
> >  }
> >
> > +/*
> > + * Releases the lock. The caller should pass in the corresponding node that
> > + * was used to acquire the lock.
> > + */
> >  static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  {
> >  	struct mcs_spinlock *next = ACCESS_ONCE(node->next);
> >
> > @@ -51,7 +60,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> >  	/*
> >  	 * Release the lock by setting it to NULL
> >  	 */
> > -	if (cmpxchg(lock, node, NULL) == node)
> > +	if (likely(cmpxchg(lock, node, NULL) == node))
>
> Agreed here as well.
> Takes a narrow race to hit this.
>
> So, did your testing exercise this path? If the answer is "yes", and
> if the issues that I called out in patch #1 are resolved:

I haven't instrumented the code to check the hit rate of this path, but
the slow path should only be hit in some cases under heavy contention.

> Reviewed-by: Paul E. McKenney
>
> >  		return;
> >  	/* Wait until the next pointer is set */
> >  	while (!(next = ACCESS_ONCE(node->next)))
> > --
> > 1.7.4.4
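[As background on the annotation the patch adds: in the kernel, likely()/unlikely() expand to GCC/Clang's __builtin_expect(), which only biases code layout toward the expected branch and does not change semantics. A minimal user-space sketch, assuming a GCC-compatible compiler; release_fast_path() is a hypothetical stand-in for the cmpxchg check in mcs_spin_unlock():]

```c
/* Kernel-style branch-prediction hints (GCC/Clang builtin). */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Mirrors the shape of the unlock fast path: the cmpxchg succeeding
 * (no racing waiter) is the common case, so it is laid out as the
 * straight-line fall-through; the queued-waiter case is the cold path. */
static int release_fast_path(int cmpxchg_succeeded)
{
	if (likely(cmpxchg_succeeded))
		return 1;	/* hot path: no one queued behind us */
	return 0;		/* cold path: must wait for the successor */
}
```

release_fast_path(1) returns 1 and release_fast_path(0) returns 0 regardless of the hint; only the generated branch layout differs.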