linux-mm.kvack.org archive mirror
From: Tim Chen <tim.c.chen@linux.intel.com>
To: Waiman Long <waiman.long@hp.com>,
	Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Jason Low <jason.low2@hp.com>, Ingo Molnar <mingo@elte.hu>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Alex Shi <alex.shi@linaro.org>, Andi Kleen <andi@firstfloor.org>,
	Michel Lespinasse <walken@google.com>,
	Davidlohr Bueso <davidlohr.bueso@hp.com>,
	Matthew R Wilcox <matthew.r.wilcox@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Rik van Riel <riel@redhat.com>,
	Peter Hurley <peter@hurleysoftware.com>,
	linux-kernel@vger.kernel.org, linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
Date: Wed, 02 Oct 2013 11:43:11 -0700	[thread overview]
Message-ID: <1380739391.11046.73.camel@schen9-DESK> (raw)
In-Reply-To: <524B75F0.2070005@hp.com>

On Tue, 2013-10-01 at 21:25 -0400, Waiman Long wrote: 
> On 10/01/2013 05:16 PM, Tim Chen wrote:
> > On Tue, 2013-10-01 at 16:01 -0400, Waiman Long wrote:
> >>>
> >>> The cpu could still be executing out of order load instruction from the
> >>> critical section before checking node->locked?  Probably smp_mb() is
> >>> still needed.
> >>>
> >>> Tim
> >> But this is the lock function, a barrier() call should be enough to
> >> prevent the critical section from creeping up there. We certainly need
> >> some kind of memory barrier at the end of the unlock function.
> > I may be missing something.  My understanding is that barrier only
> > prevents the compiler from rearranging instructions, but not for cpu out
> > of order execution (as in smp_mb). So cpu could read memory in the next
> > critical section, before node->locked is true, (i.e. unlock has been
> > completed).  If we only have a simple barrier at end of mcs_lock, then
> > say the code on CPU1 is
> >
> > 	mcs_lock
> > 	x = 1;
> > 	...
> > 	x = 2;
> > 	mcs_unlock
> >
> > and CPU 2 is
> >
> > 	mcs_lock
> > 	y = x;
> > 	...
> > 	mcs_unlock
> >
> > We expect y to be 2 after the "y = x" assignment.  But we
> > may execute the code as
> >
> > 	CPU1		CPU2
> > 		
> > 	x = 1;
> > 	...		y = x;  ( y=1, out of order load)
> > 	x = 2
> > 	mcs_unlock
> > 			Check node->locked==true
> > 			continue executing critical section (y=1 when we expect y=2)
> >
> > So we get y to be 1 when we expect that it should be 2.  Adding smp_mb
> > after the node->locked check in lock code
> >
> >             ACCESS_ONCE(prev->next) = node;
> >             /* Wait until the lock holder passes the lock down */
> >             while (!ACCESS_ONCE(node->locked))
> >                      arch_mutex_cpu_relax();
> >             smp_mb();
> >
> > should prevent this scenario.
> >
> > Thanks.
> > Tim
> 
> If the lock and unlock functions are done right, there should be no 
> overlap of critical section. So it is job of the lock/unlock functions 
> to make sure that critical section code won't leak out. There should be 
> some kind of memory barrier at the beginning of the lock function and 
> the end of the unlock function.
> 
> The critical section is also likely to have branches. The CPU may 
> speculatively execute code on both branches, but one of them will be 
> discarded once the branch condition is known. Also, 
> arch_mutex_cpu_relax() is a compiler barrier by itself, so we may not 
> need a barrier() after all. The while statement is a branch instruction; 
> any code after it can only be speculatively executed and cannot be 
> committed until the branch is done.

But couldn't the condition be checked only after speculative execution
has already begun?  The condition may be false while the loads are
being speculated, and only turn true by the time we actually check it
and take the branch.

The thing that bothers me is that without a memory barrier after the
while statement, we could speculatively execute loads before affirming
the lock is in the acquired state.  Then when we check the lock, it has
been set to the acquired state in the meantime and the speculated
results are retained: we could be loading some memory entry *before*
node->locked has been set true.  I think an smp_rmb() (if not an
smp_mb()) should be placed after the while statement.
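To make the worry concrete, here is a userspace C11 sketch of the two-CPU example above (illustrative names, not the kernel patch): the release store stands in for the unlocker setting node->locked, and the acquire load after the spin provides exactly the ordering the proposed barrier would.  With both in place, y is guaranteed to be 2.

```c
#include <stdatomic.h>
#include <pthread.h>

static int x;              /* data protected by the lock handoff */
static int y;
static atomic_int locked;  /* stands in for node->locked */

/* CPU1: the critical section from the example, then the unlock. */
static void *cpu1(void *arg)
{
    (void)arg;
    x = 1;
    x = 2;
    /* mcs_unlock: the release store orders the x stores before the handoff */
    atomic_store_explicit(&locked, 1, memory_order_release);
    return NULL;
}

/* CPU2: the spin loop from mcs_lock, then the critical section. */
static void *cpu2(void *arg)
{
    (void)arg;
    /* The acquire load plays the role of the smp_rmb()/smp_mb() after
     * the while statement: the load of x below cannot be hoisted above it. */
    while (!atomic_load_explicit(&locked, memory_order_acquire))
        ;
    y = x;  /* must observe x == 2 */
    return NULL;
}
```

Weaken the acquire to memory_order_relaxed (the analogue of spinning with only a compiler barrier) and the C11 memory model no longer forbids y == 1.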

At first I was also thinking that the memory barrier was not 
necessary, but Paul convinced me otherwise in a previous email: 
https://lkml.org/lkml/2013/9/27/523 

> 
> In x86, smp_mb() translates to an mfence instruction, which 
> costs time. That is why I try to get rid of it if it is not necessary.
> 

I also hope that the memory barrier is not necessary and that I am
simply missing something obvious, but I haven't been able to persuade
myself.

Tim 




Thread overview: 64+ messages
     [not found] <cover.1380144003.git.tim.c.chen@linux.intel.com>
2013-09-25 22:10 ` [PATCH v6 0/6] rwsem: performance optimizations Tim Chen
2013-09-25 22:10 ` [PATCH v6 1/6] rwsem: check the lock before cpmxchg in down_write_trylock Tim Chen
2013-09-25 22:10 ` [PATCH v6 2/6] rwsem: remove 'out' label in do_wake Tim Chen
2013-09-25 22:10 ` [PATCH v6 3/6] rwsem: remove try_reader_grant label do_wake Tim Chen
2013-09-25 22:10 ` [PATCH v6 4/6] rwsem/wake: check lock before do atomic update Tim Chen
2013-09-25 22:10 ` [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file Tim Chen
2013-09-26  6:46   ` Ingo Molnar
2013-09-26  8:40     ` Peter Zijlstra
2013-09-26  9:37       ` Ingo Molnar
2013-09-26 18:18       ` Tim Chen
2013-09-26 19:27   ` Jason Low
2013-09-26 20:06     ` Davidlohr Bueso
2013-09-26 20:23       ` Jason Low
2013-09-26 20:40         ` Davidlohr Bueso
2013-09-26 21:09           ` Jason Low
2013-09-26 21:41             ` Tim Chen
2013-09-26 22:42               ` Jason Low
2013-09-26 22:57                 ` Tim Chen
2013-09-27  6:02                   ` Ingo Molnar
2013-09-27  6:26                     ` Jason Low
2013-09-27 11:23                     ` Peter Zijlstra
2013-09-27 13:44                       ` Joe Perches
2013-09-27 13:48                         ` Peter Zijlstra
2013-09-27 14:05                           ` Joe Perches
2013-09-27 14:18                             ` Peter Zijlstra
2013-09-27 14:14                           ` [PATCH] checkpatch: Make the memory barrier test noisier Joe Perches
2013-09-27 14:26                             ` Peter Zijlstra
2013-09-27 14:34                               ` Joe Perches
2013-09-27 14:50                                 ` Peter Zijlstra
2013-09-27 15:17                                   ` Paul E. McKenney
2013-09-27 15:34                                     ` Peter Zijlstra
2013-09-27 16:04                                       ` Paul E. McKenney
2013-09-27 23:40                                   ` Oliver Neukum
2013-09-28  7:54                                     ` Peter Zijlstra
2013-09-27 16:12                     ` [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file Jason Low
2013-09-27 16:19                       ` Tim Chen
2013-10-02 19:19                 ` Waiman Long
2013-10-02 19:30                   ` Jason Low
2013-10-02 19:37                     ` Waiman Long
2013-09-26 22:22             ` Davidlohr Bueso
2013-09-27 15:29   ` Paul E. McKenney
2013-09-27 18:09     ` Tim Chen
2013-09-28  2:58       ` Waiman Long
2013-09-27 19:38     ` Tim Chen
2013-09-27 20:16       ` Jason Low
2013-09-27 20:38       ` Paul E. McKenney
2013-09-27 22:46         ` Tim Chen
2013-09-27 23:01           ` Paul E. McKenney
2013-09-27 23:54             ` Jason Low
2013-09-28  0:02               ` Davidlohr Bueso
2013-09-28  2:19               ` Paul E. McKenney
2013-09-28  4:34                 ` Jason Low
2013-09-30 15:51                   ` Waiman Long
2013-09-30 16:10                     ` Jason Low
2013-09-30 16:36                       ` Waiman Long
2013-10-01 16:48                         ` Tim Chen
2013-10-01 20:01                           ` Waiman Long
2013-10-01 21:16                             ` Tim Chen
2013-10-02  1:25                               ` Waiman Long
2013-10-02 18:43                                 ` Tim Chen [this message]
2013-10-02 19:32                                   ` Waiman Long
2013-09-30 16:28                 ` Tim Chen
2013-09-25 22:10 ` [PATCH v6 6/6] rwsem: do optimistic spinning for writer lock acquisition Tim Chen
2013-09-26  6:53   ` Ingo Molnar
