From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail172.messagelabs.com (mail172.messagelabs.com [216.82.254.3])
	by kanga.kvack.org (Postfix) with SMTP id AD0726B0044
	for ; Mon, 23 Nov 2009 09:32:53 -0500 (EST)
Date: Mon, 23 Nov 2009 08:31:40 -0600 (CST)
From: Christoph Lameter
Subject: Re: [MM] Make mm counters per cpu instead of atomic
In-Reply-To: <1258966270.29789.45.camel@localhost>
Message-ID:
References: <1258440521.11321.32.camel@localhost> <1258443101.11321.33.camel@localhost>
 <1258450465.11321.36.camel@localhost> <1258966270.29789.45.camel@localhost>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
To: "Zhang, Yanmin"
Cc: KAMEZAWA Hiroyuki, "hugh.dickins@tiscali.co.uk",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 akpm@linux-foundation.org, Tejun Heo, Andi Kleen
List-ID:

On Mon, 23 Nov 2009, Zhang, Yanmin wrote:

> Another theoretical issue is the following scenario:
> Process A takes the read lock on CPU 0 and is then scheduled to CPU 2,
> where it unlocks. It is then scheduled back to CPU 0 and repeats the
> cycle. Eventually the per-CPU reader counter will overflow. With
> multiple threads it may overflow faster than we expect. When it
> overflows, processes will hang.

True... We need to find some alternative to per-cpu data to scale
mmap_sem then.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org