From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 9 May 2007 18:44:13 -0700 (PDT)
From: Christoph Lameter
Subject: Re: [patch] check cpuset mems_allowed for sys_mbind
In-Reply-To:
Message-ID:
References: <20070509164859.15dd347b.pj@sgi.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Ken Chen
Cc: Paul Jackson , akpm@linux-foundation.org, linux-mm@kvack.org
List-ID:

On Wed, 9 May 2007, Ken Chen wrote:

> On 5/9/07, Christoph Lameter wrote:
> > > However, mbind shouldn't create a discrepancy between what is allowed
> > > and what is promised, especially with the MPOL_BIND policy. Since a
> > > NUMA-aware app has already gone to such detail to request memory
> > > placement on a specific nodemask, it fully expects memory to be
> > > placed there for performance reasons. If the kernel lies about it,
> > > we get a very unpleasant performance issue.
> >
> > How does the kernel lie? The memory is placed given the current cpuset
> > and memory policy restrictions.
>
> sys_mbind lies. A task in a cpuset that has mems=0-7 can do
> sys_mbind(MPOL_BIND, 0x100, ...) and such a call will return success.

I thought we assume that people know what they are doing if they run
such NUMA applications?

I do not think there is an easy way out given the current way of
managing memory policies and allocation constraints.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org