From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 15 Nov 2005 04:18:22 -0800
From: William Lee Irwin III
Subject: Re: [RFC] NUMA memory policy support for HUGE pages
Message-ID: <20051115121822.GB6916@holomorphy.com>
References: <1131980814.13502.12.camel@localhost.localdomain>
 <1132007410.13502.35.camel@localhost.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Christoph Lameter
Cc: Adam Litke, linux-mm@kvack.org, ak@suse.de, linux-kernel@vger.kernel.org,
 kenneth.w.chen@intel.com
List-ID:

On Mon, 14 Nov 2005, Adam Litke wrote:
>> IMHO this is not really a cleanup. When the demand fault patch stack
>> was first accepted, we decided to separate out find_or_alloc_huge_page()
>> because it has the page_cache retry loop with several exit conditions.
>> no_page() has its own backout logic and mixing the two makes for a
>> tangled mess. Can we leave that hunk out please?

On Mon, Nov 14, 2005 at 03:25:00PM -0800, Christoph Lameter wrote:
> It seemed to me that find_or_alloc_huge_pages has a pretty simple backout
> logic that folds nicely into no_page(). Both functions share a lot of
> variables and putting them together not only increases the readability of
> the code but also makes the function smaller and execution more efficient.

Looks like this is on the road to inclusion and so on. I'm not picky
about either approach wrt. nopage/etc. and find_or_alloc_huge_page()
affairs. Just get a consensus together and send it in. Thanks.

-- wli

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org