From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 26 Feb 2008 23:56:39 -0800 (PST)
From: David Rientjes
Subject: Re: [RFC][PATCH] page reclaim throttle take2
In-Reply-To: <20080227165139.18e5933e.kamezawa.hiroyu@jp.fujitsu.com>
Message-ID: 
References: <47C4F9C0.5010607@linux.vnet.ibm.com> <20080227160746.425E.KOSAKI.MOTOHIRO@jp.fujitsu.com> <20080227165139.18e5933e.kamezawa.hiroyu@jp.fujitsu.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
Return-Path: 
To: KAMEZAWA Hiroyuki
Cc: KOSAKI Motohiro, Balbir Singh, Peter Zijlstra, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Rik van Riel, Lee Schermerhorn, Nick Piggin
List-ID: 

On Wed, 27 Feb 2008, KAMEZAWA Hiroyuki wrote:

> Hmm, but kswapd, which is main worker of page reclaiming, is per-node.
> And reclaim is done based on zone.
> per-zone/per-node throttling seems to make sense.

That's another argument for not introducing the sysctl; the number of
nodes and zones is a static property of the machine that cannot change
without a reboot (numa=fake, mem=, introducing movable zones, etc.).  We
don't have node hotplug that can suddenly introduce additional zones from
which to reclaim.

My point was that there doesn't appear to be any use case for tuning this
via a sysctl that isn't simply attempting to work around some other
reclaim problem when the VM is stressed.  If that's agreed upon, then the
decision between a config option that is either per-cpu or per-node
should be based on the benchmarks that you've run.  At this time, it
appears that per-node is the more advantageous.

> I know his environment has 4cpus per node but throttle to 3 was the best
> number in his measurement. Then it seems num-per-cpu is excessive.
> (At least, ratio(%) is better.)

That seems to indicate that the NUMA topology is more important than lock
contention for the reclaim throttle.

		David

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org.
For more info on Linux MM, see: http://www.linux-mm.org/. Don't email: email@kvack.org