From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from digeo-nav01.digeo.com (digeo-nav01.digeo.com [192.168.1.233]) by packet.digeo.com (8.9.3+Sun/8.9.3) with SMTP id MAA10513 for ; Mon, 9 Sep 2002 12:56:13 -0700 (PDT)
Message-ID: <3D7CFCCC.1A6A686A@digeo.com>
Date: Mon, 09 Sep 2002 12:55:56 -0700
From: Andrew Morton
MIME-Version: 1.0
Subject: Re: [PATCH] modified segq for 2.5
References: <3D7CF077.FB251EC7@digeo.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
Return-Path: 
To: Rik van Riel
Cc: William Lee Irwin III, sfkaplan@cs.amherst.edu, linux-mm@kvack.org
List-ID: 

Rik van Riel wrote:
>
> On Mon, 9 Sep 2002, Andrew Morton wrote:
> > Rik van Riel wrote:
> > > On Mon, 9 Sep 2002, Andrew Morton wrote:
> > >
> > > > I fiddled with it a bit: did you forget to move the write(2) pages
> > > > to the inactive list?  I changed it to do that at IO completion.
> > > > It had little effect.  We should probably be looking at the page
> > > > state before doing that.
> > >
> > > Hmmm, indeed, I forgot this.  Note that IO completion is
> > > too late, since by then you'll have already pushed other pages
> > > out to the inactive list...
> >
> > OK.  So how would you like to handle those pages?
>
> Move them to the inactive list the moment we're done writing
> them, that is, the moment we move on to the next page.  We
> wouldn't want to move the last page of /var/log/messages to
> the inactive list all the time ;)

That's easy.

> > > > The inactive list was smaller with this patch.  Around 10%
> > > > of allocatable memory usually.
> > >
> > > It should be a bit bigger than this, I think.  If it isn't,
> > > something may be going wrong ;)
> >
> > Well, the working set _was_ large.  Sure, we'll be running refill_inactive
> > a lot.  But spending some CPU in there with this sort of workload is the
> > right thing to do, if it ends up in better replacement decisions.  So
> > it doesn't seem to be a problem per se?
>
> OK, in that case there's no problem.  If the working set
> really does take 90% of RAM, that's a good thing to know ;)

The working set appears to be 100.000% of RAM, hence the wild
swings in throughput when you give or take half a meg.

> > Generally, where do you want to go with this code?
>
> If this code turns out to be more predictable and to give better
> or equal performance to use-once, I'd like to see it in
> the kernel.  Use-once seems just too hard to tune right
> for all workloads.

gack.  How do we judge that, without waiting a month and measuring
the complaint level?

(Here I go again).
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/