From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Dec 2004 09:34:01 -0800
From: cliff white
Subject: Re: page fault scalability patch V12 [0/7]: Overview and performance tests
Message-Id: <20041202093401.103478e2.cliffw@osdl.org>
In-Reply-To: <20041201230217.1d2071a8.akpm@osdl.org>
References: <41AEB44D.2040805@pobox.com>
	<20041201223441.3820fbc0.akpm@osdl.org>
	<41AEBAB9.3050705@pobox.com>
	<20041201230217.1d2071a8.akpm@osdl.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Andrew Morton
Cc: jgarzik@pobox.com, torvalds@osdl.org, clameter@sgi.com, hugh@veritas.com,
	benh@kernel.crashing.org, nickpiggin@yahoo.com.au, linux-mm@kvack.org,
	linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org
List-ID:

On Wed, 1 Dec 2004 23:02:17 -0800
Andrew Morton wrote:

> Jeff Garzik wrote:
> >
> > Andrew Morton wrote:
> > > We need to be achieving higher-quality major releases than we did in
> > > 2.6.8 and 2.6.9.  Really the only tool we have to ensure this is longer
> > > stabilisation periods.
> >
> > I'm still hoping that distros (like my employer) and orgs like OSDL will
> > step up, and hook 2.6.x BK snapshots into daily test harnesses.
>
> I believe that both IBM and OSDL are doing this, or are getting geared up
> to do this.  With both Linus bk and -mm.

Gee, OSDL has been doing this sort of testing for > 1 year now.
Getting bandwidth to look at the results has been a problem. We need
more eyeballs and community support badly; I'm very glad Marcelo has
shown recent interest.

> However I have my doubts about how useful it will end up being.  These test
> suites don't seem to pick up many regressions.  I've challenged Gerrit to
> go back through a release cycle's bugfixes and work out how many of those
> bugs would have been detected by the test suite.
>
> My suspicion is that the answer will be "a very small proportion", and that
> really is the bottom line.
>
> We simply get far better coverage testing by releasing code, because of all
> the wild, whacky and weird things which people do with their computers.
> Bless them.
>
> > Something like John Cherry's reports to lkml on warnings and errors
> > would be darned useful.  His reports are IMO an ideal model: show
> > day-to-day _changes_ in test results.  Don't just dump a huge list of
> > testsuite results, results which are often clogged with expected
> > failures and testsuite bug noise.
>
> Yes, we need humans between the tests and the developers.  Someone who has
> good experience with the tests and who can say "hey, something changed
> when I do X".  If nothing changed, we don't hear anything.

I would agree, and would do almost anything to help/assist/enable any
humans interested. We need some expertise on when to run certain tests,
to avoid data overload.

I've noticed that when developers submit test results with a patch, it
sometimes helps in the decision on patch acceptance. Is there a way to
promote this sort of behaviour?

cliffw
OSDL

> It's a developer role, not a testing role.  All testing is, really.
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: aart@kvack.org

-- 
The church is near, but the road is icy.
The bar is far, but I will walk carefully.
- Russian proverb

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org