linux-mm.kvack.org archive mirror
From: Balbir Singh <bsingharora@gmail.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs
Date: Tue, 29 Jan 2019 21:43:28 +1100	[thread overview]
Message-ID: <20190129104328.GJ26056@350D> (raw)
In-Reply-To: <20190128113442.GG18811@dhcp22.suse.cz>

On Mon, Jan 28, 2019 at 12:34:42PM +0100, Michal Hocko wrote:
> On Mon 28-01-19 22:20:33, Balbir Singh wrote:
> > Sending a patch to linux-mm today has become a complex task. One of the
> > reasons for the complexity is a lack of fundamental expectation of what
> > tests to run.
> > 
> > Mel Gorman has a set of tests [1], but there is no easy way to select
> > which tests to run. Some of them are proprietary (spec*), and others
> > have varying run times. A single-line change may require hours or days
> > of testing; add to that the complexity of configuration. It takes a lot
> > of tweaking and frequent test spawning to settle on what to run, which
> > configuration to choose, and what benefit to show.
> > 
> > The proposal is to have a discussion on how to design a good sanity
> > test suite for the mm subsystem, which could potentially include
> > OOM test cases and known problem patterns, along with proposed changes.
> 
> I am not sure I follow. What is the problem you would like to solve?
> If tests are taking too long, then most probably there is a good reason
> for that. Are you thinking of any specific tests which should be run or
> even included in MM tests or similar?

Let me elaborate: every time I find something interesting to develop or
fix, I think about how to test the changes. Even for well-established
code (such as reclaim) or other features, it's hard to find good test
cases to run as a baseline to ensure that

1. There is good coverage of tests against the changes
2. The right test cases have been run from a performance perspective

The reason I brought up time was not the runtime of a single test, but
the cumulative runtime of all the tests in the absence of good guidance
for (1) and (2) above.

IOW, what guidance can we provide to patch writers and bug fixers in terms
of what testing to carry out? How do we avoid biases in results and
ensure consistency?
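
As a rough illustration of what such guidance could look like in
practice, here is a small sketch of time-budgeted test selection by
touched subsystem. It is entirely hypothetical: the test names,
subsystem tags, and runtimes are made up for the example and do not
correspond to any existing suite.

```python
# Hypothetical sketch: pick a sanity subset of mm tests for a patch,
# given the subsystems the patch touches and a runtime budget. This is
# one possible shape for the guidance asked about above, not a real tool.

from dataclasses import dataclass


@dataclass(frozen=True)
class TestCase:
    name: str
    subsystems: tuple  # mm areas the test exercises, e.g. ("reclaim",)
    est_minutes: int   # rough expected runtime (illustrative numbers)


# Invented catalog entries, purely for illustration.
CATALOG = [
    TestCase("usemem-stress", ("reclaim",), 20),
    TestCase("oom-basic", ("oom",), 5),
    TestCase("thp-compaction", ("compaction", "thp"), 45),
    TestCase("pagecache-read", ("reclaim", "pagecache"), 10),
]


def select_tests(touched, budget_minutes):
    """Pick tests covering the touched subsystems, cheapest first,
    stopping once the cumulative runtime would exceed the budget."""
    relevant = [t for t in CATALOG
                if any(s in t.subsystems for s in touched)]
    relevant.sort(key=lambda t: t.est_minutes)
    picked, total = [], 0
    for t in relevant:
        if total + t.est_minutes <= budget_minutes:
            picked.append(t.name)
            total += t.est_minutes
    return picked, total


# e.g. a reclaim patch with half an hour to spend:
names, minutes = select_tests({"reclaim"}, 30)
```

The point of the sketch is only that a shared, machine-readable mapping
from subsystems to tests (with honest runtime estimates) would let patch
authors answer (1) and (2) consistently instead of guessing.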

Balbir Singh.


  reply	other threads:[~2019-01-29 10:43 UTC|newest]

Thread overview: 4+ messages
2019-01-28 11:20 Balbir Singh
2019-01-28 11:34 ` Michal Hocko
2019-01-29 10:43   ` Balbir Singh [this message]
2019-01-29 11:26     ` Michal Hocko
