From: Donald Zickus <dzickus@redhat.com>
To: David Gow <davidgow@google.com>
Cc: workflows@vger.kernel.org,
automated-testing@lists.yoctoproject.org,
linux-kselftest@vger.kernel.org,
kernelci <kernelci@lists.linux.dev>,
Nikolai Kondrashov <nkondras@redhat.com>,
Gustavo Padovan <gustavo.padovan@collabora.com>,
kernelci-members <kernelci-members@groups.io>,
laura.nao@collabora.com
Subject: Re: [RFC] Test catalog template
Date: Thu, 21 Nov 2024 10:28:08 -0500 [thread overview]
Message-ID: <CAK18DXZCgRpS=kHgh9xGmjE9dO2s7Gm61m_yq8QUAZEMMUOEyw@mail.gmail.com> (raw)
In-Reply-To: <CABVgOS=ccEK2+ighe=7K3Ja-vH0=fthK4LgV0kHTuNCgO9JzzQ@mail.gmail.com>
Hi David,
On Wed, Nov 20, 2024 at 3:16 AM David Gow <davidgow@google.com> wrote:
>
> On Thu, 7 Nov 2024 at 01:01, Donald Zickus <dzickus@redhat.com> wrote:
> >
> > Hi,
> >
> > Thanks for the feedback. I created a more realistic test.yaml file to
> > start (we can split it when more tests are added) and a parser. I was
> > going to add patch support as input to mimic get_maintainers.pl
> > output, but that might take some time. For now, you have to manually
> > select a subsystem. I will try to find space on kernelci.org to grow
> > this work but you can find a git tree here[0].
> >
> > From the README.md
> > """
> > An attempt to map kernel subsystems to kernel tests that should be run
> > on patches or code by humans and CI systems.
> >
> > Examples:
> >
> > Find test info for a subsystem
> >
> > ./get_tests.py -s 'KUNIT TEST' --info
> >
> > Subsystem: KUNIT TEST
> > Maintainer:
> > David Gow <davidgow@google.com>
> > Mailing List: None
> > Version: None
> > Dependency: ['python3-mypy']
> > Test:
> > smoke:
> > Url: None
> > Working Directory: None
> > Cmd: ./tools/testing/kunit/kunit.py
> > Env: None
> > Param: run --kunitconfig lib/kunit
> > Hardware: arm64, x86_64
> >
> > Find copy-n-pastable tests for a subsystem
> >
> > ./get_tests.py -s 'KUNIT TEST'
> >
> > ./tools/testing/kunit/kunit.py run --kunitconfig lib/kunit
> > """
> >
> > Is this aligning with what people were expecting?
> >
>
>
> Awesome! I've been playing around a bit with this, and I think it's an
> excellent start.
>
> There are definitely some more features I'd want in an ideal world
> (e.g., configuration matrices, etc), but this works well enough.
Yeah, I was trying to nail down the usability angle first before
expanding with bells and whistles. I would like to think the YAML
file is flexible enough to handle those features, though.
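Something along these lines is what I have in mind for a configuration
matrix, if we ever add one. The 'matrix' key and the expand_matrix()
helper below are made up for illustration, not part of test.yaml today:

```python
# Hypothetical sketch: a 'matrix' key on a test entry expands into one
# concrete test per combination of values, without changing the rest
# of the schema.
from itertools import product

def expand_matrix(test: dict) -> list[dict]:
    """Expand a test entry carrying a 'matrix' key into concrete entries."""
    matrix = test.get("matrix")
    if not matrix:
        return [test]
    keys = sorted(matrix)
    expanded = []
    for values in product(*(matrix[k] for k in keys)):
        # Copy everything except the matrix, then pin one combination.
        concrete = {k: v for k, v in test.items() if k != "matrix"}
        concrete.update(zip(keys, values))
        expanded.append(concrete)
    return expanded

smoke = {
    "cmd": "./tools/testing/kunit/kunit.py",
    "param": "run --kunitconfig lib/kunit",
    "matrix": {"arch": ["arm64", "x86_64"], "compiler": ["gcc", "clang"]},
}
for t in expand_matrix(smoke):
    print(t["arch"], t["compiler"])
```

That keeps the flat entries we have today working unchanged, since a
missing 'matrix' key just passes the entry through.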
>
> I've been playing around with a branch which adds the ability to
> actually run these tests, based on the 'run_checks.py' script we use
> for KUnit:
> https://github.com/sulix/test-catalog/tree/runtest-wip
Thanks!
>
> In particular, this adds a '-r' option which runs the tests for the
> subsystem in parallel. This largely matches what I was doing manually
> — for instance, the KUnit section in test.yaml now has three different
> tests, and running it gives me this result:
> ../test-catalog/get_tests.py -r -s 'KUNIT TEST'
> Waiting on 3 checks (kunit-tool-test, uml, x86_64)...
> kunit-tool-test: PASSED
> x86_64: PASSED
> uml: PASSED
Interesting. Originally I was thinking this would be done serially;
I didn't think tests were safe enough to run in parallel. I am
definitely open to this. My Python isn't the best, but I think your
PR looks reasonable.
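For the archive, here is the shape I understood from your branch,
reduced to a stdlib-only sketch. The check names and commands are
illustrative, not your actual code:

```python
# Sketch: run each named check command concurrently, then report
# pass/fail per check, roughly matching the '-r' output above.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_checks(checks: dict) -> dict:
    """Run each command concurrently; map check name -> passed (bool)."""
    def run_one(cmd):
        return subprocess.run(cmd, capture_output=True).returncode == 0

    print(f"Waiting on {len(checks)} checks ({', '.join(sorted(checks))})...")
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_one, cmd)
                   for name, cmd in checks.items()}
        for name, fut in futures.items():
            results[name] = fut.result()
            print(f"{name}: {'PASSED' if results[name] else 'FAILED'}")
    return results
```

Threads are fine here since the work is all in child processes; the
safety question is really whether the tests themselves tolerate
running side by side.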
>
> (Obviously, in the real world, I'd have more checks, including other
> architectures, checkpatch, etc, but this works as a proof-of-concept
> for me.)
>
> I think the most interesting questions will be:
> - How do we make this work with more complicated dependencies
> (containers, special hardware, etc)?
I was imagining a 'hw-requires' type line to handle the hardware
requests, as that seemed natural for a lot of the driver work: run a
quick check before the test to see whether the required hardware is
present and bail if it isn't. The containers piece is a little
trickier and ties into the test environment, I think. The script
would have to create an environment, inject the tests into it, and
run them. I imagine some of this would have to be static, as the
setup is complicated. For example, a 'container' label would execute
custom code to set up a test environment inside a container. Open to
ideas here.
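A rough sketch of the gate I'm describing, using only the
architecture as a stand-in for richer hardware probing. Only the
'Hardware' field exists in test.yaml today; the skip logic is
hypothetical:

```python
# Hypothetical pre-flight check: skip a test whose declared hardware
# list doesn't cover the machine we're running on.
import platform

def hardware_ok(test: dict) -> bool:
    """True if this host satisfies the test's hardware list.
    An empty or missing list means the test runs anywhere."""
    wanted = test.get("hardware", [])
    return not wanted or platform.machine() in wanted

smoke = {"cmd": "./tools/testing/kunit/kunit.py",
         "hardware": ["arm64", "x86_64"]}
if not hardware_ok(smoke):
    print("SKIP: required hardware not present")
```

Real hardware checks (a specific NIC, a GPU, a sensor) would need
per-label probe hooks, which is where the static, container-style
setup code would come in.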
> - How do we integrate it with CI systems — can we pull the subsystem
> name for a patch from MAINTAINERS and look it up here?
Two thoughts here. First, yes: as a developer you probably want to
run something like 'get_maintainers.sh <patch> | get_tests.py -s -'
to figure out which tests you should run before posting, and a CI
system could do something similar. Second, you may already know the
subsystem you want to test. A patch is usually written for a
particular subsystem that happens to touch code from other
subsystems, and you primarily want to run it against that specified
subsystem. Red Hat's CKI, for example, runs against a known subsystem
git tree and falls into this category. While that leaves a gap in
testing the other subsystems, as a human you often already know that
running those extra tests is mostly a no-op because the patch doesn't
really change anything for them.
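The '-s -' plumbing could be as simple as reading subsystem names off
stdin, one per line. The expected input format here is a guess at
what a get_maintainers-style pipeline would emit:

```python
# Sketch: let get_tests.py sit at the end of a pipeline by accepting
# subsystem names on stdin ('-s -'), deduplicated in input order.
import sys

def read_subsystems(stream) -> list:
    """Collect unique, non-empty subsystem names, one per line."""
    seen = []
    for line in stream:
        name = line.strip()
        if name and name not in seen:
            seen.append(name)
    return seen

if __name__ == "__main__":
    for subsystem in read_subsystems(sys.stdin):
        print(subsystem)
```

Whatever the real pipeline emits, normalizing it to one subsystem
name per line keeps this end of the pipe trivial.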
> - What about things like checkpatch, or general defconfig build tests
> which aren't subsystem-specific?
My initial thought is that this is another category of testing. A
lot of CI tests are workload tests with predefined configs, whereas a
generic-testing CI system (think 0-day) would focus on those kinds of
checks. So I would lean away from including them in this approach,
though we could add a 'general' category too. I do know checkpatch
rules vary from maintainer to maintainer.
> - How can we support more complicated configurations or groups of
> configurations?
Examples?
> - Do we add support for specific tools and/or parsing/combining output?
Examples? I wasn't thinking of parsing test output, just providing
what to run as a good first step. My initial thought was to help
nudge tests towards KTAP output.
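To make the KTAP nudge concrete, here is a toy parser for the simple
'ok N - name' result lines. Real KTAP also has version lines, plans,
nesting, and diagnostics, all of which this ignores:

```python
# Minimal sketch: recognize KTAP-style result lines like
#   ok 1 - kunit-tool-test
#   not ok 2 - x86_64
import re

RESULT = re.compile(r"^(not ok|ok) (\d+)(?: - (.*))?$")

def parse_ktap_line(line: str):
    """Return (passed, number, name) for a result line, else None."""
    m = RESULT.match(line.strip())
    if not m:
        return None
    status, num, name = m.groups()
    return (status == "ok", int(num), name or "")

print(parse_ktap_line("ok 1 - kunit-tool-test"))
```

If tests at least emit lines in this shape, a catalog runner can
summarize results without knowing anything else about the test.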
>
> But I'm content to keep playing around with this a bit more for now.
Thank you! Please do!
Cheers,
Don
Thread overview: 17+ messages
2024-10-14 20:32 Donald Zickus
2024-10-15 16:01 ` [Automated-testing] " Bird, Tim
2024-10-16 13:10 ` Cyril Hrubis
2024-10-16 18:02 ` Donald Zickus
2024-10-17 11:01 ` Cyril Hrubis
2024-10-16 18:00 ` Donald Zickus
2024-10-17 12:31 ` Minas Hambardzumyan
2024-10-18 19:44 ` Donald Zickus
2024-10-18 7:21 ` David Gow
2024-10-18 14:23 ` Gustavo Padovan
2024-10-18 14:35 ` [Automated-testing] " Cyril Hrubis
2024-10-18 19:17 ` Mark Brown
2024-10-18 20:17 ` Donald Zickus
2024-10-19 6:36 ` David Gow
2024-11-06 17:01 ` Donald Zickus
2024-11-20 8:16 ` David Gow
2024-11-21 15:28 ` Donald Zickus [this message]