From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Sasha Levin <sashal@kernel.org>, ksummit@lists.linux.dev
Subject: Re: [MAINTAINERS SUMMIT] The role of AI and LLMs in the kernel process
Date: Wed, 6 Aug 2025 20:26:41 +0100
Message-ID: <a1bcdee4-344b-4717-bde0-fe80bea46d67@lucifer.local>
In-Reply-To: <72ee0f61379054e327d502bbe77aae3d76966d17.camel@HansenPartnership.com>
On Tue, Aug 05, 2025 at 04:02:02PM -0400, James Bottomley wrote:
> > >
> > > I don't think we should expect a bar for AI that is higher than the
> > > one we set for humans.
> >
> > I'm not, rather I'm saying let's be aware of the kinds of issues we
> > might encounter from LLMs and take them into account when
> > establishing policy.
>
> Well, if we set a policy, it should be flexible enough to adapt as the
> AI does and not be locked to what would prevent the AI mistakes I can
> find today from happening. If we're going to codify this rigidly we
> could arguably have a policy not to accept patches from humans who
> might be (and often are) wrong as well.
Sure, I think any policy should be broad and reasonable.
Probably we want something simple and practical to begin with,
e.g. categorising by:
1. Was most or all of this patch generated by an LLM? (>=90%)
2. Was a large part of this patch generated by an LLM? (>30%)
3. Was a small part of this patch generated by an LLM? (<30%)
In addition to:
- Was the commit message of this patch largely generated by an LLM
  (excluding non-native speakers simply using an LLM to help write it in
  English)?
Each of these could have a corresponding tag, and each MAINTAINERS entry
could have an opt-in field indicating which are acceptable (a purely
illustrative sketch below).
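Purely as an illustrative sketch (the tag and field names here are made up
for the sake of example, nothing is agreed or exists today), categories 1-3
plus the commit message case might map to trailers along the lines of:

    AI-Generated: mostly           (>=90% of the diff)
    AI-Generated: substantially    (>30%)
    AI-Generated: partially        (<30%)
    AI-Commit-Message: yes

With a hypothetical per-subsystem opt-in field in MAINTAINERS something
like:

    MEMORY MANAGEMENT
    M:      ...
    AI:     partially, commit-message

The exact naming obviously doesn't matter at this stage, it's the
tag-plus-per-entry-opt-in mechanism that does.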
We could then explicitly state that simple day-to-day uses of LLM tools are
fine and need not be disclosed, such as:
- Simple, supervised use of LLM-based 'smart' autocomplete features.
- Research being assisted by an LLM.
- Any use of an LLM for non-upstreamed code used in development of the
series.
etc.
Then we can leave the decision as to what's acceptable to individual
maintainers.
>
> I think we should stick to indicators of trustworthiness that AI is
> already generating and let that guide maintainer taste without
> necessarily having something more detailed.
Well, it's an interesting data point but I'm not sure asking the LLM to
rate its own trustworthiness is a reliable measure, and at any rate I think
we need to keep things simple to begin with.
>
> Regards,
>
> James
>
A really key thing to consider here too is maintainer resources. We're
already strained on this with human submissions, so perhaps we want to make
it very clear in the AI policy document that this is emphatically not an
invitation to point automated tools at the kernel and generate tonnes of
patches, and that trying to do so might result in your patches being ignored.
Cheers, Lorenzo