From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
To: Sasha Levin <sashal@kernel.org>
Cc: ksummit@lists.linux.dev
Subject: Re: [MAINTAINERS SUMMIT] The role of AI and LLMs in the kernel process
Date: Mon, 8 Dec 2025 13:15:07 +0900 [thread overview]
Message-ID: <20251208041507.GB30348@pendragon.ideasonboard.com> (raw)
In-Reply-To: <aTYmE53i3FJ_lJH2@laps>
Hi Sasha,
Thank you for summarizing the long discussions. I won't ask if this
summary has been written by an LLM :-)
On Sun, Dec 07, 2025 at 08:12:51PM -0500, Sasha Levin wrote:
> This (and parallel) threads have generated substantial discussion across
> several related topics. In preparation for the Maintainers Summit, here's a
> summary of where we appear to have consensus, where we don't, and some
> questions to consider before the summit.
>
> Where We Have Consensus:
>
> 1. Human accountability is non-negotiable:
>
> The From: line must always be a human who takes full responsibility for the
> patch. No "but the AI wrote that part" excuses. This maps cleanly to our
> existing DCO requirements and approach to tooling.
>
> 2. Some form of disclosure is needed:
>
> Whether it's a trailer tag, a note below the cut line, or something else,
> there's broad agreement that AI involvement should be disclosed. The exact
> mechanism is debatable, but the principle is not.
>
> 3. Maintainer autonomy matters:
>
> Individual subsystem maintainers should be empowered to set their own policies.
> An opt-in approach per-subsystem seems preferred over a kernel-wide mandate
> that doesn't account for different subsystem needs.
>
> 4. This isn't going away:
>
> Industry is already using AI extensively. We're already receiving AI-generated
> bug reports. Ignoring this won't make it disappear; better to have a thoughtful
> policy than no policy.
>
> 5. Language assistance for non-native speakers is legitimate:
>
> Using AI to improve documentation and commit messages should not be stigmatized
> or treated the same as AI-generated code.
>
>
> Where We Don't Have Consensus:
>
> 1. The nature of AI errors:
>
> Some argue AI makes fundamentally different errors than humans - subtle
> mistakes that slip past review because we're trained to spot human-pattern
> errors. Others argue AI errors are obvious when the model is under-trained, and
> that better training can address most issues. This affects how much scrutiny
> AI-assisted patches need.
>
> 2. Same bar or higher bar?
>
> The kernel already has a significant bug rate - roughly 20% of commits in a
> release cycle are fixes. Should we hold AI to the same standard we hold humans,
> or does the kernel's criticality demand a higher bar for AI? There's genuine
> disagreement here.
>
> 3. Legal risk tolerance:
>
> DCO clause (a) requires certifying "I have the right to submit it under the
> open source license." With AI training data provenance unclear and litigation
> ongoing, how cautious should we be? Some advocate waiting for legal clarity;
> others argue the legal concerns are overblown and we should focus on practical
> guardrails.
>
> 4. The asymmetric effort problem:
>
> AI can generate patches in seconds; review takes hours. Unlike human
> contributors who learn from feedback and improve, AI models will repeat the
> same mistakes. How do we prevent maintainer overload? There's no clear answer
> yet.
>
>
> Questions for the Summit:
>
> 1. Policy scope: Should we establish a kernel-wide minimum policy, or
> simply document that subsystem maintainers set their own rules?
>
> 2. Disclosure format: What should disclosure look like? Options discussed
> include:
>
> - Trailer tag (e.g., `Assisted-by:`, `Generated-by:`)
> - Below-the-cut note
> - Verbose commit log explanation
> - Technology-agnostic "tooling" terminology vs. AI-specific
>
> 3. Generation vs. review: AI for code review and debugging seems less
> controversial than AI for code generation. Should we treat these
> differently in policy?
>
> 4. What requires disclosure? Where's the line? Clearly, wholesale
> AI-generated patches need disclosure. What about:
>
> - AI-suggested fixes that a human then implements?
> - Using AI to understand an API before writing code?
> - AI assistance with commit message wording?
>
> 5. Legal stance: Should we take a position on AI-generated code and DCO
> compliance, or leave that to individual contributors to assess?
>
> 6. Enforcement reality: We can't even get everyone to run checkpatch.
> Whatever policy we adopt, how do we think about enforcement?
This is a pretty good summary. It's missing one point in my opinion,
partly related to the legal stance: the ethical stance.
The Linux kernel is governed by the GPL. There are contributors who care
about the copyleft aspect of the license. Even if the legal issues are
resolved in the future, not everybody will agree that using GPL code as
input to create proprietary LLMs is ethical: it may not breach the
letter of the license while breaching its spirit. I would like to see
this question discussed.
> Looking forward to the discussion.
--
Regards,
Laurent Pinchart