ksummit.lists.linux.dev archive mirror
 help / color / mirror / Atom feed
* kernel.org tooling update
@ 2025-12-10  4:48 Konstantin Ryabitsev
  2025-12-10  8:11 ` Mauro Carvalho Chehab
                   ` (4 more replies)
  0 siblings, 5 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2025-12-10  4:48 UTC (permalink / raw)
  To: users, ksummit

Hi, all:

These are the topics that were touched on at the maintainer summit when
discussing tooling on the kernel.org side of things.

--

# What is the state of tooling?

## b4 development update

Past year:

- No major new features in b4 over the past year, excepting `b4 dig`
- Seeing lots of adoption and use across subsystem, with a lot of maintainers
  recommending b4 as the preferred mechanism to submit patches
- Starting to see adoption by several non-kernel projects (openembedded, u-boot, others)
- Significant behind-the-scenes move of codebase to stricter typed code
- Continued work on `b4 review` that got shelved temporarily for other
  priorities.

### LETS PUT MOAR AI INTO IT!!1

I spent a lot of time on trying to integrate AI into b4 workflows, but with
little to show for it in the end due to lackluster results.

- Used local ollama as opposed to proprietary services, with the goal to avoid
  introducing hard dependencies on third-party commercial tooling. This is
  probably the main reason why my results were not so exciting as what others
  see with much more powerful models.

- Focused on thread/series summarization features as opposed to code analysis:

    - Summarize follow-ups (trailers, acks/nacks received), though this is
      already fairly well-handed with non-AI tooling.

    - Gauge "temperature" of the discussion to highlight controversial series.

    - Gauge quality of the submission; help decide "is this series worth
      looking at" before maintainers spend their effort looking at it, using
      maintainer-tailored prompts. This may be better done via CI/patchwork
      integration, than with b4.

    - Use LLM to prepare a merge commit message using the cover letter and
      summarizing the patches.

I did not end up releasing any features based on that work, because:

    - LLM was not fantastic at following discussions and keeping a clear
      picture of who said what, which is kind of crucial for maintainer
      decision making.

    - Very large series and huge threads run out fo context window, which
      causes the LLM to get even worse at "who said what" (and it's
      already not that great at it).

    - Thread analysis requires lots of VRAM and a modern graphics card, and is
      still fairly slow there (I used a fairly powerful GeForce RTX).

    - Actual code review is best if it happens post-apply in a temporary
      workdir or a temporary branch, so the agent can see the change in the
      context of the git tree and the entire codebase, not just the context
      lines of the patch itself.

I did have much better success when I worked to represent a thread not as
multiple messages, but as a single document with all interleaved follow-up
conversations collated together. However, this was done manually --
representing emails from arbitrary threads as such collated documents is a
separate challenge.

Using proprietary models and remote services will probably show better
results, but I did not have the funds or the inkling to do it (plus see the
concern for third-party commercial tooling). I may need to collaborate more
closely with the maintainers already doing it on their own instead of
continuing my separate work on it.

### AI crawler scourge

While working on LLM integration, it was certainly ironic that one of the
top challenges for us was to try to keep AI crawlers from overwhelming
kernel.org infrastructure. While we've put several mitigations in place, it's
a temporary relief at best.

## Continuous degradation of SMTP

We're increasingly having to deal with the degradation of the SMTP support by
all commercial companies:

    - major hosts are increasingly not interested in getting mail from anyone
      who isn't also a major mail service provider

    - their "bulk sender" guidelines are no good for us (e.g. requiring that
      we add one-click unsubscribe footers to all email)

    - their "spam filters" are increasingly based on training data, which
      means that "looks different from what most of our users receive" is
      enough to have patches and code discussions put into the "Junk" folder

    - they apply arbitrary throttling ("too many deliveries for the same
      message-id", "too many messages from the DKIM domain foobar.com")

    - anti-phishing services at commercial IT companies do horrible things to
      incoming messages

## Are we finally moving away from patches sent over email?

There are still important concerns when we consider moving away from "patches
sent via email":

    - SMTP is still the only widely used protocol we have for decentralized
      communication; everything else is experimental or has important
      drawbacks, such as:

        - it relies on single-point-of-failure services (e.g. Signal), or
        - it requires standing up esoteric software (which then become
          single-point-of-failure services), or
        - it requires an "everyone-must-switch-now" flag day

    - RFC-5322, with all its warts, is a well-defined standard for internet
      messages:

        - robust, capable of dealing with change while preserving legacy
        - easy to parse with libraries for almost any framework
        - easy to archive and query
        - has lots of tooling built around it

With lore and public-inbox, we *are* in the process of moving away from
relying on the increasingly unreliable SMTP layer. Lore can already let you do
the following things:

    - lets anyone submit patches via the web endpoint
    - lets anyone subscribe to lists via several protocols (NNTP, POP, IMAP)
    - lets anyone use lei to receive arbitrary feeds
    - can aggregate any number of sources, as long as they are RFC-5322
      messages (or can be converted to them)

Lore and public-inbox is becoming a kind of a distributed, replicating
messaging bus with a robust query and retrieval interface on top of it, and I
believe it's a fairly powerful framework we can build upon.

## Work on "local lore"

One downside of lore.kernel.org is that it's a central service, which runs
counter to our goal of limiting how many single points of failure we have.
There is fault-tolerance built into the system (lore.kernel.org is actually 4
different nodes in various parts of the world), but an adversary would have no
difficulty knocking out all nodes at once, which would impact the project
significantly.

The "local lore" projects it the attempt to provide a kind of "maintainer
container" that can be run locally or in any public cloud:

    - comes with a 6-month constantly-updating mirror of lore, using a
      failover set of replication URLs (including tor/onion)
    - comes with a pre-configured mirror of git repositories that are kept
      up-to-date in the same fashion
    - lets the maintainer set up lei queries that can push into their
      inbox, supporting Gmail+OAuth, JMAP, IMAP
    - provides a web submission endpoint and an SMTP service that can
      integrate with other SMTP relays
    - publishes a public-inbox feed of maintainer activity that central
      lore can pick up and integrate

There is a parallel goal here, which is to make it easier for devs to assume
maintainer duties without having to spend a week setting up their tooling.
In theory, all they would be need to do is to set up their maintainer
container and then use the web menu to choose which feeds they want to pull
and where they want messages delivered.

This project is still early in development, but I hope to be able to provide
test containers soon that people can set up and run.

## Other tools

### Bugzilla

It may be time to kill bugzilla:

    - despite periodic "we're not dead yet" emails, it doesn't appear very
      active
    - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
      5.2 development branch and continuing with 5.1
    - question remains with what to replace bugzilla, but it's a longer
      discussion topic that I don't want to raise here; it may be a job for
      the bugspray bot that can extend the two-way bridge functionality to
      multiple bug tracker frameworks

### Patchwork

Patchwork continues to be used widely:

    - we've introduced query-based patchworks, where instead of consuming the
      entire mailing list, we feed it the results of lei queries
    - I'm hoping to work with upstream to add a couple of features that would
      be of benefit to us, such as:

        - support for annotating patches and series (e.g. with LLM summaries)
        - an API endpoint to submit patches, so maintainers could add
          arbitrary series to their patchwork project, integrating with b4

## Web of Trust work

There is an ongoing work to replace our home-grown web of trust solution (that
does work but has important bottlenecks and scaling limitations) with
something both more distributed and easier to maintain. We're working with
OpenSSF to design the framework and I hope to present it to the community in
the next few months.

## Questions?

Send away!

-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
@ 2025-12-10  8:11 ` Mauro Carvalho Chehab
  2025-12-10 13:30 ` Thorsten Leemhuis
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2025-12-10  8:11 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: users, ksummit

Hi Konstantin,

Em Tue, 9 Dec 2025 23:48:24 -0500
Konstantin Ryabitsev <konstantin@linuxfoundation.org> escreveu:

> I spent a lot of time on trying to integrate AI into b4 workflows, but with
> little to show for it in the end due to lackluster results.
> 
> - Used local ollama as opposed to proprietary services, with the goal to avoid
>   introducing hard dependencies on third-party commercial tooling. This is
>   probably the main reason why my results were not so exciting as what others
>   see with much more powerful models.
> 
> - Focused on thread/series summarization features as opposed to code analysis:
> 
>     - Summarize follow-ups (trailers, acks/nacks received), though this is
>       already fairly well-handed with non-AI tooling.
> 
>     - Gauge "temperature" of the discussion to highlight controversial series.
> 
>     - Gauge quality of the submission; help decide "is this series worth
>       looking at" before maintainers spend their effort looking at it, using
>       maintainer-tailored prompts. This may be better done via CI/patchwork
>       integration, than with b4.
> 
>     - Use LLM to prepare a merge commit message using the cover letter and
>       summarizing the patches.
> 
> I did not end up releasing any features based on that work, because:
> 
>     - LLM was not fantastic at following discussions and keeping a clear
>       picture of who said what, which is kind of crucial for maintainer
>       decision making.
> 
>     - Very large series and huge threads run out fo context window, which
>       causes the LLM to get even worse at "who said what" (and it's
>       already not that great at it).
> 
>     - Thread analysis requires lots of VRAM and a modern graphics card, and is
>       still fairly slow there (I used a fairly powerful GeForce RTX).
> 
>     - Actual code review is best if it happens post-apply in a temporary
>       workdir or a temporary branch, so the agent can see the change in the
>       context of the git tree and the entire codebase, not just the context
>       lines of the patch itself.
> 
> I did have much better success when I worked to represent a thread not as
> multiple messages, but as a single document with all interleaved follow-up
> conversations collated together. However, this was done manually --
> representing emails from arbitrary threads as such collated documents is a
> separate challenge.

I would love to see what you got there. I tried to an experiment similar
to it, also with ollama, writing some code Python code from scratch, aiming
to run locally on my GPU (with has only 16GB VRAM but it is a brand new
RDNA4 GPU), using a prompt similar to this:

            You are an expert at summarizing email threads and discussion forums. 
	    Your task is to analyze the following text, which is a chunk of an
	    email thread with nested replies, and provide a concise, structured summary.

            **Instructions:**
            1.  **Reconstruct the Chronology:** Carefully analyze the indentation levels (e.g., `>>>`, `>`, `>>`) 
		and timestamps to determine the correct order of messages. The oldest message is likely the most indented.
            2.  **Identify Speakers:** For each message, extract the first name from the "From:" field (e.g., "From: John Doe" becomes "John").
            3.  **Consolidate by Topic and Speaker:** Group the main discussion points by topic. 
		For each topic, summarize what each person contributed, consolidating their points 
		even if they appear in multiple messages.
            4.  **Focus on New Information:** Ignore salutations (e.g., "Hi Mike,") and email
		signature blocks. Focus on the substantive content of each message.
            5.  **Output Format:** Provide the summary in the following structure:
                -   **Main Topic(s) of Discussion:** [List 1-3 main topics]
                -   **Summary by Participant:**
                    -   **[First Name 1]:** [Concise summary of their stance, questions,
			or information provided, in chronological order if important.]
                    -   **[First Name 2]:** [Concise summary of their stance, questions,
			or information provided.]
                -   **Outcome/Next Steps:** [Note any conclusions, decisions, or action items agreed upon.]

            **Text to Summarize:**
            {chunk}

Yet, grouping e-mails per thread is a challenge, specially since
I was planning to ask it to summarize it in short time intervals,
so, picking only the newer emails, and re-using already parsed data.

My goal is not to handle patches, as I doubt this would give anything
relevant. Instead, I wanted to keep in track with LKML and other high
traffic mailing lists to pick most relevant threads.

Btw, I got some success summarizing patch series from a given Kernel author
along an entire month using just the e-mail subject, with mistral-small3.2
LLM model, and a somewhat complex prompt. Goal was to summarize how many
patches were submitted, grouping them by threads and different open source
projects. Output were far a way from being perfect, and, if the number of
patches is too big, it starts forgetting about the context - with is one of
the current challenges with current LLM technology - even on proprietary
models.

It sounds to me that, with the current technology, the best approach
would be to ask AI to summarize each e-mail individually, then group 
the results using a non-AI approach (or mixing AI with normal programming).

> Using proprietary models and remote services will probably show better
> results, but I did not have the funds or the inkling to do it (plus see the
> concern for third-party commercial tooling). I may need to collaborate more
> closely with the maintainers already doing it on their own instead of
> continuing my separate work on it.

Yeah, the best is to have this not dependent on proprietary
models or on external GPU farms. I wonder if a DSX Spark would be
reasonably good with its 128GB unified RAM for something like that.
Its price is still too high, but maybe we'll end having similar
models next year to allow local tests with bigger models.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
  2025-12-10  8:11 ` Mauro Carvalho Chehab
@ 2025-12-10 13:30 ` Thorsten Leemhuis
  2025-12-11  3:04   ` Theodore Tso
  2025-12-12 23:48   ` Stephen Hemminger
  2025-12-16 16:21 ` Lukas Wunner
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 42+ messages in thread
From: Thorsten Leemhuis @ 2025-12-10 13:30 UTC (permalink / raw)
  To: Konstantin Ryabitsev, users, ksummit, Linux kernel regressions list

Lo! Thx for the update, much appreciated!

On 12/10/25 05:48, Konstantin Ryabitsev wrote:

> ### Bugzilla
> 
> It may be time to kill bugzilla:

Thx for bringing this up, as I a few months ago again looked somewhat
closer at the state of how well our bugzilla is working for the core
kernel. I didn't post the analysis in the end, but to me it looked like
the state of things was round the same as it was three years ago -- when
it wasn't working well, which was among the reasons why we came close to
abandoning bugzilla for kernel bugs[1].

[1] for those that don't remember, see https://lwn.net/Articles/910740/
and
https://lore.kernel.org/all/aa876027-1038-3e4a-b16a-c144f674c0b0@leemhuis.info/



>     - despite periodic "we're not dead yet" emails, it doesn't appear very
>       active
>     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>       5.2 development branch and continuing with 5.1
>     - question remains with what to replace bugzilla,

To me it looks like most subsystems don't care much or at all about
bugzilla.kernel.org. This made me wonder (and maybe you could gather
some opinions on this in Tokyo):

* How many kernel subsystems have a strong interest in a bug tracking
solution at all[2]? And how many of those might be happy by using some
external issue tracker, like those in github (like Rust for Linux,
thesofproject, and a few others do), gitlab (either directly, like
apparmor, or self-hosted, like the DRM subsystem)?

* Does the kernel as a whole need a bug tracking solution at all to
receive reports? We for now require email for patches, so why not for
bugs as well, unless a subsystem really wants something (see above)?

[2] Some numbers:
$ for i in "" mailto bugzilla github gitlab; do echo -n "Searching for
'^B:.*${i}': "; grep -c -E "^B:.*${i}" MAINTAINERS; done
Searching for '^B:.*': 70
Searching for '^B:.*mailto': 12
Searching for '^B:.*bugzilla': 23
Searching for '^B:.*github': 17
Searching for '^B:.*gitlab': 11

> but it's a longer discussion topic that I don't want to raise here;

Would like to be involved there.

> it may be a job for
>       the bugspray bot that can extend the two-way bridge functionality to
>       multiple bug tracker frameworks

FWIW, development of my regression tracker (regzbot) and me using it to
track regressions nearly stalled but is slowly restarting. Would be good
if we could work together here, as there is some overlap -- and
regression tracking afaics is something that a lot of people want and
consider important. And regzbot is already capable of monitoring reports
in various places (lore, gitlab, github, bugzilla); so if we decide that
we don't need a tracker for the kernel as a whole, it might already do
nearly everything for the bugs where tracking really helps a lot.

Ciao, Thorsten

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10 13:30 ` Thorsten Leemhuis
@ 2025-12-11  3:04   ` Theodore Tso
  2025-12-12 23:48   ` Stephen Hemminger
  1 sibling, 0 replies; 42+ messages in thread
From: Theodore Tso @ 2025-12-11  3:04 UTC (permalink / raw)
  To: Thorsten Leemhuis
  Cc: Konstantin Ryabitsev, users, ksummit, Linux kernel regressions list

On Wed, Dec 10, 2025 at 02:30:37PM +0100, Thorsten Leemhuis wrote:
> * How many kernel subsystems have a strong interest in a bug tracking
> solution at all[2]? And how many of those might be happy by using some
> external issue tracker, like those in github (like Rust for Linux,
> thesofproject, and a few others do), gitlab (either directly, like
> apparmor, or self-hosted, like the DRM subsystem)?

One of the discussions we had (both during and after Konstantin's
tools session) was that all we really need is some kind of way of
associating state with a set of URL's to lore --- this could be used
to indicate "this is a bug report" and a set of flags "this bug has
been resolved / needs more information / needs triaging, etc".  A
different set of states is also what we would need as a replacement
for patchwork --- this patch series is a not applicable for a
subsystem, has been applied, has been rejected, etc.

This is quite similar to what you've done with your regzbot dashboard,
actually.

> FWIW, development of my regression tracker (regzbot) and me using it to
> track regressions nearly stalled but is slowly restarting. Would be good
> if we could work together here, as there is some overlap -- and
> regression tracking afaics is something that a lot of people want and
> consider important. And regzbot is already capable of monitoring reports
> in various places (lore, gitlab, github, bugzilla); so if we decide that
> we don't need a tracker for the kernel as a whole, it might already do
> nearly everything for the bugs where tracking really helps a lot.

  	 	    	    	       		- Ted

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10 13:30 ` Thorsten Leemhuis
  2025-12-11  3:04   ` Theodore Tso
@ 2025-12-12 23:48   ` Stephen Hemminger
  2025-12-12 23:54     ` Randy Dunlap
  1 sibling, 1 reply; 42+ messages in thread
From: Stephen Hemminger @ 2025-12-12 23:48 UTC (permalink / raw)
  To: Thorsten Leemhuis
  Cc: Konstantin Ryabitsev, users, ksummit, Linux kernel regressions list

On Wed, 10 Dec 2025 14:30:37 +0100
Thorsten Leemhuis <linux@leemhuis.info> wrote:

> he update, much appreciated!
> 
> On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> 
> > ### Bugzilla
> > 
> > It may be time to kill bugzilla:  
> 
> Thx for bringing this up, as I a few months ago again looked somewhat
> closer at the state of how well our bugzilla is working for the core
> kernel. I didn't post the analysis in the end, but to me it looked like
> the state of things was round the same as it was three years ago -- when
> it wasn't working well, which was among the reasons why we came close to
> abandoning bugzilla for kernel bugs[1].
> 
> [1] for those that don't remember, see https://lwn.net/Articles/910740/
> and
> https://lore.kernel.org/all/aa876027-1038-3e4a-b16a-c144f674c0b0@leemhuis.info/
> 
> 
> 
> >     - despite periodic "we're not dead yet" emails, it doesn't appear very
> >       active
> >     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
> >       5.2 development branch and continuing with 5.1
> >     - question remains with what to replace bugzilla,  
> 
> To me it looks like most subsystems don't care much or at all about
> bugzilla.kernel.org. This made me wonder (and maybe you could gather
> some opinions on this in Tokyo):
> 
> * How many kernel subsystems have a strong interest in a bug tracking
> solution at all[2]? And how many of those might be happy by using some
> external issue tracker, like those in github (like Rust for Linux,
> thesofproject, and a few others do), gitlab (either directly, like
> apparmor, or self-hosted, like the DRM subsystem)?
> 
> * Does the kernel as a whole need a bug tracking solution at all to
> receive reports? We for now require email for patches, so why not for
> bugs as well, unless a subsystem really wants something (see above)?

I am the default target for all networking bugzilla submissions.
Would be very happy to just see bugzilla die.
Right now, all I do is do a quick scan and respond to the junk submissions
and forward the rest to the netdev mailing list with a note on the bug
to go there in the future.

Issue tracking is not in the workflow for the community.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-12 23:48   ` Stephen Hemminger
@ 2025-12-12 23:54     ` Randy Dunlap
  0 siblings, 0 replies; 42+ messages in thread
From: Randy Dunlap @ 2025-12-12 23:54 UTC (permalink / raw)
  To: Stephen Hemminger, Thorsten Leemhuis
  Cc: Konstantin Ryabitsev, users, ksummit, Linux kernel regressions list



On 12/12/25 3:48 PM, Stephen Hemminger wrote:
> On Wed, 10 Dec 2025 14:30:37 +0100
> Thorsten Leemhuis <linux@leemhuis.info> wrote:
> 
>> he update, much appreciated!
>>
>> On 12/10/25 05:48, Konstantin Ryabitsev wrote:
>>
>>> ### Bugzilla
>>>
>>> It may be time to kill bugzilla:  
>>
>> Thx for bringing this up, as I a few months ago again looked somewhat
>> closer at the state of how well our bugzilla is working for the core
>> kernel. I didn't post the analysis in the end, but to me it looked like
>> the state of things was round the same as it was three years ago -- when
>> it wasn't working well, which was among the reasons why we came close to
>> abandoning bugzilla for kernel bugs[1].
>>
>> [1] for those that don't remember, see https://lwn.net/Articles/910740/
>> and
>> https://lore.kernel.org/all/aa876027-1038-3e4a-b16a-c144f674c0b0@leemhuis.info/
>>
>>
>>
>>>     - despite periodic "we're not dead yet" emails, it doesn't appear very
>>>       active
>>>     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>>>       5.2 development branch and continuing with 5.1
>>>     - question remains with what to replace bugzilla,  
>>
>> To me it looks like most subsystems don't care much or at all about
>> bugzilla.kernel.org. This made me wonder (and maybe you could gather
>> some opinions on this in Tokyo):
>>
>> * How many kernel subsystems have a strong interest in a bug tracking
>> solution at all[2]? And how many of those might be happy by using some
>> external issue tracker, like those in github (like Rust for Linux,
>> thesofproject, and a few others do), gitlab (either directly, like
>> apparmor, or self-hosted, like the DRM subsystem)?
>>
>> * Does the kernel as a whole need a bug tracking solution at all to
>> receive reports? We for now require email for patches, so why not for
>> bugs as well, unless a subsystem really wants something (see above)?
> 
> I am the default target for all networking bugzilla submissions.
> Would be very happy to just see bugzilla die.
> Right now, all I do is do a quick scan and respond to the junk submissions
> and forward the rest to the netdev mailing list with a note on the bug
> to go there in the future.
> 
> Issue tracking is not in the workflow for the community.

which can be observed by the number of Categories/Components that don't
have an assignee -- even a mailing list.  Which should be "fixed" IMO.
I.e., we are actively helping bugzilla entries to be ignored.

I don't think it's the tool itself. More likely something else...

-- 
~Randy


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
  2025-12-10  8:11 ` Mauro Carvalho Chehab
  2025-12-10 13:30 ` Thorsten Leemhuis
@ 2025-12-16 16:21 ` Lukas Wunner
  2025-12-16 20:33   ` Jeff Johnson
  2026-01-23  9:19 ` Web of Trust work [Was: kernel.org tooling update] Uwe Kleine-König
  2026-01-23 18:42 ` kernel.org tooling update Randy Dunlap
  4 siblings, 1 reply; 42+ messages in thread
From: Lukas Wunner @ 2025-12-16 16:21 UTC (permalink / raw)
  To: Konstantin Ryabitsev, Bjorn Helgaas; +Cc: users, ksummit

[cc += Bjorn, start of thread is here:
https://lore.kernel.org/ksummit/20251209-roaring-hidden-alligator-068eea@lemur/
]

On Tue, Dec 09, 2025 at 11:48:24PM -0500, Konstantin Ryabitsev wrote:
> ### Bugzilla
> 
> It may be time to kill bugzilla:
> 
>     - despite periodic "we're not dead yet" emails, it doesn't appear very
>       active
>     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>       5.2 development branch and continuing with 5.1
>     - question remains with what to replace bugzilla, but it's a longer
>       discussion topic that I don't want to raise here; it may be a job for
>       the bugspray bot that can extend the two-way bridge functionality to
>       multiple bug tracker frameworks

The PCI subsystem relies heavily on bugzilla to track issues,
collect dmesg/lspci output from reporters and furnish them with
debug or test patches.

The SOP when issues are reported on the mailing list without
sufficient information is to ask the reporter to open a bugzilla
issue and attach full dmesg and lspci -vvv output for analysis.

If bugzilla is deprecated, we'll need at least a way to exchange
files with reporters.  Preferably on kernel.org infrastructure
to be independent from 3rd parties.  A way to communicate with
reporters outside the mailing list is also useful to prevent
spamming linux-pci@vger.kernel.org with messages relevant only
to a single issue or system.

All the information now recorded in bugzilla should continue
to be available indefinitely so that Link: tags in commits
continue to work.  It's not uncommon to have to dig in old
bugzilla entries in order to understand the motivation for
a particular code section that was introduced years earlier.

Thanks,

Lukas

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-16 16:21 ` Lukas Wunner
@ 2025-12-16 20:33   ` Jeff Johnson
  2025-12-17  0:47     ` Mario Limonciello
  0 siblings, 1 reply; 42+ messages in thread
From: Jeff Johnson @ 2025-12-16 20:33 UTC (permalink / raw)
  To: Lukas Wunner, Konstantin Ryabitsev, Bjorn Helgaas; +Cc: users, ksummit

On 12/16/2025 8:21 AM, Lukas Wunner wrote:
> [cc += Bjorn, start of thread is here:
> https://lore.kernel.org/ksummit/20251209-roaring-hidden-alligator-068eea@lemur/
> ]
> 
> On Tue, Dec 09, 2025 at 11:48:24PM -0500, Konstantin Ryabitsev wrote:
>> ### Bugzilla
>>
>> It may be time to kill bugzilla:
>>
>>     - despite periodic "we're not dead yet" emails, it doesn't appear very
>>       active
>>     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>>       5.2 development branch and continuing with 5.1
>>     - question remains with what to replace bugzilla, but it's a longer
>>       discussion topic that I don't want to raise here; it may be a job for
>>       the bugspray bot that can extend the two-way bridge functionality to
>>       multiple bug tracker frameworks
> 
> The PCI subsystem relies heavily on bugzilla to track issues,
> collect dmesg/lspci output from reporters and furnish them with
> debug or test patches.
> 
> The SOP when issues are reported on the mailing list without
> sufficient information is to ask the reporter to open a bugzilla
> issue and attach full dmesg and lspci -vvv output for analysis.
> 
> If bugzilla is deprecated, we'll need at least a way to exchange
> files with reporters.  Preferably on kernel.org infrastructure
> to be independent from 3rd parties.  A way to communicate with
> reporters outside the mailing list is also useful to prevent
> spamming linux-pci@vger.kernel.org with messages relevant only
> to a single issue or system.
> 
> All the information now recorded in bugzilla should continue
> to be available indefinitely so that Link: tags in commits
> continue to work.  It's not uncommon to have to dig in old
> bugzilla entries in order to understand the motivation for
> a particular code section that was introduced years earlier.

At least some of the wireless maintainers also use bugzilla.
The ath11k & ath12k drivers have guidance in the wireless wiki:
https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath11k/bugreport.html
https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath12k/bugreport.html

So we would also want this or a similar service to be maintained.

/jeff


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-16 20:33   ` Jeff Johnson
@ 2025-12-17  0:47     ` Mario Limonciello
  2025-12-18 13:37       ` Jani Nikula
  0 siblings, 1 reply; 42+ messages in thread
From: Mario Limonciello @ 2025-12-17  0:47 UTC (permalink / raw)
  To: Jeff Johnson, Lukas Wunner, Konstantin Ryabitsev, Bjorn Helgaas
  Cc: users, ksummit



On 12/16/25 2:33 PM, Jeff Johnson wrote:
> On 12/16/2025 8:21 AM, Lukas Wunner wrote:
>> [cc += Bjorn, start of thread is here:
>> https://lore.kernel.org/ksummit/20251209-roaring-hidden-alligator-068eea@lemur/
>> ]
>>
>> On Tue, Dec 09, 2025 at 11:48:24PM -0500, Konstantin Ryabitsev wrote:
>>> ### Bugzilla
>>>
>>> It may be time to kill bugzilla:
>>>
>>>      - despite periodic "we're not dead yet" emails, it doesn't appear very
>>>        active
>>>      - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>>>        5.2 development branch and continuing with 5.1
>>>      - question remains with what to replace bugzilla, but it's a longer
>>>        discussion topic that I don't want to raise here; it may be a job for
>>>        the bugspray bot that can extend the two-way bridge functionality to
>>>        multiple bug tracker frameworks
>>
>> The PCI subsystem relies heavily on bugzilla to track issues,
>> collect dmesg/lspci output from reporters and furnish them with
>> debug or test patches.
>>
>> The SOP when issues are reported on the mailing list without
>> sufficient information is to ask the reporter to open a bugzilla
>> issue and attach full dmesg and lspci -vvv output for analysis.
>>
>> If bugzilla is deprecated, we'll need at least a way to exchange
>> files with reporters.  Preferably on kernel.org infrastructure
>> to be independent from 3rd parties.  A way to communicate with
>> reporters outside the mailing list is also useful to prevent
>> spamming linux-pci@vger.kernel.org with messages relevant only
>> to a single issue or system.
>>
>> All the information now recorded in bugzilla should continue
>> to be available indefinitely so that Link: tags in commits
>> continue to work.  It's not uncommon to have to dig in old
>> bugzilla entries in order to understand the motivation for
>> a particular code section that was introduced years earlier.
> 
> At least some of the wireless maintainers also use bugzilla.
> The ath11k & ath12k drivers have guidance in the wireless wiki:
> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath11k/bugreport.html
> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath12k/bugreport.html
> 
> So we would also want this or a similar service to be maintained.
> 
> /jeff

I know that there was a mention of "external" Gitlab instances earlier 
in the thread.  How about standing up an LF Gitlab instance?

Subsystems that want to use it for issue tracking can have projects 
there specifically for that.

For example we could have a gitlab.kernel.org and then a project PCI for 
all PCI subsystem related issues.

This also "potentially" opens up the possibility of subsystems that want 
to engage in a forge PR/MR workflow with contributors to do so.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-17  0:47     ` Mario Limonciello
@ 2025-12-18 13:37       ` Jani Nikula
  2025-12-18 14:09         ` Mario Limonciello
  0 siblings, 1 reply; 42+ messages in thread
From: Jani Nikula @ 2025-12-18 13:37 UTC (permalink / raw)
  To: Mario Limonciello, Jeff Johnson, Lukas Wunner,
	Konstantin Ryabitsev, Bjorn Helgaas
  Cc: users, ksummit

On Tue, 16 Dec 2025, Mario Limonciello <superm1@kernel.org> wrote:
> On 12/16/25 2:33 PM, Jeff Johnson wrote:
>> On 12/16/2025 8:21 AM, Lukas Wunner wrote:
>>> [cc += Bjorn, start of thread is here:
>>> https://lore.kernel.org/ksummit/20251209-roaring-hidden-alligator-068eea@lemur/
>>> ]
>>>
>>> On Tue, Dec 09, 2025 at 11:48:24PM -0500, Konstantin Ryabitsev wrote:
>>>> ### Bugzilla
>>>>
>>>> It may be time to kill bugzilla:
>>>>
>>>>      - despite periodic "we're not dead yet" emails, it doesn't appear very
>>>>        active
>>>>      - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>>>>        5.2 development branch and continuing with 5.1
>>>>      - question remains with what to replace bugzilla, but it's a longer
>>>>        discussion topic that I don't want to raise here; it may be a job for
>>>>        the bugspray bot that can extend the two-way bridge functionality to
>>>>        multiple bug tracker frameworks
>>>
>>> The PCI subsystem relies heavily on bugzilla to track issues,
>>> collect dmesg/lspci output from reporters and furnish them with
>>> debug or test patches.
>>>
>>> The SOP when issues are reported on the mailing list without
>>> sufficient information is to ask the reporter to open a bugzilla
>>> issue and attach full dmesg and lspci -vvv output for analysis.
>>>
>>> If bugzilla is deprecated, we'll need at least a way to exchange
>>> files with reporters.  Preferably on kernel.org infrastructure
>>> to be independent from 3rd parties.  A way to communicate with
>>> reporters outside the mailing list is also useful to prevent
>>> spamming linux-pci@vger.kernel.org with messages relevant only
>>> to a single issue or system.
>>>
>>> All the information now recorded in bugzilla should continue
>>> to be available indefinitely so that Link: tags in commits
>>> continue to work.  It's not uncommon to have to dig in old
>>> bugzilla entries in order to understand the motivation for
>>> a particular code section that was introduced years earlier.
>> 
>> At least some of the wireless maintainers also use bugzilla.
>> The ath11k & ath12k drivers have guidance in the wireless wiki:
>> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath11k/bugreport.html
>> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath12k/bugreport.html
>> 
>> So we would also want this or a similar service to be maintained.
>> 
>> /jeff
>
> I know that there was a mention of "external" Gitlab instances earlier 
> in the thread.  How about standing up an LF Gitlab instance?

FWIW, I've been rather discouraged about the free tier GitLab issues
experience. Feature wise, it's a step down from Bugzilla, even if the UI
is more modern. The best stuff is always going into the paid tier. For
this reason alone, I'm partial to something completely community driven
like Forgejo. There's at least the possibility of getting the new
features.


BR,
Jani.


>
> Subsystems that want to use it for issue tracking can have projects 
> there specifically for that.
>
> For example we could have a gitlab.kernel.org and then a project PCI for 
> all PCI subsystem related issues.
>
> This also "potentially" opens up the possibility of subsystems that want 
> to engage in a forge PR/MR workflow with contributors to do so.
>

-- 
Jani Nikula, Intel

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-18 13:37       ` Jani Nikula
@ 2025-12-18 14:09         ` Mario Limonciello
  0 siblings, 0 replies; 42+ messages in thread
From: Mario Limonciello @ 2025-12-18 14:09 UTC (permalink / raw)
  To: Jani Nikula, Jeff Johnson, Lukas Wunner, Konstantin Ryabitsev,
	Bjorn Helgaas
  Cc: users, ksummit



On 12/18/25 7:37 AM, Jani Nikula wrote:
> On Tue, 16 Dec 2025, Mario Limonciello <superm1@kernel.org> wrote:
>> On 12/16/25 2:33 PM, Jeff Johnson wrote:
>>> On 12/16/2025 8:21 AM, Lukas Wunner wrote:
>>>> [cc += Bjorn, start of thread is here:
>>>> https://lore.kernel.org/ksummit/20251209-roaring-hidden-alligator-068eea@lemur/
>>>> ]
>>>>
>>>> On Tue, Dec 09, 2025 at 11:48:24PM -0500, Konstantin Ryabitsev wrote:
>>>>> ### Bugzilla
>>>>>
>>>>> It may be time to kill bugzilla:
>>>>>
>>>>>       - despite periodic "we're not dead yet" emails, it doesn't appear very
>>>>>         active
>>>>>       - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>>>>>         5.2 development branch and continuing with 5.1
>>>>>       - question remains with what to replace bugzilla, but it's a longer
>>>>>         discussion topic that I don't want to raise here; it may be a job for
>>>>>         the bugspray bot that can extend the two-way bridge functionality to
>>>>>         multiple bug tracker frameworks
>>>>
>>>> The PCI subsystem relies heavily on bugzilla to track issues,
>>>> collect dmesg/lspci output from reporters and furnish them with
>>>> debug or test patches.
>>>>
>>>> The SOP when issues are reported on the mailing list without
>>>> sufficient information is to ask the reporter to open a bugzilla
>>>> issue and attach full dmesg and lspci -vvv output for analysis.
>>>>
>>>> If bugzilla is deprecated, we'll need at least a way to exchange
>>>> files with reporters.  Preferably on kernel.org infrastructure
>>>> to be independent from 3rd parties.  A way to communicate with
>>>> reporters outside the mailing list is also useful to prevent
>>>> spamming linux-pci@vger.kernel.org with messages relevant only
>>>> to a single issue or system.
>>>>
>>>> All the information now recorded in bugzilla should continue
>>>> to be available indefinitely so that Link: tags in commits
>>>> continue to work.  It's not uncommon to have to dig in old
>>>> bugzilla entries in order to understand the motivation for
>>>> a particular code section that was introduced years earlier.
>>>
>>> At least some of the wireless maintainers also use bugzilla.
>>> The ath11k & ath12k drivers have guidance in the wireless wiki:
>>> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath11k/bugreport.html
>>> https://wireless.docs.kernel.org/en/latest/en/users/drivers/ath12k/bugreport.html
>>>
>>> So we would also want this or a similar service to be maintained.
>>>
>>> /jeff
>>
>> I know that there was a mention of "external" Gitlab instances earlier
>> in the thread.  How about standing up an LF Gitlab instance?
> 
> FWIW, I've been rather discouraged about the free tier GitLab issues
> experience. Feature wise, it's a step down from Bugzilla, even if the UI
> is more modern. The best stuff is always going into the paid tier. For
> this reason alone, I'm partial to something completely community driven
> like Forgejo. There's at least the possibility of getting the new
> features.

Sure - totally.

> 
> 
> BR,
> Jani.
> 
> 
>>
>> Subsystems that want to use it for issue tracking can have projects
>> there specifically for that.
>>
>> For example we could have a gitlab.kernel.org and then a project PCI for
>> all PCI subsystem related issues.
>>
>> This also "potentially" opens up the possibility of subsystems that want
>> to engage in a forge PR/MR workflow with contributors to do so.
>>
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Web of Trust work [Was: kernel.org tooling update]
  2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
                   ` (2 preceding siblings ...)
  2025-12-16 16:21 ` Lukas Wunner
@ 2026-01-23  9:19 ` Uwe Kleine-König
  2026-01-23  9:29   ` Greg KH
  2026-01-23 18:42 ` kernel.org tooling update Randy Dunlap
  4 siblings, 1 reply; 42+ messages in thread
From: Uwe Kleine-König @ 2026-01-23  9:19 UTC (permalink / raw)
  To: Konstantin Ryabitsev, users, ksummit


[-- Attachment #1.1: Type: text/plain, Size: 732 bytes --]

Hello Konstantin,

On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> ## Web of Trust work
> 
> There is an ongoing work to replace our home-grown web of trust solution (that
> does work but has important bottlenecks and scaling limitations) with
> something both more distributed and easier to maintain. We're working with
> OpenSSF to design the framework and I hope to present it to the community in
> the next few months.

the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?

I wonder what the bottlenecks and scaling limitations are that you mention.

Is there some info available already now about the path you (and OpenSSF) intend to propose?

Best regards
Uwe

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23  9:19 ` Web of Trust work [Was: kernel.org tooling update] Uwe Kleine-König
@ 2026-01-23  9:29   ` Greg KH
  2026-01-23 11:47     ` Mauro Carvalho Chehab
  2026-01-23 16:24     ` James Bottomley
  0 siblings, 2 replies; 42+ messages in thread
From: Greg KH @ 2026-01-23  9:29 UTC (permalink / raw)
  To: Uwe Kleine-König; +Cc: Konstantin Ryabitsev, users, ksummit

On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> Hello Konstantin,
> 
> On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> > ## Web of Trust work
> > 
> > There is an ongoing work to replace our home-grown web of trust solution (that
> > does work but has important bottlenecks and scaling limitations) with
> > something both more distributed and easier to maintain. We're working with
> > OpenSSF to design the framework and I hope to present it to the community in
> > the next few months.
> 
> the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> 
> I wonder what the bottlenecks and scaling limitations are that you mention.
> 
> Is there some info available already now about the path you (and OpenSSF) intend to propose?

There will be a presentation about this in February at a conference and
hopefully it will be made public then as the work is still ongoing.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23  9:29   ` Greg KH
@ 2026-01-23 11:47     ` Mauro Carvalho Chehab
  2026-01-23 11:58       ` Greg KH
  2026-01-23 16:24     ` James Bottomley
  1 sibling, 1 reply; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2026-01-23 11:47 UTC (permalink / raw)
  To: Greg KH; +Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, 23 Jan 2026 10:29:28 +0100
Greg KH <gregkh@linuxfoundation.org> wrote:

> On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> > Hello Konstantin,
> > 
> > On 12/10/25 05:48, Konstantin Ryabitsev wrote:  
> > > ## Web of Trust work
> > > 
> > > There is an ongoing work to replace our home-grown web of trust solution (that
> > > does work but has important bottlenecks and scaling limitations) with
> > > something both more distributed and easier to maintain. We're working with
> > > OpenSSF to design the framework and I hope to present it to the community in
> > > the next few months.  
> > 
> > the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > 
> > I wonder what the bottlenecks and scaling limitations are that you mention.
> > 
> > Is there some info available already now about the path you (and OpenSSF) intend to propose?  
> 
> There will be a presentation about this in February at a conference and
> hopefully it will be made public then as the work is still ongoing.

I got curious when I saw something about "First Person credentials"
at https://lfms26.sched.com/event/2ETT5?iframe=no that 
"would begin with the Linux Kernel project" - and more importantly
how and when it would affect my duties. I guess I'd need to
refrain my curiosity until the end of Feb :-)

Regards,
Mauro

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 11:47     ` Mauro Carvalho Chehab
@ 2026-01-23 11:58       ` Greg KH
  2026-01-23 12:24         ` Mauro Carvalho Chehab
  2026-01-23 13:57         ` Konstantin Ryabitsev
  0 siblings, 2 replies; 42+ messages in thread
From: Greg KH @ 2026-01-23 11:58 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, Jan 23, 2026 at 12:47:00PM +0100, Mauro Carvalho Chehab wrote:
> On Fri, 23 Jan 2026 10:29:28 +0100
> Greg KH <gregkh@linuxfoundation.org> wrote:
> 
> > On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> > > Hello Konstantin,
> > > 
> > > On 12/10/25 05:48, Konstantin Ryabitsev wrote:  
> > > > ## Web of Trust work
> > > > 
> > > > There is an ongoing work to replace our home-grown web of trust solution (that
> > > > does work but has important bottlenecks and scaling limitations) with
> > > > something both more distributed and easier to maintain. We're working with
> > > > OpenSSF to design the framework and I hope to present it to the community in
> > > > the next few months.  
> > > 
> > > the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > > 
> > > I wonder what the bottlenecks and scaling limitations are that you mention.
> > > 
> > > Is there some info available already now about the path you (and OpenSSF) intend to propose?  
> > 
> > There will be a presentation about this in February at a conference and
> > hopefully it will be made public then as the work is still ongoing.
> 
> I got curious when I saw something about "First Person credentials"
> at https://lfms26.sched.com/event/2ETT5?iframe=no that 
> "would begin with the Linux Kernel project" - and more importantly
> how and when it would affect my duties. I guess I'd need to
> refrain my curiosity until the end of Feb :-)

Ideally it will not affect anything, just replace the use of gpg however
you use it today for kernel work.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 11:58       ` Greg KH
@ 2026-01-23 12:24         ` Mauro Carvalho Chehab
  2026-01-23 12:29           ` Greg KH
  2026-01-23 13:57         ` Konstantin Ryabitsev
  1 sibling, 1 reply; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2026-01-23 12:24 UTC (permalink / raw)
  To: Greg KH; +Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, 23 Jan 2026 12:58:48 +0100
Greg KH <gregkh@linuxfoundation.org> wrote:

> On Fri, Jan 23, 2026 at 12:47:00PM +0100, Mauro Carvalho Chehab wrote:
> > On Fri, 23 Jan 2026 10:29:28 +0100
> > Greg KH <gregkh@linuxfoundation.org> wrote:
> >   
> > > On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:  
> > > > Hello Konstantin,
> > > > 
> > > > On 12/10/25 05:48, Konstantin Ryabitsev wrote:    
> > > > > ## Web of Trust work
> > > > > 
> > > > > There is an ongoing work to replace our home-grown web of trust solution (that
> > > > > does work but has important bottlenecks and scaling limitations) with
> > > > > something both more distributed and easier to maintain. We're working with
> > > > > OpenSSF to design the framework and I hope to present it to the community in
> > > > > the next few months.    
> > > > 
> > > > the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > > > 
> > > > I wonder what the bottlenecks and scaling limitations are that you mention.
> > > > 
> > > > Is there some info available already now about the path you (and OpenSSF) intend to propose?    
> > > 
> > > There will be a presentation about this in February at a conference and
> > > hopefully it will be made public then as the work is still ongoing.  
> > 
> > I got curious when I saw something about "First Person credentials"
> > at https://lfms26.sched.com/event/2ETT5?iframe=no that 
> > "would begin with the Linux Kernel project" - and more importantly
> > how and when it would affect my duties. I guess I'd need to
> > refrain my curiosity until the end of Feb :-)  
> 
> Ideally it will not affect anything, just replace the use of gpg however
> you use it today for kernel work.

I suspect that, at some point, we'll need to setup our new
credentials somehow - hopefully without needing to be physically 
present on a gpg-like key party. If we can do that using our
existing infra or our current gpg keys, the replacement should be 
easy.


Thanks,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 12:24         ` Mauro Carvalho Chehab
@ 2026-01-23 12:29           ` Greg KH
  0 siblings, 0 replies; 42+ messages in thread
From: Greg KH @ 2026-01-23 12:29 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, Jan 23, 2026 at 01:24:49PM +0100, Mauro Carvalho Chehab wrote:
> On Fri, 23 Jan 2026 12:58:48 +0100
> Greg KH <gregkh@linuxfoundation.org> wrote:
> 
> > On Fri, Jan 23, 2026 at 12:47:00PM +0100, Mauro Carvalho Chehab wrote:
> > > On Fri, 23 Jan 2026 10:29:28 +0100
> > > Greg KH <gregkh@linuxfoundation.org> wrote:
> > >   
> > > > On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:  
> > > > > Hello Konstantin,
> > > > > 
> > > > > On 12/10/25 05:48, Konstantin Ryabitsev wrote:    
> > > > > > ## Web of Trust work
> > > > > > 
> > > > > > There is an ongoing work to replace our home-grown web of trust solution (that
> > > > > > does work but has important bottlenecks and scaling limitations) with
> > > > > > something both more distributed and easier to maintain. We're working with
> > > > > > OpenSSF to design the framework and I hope to present it to the community in
> > > > > > the next few months.    
> > > > > 
> > > > > the current home-grown solution is https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > > > > 
> > > > > I wonder what the bottlenecks and scaling limitations are that you mention.
> > > > > 
> > > > > Is there some info available already now about the path you (and OpenSSF) intend to propose?    
> > > > 
> > > > There will be a presentation about this in February at a conference and
> > > > hopefully it will be made public then as the work is still ongoing.  
> > > 
> > > I got curious when I saw something about "First Person credentials"
> > > at https://lfms26.sched.com/event/2ETT5?iframe=no that 
> > > "would begin with the Linux Kernel project" - and more importantly
> > > how and when it would affect my duties. I guess I'd need to
> > > refrain my curiosity until the end of Feb :-)  
> > 
> > Ideally it will not affect anything, just replace the use of gpg however
> > you use it today for kernel work.
> 
> I suspect that, at some point, we'll need to setup our new
> credentials somehow - hopefully without needing to be physically 
> present on a gpg-like key party. If we can do that using our
> existing infra or our current gpg keys, the replacement should be 
> easy.

Yes, we will have to "recreate" the web-of-trust somehow.  That's part
of their proposal, for how to do that and maintain it over time.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 11:58       ` Greg KH
  2026-01-23 12:24         ` Mauro Carvalho Chehab
@ 2026-01-23 13:57         ` Konstantin Ryabitsev
  1 sibling, 0 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-01-23 13:57 UTC (permalink / raw)
  To: Greg KH; +Cc: Mauro Carvalho Chehab, Uwe Kleine-König, users, ksummit

On Fri, Jan 23, 2026 at 12:58:48PM +0100, Greg KH wrote:
> > I got curious when I saw something about "First Person credentials"
> > at https://lfms26.sched.com/event/2ETT5?iframe=no that 
> > "would begin with the Linux Kernel project" - and more importantly
> > how and when it would affect my duties. I guess I'd need to
> > refrain my curiosity until the end of Feb :-)
> 
> Ideally it will not affect anything, just replace the use of gpg however
> you use it today for kernel work.

Small correction -- "gpg web of trust" specifically, not gpg in general. You
wouldn't be forced to change anything in your current workflow if you're
already using gpg.

-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23  9:29   ` Greg KH
  2026-01-23 11:47     ` Mauro Carvalho Chehab
@ 2026-01-23 16:24     ` James Bottomley
  2026-01-23 16:33       ` Greg KH
  2026-01-23 16:38       ` Konstantin Ryabitsev
  1 sibling, 2 replies; 42+ messages in thread
From: James Bottomley @ 2026-01-23 16:24 UTC (permalink / raw)
  To: Greg KH, Uwe Kleine-König; +Cc: Konstantin Ryabitsev, users, ksummit

On Fri, 2026-01-23 at 10:29 +0100, Greg KH wrote:
> On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> > Hello Konstantin,
> > 
> > On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> > > ## Web of Trust work
> > > 
> > > There is an ongoing work to replace our home-grown web of trust
> > > solution (that does work but has important bottlenecks and
> > > scaling limitations) with something both more distributed and
> > > easier to maintain. We're working with OpenSSF to design the
> > > framework and I hope to present it to the community in the next
> > > few months.
> > 
> > the current home-grown solution is
> > https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > 
> > I wonder what the bottlenecks and scaling limitations are that you
> > mention.
> > 
> > Is there some info available already now about the path you (and
> > OpenSSF) intend to propose?
> 
> There will be a presentation about this in February at a conference
> and hopefully it will be made public then as the work is still
> ongoing.

Could you please stop doing this?  The Open Source norm is to release
early and often and long before you have stable code so you get
feedback incorporated *before* you're committed to something.

You're making it very hard for those of us engaged in open source
advocacy inside various companies because we seem to spend a lot of our
time trying to get our engineers not to drop fully polished projects
into the public view but engage early on prototypes.  It rather
undermines our position if they can point to the Linux Foundation and
say "but they do it so why shouldn't we?".

Regards,

James


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:24     ` James Bottomley
@ 2026-01-23 16:33       ` Greg KH
  2026-01-23 16:42         ` Joe Perches
  2026-01-23 17:23         ` James Bottomley
  2026-01-23 16:38       ` Konstantin Ryabitsev
  1 sibling, 2 replies; 42+ messages in thread
From: Greg KH @ 2026-01-23 16:33 UTC (permalink / raw)
  To: James Bottomley
  Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, Jan 23, 2026 at 11:24:33AM -0500, James Bottomley wrote:
> On Fri, 2026-01-23 at 10:29 +0100, Greg KH wrote:
> > On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> > > Hello Konstantin,
> > > 
> > > On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> > > > ## Web of Trust work
> > > > 
> > > > There is an ongoing work to replace our home-grown web of trust
> > > > solution (that does work but has important bottlenecks and
> > > > scaling limitations) with something both more distributed and
> > > > easier to maintain. We're working with OpenSSF to design the
> > > > framework and I hope to present it to the community in the next
> > > > few months.
> > > 
> > > the current home-grown solution is
> > > https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > > 
> > > I wonder what the bottlenecks and scaling limitations are that you
> > > mention.
> > > 
> > > Is there some info available already now about the path you (and
> > > OpenSSF) intend to propose?
> > 
> > There will be a presentation about this in February at a conference
> > and hopefully it will be made public then as the work is still
> > ongoing.
> 
> Could you please stop doing this?  The Open Source norm is to release
> early and often and long before you have stable code so you get
> feedback incorporated *before* you're committed to something.

I'm not doing anything here, sorry.

> You're making it very hard for those of us engaged in open source
> advocacy inside various companies because we seem to spend a lot of our
> time trying to get our engineers not to drop fully polished projects
> into the public view but engage early on prototypes.  It rather
> undermines our position if they can point to the Linux Foundation and
> say "but they do it so why shouldn't we?".

When there is something that is reviewable, it will be released as a
starting point for everyone to review and comment on, like any other
normal open source project.  It's as if you don't think we know how any
of this works...

Surely you don't want us to be touting a bunch of vaporware at this
point in time, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:24     ` James Bottomley
  2026-01-23 16:33       ` Greg KH
@ 2026-01-23 16:38       ` Konstantin Ryabitsev
  2026-01-23 17:02         ` Paul Moore
  1 sibling, 1 reply; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-01-23 16:38 UTC (permalink / raw)
  To: James Bottomley; +Cc: Greg KH, Uwe Kleine-König, users, ksummit

On Fri, Jan 23, 2026 at 11:24:33AM -0500, James Bottomley wrote:
> > There will be a presentation about this in February at a conference
> > and hopefully it will be made public then as the work is still
> > ongoing.
> 
> Could you please stop doing this?  The Open Source norm is to release
> early and often and long before you have stable code so you get
> feedback incorporated *before* you're committed to something.

I will provide this feedback to them when we meet in a week. It's not the LF
itself who are writing this code, but a bunch of security devs funded by
OpenSSF and they *are* closely working with me and Greg during the initial
iteration to make sure that what they come up with is actually going to be
suitable and well-received by the kernel community (like, don't write it in
nodejs or something).

So, I'd say we're doing it right -- write the initial tool based on the
requirements provided by some key users, then release the 0.1 for broader use
and do iterative development based on feedback.

-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:33       ` Greg KH
@ 2026-01-23 16:42         ` Joe Perches
  2026-01-23 17:00           ` Steven Rostedt
  2026-01-23 17:23         ` James Bottomley
  1 sibling, 1 reply; 42+ messages in thread
From: Joe Perches @ 2026-01-23 16:42 UTC (permalink / raw)
  To: Greg KH, James Bottomley
  Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, 2026-01-23 at 17:33 +0100, Greg KH wrote:
> Surely you don't want us to be touting a bunch of vaporware at this
> point in time, right?

By announcing it before showing it you _are_ touting it.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:42         ` Joe Perches
@ 2026-01-23 17:00           ` Steven Rostedt
  0 siblings, 0 replies; 42+ messages in thread
From: Steven Rostedt @ 2026-01-23 17:00 UTC (permalink / raw)
  To: Joe Perches
  Cc: Greg KH, James Bottomley, Uwe Kleine-König,
	Konstantin Ryabitsev, users, ksummit

On Fri, 23 Jan 2026 08:42:51 -0800
Joe Perches <joe@perches.com> wrote:

> On Fri, 2026-01-23 at 17:33 +0100, Greg KH wrote:
> > Surely you don't want us to be touting a bunch of vaporware at this
> > point in time, right?  
> 
> By announcing it before showing it you _are_ touting it.

Nah, I call this "Conference driven development". Where I submit talking
about something that is vaporware which drives me to develop it before I
give my talk about it.

Then at the talk I learn more about better ways to update the feature and
modify it before it gets in an upstream release.

Although, I don't think there will be a lot of kernel folks at the Member Summit.

-- Steve

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:38       ` Konstantin Ryabitsev
@ 2026-01-23 17:02         ` Paul Moore
  0 siblings, 0 replies; 42+ messages in thread
From: Paul Moore @ 2026-01-23 17:02 UTC (permalink / raw)
  To: Greg KH, Konstantin Ryabitsev
  Cc: James Bottomley, Uwe Kleine-König, users, ksummit

On Fri, Jan 23, 2026 at 11:38 AM Konstantin Ryabitsev <mricon@kernel.org> wrote:
> On Fri, Jan 23, 2026 at 11:24:33AM -0500, James Bottomley wrote:
> > > There will be a presentation about this in February at a conference
> > > and hopefully it will be made public then as the work is still
> > > ongoing.
> >
> > Could you please stop doing this?  The Open Source norm is to release
> > early and often and long before you have stable code so you get
> > feedback incorporated *before* you're committed to something.
>
> I will provide this feedback to them when we meet in a week. It's not the LF
> itself who are writing this code, but a bunch of security devs funded by
> OpenSSF and they *are* closely working with me and Greg during the initial
> iteration to make sure that what they come up with is actually going to be
> suitable and well-received by the kernel community (like, don't write it in
> nodejs or something).
>
> So, I'd say we're doing it right -- write the initial tool based on the
> requirements provided by some key users, then release the 0.1 for broader use
> and do iterative development based on feedback.

Based on the comments above, it sounds like there have been at least
some requirements/design discussions already, were those on a public
list?  Perhaps they were and I simply missed it (always a real
possibility), but based on the other reactions in this thread I don't
believe that is the case.

I don't believe I'm alone when I say that I have a "complicated"
relationship with the LF; a large part of that is due to what I would
call a delayed transparency, of which this seems like it might be a
good example.  If the LF is sponsoring a project/effort that somehow
involves the community, why is the kickoff not public?  Why are other
community members not involved in establishing a list of requirements,
or participating in the design discussions?

-- 
paul-moore.com

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 16:33       ` Greg KH
  2026-01-23 16:42         ` Joe Perches
@ 2026-01-23 17:23         ` James Bottomley
  2026-01-23 18:23           ` Konstantin Ryabitsev
  1 sibling, 1 reply; 42+ messages in thread
From: James Bottomley @ 2026-01-23 17:23 UTC (permalink / raw)
  To: Greg KH; +Cc: Uwe Kleine-König, Konstantin Ryabitsev, users, ksummit

On Fri, 2026-01-23 at 17:33 +0100, Greg KH wrote:
> On Fri, Jan 23, 2026 at 11:24:33AM -0500, James Bottomley wrote:
> > On Fri, 2026-01-23 at 10:29 +0100, Greg KH wrote:
> > > On Fri, Jan 23, 2026 at 10:19:56AM +0100, Uwe Kleine-König wrote:
> > > > Hello Konstantin,
> > > > 
> > > > On 12/10/25 05:48, Konstantin Ryabitsev wrote:
> > > > > ## Web of Trust work
> > > > > 
> > > > > There is an ongoing work to replace our home-grown web of
> > > > > trust solution (that does work but has important bottlenecks
> > > > > and scaling limitations) with something both more distributed
> > > > > and easier to maintain. We're working with OpenSSF to design
> > > > > the framework and I hope to present it to the community in
> > > > > the next few months.
> > > > 
> > > > the current home-grown solution is
> > > > https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/, right?
> > > > 
> > > > I wonder what the bottlenecks and scaling limitations are that
> > > > you mention.
> > > > 
> > > > Is there some info available already now about the path you
> > > > (and OpenSSF) intend to propose?
> > > 
> > > There will be a presentation about this in February at a
> > > conference and hopefully it will be made public then as the work
> > > is still ongoing.
> > 
> > Could you please stop doing this?  The Open Source norm is to
> > release early and often and long before you have stable code so you
> > get feedback incorporated *before* you're committed to something.
> 
> I'm not doing anything here, sorry.

You're listed as a presenter on the session Mauro pointed to.  And
you're the only kernel developer on it, so I was presuming you were
helping them out with kernel requirements.  If that's not true then we
have even more cause to worry that people who don't understand how we
work are coming up with what they consider to be a "solution" without
any consultation.

> > You're making it very hard for those of us engaged in open source
> > advocacy inside various companies because we seem to spend a lot of
> > our time trying to get our engineers not to drop fully polished
> > projects into the public view but engage early on prototypes.  It
> > rather undermines our position if they can point to the Linux
> > Foundation and say "but they do it so why shouldn't we?".
> 
> When there is something that is reviewable, it will be released as a
> starting point for everyone to review and comment on, like any other
> normal open source project.  It's as if you don't think we know how
> any of this works...
> 
> Surely you don't want us to be touting a bunch of vaporware at this
> point in time, right?

There's a fairly reasonable separation between touting vapourware and
discussing requirements.  You're already causing requirements based
questions in the community, like worrying that we're ditching pgp that
Konstantin just answered.  A lot of us have a variety of solutions to
the web of trust problem.  I think you already know I use DNS based
distribution of my keys over DANE and am happy with it, but it's not
available to everyone  because you need to ground your email in a
DNSSEC backed domain to use it (and kernel.org still doesn't use
DNSSEC).  I'd be unhappy if DANE stopped working for the kernel web of
trust simply because no-one thought about it.

Regards,

James


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 17:23         ` James Bottomley
@ 2026-01-23 18:23           ` Konstantin Ryabitsev
  2026-01-23 21:12             ` Uwe Kleine-König
                               ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-01-23 18:23 UTC (permalink / raw)
  To: James Bottomley; +Cc: Greg KH, Uwe Kleine-König, users, ksummit

On Fri, Jan 23, 2026 at 12:23:09PM -0500, James Bottomley wrote:
> > > Could you please stop doing this?  The Open Source norm is to
> > > release early and often and long before you have stable code so you
> > > get feedback incorporated *before* you're committed to something.
> > 
> > I'm not doing anything here, sorry.
> 
> You're listed as a presenter on the session Mauro pointed to.  And
> you're the only kernel developer on it, so I was presuming you were
> helping them out with kernel requirements.

They are primarily working with me, and just so it's clear -- this is not
any kind of assured thing. Here's where things stand:

- they asked us how we currently do our trust framework and I described the
  process and its drawbacks, which are real:

  - I am the bottleneck in the process, because all updates have to go through
    me; even if we add more people to have access, this would still be a
    bottleneck, because the more keys there are in the web of trust, the more
    finagling the whole process requires to deal with expirations, key
    updates, identity updates, etc. We can rely on modern keyservers for some
    of it, but not for third-party signatures, which are key for our
    distributed trust.
  - We can't reasonably expand this to all kernel developers (not just
    maintainers), because of constant churn of people coming, going, taking
    breaks, etc. Maintaining the web of trust consisting of thousands of keys,
    as opposed to hundreds, would become a full-time job if we stick to how
    it's currently done (via the git repo and manual verification on my part
    for all key additions).
  - We're limited to PGP only, but it would be nice to also support something
    like fido2 ssh key signatures.

- they said they could come up with something that would use self-sovereign
  did's that would allow scaling the trust framework to all kernel developers
  and be self-sustaining and verifiable via cross-signatures.

- I said: sure, come up with some code and let's see, as long as the following
  is assured:

  - It's opt-in; anyone who is happy using GnuPG can continue without any
    change
  - We're not forcing a complete rekeying or resigning of all keys
  - There is no central service that must be up and accessible for the tools
    to work
  - It's not written in some esoteric framework that requires curl | bash
    every 2 weeks to get the latest version

- I also made it very clear that the kernel community will have the final say
  in whether this is adopted or not.

This is pretty much where we stand.

-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: kernel.org tooling update
  2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
                   ` (3 preceding siblings ...)
  2026-01-23  9:19 ` Web of Trust work [Was: kernel.org tooling update] Uwe Kleine-König
@ 2026-01-23 18:42 ` Randy Dunlap
  4 siblings, 0 replies; 42+ messages in thread
From: Randy Dunlap @ 2026-01-23 18:42 UTC (permalink / raw)
  To: Konstantin Ryabitsev, users, ksummit

Hi,

On 12/9/25 8:48 PM, Konstantin Ryabitsev wrote:
> Hi, all:
> 
> These are the topics that were touched on at the maintainer summit when
> discussing tooling on the kernel.org side of things.
> 
> --
> 
> # What is the state of tooling?
> 
> ## b4 development update
> 
> Past year:
> 
> - No major new features in b4 over the past year, excepting `b4 dig`
> - Seeing lots of adoption and use across subsystem, with a lot of maintainers
>   recommending b4 as the preferred mechanism to submit patches
> - Starting to see adoption by several non-kernel projects (openembedded, u-boot, others)
> - Significant behind-the-scenes move of codebase to stricter typed code
> - Continued work on `b4 review` that got shelved temporarily for other
>   priorities.
> 
> ### LETS PUT MOAR AI INTO IT!!1
> 
> I spent a lot of time on trying to integrate AI into b4 workflows, but with
> little to show for it in the end due to lackluster results.
> 
> - Used local ollama as opposed to proprietary services, with the goal to avoid
>   introducing hard dependencies on third-party commercial tooling. This is
>   probably the main reason why my results were not so exciting as what others
>   see with much more powerful models.
> 
> - Focused on thread/series summarization features as opposed to code analysis:
> 
>     - Summarize follow-ups (trailers, acks/nacks received), though this is
>       already fairly well-handed with non-AI tooling.
> 
>     - Gauge "temperature" of the discussion to highlight controversial series.
> 
>     - Gauge quality of the submission; help decide "is this series worth
>       looking at" before maintainers spend their effort looking at it, using
>       maintainer-tailored prompts. This may be better done via CI/patchwork
>       integration, than with b4.
> 
>     - Use LLM to prepare a merge commit message using the cover letter and
>       summarizing the patches.
> 
> I did not end up releasing any features based on that work, because:
> 
>     - LLM was not fantastic at following discussions and keeping a clear
>       picture of who said what, which is kind of crucial for maintainer
>       decision making.
> 
>     - Very large series and huge threads run out fo context window, which
>       causes the LLM to get even worse at "who said what" (and it's
>       already not that great at it).
> 
>     - Thread analysis requires lots of VRAM and a modern graphics card, and is
>       still fairly slow there (I used a fairly powerful GeForce RTX).
> 
>     - Actual code review is best if it happens post-apply in a temporary
>       workdir or a temporary branch, so the agent can see the change in the
>       context of the git tree and the entire codebase, not just the context
>       lines of the patch itself.
> 
> I did have much better success when I worked to represent a thread not as
> multiple messages, but as a single document with all interleaved follow-up
> conversations collated together. However, this was done manually --
> representing emails from arbitrary threads as such collated documents is a
> separate challenge.
> 
> Using proprietary models and remote services will probably show better
> results, but I did not have the funds or the inkling to do it (plus see the
> concern for third-party commercial tooling). I may need to collaborate more
> closely with the maintainers already doing it on their own instead of
> continuing my separate work on it.
> 
> ### AI crawler scourge
> 
> While working on LLM integration, it was certainly ironic that one of the
> top challenges for us was to try to keep AI crawlers from overwhelming
> kernel.org infrastructure. While we've put several mitigations in place, it's
> a temporary relief at best.
> 
> ## Continuous degradation of SMTP
> 
> We're increasingly having to deal with the degradation of the SMTP support by
> all commercial companies:
> 
>     - major hosts are increasingly not interested in getting mail from anyone
>       who isn't also a major mail service provider
> 
>     - their "bulk sender" guidelines are no good for us (e.g. requiring that
>       we add one-click unsubscribe footers to all email)
> 
>     - their "spam filters" are increasingly based on training data, which
>       means that "looks different from what most of our users receive" is
>       enough to have patches and code discussions put into the "Junk" folder
> 
>     - they apply arbitrary throttling ("too many deliveries for the same
>       message-id", "too many messages from the DKIM domain foobar.com")
> 
>     - anti-phishing services at commercial IT companies do horrible things to
>       incoming messages
> 
> ## Are we finally moving away from patches sent over email?
> 
> There are still important concerns when we consider moving away from "patches
> sent via email":
> 
>     - SMTP is still the only widely used protocol we have for decentralized
>       communication; everything else is experimental or has important
>       drawbacks, such as:
> 
>         - it relies on single-point-of-failure services (e.g. Signal), or
>         - it requires standing up esoteric software (which then become
>           single-point-of-failure services), or
>         - it requires an "everyone-must-switch-now" flag day
> 
>     - RFC-5322, with all its warts, is a well-defined standard for internet
>       messages:
> 
>         - robust, capable of dealing with change while preserving legacy
>         - easy to parse with libraries for almost any framework
>         - easy to archive and query
>         - has lots of tooling built around it
> 
> With lore and public-inbox, we *are* in the process of moving away from
> relying on the increasingly unreliable SMTP layer. Lore can already let you do
> the following things:
> 
>     - lets anyone submit patches via the web endpoint
>     - lets anyone subscribe to lists via several protocols (NNTP, POP, IMAP)
>     - lets anyone use lei to receive arbitrary feeds
>     - can aggregate any number of sources, as long as they are RFC-5322
>       messages (or can be converted to them)
> 
> Lore and public-inbox is becoming a kind of a distributed, replicating
> messaging bus with a robust query and retrieval interface on top of it, and I
> believe it's a fairly powerful framework we can build upon.
> 
> ## Work on "local lore"
> 
> One downside of lore.kernel.org is that it's a central service, which runs
> counter to our goal of limiting how many single points of failure we have.
> There is fault-tolerance built into the system (lore.kernel.org is actually 4
> different nodes in various parts of the world), but an adversary would have no
> difficulty knocking out all nodes at once, which would impact the project
> significantly.
> 
> The "local lore" projects it the attempt to provide a kind of "maintainer
> container" that can be run locally or in any public cloud:
> 
>     - comes with a 6-month constantly-updating mirror of lore, using a
>       failover set of replication URLs (including tor/onion)
>     - comes with a pre-configured mirror of git repositories that are kept
>       up-to-date in the same fashion
>     - lets the maintainer set up lei queries that can push into their
>       inbox, supporting Gmail+OAuth, JMAP, IMAP
>     - provides a web submission endpoint and an SMTP service that can
>       integrate with other SMTP relays
>     - publishes a public-inbox feed of maintainer activity that central
>       lore can pick up and integrate
> 
> There is a parallel goal here, which is to make it easier for devs to assume
> maintainer duties without having to spend a week setting up their tooling.
> In theory, all they would be need to do is to set up their maintainer
> container and then use the web menu to choose which feeds they want to pull
> and where they want messages delivered.
> 
> This project is still early in development, but I hope to be able to provide
> test containers soon that people can set up and run.
> 
> ## Other tools
> 
> ### Bugzilla
> 
> It may be time to kill bugzilla:
> 
>     - despite periodic "we're not dead yet" emails, it doesn't appear very
>       active
>     - the upgrade path to 6.0 is broken for us due to bugzilla abandoning the
>       5.2 development branch and continuing with 5.1
>     - question remains with what to replace bugzilla, but it's a longer
>       discussion topic that I don't want to raise here; it may be a job for
>       the bugspray bot that can extend the two-way bridge functionality to
>       multiple bug tracker frameworks
> 
> ### Patchwork
> 
> Patchwork continues to be used widely:
> 
>     - we've introduced query-based patchworks, where instead of consuming the
>       entire mailing list, we feed it the results of lei queries
>     - I'm hoping to work with upstream to add a couple of features that would
>       be of benefit to us, such as:
> 
>         - support for annotating patches and series (e.g. with LLM summaries)
>         - an API endpoint to submit patches, so maintainers could add
>           arbitrary series to their patchwork project, integrating with b4
> 
> ## Web of Trust work
> 
> There is an ongoing work to replace our home-grown web of trust solution (that
> does work but has important bottlenecks and scaling limitations) with
> something both more distributed and easier to maintain. We're working with
> OpenSSF to design the framework and I hope to present it to the community in
> the next few months.
> 
> ## Questions?
> 
> Send away!

Was any of this discussed at the December summits?
Were there any decisions or conclusions?
Are they summarized and shared somewhere?

Thanks.
-- 
~Randy


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 18:23           ` Konstantin Ryabitsev
@ 2026-01-23 21:12             ` Uwe Kleine-König
  2026-01-26 16:23               ` Konstantin Ryabitsev
  2026-01-23 21:38             ` James Bottomley
  2026-01-23 22:55             ` Mauro Carvalho Chehab
  2 siblings, 1 reply; 42+ messages in thread
From: Uwe Kleine-König @ 2026-01-23 21:12 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: Greg KH, users, ksummit


[-- Attachment #1.1: Type: text/plain, Size: 2806 bytes --]

Hello Konstantin,

On 1/23/26 19:23, Konstantin Ryabitsev wrote:
> They are primarily working with me, and just so it's clear -- this is not
> any kind of assured thing. Here's where things stand:
> 
> - they asked us how we currently do our trust framework and I described the
>   process and its drawbacks, which are real:
> 
>   - I am the bottleneck in the process, because all updates have to go through
>     me; even if we add more people to have access, this would still be a
>     bottleneck, because the more keys there are in the web of trust, the more
>     finagling the whole process requires to deal with expirations, key
>     updates, identity updates, etc. We can rely on modern keyservers for some
>     of it, but not for third-party signatures, which are key for our
>     distributed trust.

Just to ensure we're talking about the same thing: This is about calling
a script once a week or so, check the resulting diff, commit and push,
right?

>   - We can't reasonably expand this to all kernel developers (not just
>     maintainers), because of constant churn of people coming, going, taking
>     breaks, etc. Maintaining the web of trust consisting of thousands of keys,
>     as opposed to hundreds, would become a full-time job if we stick to how
>     it's currently done (via the git repo and manual verification on my part
>     for all key additions).
>   - We're limited to PGP only, but it would be nice to also support something
>     like fido2 ssh key signatures.

I personally am happy with PGP and I don't see the benefit of using ssh
keys instead. But I'm open to look at the approach that we will see in
February.

> - they said they could come up with something that would use self-sovereign
>   did's that would allow scaling the trust framework to all kernel developers
>   and be self-sustaining and verifiable via cross-signatures.

123456789012345678901234567890123456789012345678901234567890123456789012

(Maybe apart from self-sustaining) this sounds like PGP. I consider it
self-sovereign as it's only me who has control over my certificate and
cross-signatures work fine, too. I agree that using GnuPG isn't nice for
newcomers and people only using it occasionally. But it is able to do
all the things we need from it, it has integration in git and mail (and
also ssh if you want) and I'd hesitate to throw that all over board for
something shiny new. I wonder if a new tool that covers all the needed
use-cases can be considerably simpler than PGP. And if that new tool
allows to let me continue using my PGP certificate, the complexity
cannot be less than PGP alone.

Having said that, I'd like to support you in the maintenance of the
pgpkeyring if this is considered helpful.

Best regards
Uwe

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 18:23           ` Konstantin Ryabitsev
  2026-01-23 21:12             ` Uwe Kleine-König
@ 2026-01-23 21:38             ` James Bottomley
  2026-01-23 22:55             ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 42+ messages in thread
From: James Bottomley @ 2026-01-23 21:38 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: Greg KH, Uwe Kleine-König, users, ksummit

On Fri, 2026-01-23 at 13:23 -0500, Konstantin Ryabitsev wrote[...]
>   - We're limited to PGP only, but it would be nice to also support
> something like fido2 ssh key signatures.

Just trying to understand what you mean here: the FIDO2 ssh
implementation is really nothing more than a key that provides a
signature created by the token.  In fact FIDO2 keys are pretty similar
to TPM keys in that they can either be token resident or stored as
files (which are wrapped so only the token can decrypt them) and loaded
into the token for signature.  Unlike a TPM, FIDO 2 is a bit more
algorithm poor (most only support P256 although some of the later
devices do 25519) but the elliptic curve algorithms they do support are
sufficient for gpg to use them.  The huge downside of FIDO2 is that
unlike a TPM it can't import keys, so this means every key would be
newly created.  However, it could still be used by gpg for newly
created signing and encryption subkeys (you'd have to keep your master
key as a keyfile unless you want to create a new master key).

I do know how to plumb this into gpg, because it would be the same
places at TPM support went.  However, realistically, without the
ability to import existing keys, it would provide a less easy (and
likely less secure, given you need your master key to sign other keys)
experience than just using the existing gpg TPM2 support, so why not
simply use that?

Regards,

James


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 18:23           ` Konstantin Ryabitsev
  2026-01-23 21:12             ` Uwe Kleine-König
  2026-01-23 21:38             ` James Bottomley
@ 2026-01-23 22:55             ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2026-01-23 22:55 UTC (permalink / raw)
  To: Konstantin Ryabitsev
  Cc: James Bottomley, Greg KH, Uwe Kleine-König, users, ksummit

On Fri, 23 Jan 2026 13:23:58 -0500
Konstantin Ryabitsev <mricon@kernel.org> wrote:

> - I said: sure, come up with some code and let's see, as long as the following
>   is assured:
> 
>   - It's opt-in; anyone who is happy using GnuPG can continue without any
>     change

This insurance is enough for me, provided that I can still revoke my
current keys and create new ones whenever needed. For this to keep
working for the ones that don't opt-in, it should still be possible
to update the existing GPG keychain and having gpg key parties from
time to time.

However, it actually means more work for the ones maintaining the
infra, as you'll still need to maintain the current web of trust - at 
least for the current users on it - and then maintain the new solution.

-

From my side, I don't intend to opt-in to a new solution until I trust
it enough - and even after opting in - I'll continue using my GPG key
as a backup plan.


Thanks,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-23 21:12             ` Uwe Kleine-König
@ 2026-01-26 16:23               ` Konstantin Ryabitsev
  2026-01-26 17:32                 ` Uwe Kleine-König
  2026-01-26 23:06                 ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-01-26 16:23 UTC (permalink / raw)
  To: Uwe Kleine-König; +Cc: Greg KH, users, ksummit

On Fri, Jan 23, 2026 at 10:12:39PM +0100, Uwe Kleine-König wrote:
> >   - I am the bottleneck in the process, because all updates have to go through
> >     me; even if we add more people to have access, this would still be a
> >     bottleneck, because the more keys there are in the web of trust, the more
> >     finagling the whole process requires to deal with expirations, key
> >     updates, identity updates, etc. We can rely on modern keyservers for some
> >     of it, but not for third-party signatures, which are key for our
> >     distributed trust.
> 
> Just to ensure we're talking about the same thing: This is about calling
> a script once a week or so, check the resulting diff, commit and push,
> right?

This is for updates, yes, and this is mostly hands-off except final review.
Adding new keys is usually a lot more involved, because there's frequently a
back-and-forth required (they sent a key without any signatures, there is not
enough signatures, the signatures are too far removed from Linus, etc). We
currently have about 600 keys in the keyring we maintain, and we clearly can
do a much better job like being more proactive when someone's expiry date is
approaching. I'm worried that if we tried to maintain a keyring for several
thousand people as opposed to several hundred, this would snowball into an
unmaintainable mess.

> I personally am happy with PGP and I don't see the benefit of using ssh
> keys instead. But I'm open to look at the approach that we will see in
> February.

Supporting ssh keys (along with minisign keys) is a Frequently Requested
Feature (TM) -- not so much among kernel users, but among several other
projects that use non-forge workflows.

PGP and its tools (GnuPG, primarily) are seen as extremely unfriendly, arcane,
and prone to breaking. This is largely a perception problem, I agree, and it's
not helped by efforts like gpg.fail -- I appreciate the work the researchers
have put into it, but I hated the presentation for its "lol pgp" vibe.

> (Maybe apart from self-sustaining) this sounds like PGP. I consider it
> self-sovereign as it's only me who has control over my certificate and
> cross-signatures work fine, too. I agree that using GnuPG isn't nice for
> newcomers and people only using it occasionally. But it is able to do
> all the things we need from it, it has integration in git and mail (and
> also ssh if you want) and I'd hesitate to throw that all over board for
> something shiny new.

For the record, we're not. I don't see a (near) future where PGP will stop
being our recommended attestation mechanism. However, this doesn't stop us
from looking at alternatives, and this effort is exactly what it is -- looking
at alternatives. A group of security researchers are saying they can do a
better job with decentralized trust management and I am happy to let them try
and evaluate the results.

> Having said that, I'd like to support you in the maintenance of the
> pgpkeyring if this is considered helpful.

I do appreciate your work!

Thanks,
-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 16:23               ` Konstantin Ryabitsev
@ 2026-01-26 17:32                 ` Uwe Kleine-König
  2026-01-26 21:01                   ` Konstantin Ryabitsev
                                     ` (2 more replies)
  2026-01-26 23:06                 ` Mauro Carvalho Chehab
  1 sibling, 3 replies; 42+ messages in thread
From: Uwe Kleine-König @ 2026-01-26 17:32 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: Greg KH, users, ksummit

[-- Attachment #1: Type: text/plain, Size: 2464 bytes --]

Hello Konstantin,

On Mon, Jan 26, 2026 at 11:23:43AM -0500, Konstantin Ryabitsev wrote:
> On Fri, Jan 23, 2026 at 10:12:39PM +0100, Uwe Kleine-König wrote:
> > >   - I am the bottleneck in the process, because all updates have to go through
> > >     me; even if we add more people to have access, this would still be a
> > >     bottleneck, because the more keys there are in the web of trust, the more
> > >     finagling the whole process requires to deal with expirations, key
> > >     updates, identity updates, etc. We can rely on modern keyservers for some
> > >     of it, but not for third-party signatures, which are key for our
> > >     distributed trust.
> > 
> > Just to ensure we're talking about the same thing: This is about calling
> > a script once a week or so, check the resulting diff, commit and push,
> > right?
> 
> This is for updates, yes, and this is mostly hands-off except final review.
> Adding new keys is usually a lot more involved, because there's frequently a
> back-and-forth required (they sent a key without any signatures, there is not
> enough signatures, the signatures are too far removed from Linus, etc). We
> currently have about 600 keys in the keyring we maintain, and we clearly can
> do a much better job like being more proactive when someone's expiry date is
> approaching. I'm worried that if we tried to maintain a keyring for several
> thousand people as opposed to several hundred, this would snowball into an
> unmaintainable mess.

Actually I'd like to see you/us add still more burden and asking
developers to only hand in keys with an expiry date <= (say) 3 years.
Something similar to what
https://www.gentoo.org/glep/glep-0063.html#bare-minimum-requirements
requests.

I suspect that among the 600 keys we have now, a considerable amount is
actually unused and it would be good for security to drop these. With an
expiry date detecting such keys would be much simpler.

I wonder why you expect the number of keys to rise considerably?!

> > Having said that, I'd like to support you in the maintenance of the
> > pgpkeyring if this is considered helpful.
> 
> I do appreciate your work!

Areas that I see where I could be helpful are:

 - moderating the keys ML
 - giving feedback to patches
   (currently I mostly see the patches when they are already handled
   because you seem to do moderation and patch handling in batches.)

Best regards
Uwe

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 17:32                 ` Uwe Kleine-König
@ 2026-01-26 21:01                   ` Konstantin Ryabitsev
  2026-01-26 23:23                   ` James Bottomley
  2026-01-26 23:33                   ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-01-26 21:01 UTC (permalink / raw)
  To: Uwe Kleine-König; +Cc: Greg KH, users, ksummit

On Mon, Jan 26, 2026 at 06:32:22PM +0100, Uwe Kleine-König wrote:
> Actually I'd like to see you/us add still more burden and asking
> developers to only hand in keys with an expiry date <= (say) 3 years.

That would mean me too, eh? :)

I don't want to make this decision unilaterally, so I will bring it up on the
users list.

> I suspect that among the 600 keys we have now, a considerable amount is
> actually unused and it would be good for security to drop these. With an
> expiry date detecting such keys would be much simpler.
> 
> I wonder why you expect the number of keys to rise considerably?!

That's only if we ever consider expanding the service to everyone sending
patches. It's not tenable with the current "must have a signature within 4
hops from Linus" requirement, but we could also have a special "lax" mode
where we only require an email roundtrip for verification. The b4 web frontend
is about to start publishing a keyring like that.

> > I do appreciate your work!
> 
> Areas that I see where I could be helpful are:
> 
>  - moderating the keys ML

Yes, I don't see why not. The mailing list server is in the final stages of
pre-migration work to RHEL10, so I'm limiting changes to it at the moment, but
I'll be happy to add you to moderators/gatekeepers once the migration is over.

>  - giving feedback to patches
>    (currently I mostly see the patches when they are already handled
>    because you seem to do moderation and patch handling in batches.)

Yes, I have a weekly task in my todo to review on Fridays, but sometimes I
snooze it to Mondays instead. :)

-K


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 16:23               ` Konstantin Ryabitsev
  2026-01-26 17:32                 ` Uwe Kleine-König
@ 2026-01-26 23:06                 ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2026-01-26 23:06 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: Uwe Kleine-König, Greg KH, users, ksummit

Hi Konstantin,

On Mon, 26 Jan 2026 11:23:43 -0500
Konstantin Ryabitsev <mricon@kernel.org> wrote:

> On Fri, Jan 23, 2026 at 10:12:39PM +0100, Uwe Kleine-König wrote:
> > >   - I am the bottleneck in the process, because all updates have to go through
> > >     me; even if we add more people to have access, this would still be a
> > >     bottleneck, because the more keys there are in the web of trust, the more
> > >     finagling the whole process requires to deal with expirations, key
> > >     updates, identity updates, etc. We can rely on modern keyservers for some
> > >     of it, but not for third-party signatures, which are key for our
> > >     distributed trust.  
> > 
> > Just to ensure we're talking about the same thing: This is about calling
> > a script once a week or so, check the resulting diff, commit and push,
> > right?  
> 
> This is for updates, yes, and this is mostly hands-off except final review.
> Adding new keys is usually a lot more involved, because there's frequently a
> back-and-forth required (they sent a key without any signatures, there is not
> enough signatures, the signatures are too far removed from Linus, etc). We
> currently have about 600 keys in the keyring we maintain, and we clearly can
> do a much better job like being more proactive when someone's expiry date is
> approaching. I'm worried that if we tried to maintain a keyring for several
> thousand people as opposed to several hundred, this would snowball into an
> unmaintainable mess.
> 
> > I personally am happy with PGP and I don't see the benefit of using ssh
> > keys instead. But I'm open to look at the approach that we will see in
> > February.  
> 
> Supporting ssh keys (along with minisign keys) is a Frequently Requested
> Feature (TM) -- not so much among kernel users, but among several other
> projects that use non-forge workflows.

Replacing PGP with ssh keys to push stuff at kernel.org is
welcomed, together with any mechanism to ensure the web of trust
for ssh keys, but see, the Web of Trust PGP keys are also used when we
sign git tags before asking Linus to merge:

	https://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media.git/tag/?h=media/v6.19-3

And also when we sign tags at the userspace tools we maintain. Any
alternative Web of Trust mechanism shall keep us allowing to sign
git tags with the same trust level.

At least up to git version 2.52.0, PGP is the only allowed
mechanism to sign tags.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 17:32                 ` Uwe Kleine-König
  2026-01-26 21:01                   ` Konstantin Ryabitsev
@ 2026-01-26 23:23                   ` James Bottomley
  2026-01-27  8:39                     ` Uwe Kleine-König
  2026-01-26 23:33                   ` Mauro Carvalho Chehab
  2 siblings, 1 reply; 42+ messages in thread
From: James Bottomley @ 2026-01-26 23:23 UTC (permalink / raw)
  To: Uwe Kleine-König, Konstantin Ryabitsev; +Cc: Greg KH, users, ksummit

[-- Attachment #1: Type: text/plain, Size: 572 bytes --]

On Mon, 2026-01-26 at 18:32 +0100, Uwe Kleine-König wrote:
> Actually I'd like to see you/us add still more burden and asking
> developers to only hand in keys with an expiry date <= (say) 3 years.
> Something similar to what
> https://www.gentoo.org/glep/glep-0063.html#bare-minimum-requirements
> requests.

You have seen Linus' views on gpg key expiry, right?

https://lore.kernel.org/linux-scsi/CAHk-=wi03SZ4Yn9FRRsxnMv1ED5Qw25Bk9-+ofZVMYEDarHtHQ@mail.gmail.com/

As a result of that I've taken to using much longer expiry periods.

Regards,

James


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 265 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 17:32                 ` Uwe Kleine-König
  2026-01-26 21:01                   ` Konstantin Ryabitsev
  2026-01-26 23:23                   ` James Bottomley
@ 2026-01-26 23:33                   ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 42+ messages in thread
From: Mauro Carvalho Chehab @ 2026-01-26 23:33 UTC (permalink / raw)
  To: Uwe Kleine-König; +Cc: Konstantin Ryabitsev, Greg KH, users, ksummit

On Mon, 26 Jan 2026 18:32:22 +0100
Uwe Kleine-König <ukleinek@kernel.org> wrote:

> > > Just to ensure we're talking about the same thing: This is about calling
> > > a script once a week or so, check the resulting diff, commit and push,
> > > right?  
> > 
> > This is for updates, yes, and this is mostly hands-off except final review.
> > Adding new keys is usually a lot more involved, because there's frequently a
> > back-and-forth required (they sent a key without any signatures, there is not
> > enough signatures, the signatures are too far removed from Linus, etc). We
> > currently have about 600 keys in the keyring we maintain, and we clearly can
> > do a much better job like being more proactive when someone's expiry date is
> > approaching. I'm worried that if we tried to maintain a keyring for several
> > thousand people as opposed to several hundred, this would snowball into an
> > unmaintainable mess.  
> 
> Actually I'd like to see you/us add still more burden and asking
> developers to only hand in keys with an expiry date <= (say) 3 years.
> Something similar to what

I would love to replace my main PGP key with a new one using a strong
post-quantum algorithm[1], and then using revocable sub-keys with a
small expiry periods (3 to 5 years), but there are some technical and
logistical issues [2]:

- gpg 2.4 doesn't seem to support to support it;
- "updating to 2.5 would result in new users generating incompatible 
  LibrePGP keys" (from LWN.net post at [2]);
- a change like that would require to restore the web of trust,
  asking people to resign your certs. Not hard to do on a
  conference, but doing it remotely, the right way, is not trivial.

So, I guess we need to wait for a couple of extra gpg versions
(or alternatives) to do it at the best moment - while keeping
our old keychain in place as a fallback.

[1] Replacing with traditional crypto algorithms is probably not
    worth, as, quantum computers are becoming a reality soon.

[2] https://lwn.net/Articles/1055053/

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-26 23:23                   ` James Bottomley
@ 2026-01-27  8:39                     ` Uwe Kleine-König
  2026-01-27 21:08                       ` Linus Torvalds
  0 siblings, 1 reply; 42+ messages in thread
From: Uwe Kleine-König @ 2026-01-27  8:39 UTC (permalink / raw)
  To: James Bottomley
  Cc: Konstantin Ryabitsev, Greg KH, users, ksummit, Linus Torvalds

[-- Attachment #1: Type: text/plain, Size: 1984 bytes --]

Hello James,

On Mon, Jan 26, 2026 at 06:23:08PM -0500, James Bottomley wrote:
> On Mon, 2026-01-26 at 18:32 +0100, Uwe Kleine-König wrote:
> > Actually I'd like to see you/us add still more burden and asking
> > developers to only hand in keys with an expiry date <= (say) 3 years.
> > Something similar to what
> > https://www.gentoo.org/glep/glep-0063.html#bare-minimum-requirements
> > requests.
> 
> You have seen Linus' views on gpg key expiry, right?
> 
> https://lore.kernel.org/linux-scsi/CAHk-=wi03SZ4Yn9FRRsxnMv1ED5Qw25Bk9-+ofZVMYEDarHtHQ@mail.gmail.com/

Thanks for the link. I was aware that Linus isn't a big fan of PGP and
GnuPG. Still I think that having an expiration for your PGP certificates
is a very sensible thing. All at least halfway sensible howtos about PGP
handling that I saw in the past strongly recommend to set an expiry
date. (e.g.

	https://riseup.net/en/security/message-security/openpgp/gpg-best-practices#use-an-expiration-date-less-than-two-years

which isn't up to date in every corner any more, but the section about
expiry is still accurate.
According to https://book.sequoia-pgp.org/sq_key_generation.html, the
certificates generated using sq default to a 3 year expiry.)

Yes, I agree it's inconvenient, but updating is a usual necessity for
secure systems. SSL certificates have an expiry; letsencrypt will soon
limit expiries to 45 days. We regularly preach that people should update
their kernel and roll our eyes about hardware running Linux 5.15.153
(that's my DOCSIS router) or 2.6.26.8 (that's my wifi radio).

Security is a moving target; and if you don't move with it, your
security level drops over time.

Looking at the thread you referenced above, I think Linus would have
been happy if he had your updated key in time. So I only see this as a
challenge to improve the keyring maintenance.

> As a result of that I've taken to using much longer expiry periods.

:-(

Best regards
Uwe

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-27  8:39                     ` Uwe Kleine-König
@ 2026-01-27 21:08                       ` Linus Torvalds
  2026-02-04 10:49                         ` Uwe Kleine-König
  0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2026-01-27 21:08 UTC (permalink / raw)
  To: Uwe Kleine-König
  Cc: James Bottomley, Konstantin Ryabitsev, Greg KH, users, ksummit

On Tue, 27 Jan 2026 at 00:39, Uwe Kleine-König <ukleinek@kernel.org> wrote:
>
> Thanks for the link. I was aware that Linus isn't a big fan of PGP and
> GnuPG. Still I think that having an expiration for your PGP certificates
> is a very sensible thing.

I have never ever seen any good reason for automatic expiration, and
it causes actual real problems because *NOBODY* ever renews those
expiration in time and makes sure that they actually percolate out.

We literally had that happen just last week, and that was with a
person that is supposed to be an *expert* in those things, and that
uses fancy DNS key distribution etc.

So no. No expiration dates. They are stupid and do not work in
practice. End of story.

They are ALSO stupid because they make old signatures *look*
untrusted. Just go and do

    git log --show-signature @{15.years.ago}

and look for 'expired'. It's all just sad and pointless, . What
matters was whether that key was trusted AT THAT POINT IN TIME, not
whether it's trusted now. But that's not how things work.

And here is why they are completely pointless: a key that is no longer
trusted should be *REVOKED*.

And no, I'm not talking about the (bad) support that PGP itself has,
which requires a revocation key that nobody ever actually has.

Sure, if you have a revocation key, by all means use it, but I doubt
it has ever been used in any form in reality except for testing.

So when I say "revoke it", I'm talking about just letting people know
that a key is no longer trustworthy, and then they should remove it
from their keychain.

(And no, you shouldn't randomly and automatically add keys from people
just because of some "I can reach it with a web of trust", so your
keychain shouldn't be in a situation where some old untrusted key
randomly then gets added back)

Because once the key is no longer trustworthy, some "it will expire in
two years" is COMPLETE AND UTTER GARBAGE.

WTF? You'd have to be completely insane to think that is an acceptable
or sensible in *ANY* form. It's too stupid for words, I don't
understand how anybody can even entertain that kind of complete
bullshit.

So stop with the idiotic key expiration garbage. It's completely
unacceptable because it doesn't work in practice, and IT IS INCREDIBLY
STUPID TO BEGIN WITH.

In practice, the only thing it results in is that when people lose
their private keys, they eventually expire, but why should anybody
care about that? If the key is lost, it's become *more* secure, for
chrissake.

Any web of trust that actively encourages idiocy is not a web of trust
I want to have anything to do with.

Yes, this is a pet peeve of mine. PGP is UX a disaster to begin with,
the key distribution sucks, and expiry dates just make everything
worse.

             Linus

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-01-27 21:08                       ` Linus Torvalds
@ 2026-02-04 10:49                         ` Uwe Kleine-König
  2026-02-05 10:14                           ` James Bottomley
  0 siblings, 1 reply; 42+ messages in thread
From: Uwe Kleine-König @ 2026-02-04 10:49 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: James Bottomley, Konstantin Ryabitsev, Greg KH, users, ksummit,
	Neal H. Walfield

[-- Attachment #1: Type: text/plain, Size: 6026 bytes --]

Hello Linus,

I had valuable input for writing this mail by Neal Walfield, so the
things expressed here are a combination of our thoughts.

On Tue, Jan 27, 2026 at 01:08:12PM -0800, Linus Torvalds wrote:
> On Tue, 27 Jan 2026 at 00:39, Uwe Kleine-König <ukleinek@kernel.org> wrote:
> >
> > Thanks for the link. I was aware that Linus isn't a big fan of PGP and
> > GnuPG. Still I think that having an expiration for your PGP certificates
> > is a very sensible thing.
>
> I have never ever seen any good reason for automatic expiration, and
> it causes actual real problems because *NOBODY* ever renews those
> expiration in time and makes sure that they actually percolate out.

A good reason is that it forces the users of your certificate to
participate in the percolation of your cert. This is relevant to make
updates to the key (new or revoked UIDs and subkeys) known. For that an
expiry time of 2 years is even quite long.

> We literally had that happen just last week, and that was with a
> person that is supposed to be an *expert* in those things, and that
> uses fancy DNS key distribution etc.

Of course this all breaks if the owner of the certificate doesn't work
on extending the expiry date in time. Partly this is a tooling problem.
The tools should warn users that their certificates are going to expire.
Neal already picked up that suggestion for Sequoia:
https://gitlab.com/sequoia-pgp/sequoia-sq/-/issues/622

Also for maintaining a keyring for a group of people (e.g. kernel
developers who have write access to kernel.org archives) an extension of
an expiry date is an easy indicator for the person still being active.
So an expiry date on the PGP certificate is a good dead man's switch for
people going slowly MIA because life gets in the way. Dropping access to
the project's infrastructure for people gone missing is a good security
measure.
I intend to keep an eye on the kernel pgpkeys repo and act as a reminder
for people that already have an expiry date on their key. I already
started (even before your mail) and it seems to work, e.g.
https://lore.kernel.org/keys/aYIQdlikYqHwps3I@do-x1carbon/T/#m5285386968f9c4b9cbeab3ebca83e39344ff2b29
https://lore.kernel.org/keys/hn4exg4aukkf6oc4gfe3v2dx6kzz5tgg52gtdcmlfeq3yqdode@z5xfwu5n4osc/T/#m6733201ded6f74b4a251d02f1330d71b26fff8be

> So no. No expiration dates. They are stupid and do not work in
> practice. End of story.

This is a poor argument. Such a failure doesn't necessarily mean that
the concept of expiry dates is wrong. I think in this case it's the user
holding the tool wrong (here: failure to add a reminder and act on it)
mostly because the tool makes it harder than necessary to be held
correctly (see above). In the same way a regression in Linux between say
6.17.4 and 6.17.5 shouldn't make people stop updating to later stable
kernels. Yes, this is annoying and the responsible key owners and stable
maintainers should work hard on preventing something like that
happening. But that doesn't mean expiry dates and stable updates are
wrong.

> They are ALSO stupid because they make old signatures *look*
> untrusted. Just go and do
>
>     git log --show-signature @{15.years.ago}
>
> and look for 'expired'. It's all just sad and pointless, . What
> matters was whether that key was trusted AT THAT POINT IN TIME, not
> whether it's trusted now. But that's not how things work.

This is also a tooling problem and I agree that a signature that was
created while the key was still fresh shouldn't appear in red here.

Looking at the signature stored in commit
756f80cee766574ae282baa97fdcf9cc which was made by a key that is expired
today and verifying it by hand with GnuPG gives:

	$ gpg --verify sig input
	gpg: Signature made Wed 26 Nov 2014 05:56:50 AM CET
	gpg:                using RSA key FE3958F9067BC667
	gpg: Good signature from "Jason Cooper <jason@lakedaemon.net>" [expired]
	gpg: Note: This key has expired!
	      D783920D6D4F0C06AA4C25F3FE3958F9067BC667
	$ echo $?
	0

(If you want to reproduce:
	git cat-file commit 756f80cee766574ae282baa97fdcf9cc | sed -n '/BEGIN PGP/,/END PGP/ { s/^ //p }' > sig
	git cat-file commit 756f80cee766574ae282baa97fdcf9cc | sed -n '/^mergetag/,/Remove/ { s/^mergetag //; s/^ //; p}' > input
)

There is no coloring involved and the output looks sane. So I think it
is git that is to blame here for making the output of

	git show --show-signature 756f80cee766574ae282baa97fdcf9cc

red? I'll bring that up on the git mailing list.

> And here is why they are completely pointless: a key that is no longer
> trusted should be *REVOKED*.

Agreed. And if Jason Cooper's key material was compromised and the
certificate revoked, red in git's output would be justified.

> So when I say "revoke it", I'm talking about just letting people know
> that a key is no longer trustworthy, and then they should remove it
> from their keychain.

Here is another weakness of how GnuPG handles things. In Sequoia, import
and trusting are two separate steps whereas when using a curated keyring
(which is what you seem to do with GnuPG), importing and trusting are a
single step. This means that users have to be very careful to not
inadvertently import a certificate they don't trust. The Sequoia model
allows you to import an untrusted key and only use a broken signature as
indicator for something being wrong but without having much trust in a
good signature.

> Because once the key is no longer trustworthy, some "it will expire in
> two years" is COMPLETE AND UTTER GARBAGE.

Agreed, trust and expiry correlate only very little.

All in all I hear your opinion (and wasn't terribly surprised by it :-)
and it contains a few valid points that need to be addressed. Thanks for
your input. Still I want to gradually push for getting more people in
the kernel pgpkeyring to use an expiry date for their crypto material
for the above stated reasons.

Best regards
Uwe

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-02-04 10:49                         ` Uwe Kleine-König
@ 2026-02-05 10:14                           ` James Bottomley
  2026-02-05 18:07                             ` Uwe Kleine-König
  0 siblings, 1 reply; 42+ messages in thread
From: James Bottomley @ 2026-02-05 10:14 UTC (permalink / raw)
  To: Uwe Kleine-König, Linus Torvalds
  Cc: Konstantin Ryabitsev, Greg KH, users, ksummit, Neal H. Walfield

[-- Attachment #1: Type: text/plain, Size: 2347 bytes --]

On Wed, 2026-02-04 at 11:49 +0100, Uwe Kleine-König wrote:
> On Tue, Jan 27, 2026 at 01:08:12PM -0800, Linus Torvalds wrote
[...]
> > I have never ever seen any good reason for automatic expiration,
> > and it causes actual real problems because *NOBODY* ever renews
> > those expiration in time and makes sure that they actually
> > percolate out.
> 
> A good reason is that it forces the users of your certificate to
> participate in the percolation of your cert. This is relevant to make
> updates to the key (new or revoked UIDs and subkeys) known. For that
> an expiry time of 2 years is even quite long.

That's not a good reason: we already have a set of key distribution
mechanisms now and have no need of additional percolation ...
particularly as our key uses are mostly limited to tag signing for one
person.

[...]
> 
> > So no. No expiration dates. They are stupid and do not work in
> > practice. End of story.
> 
> This is a poor argument. Such a failure doesn't necessarily mean that
> the concept of expiry dates is wrong.

OK, so come up with a good argument how short expiry would work for the
way kernel developers use keys.  You're the one asking for us to adopt
a currently non-standard practice, so the burden is on you to argue for
it. (and the percolation argument above isn't good enough because it's
irrelevant to our workflow).

[...]
> 
> Here is another weakness of how GnuPG handles things. In Sequoia,
> import and trusting are two separate steps whereas when using a
> curated keyring (which is what you seem to do with GnuPG), importing
> and trusting are a single step. This means that users have to be very
> careful to not inadvertently import a certificate they don't trust.
> The Sequoia model allows you to import an untrusted key and only use
> a broken signature as indicator for something being wrong but without
> having much trust in a good signature.

That's just propaganda: gpg can absolutely manipulate the trust
database independently from the signatures on import.  I think I
explained this on the users list only the other day (how we could use
trustdb to compensate for all our 2011 issued sha1 key signatures in
the kernel keyring).  However, trustdb manipulations are hard for
casual users to understand and get right.

Regards,

James


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 265 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-02-05 10:14                           ` James Bottomley
@ 2026-02-05 18:07                             ` Uwe Kleine-König
  2026-02-05 18:23                               ` Konstantin Ryabitsev
  0 siblings, 1 reply; 42+ messages in thread
From: Uwe Kleine-König @ 2026-02-05 18:07 UTC (permalink / raw)
  To: James Bottomley
  Cc: Linus Torvalds, Konstantin Ryabitsev, Greg KH, users, ksummit,
	Neal H. Walfield

[-- Attachment #1: Type: text/plain, Size: 6176 bytes --]

Hello James,

On Thu, Feb 05, 2026 at 10:14:06AM +0000, James Bottomley wrote:
> On Wed, 2026-02-04 at 11:49 +0100, Uwe Kleine-König wrote:
> > On Tue, Jan 27, 2026 at 01:08:12PM -0800, Linus Torvalds wrote
> > > I have never ever seen any good reason for automatic expiration,
> > > and it causes actual real problems because *NOBODY* ever renews
> > > those expiration in time and makes sure that they actually
> > > percolate out.
> > 
> > A good reason is that it forces the users of your certificate to
> > participate in the percolation of your cert. This is relevant to make
> > updates to the key (new or revoked UIDs and subkeys) known. For that
> > an expiry time of 2 years is even quite long.
> 
> That's not a good reason: we already have a set of key distribution
> mechanisms now and have no need of additional percolation ...
> particularly as our key uses are mostly limited to tag signing for one
> person.

OK, if you're just using your key for signing tags and you don't care
about the reasons I gave in my previous mail, I probably cannot convince
you.

But let me note that it's not you who maintains the kernel.org
infrastructure and thus you don't have a strong interest to disable
accounts of people who are MIA. It is also not me, but if I were in
Konstantin's position I'd push for a policy to only accept keys with an
expiry date just that everyone has that dead man's switch that is easy
to push for them and easy to check for me.

> > > So no. No expiration dates. They are stupid and do not work in
> > > practice. End of story.
> > 
> > This is a poor argument. Such a failure doesn't necessarily mean that
> > the concept of expiry dates is wrong.
> 
> OK, so come up with a good argument how short expiry would work for the
> way kernel developers use keys.

You're changing the topic here. My point is that Linus's reasoning is
wrong and expiration dates have a justification. For that a smooth
workflow is somewhat orthogonal. Anyhow:

If you consider the reasons I gave in my previous mail as relevant for
you, the only burden is that you create a calendar reminder, and when it
triggers run:

	gpg --quick-set-expire $yourkeyid 2y

and then publish the result e.g. using

	gpg --keyserver hkps://keys.openpgp.org/ --send-key $yourkeyid

or whatever is needed to get your certificate into WKD or DNS or
the kernel keyring once every two years. Nothing more is needed and it
even works when you missed the expiry date.

And with https://gitlab.com/sequoia-pgp/sequoia-sq/-/issues/622 fixed
(for GnuPG) you don't even need the calendar reminder.

> You're the one asking for us to adopt a currently non-standard
> practice, so the burden is on you to argue for it. (and the
> percolation argument above isn't good enough because it's irrelevant
> to our workflow).

In my bubble using no expiry date on key material is non-standard. (See
also TLS certificates and DANE signatures. Also more real-life stuff
like government issued ID cards and credit cards have a validity.)

Looking at your cert (which btw I was unable to retrieve from
keys.opengpg.org and WKD which I consider the two most usual ways to get
PGP certificates; and keyserver.ubuntu.com only has an old copy that
will expire in March 2026) until recently you used 5 year intervals to
extend your expiry and only the last update uses 10 years. So it seems I
don't have to convince you to use my "non-standard" practice in general,
only maybe that you use shorter intervals ;-)
 
> > Here is another weakness of how GnuPG handles things. In Sequoia,
> > import and trusting are two separate steps whereas when using a
> > curated keyring (which is what you seem to do with GnuPG), importing
> > and trusting are a single step. This means that users have to be very
> > careful to not inadvertently import a certificate they don't trust.
> > The Sequoia model allows you to import an untrusted key and only use
> > a broken signature as indicator for something being wrong but without
> > having much trust in a good signature.
> 
> That's just propaganda: gpg can absolutely manipulate the trust
> database independently from the signatures on import.  I think I
> explained this on the users list only the other day (how we could use
> trustdb to compensate for all our 2011 issued sha1 key signatures in
> the kernel keyring).  However, trustdb manipulations are hard for
> casual users to understand and get right.

I agree to everything you said in this paragraph apart from "That's just
propaganda". So yes, GnuPG can handle the trust stuff, and it is hard to
get right and thus most people (including obviously Linus) don't use it.
That's exactly my point when I say this is a weakness of how GnuPG
handles things.

BTW, having to extend the validity of your key material regularly also
creates a good opportunity to check if everything is still on par with
reality. And there is something to do for many keys in the kernel
pgpkeys repo: If the currently 637 certificates in that repo are passed
to the Sequoia certificate linter (`sq cert lint`) it diagnoses:

	Examined 637 certificates.
	...
	  274 have at least one User ID protected by SHA-1. (BAD)
	  261 have at least one non-revoked, live subkey with a binding signature that uses SHA-1. (BAD)
	...
	  9 certificates have at least one non-revoked, live, signing-capable subkey with a strong binding signature, but a backsig that uses SHA-1. (BAD)

Now if all these keys were in need of an update regularly and GnuPG
fixed these issues en passant during such an update (which technically
it could do easily and IMHO should do but doesn't) we would already have
got rid of the SHA-1 binding issue.

(If you want to check if your key is affected, see
https://lore.kernel.org/all/fxotnlhsyl2frp54xtguy7ryrucuwselanazixeax3motyyoo3@7vf7ip6gxyvx/
for instructions or
https://www.kleine-koenig.org/~uwe/resign-sha1/?certid=79BE3E4300411886
for diagnosis also covering 3rd party signatures.
(Replace 79BE3E4300411886 by your own key ID in the 2nd link if you're
not Linus.))

Best regards
Uwe

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: Web of Trust work [Was: kernel.org tooling update]
  2026-02-05 18:07                             ` Uwe Kleine-König
@ 2026-02-05 18:23                               ` Konstantin Ryabitsev
  0 siblings, 0 replies; 42+ messages in thread
From: Konstantin Ryabitsev @ 2026-02-05 18:23 UTC (permalink / raw)
  To: Uwe Kleine-König
  Cc: James Bottomley, Linus Torvalds, Greg KH, users, ksummit,
	Neal H. Walfield

On Thu, Feb 05, 2026 at 07:07:30PM +0100, Uwe Kleine-König wrote:
> But let me note that it's not you who maintains the kernel.org
> infrastructure and thus you don't have a strong interest to disable
> accounts of people who are MIA.

We already have a way to disable accounts for people who are MIA -- we check
which ssh keys haven't been used in over a year and automatically disable
them. This still leaves email aliases working, but removes the most critical
level of access.

-K

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2026-02-05 18:23 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-10  4:48 kernel.org tooling update Konstantin Ryabitsev
2025-12-10  8:11 ` Mauro Carvalho Chehab
2025-12-10 13:30 ` Thorsten Leemhuis
2025-12-11  3:04   ` Theodore Tso
2025-12-12 23:48   ` Stephen Hemminger
2025-12-12 23:54     ` Randy Dunlap
2025-12-16 16:21 ` Lukas Wunner
2025-12-16 20:33   ` Jeff Johnson
2025-12-17  0:47     ` Mario Limonciello
2025-12-18 13:37       ` Jani Nikula
2025-12-18 14:09         ` Mario Limonciello
2026-01-23  9:19 ` Web of Trust work [Was: kernel.org tooling update] Uwe Kleine-König
2026-01-23  9:29   ` Greg KH
2026-01-23 11:47     ` Mauro Carvalho Chehab
2026-01-23 11:58       ` Greg KH
2026-01-23 12:24         ` Mauro Carvalho Chehab
2026-01-23 12:29           ` Greg KH
2026-01-23 13:57         ` Konstantin Ryabitsev
2026-01-23 16:24     ` James Bottomley
2026-01-23 16:33       ` Greg KH
2026-01-23 16:42         ` Joe Perches
2026-01-23 17:00           ` Steven Rostedt
2026-01-23 17:23         ` James Bottomley
2026-01-23 18:23           ` Konstantin Ryabitsev
2026-01-23 21:12             ` Uwe Kleine-König
2026-01-26 16:23               ` Konstantin Ryabitsev
2026-01-26 17:32                 ` Uwe Kleine-König
2026-01-26 21:01                   ` Konstantin Ryabitsev
2026-01-26 23:23                   ` James Bottomley
2026-01-27  8:39                     ` Uwe Kleine-König
2026-01-27 21:08                       ` Linus Torvalds
2026-02-04 10:49                         ` Uwe Kleine-König
2026-02-05 10:14                           ` James Bottomley
2026-02-05 18:07                             ` Uwe Kleine-König
2026-02-05 18:23                               ` Konstantin Ryabitsev
2026-01-26 23:33                   ` Mauro Carvalho Chehab
2026-01-26 23:06                 ` Mauro Carvalho Chehab
2026-01-23 21:38             ` James Bottomley
2026-01-23 22:55             ` Mauro Carvalho Chehab
2026-01-23 16:38       ` Konstantin Ryabitsev
2026-01-23 17:02         ` Paul Moore
2026-01-23 18:42 ` kernel.org tooling update Randy Dunlap

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox