* [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
@ 2025-10-08 17:04 Chris Mason
From: Chris Mason @ 2025-10-08 17:04 UTC (permalink / raw)
To: ksummit, Dan Carpenter, Alexei Starovoitov
[-- Attachment #1: Type: text/plain, Size: 2485 bytes --]
Hi everyone,
Depending on how you look at things, this is potentially a topic for
either the Maintainers Summit (MS) or Kernel Summit (KS).
One way to lower the load on maintainers is to make it easier for
contributors to send higher quality patches, and to catch errors before
they land in various git trees.
Along those lines, when the AI code submission thread started over the
summer, I decided to see if it was possible to get reasonable code
reviews out of AI.
There are certainly false positives, but Alexei and the BPF developers
wired up my prompts into the BPF CI, and you can find the results in
their GitHub CI. Everything in red is a bug the AI review found:
https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
My goal for KS/MS is to discuss how to enable maintainers to use review
automation tools to lower their workload. I don't want to build new CI
here, so the goal would be enabling integration with existing CI.
My question for everyone is what would it take to make all of this
useful? I'm working on funding for API access, so hopefully that part
won't be a problem.
There's definitely overlap between the bugs I'm finding and the bugs Dan
Carpenter finds, so I'm hoping he and I can team up as well.
In terms of actual review details, the reviews have two parts:
1) The review prompts. These are standalone and work on any kernel
tree. This is what BPF CI is currently using:
https://github.com/masoncl/review-prompts/
These prompts can also debug oopsen or syzbot reports (with varying
success).
2) A code indexing tool with an MCP server that Claude can use to find
functions, types, and call chains more effectively. This makes it more
likely that Claude can trace complex relationships in the code:
https://github.com/facebookexperimental/semcode
Asking Claude to produce a callgraph for btrfs_search_slot() consumes
~444K tokens. With semcode installed, the same query produces better
results and uses 25K tokens. (btrfs_search_slot() has a huge callchain.)
I don't think BPF CI is using this yet, but we'll move to it and compare
the review results if not.
The reviews are meant to look like emails on lkml, and even when wildly
wrong they definitely succeed there. I've attached the results of a run
against 600 random commits in linux-next, and the last 400 commits of
net-next (on Oct 2nd).
There are both real bugs and false positives in there, so it gives a
good idea of the mixture of right and wrong that is common in these reviews.
-chris
[-- Attachment #2: net-next.txt --]
[-- Type: text/plain, Size: 56893 bytes --]
commit 2ee5c8c0c28e0e3ccfdb842a7b3bd2f98ee7eaf7
Author: Mohsin Bashir <mohsin.bashr@gmail.com>
eth: fbnic: Move hw_stats_lock out of fbnic_dev
Move hw_stats_lock out of fbnic_dev to a more appropriate struct
fbnic_hw_stats since the only use of this lock is to protect access to
the hardware stats. While at it, enclose the lock and stats
initialization in a single init call.
Link: https://patch.msgid.link/20250825200206.2357713-2-kuba@kernel.org
> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic.h b/drivers/net/ethernet/meta/fbnic/fbnic.h
> index c376e06880c9..311c7dda911a 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic.h
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic.h
> @@ -84,9 +84,6 @@ struct fbnic_dev {
> /* Local copy of hardware statistics */
> struct fbnic_hw_stats hw_stats;
>
> - /* Lock protecting access to hw_stats */
> - spinlock_t hw_stats_lock;
> -
> struct fbnic_fw_log fw_log;
> };
[ ... ]
> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h b/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> index 4fe239717497..2fc25074a5e6 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> @@ -5,6 +5,7 @@
> #define _FBNIC_HW_STATS_H_
>
> #include <linux/ethtool.h>
> +#include <linux/spinlock.h>
>
> #include "fbnic_csr.h"
>
> @@ -122,11 +123,15 @@ struct fbnic_hw_stats {
> struct fbnic_rxb_stats rxb;
> struct fbnic_hw_q_stats hw_q[FBNIC_MAX_QUEUES];
> struct fbnic_pcie_stats pcie;
> +
> + /* Lock protecting the access to hw stats */
> + spinlock_t lock;
> };
>
> u64 fbnic_stat_rd64(struct fbnic_dev *fbd, u32 reg, u32 offset);
>
> void fbnic_reset_hw_stats(struct fbnic_dev *fbd);
> +void fbnic_init_hw_stats(struct fbnic_dev *fbd);
> void fbnic_get_hw_q_stats(struct fbnic_dev *fbd,
> struct fbnic_hw_q_stats *hw_q);
> void fbnic_get_hw_stats32(struct fbnic_dev *fbd);
[ ... ]
> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> index b70e4cadb37b..8190f49e1426 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> @@ -304,10 +304,9 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>
> fbnic_devlink_register(fbd);
> fbnic_dbg_fbd_init(fbd);
> - spin_lock_init(&fbd->hw_stats_lock);
>
> /* Capture snapshot of hardware stats so netdev can calculate delta */
> - fbnic_reset_hw_stats(fbd);
> + fbnic_init_hw_stats(fbd);
>
> fbnic_hwmon_register(fbd);
Can this create a race window during device initialization? The
fbnic_dbg_fbd_init() call creates debugfs entries including pcie_stats,
which allows immediate userspace access to fbnic_dbg_pcie_stats_show().
This function calls fbnic_get_hw_stats()->spin_lock(&fbd->hw_stats.lock),
but the lock initialization has been moved to fbnic_init_hw_stats() which
is called after fbnic_dbg_fbd_init().
If userspace accesses the debugfs pcie_stats file between debugfs creation
and lock initialization, would this cause undefined behavior with an
uninitialized spinlock?
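A minimal sketch of one possible fix, assuming nothing else in
fbnic_probe() depends on debugfs being created first:

	fbnic_devlink_register(fbd);

	/* snapshot the stats (and init hw_stats.lock) before debugfs
	 * can expose the stats files to userspace
	 */
	fbnic_init_hw_stats(fbd);

	fbnic_dbg_fbd_init(fbd);

	fbnic_hwmon_register(fbd);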
commit 330355191a2d9a59137455b774b9a66dd6d068d4
Author: Andre Przywara <andre.przywara@arm.com>
net: stmmac: sun8i: drop unneeded default syscon value
This patch removes the default_syscon_value field from emac_variant
structures and changes the syscon register configuration to start from
zero instead of using read-modify-write operations. The cleanup function
is also modified to only reset registers for internal PHY variants.
Link: https://patch.msgid.link/20250825172055.19794-1-andre.przywara@arm.com
> diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
> index 2796dc426943..690f3650f84e 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
[ ... ]
> @@ -1039,9 +1000,9 @@ static int sun8i_dwmac_set_syscon(struct device *dev,
>
> static void sun8i_dwmac_unset_syscon(struct sunxi_priv_data *gmac)
> {
> - u32 reg = gmac->variant->default_syscon_value;
> -
> - regmap_field_write(gmac->regmap_field, reg);
> + if (gmac->variant->soc_has_internal_phy)
> + regmap_field_write(gmac->regmap_field,
> + (H3_EPHY_SHUTDOWN | H3_EPHY_SELECT));
> }
Can this leave the syscon register in an uncleaned state for variants
without internal PHY? The sun8i_dwmac_remove()->sun8i_dwmac_unset_syscon()
path will now skip register cleanup entirely for A83T, A64, and R40
variants where soc_has_internal_phy is false. The previous code would
restore the default syscon value for all variants during device removal.
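One possible shape that keeps cleanup for every variant, sketched on the
assumption that writing 0 is a sane reset value for the external-PHY
variants:

	static void sun8i_dwmac_unset_syscon(struct sunxi_priv_data *gmac)
	{
		u32 reg = 0;

		if (gmac->variant->soc_has_internal_phy)
			reg = H3_EPHY_SHUTDOWN | H3_EPHY_SELECT;

		regmap_field_write(gmac->regmap_field, reg);
	}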
commit 3061d214eead8a6fb652bf69135525f394a40e52
Author: Przemek Kitszel <przemyslaw.kitszel@intel.com>
ice: extract virt/queues.c: cleanup - p2
This patch removes ice_vc_cfg_promiscuous_mode_msg and ice_vc_get_stats_msg
functions from virt/queues.c as part of a cleanup series to move content
to virtchnl.c.
> diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
> index 7765ac50a6cd..c1da10aa2151 100644
> --- a/drivers/net/ethernet/intel/ice/virt/queues.c
> +++ b/drivers/net/ethernet/intel/ice/virt/queues.c
> @@ -161,187 +161,6 @@ static int ice_vf_cfg_q_quanta_profile(struct ice_vf *vf, u16 quanta_size,
> return 0;
> }
>
> -/**
> - * ice_vc_cfg_promiscuous_mode_msg
> - * @vf: pointer to the VF info
> - * @msg: pointer to the msg buffer
> - *
> - * called from the VF to configure VF VSIs promiscuous mode
> - */
> -static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg)
> -{
Can this cause compilation errors? The ice_vc_cfg_promiscuous_mode_msg
function is being removed here, but it's still referenced in the
ice_virtchnl_dflt_ops structure later in this same file. The function
exists as static in virtchnl.c, which means it's not externally visible
for linking.
[ ... ]
> -/**
> - * ice_vc_get_stats_msg
> - * @vf: pointer to the VF info
> - * @msg: pointer to the msg buffer
> - *
> - * called from the VF to get VSI stats
> - */
> -static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)
> -{
Similarly, ice_vc_get_stats_msg is being removed but is still referenced
in both ice_virtchnl_dflt_ops and ice_virtchnl_repr_ops structures in
this file. Since the functions in virtchnl.c are declared static, won't
this result in undefined symbol errors during linking?
The function table assignments that still reference these removed functions:
static const struct ice_virtchnl_ops ice_virtchnl_dflt_ops = {
...
.get_stats_msg = ice_vc_get_stats_msg,
.cfg_promiscuous_mode_msg = ice_vc_cfg_promiscuous_mode_msg,
...
};
static const struct ice_virtchnl_ops ice_virtchnl_repr_ops = {
...
.get_stats_msg = ice_vc_get_stats_msg,
.cfg_promiscuous_mode_msg = ice_vc_repr_cfg_promiscuous_mode,
...
};
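If the ops tables really do live in a different file than the static
definitions, the usual fix shape is to drop "static" and declare the
functions in a shared header (a sketch; virt/virtchnl.h as the home is
an assumption):

	/* virt/virtchnl.h */
	int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg);
	int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg);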
commit 1abe21ef1adf0c5b6dbb5878c2fa4573df8d29fc
Author: Christian Marangi <ansuelsmth@gmail.com>
net: phy: introduce phy_id_compare_vendor() PHY ID helper
This patch introduces a new phy_id_compare_vendor() helper function for
comparing PHY IDs with vendor masks. It also refactors the existing
PHY_ID_MATCH macros to use dedicated mask constants.
Link: https://patch.msgid.link/20250823134431.4854-1-ansuelsmth@gmail.com
> diff --git a/include/linux/phy.h b/include/linux/phy.h
> index 4c2b8b6e7187..b67079796402 100644
> --- a/include/linux/phy.h
> +++ b/include/linux/phy.h
> @@ -1268,9 +1268,13 @@ struct phy_driver {
> #define to_phy_driver(d) container_of_const(to_mdio_common_driver(d), \
> struct phy_driver, mdiodrv)
>
> -#define PHY_ID_MATCH_EXACT(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 0)
> -#define PHY_ID_MATCH_MODEL(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 4)
> -#define PHY_ID_MATCH_VENDOR(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 10)
> +#define PHY_ID_MATCH_EXTACT_MASK GENMASK(31, 0)
^^^^^^
Is there a typo in PHY_ID_MATCH_EXTACT_MASK? Should this be
PHY_ID_MATCH_EXACT_MASK instead?
> +#define PHY_ID_MATCH_MODEL_MASK GENMASK(31, 4)
> +#define PHY_ID_MATCH_VENDOR_MASK GENMASK(31, 10)
> +
> +#define PHY_ID_MATCH_EXACT(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_EXTACT_MASK
The PHY_ID_MATCH_EXACT macro references the misspelled constant name.
This will cause compilation failures when PHY drivers try to use this
macro.
> +#define PHY_ID_MATCH_MODEL(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_MODEL_MASK
> +#define PHY_ID_MATCH_VENDOR(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_VENDOR_MASK
>
> /**
> * phy_id_compare - compare @id1 with @id2 taking account of @mask
> @@ -1286,6 +1290,19 @@ static inline bool phy_id_compare(u32 id1, u32 id2, u32 mask)
> return !((id1 ^ id2) & mask);
> }
>
> +/**
> + * phy_id_compare_vendor - compare @id with @vendor mask
> + * @id: PHY ID
> + * @vendor_mask: PHY Vendor mask
> + *
> + * Return: true if the bits from @id match @vendor using the
> + * generic PHY Vendor mask.
> + */
> +static inline bool phy_id_compare_vendor(u32 id, u32 vendor_mask)
> +{
> + return phy_id_compare(id, vendor_mask, PHY_ID_MATCH_VENDOR_MASK);
> +}
> +
> /**
> * phydev_id_compare - compare @id with the PHY's Clause 22 ID
> * @phydev: the PHY device
commit 4367000c0e33e0bf24e2b872ccb49527a20f87e5
Author: Po-Hao Huang <phhuang@realtek.com>
wifi: rtw89: 8852a: report per-channel noise level by get_survey ops
This patch adds per-channel noise level reporting for the RTW89 8852a
WiFi driver to optimize roaming decisions. It implements NHM (Noise
Histogram Measurement) functionality with memory allocation for
per-channel noise history tracking.
Link: https://patch.msgid.link/20250819034428.26307-2-pkshih@realtek.com
> diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
> index 0ad7562632a5..2b658ee89bb6 100644
> --- a/drivers/net/wireless/realtek/rtw89/core.c
> +++ b/drivers/net/wireless/realtek/rtw89/core.c
[ ... ]
> diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
> index 06598723074e..c3181a301f7c 100644
> --- a/drivers/net/wireless/realtek/rtw89/phy.c
> +++ b/drivers/net/wireless/realtek/rtw89/phy.c
[ ... ]
> @@ -6934,6 +7190,43 @@ void rtw89_phy_dm_reinit(struct rtw89_dev *rtwdev)
> rtw89_physts_parsing_init(rtwdev);
> }
>
> +static void __rtw89_phy_dm_init_data(struct rtw89_dev *rtwdev, struct rtw89_bb_ctx *bb)
> +{
> + struct rtw89_env_monitor_info *env = &bb->env_monitor;
> + const struct rtw89_chip_info *chip = rtwdev->chip;
> + struct ieee80211_supported_band *sband;
> + enum rtw89_band hw_band;
> + enum nl80211_band band;
> + u8 idx;
> +
> + if (!chip->support_noise)
> + return;
> +
> + for (band = 0; band < NUM_NL80211_BANDS; band++) {
> + sband = rtwdev->hw->wiphy->bands[band];
> + if (!sband)
> + continue;
> +
> + hw_band = rtw89_nl80211_to_hw_band(band);
> + env->nhm_his[hw_band] =
> + devm_kcalloc(rtwdev->dev, sband->n_channels,
> + sizeof(*env->nhm_his[0]), GFP_KERNEL);
Can this cause a NULL pointer dereference? The devm_kcalloc() call can
fail and return NULL, but there's no error checking here. Later in
__rtw89_phy_nhm_get_result()->rtw89_ops_get_survey(), the code accesses
env->nhm_his[hw_band][chan_idx] which would dereference NULL if the
allocation failed.
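A minimal sketch of the missing check, assuming it is acceptable to
leave NHM disabled for a band whose history table cannot be allocated
(the readers in the get_survey path would then also need a NULL check):

	env->nhm_his[hw_band] =
		devm_kcalloc(rtwdev->dev, sband->n_channels,
			     sizeof(*env->nhm_his[0]), GFP_KERNEL);
	if (!env->nhm_his[hw_band])
		continue;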
> +
> + for (idx = 0; idx < sband->n_channels; idx++)
> + INIT_LIST_HEAD(&env->nhm_his[hw_band][idx].list);
> +
> + INIT_LIST_HEAD(&env->nhm_rpt_list);
> + }
> +}
[ ... ]
commit 3d1b3f4ffc0aca15a0ebce0c71163b42a87efff2
Author: Daniel Golle <daniel@makrotopia.org>
net: phy: mxl-86110: add basic support for MxL86111 PHY
Add basic support for the MxL86111 PHY which in addition to the features
of the MxL86110 also comes with an SGMII interface. The patch refactors
the existing MxL86110 code and adds new functions for MxL86111 support.
Link: https://patch.msgid.link/707fd83ec0e11ea620d37f2125a394e9dd1b27fa.1755884175.git.daniel@makrotopia.org
> diff --git a/drivers/net/phy/mxl-86110.c b/drivers/net/phy/mxl-86110.c
> index ba25d5b01780..e5d137a37a1d 100644
> --- a/drivers/net/phy/mxl-86110.c
> +++ b/drivers/net/phy/mxl-86110.c
[ ... ]
> @@ -585,22 +652,15 @@ static int mxl86110_enable_led_activity_blink(struct phy_device *phydev)
> }
>
> /**
> - * mxl86110_config_init() - initialize the PHY
> + * mxl86110_config_rgmii_delay() - configure RGMII delays
> * @phydev: pointer to the phy_device
> *
> * Return: 0 or negative errno code
> */
> -static int mxl86110_config_init(struct phy_device *phydev)
> +static int mxl86110_config_rgmii_delay(struct phy_device *phydev)
> {
> - u16 val = 0;
> int ret;
> -
> - phy_lock_mdio_bus(phydev);
> -
> - /* configure syncE / clk output */
> - ret = mxl86110_synce_clk_cfg(phydev);
> - if (ret < 0)
> - goto out;
> + u16 val;
>
> switch (phydev->interface) {
> case PHY_INTERFACE_MODE_RGMII:
> @@ -642,17 +702,237 @@ static int mxl86110_config_init(struct phy_device *phydev)
> if (ret < 0)
> goto out;
>
> +out:
> + return ret;
> +}
> +
> +/**
> + * mxl86110_config_init() - initialize the MXL86110 PHY
> + * @phydev: pointer to the phy_device
> + *
> + * Return: 0 or negative errno code
> + */
> +static int mxl86110_config_init(struct phy_device *phydev)
> +{
> + int ret;
> +
> + phy_lock_mdio_bus(phydev);
> +
> + /* configure syncE / clk output */
> + ret = mxl86110_synce_clk_cfg(phydev);
> + if (ret < 0)
> + goto out;
> +
> + ret = mxl86110_config_rgmii_delay(phydev);
> + if (ret < 0)
> + goto out;
> +
> + ret = mxl86110_enable_led_activity_blink(phydev);
> + if (ret < 0)
> + goto out;
> +
> + ret = mxl86110_broadcast_cfg(phydev);
> +
> +out:
> + phy_unlock_mdio_bus(phydev);
> + return ret;
> +}
[ ... ]
> +/**
> + * mxl86111_config_init() - initialize the MXL86111 PHY
> + * @phydev: pointer to the phy_device
> + *
> + * Return: 0 or negative errno code
> + */
> +static int mxl86111_config_init(struct phy_device *phydev)
> +{
> + int ret;
> +
> + phy_lock_mdio_bus(phydev);
> +
> + /* configure syncE / clk output */
> + ret = mxl86110_synce_clk_cfg(phydev);
> + if (ret < 0)
> + goto out;
> +
> + switch (phydev->interface) {
> + case PHY_INTERFACE_MODE_100BASEX:
> + ret = __mxl86110_modify_extended_reg(phydev,
> + MXL86111_EXT_MISC_CONFIG_REG,
> + MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL,
> + MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL_100BX);
> + if (ret < 0)
> + goto out;
> + break;
> + case PHY_INTERFACE_MODE_1000BASEX:
> + case PHY_INTERFACE_MODE_SGMII:
> + ret = __mxl86110_modify_extended_reg(phydev,
> + MXL86111_EXT_MISC_CONFIG_REG,
> + MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL,
> + MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL_1000BX);
> + if (ret < 0)
> + goto out;
> + break;
> + default:
> + /* RGMII modes */
> + ret = mxl86110_config_rgmii_delay(phydev);
> + if (ret < 0)
> + goto out;
> + ret = __mxl86110_modify_extended_reg(phydev, MXL86110_EXT_RGMII_CFG1_REG,
> + MXL86110_EXT_RGMII_CFG1_FULL_MASK, ret);
^^^
Can this write the wrong delay configuration to the RGMII register? The
mxl86110_config_rgmii_delay() function returns 0 on success, but here
that return value (ret=0) is being used as the register value instead of
the calculated RGMII delay settings.
Looking at mxl86110_config_rgmii_delay(), it calculates proper delay
values like MXL86110_EXT_RGMII_CFG1_RX_DELAY_1950PS based on the
interface mode, but those values are consumed internally and the function
returns only an error code. Using ret=0 as the register value would
disable all RGMII delays regardless of the interface requirements.
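If mxl86110_config_rgmii_delay() really does write the delay bits
itself, one possible fix is simply not to reuse its return code as a
register value (a sketch, not a tested change):

	/* RGMII modes */
	ret = mxl86110_config_rgmii_delay(phydev);
	if (ret < 0)
		goto out;
	/* no further __mxl86110_modify_extended_reg(..., ret) write;
	 * passing ret == 0 would clear every delay bit covered by
	 * MXL86110_EXT_RGMII_CFG1_FULL_MASK
	 */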
> +
> + /* PL P1 requires optimized RGMII timing for 1.8V RGMII voltage
> + */
> + ret = __mxl86110_read_extended_reg(phydev, 0xf);
> + if (ret < 0)
> + goto out;
> +
> + if (ret == MXL86111_PL_P1) {
> + ret = __mxl86110_read_extended_reg(phydev, MXL86110_EXT_CHIP_CFG_REG);
> + if (ret < 0)
> + goto out;
> +
> + /* check if LDO is in 1.8V mode */
> + switch (FIELD_GET(MXL86111_EXT_CHIP_CFG_CLDO_MASK, ret)) {
> + case MXL86111_EXT_CHIP_CFG_CLDO_1V8_3:
> + case MXL86111_EXT_CHIP_CFG_CLDO_1V8_2:
> + ret = __mxl86110_write_extended_reg(phydev, 0xa010, 0xabff);
> + if (ret < 0)
> + goto out;
> + break;
> + default:
> + break;
> + }
> + }
> + break;
> + }
> +
> + ret = mxl86110_enable_led_activity_blink(phydev);
> + if (ret < 0)
> + goto out;
> +
> + ret = mxl86110_broadcast_cfg(phydev);
> +out:
> + phy_unlock_mdio_bus(phydev);
> +
> + return ret;
> +}
[ ... ]
commit db03a1ced61c4b9297996c67cc4b8ade9fdb7cd1
Author: Fan Gong <gongfan1@huawei.com>
hinic3: Command Queue framework
Add Command Queue framework initialization. It is used to set the related
table items of the driver and obtain the HW configuration.
Link: https://patch.msgid.link/1aeed56de39078bde8fff4597d7aa22d350058fc.1755673097.git.zhuyikai1@h-partners.com
> diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
> index 5fb4d1370049..2a0ed8e2c63e 100644
> --- a/drivers/net/ethernet/huawei/hinic3/Makefile
> +++ b/drivers/net/ethernet/huawei/hinic3/Makefile
> @@ -3,7 +3,8 @@
>
> obj-$(CONFIG_HINIC3) += hinic3.o
>
> -hinic3-objs := hinic3_common.o \
> +hinic3-objs := hinic3_cmdq.o \
> + hinic3_common.o \
> hinic3_eqs.o \
> hinic3_hw_cfg.o \
> hinic3_hw_comm.o \
[ ... ]
> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c
> new file mode 100644
> index 000000000000..f1e61a212f2a
> --- /dev/null
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c
[ ... ]
> +int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
> +{
> + struct hinic3_cmdqs *cmdqs;
> + void __iomem *db_base;
> + u8 cmdq_type;
> + int err;
> +
> + err = init_cmdqs(hwdev);
> + if (err)
> + goto err_out;
> +
> + cmdqs = hwdev->cmdqs;
> + err = create_cmdq_wq(hwdev, cmdqs);
> + if (err)
> + goto err_free_cmdqs;
> +
> + err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
> + if (err) {
> + dev_err(hwdev->dev, "Failed to allocate doorbell address\n");
> + goto err_destroy_cmdq_wq;
> + }
> + cmdqs->cmdqs_db_base = db_base;
> +
> + for (cmdq_type = 0; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
> + err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, cmdq_type);
> + if (err) {
> + dev_err(hwdev->dev,
> + "Failed to initialize cmdq type : %d\n",
> + cmdq_type);
> + goto err_free_cmd_infos;
> + }
> + }
> +
> + err = hinic3_set_cmdq_ctxts(hwdev);
> + if (err)
> + goto err_free_cmd_infos;
> +
> + return 0;
> +
> +err_free_cmd_infos:
> + while (cmdq_type > 0) {
> + cmdq_type--;
> + kfree(cmdqs->cmdq[cmdq_type].cmd_infos);
> + }
Can hinic3_set_cmdq_ctxts() failure leak cmd_infos? When hinic3_set_cmdq_ctxts()
fails, all cmdq_type indices from 0 to cmdq_num-1 have been successfully
initialized by init_cmdq(), but the cleanup loop only frees indices 0 to
cmdq_type-1, missing the current cmdq_type.
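A minimal sketch of one way to unwind every initialized entry when
hinic3_set_cmdq_ctxts() fails:

	err = hinic3_set_cmdq_ctxts(hwdev);
	if (err) {
		/* all cmdq_num entries were initialized above */
		cmdq_type = cmdqs->cmdq_num;
		goto err_free_cmd_infos;
	}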
> +
> + hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base);
> +
> +err_destroy_cmdq_wq:
> + destroy_cmdq_wq(hwdev, cmdqs);
> +
> +err_free_cmdqs:
> + dma_pool_destroy(cmdqs->cmd_buf_pool);
> + kfree(cmdqs);
> +
> +err_out:
> + return err;
> +}
[ ... ]
commit c4bbfd9b0d3241ab4a0b6b0095659037ad136d46
Author: Fan Gong <gongfan1@huawei.com>
hinic3: Complete Event Queue interfaces
Add complete event queue interfaces initialization. It informs that
driver should handle the messages from HW.
Link: https://patch.msgid.link/837837f13b96c7155644428a329d5d47b7242153.1755673097.git.zhuyikai1@h-partners.com
> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h b/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h
> index 39e15fbf0ed7..e7417e8efa99 100644
> --- a/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h
[ ... ]
> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> index 15b1345f7e48..01686472985b 100644
> --- a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
[ ... ]
> @@ -495,3 +713,64 @@ void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
>
> kfree(aeqs);
> }
> +
> +int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
> + struct msix_entry *msix_entries)
> +{
> + struct hinic3_ceqs *ceqs;
> + u16 q_id;
> + int err;
> +
> + ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
> + if (!ceqs)
> + return -ENOMEM;
> +
> + hwdev->ceqs = ceqs;
> + ceqs->hwdev = hwdev;
> + ceqs->num_ceqs = num_ceqs;
> +
> + for (q_id = 0; q_id < num_ceqs; q_id++) {
> + err = init_eq(&ceqs->ceq[q_id], hwdev, q_id,
> + HINIC3_DEFAULT_CEQ_LEN, HINIC3_CEQ,
> + &msix_entries[q_id]);
> + if (err) {
> + dev_err(hwdev->dev, "Failed to init ceq %u\n",
> + q_id);
> + goto err_free_ceqs;
> + }
> + }
> + for (q_id = 0; q_id < num_ceqs; q_id++)
> + hinic3_set_msix_state(hwdev, ceqs->ceq[q_id].msix_entry_idx,
> + HINIC3_MSIX_ENABLE);
> +
> + return 0;
> +
> +err_free_ceqs:
> + while (q_id > 0) {
> + q_id--;
> + remove_eq(&ceqs->ceq[q_id]);
> + }
> +
> + kfree(ceqs);
> +
> + return err;
> +}
> +
> +void hinic3_ceqs_free(struct hinic3_hwdev *hwdev)
> +{
> + struct hinic3_ceqs *ceqs = hwdev->ceqs;
> + enum hinic3_ceq_event ceq_event;
> + struct hinic3_eq *eq;
> + u16 q_id;
> +
> + for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
> + eq = ceqs->ceq + q_id;
> + remove_eq(eq);
> + hinic3_free_irq(hwdev, eq->irq_id);
> + }
Can this double-free the IRQ resource? The remove_eq() function calls
free_irq() on eq->irq_id, and then hinic3_free_irq() is called on the
same IRQ ID. This would attempt to free the same IRQ twice during CEQ
cleanup.
> +
> + for (ceq_event = 0; ceq_event < HINIC3_MAX_CEQ_EVENTS; ceq_event++)
> + hinic3_ceq_unregister_cb(hwdev, ceq_event);
> +
> + kfree(ceqs);
> +}
[ ... ]
commit a4511307be8659c482e792feefe671e891cff59d
Author: Fan Gong <gongfan1@huawei.com>
hinic3: Async Event Queue interfaces
Add async event queue interfaces initialization. It allows driver to
handle async events reported by HW.
Link: https://patch.msgid.link/553ebd562b61cd854a2beb25c3d4d98ad3073db0.1755673097.git.zhuyikai1@h-partners.com
> diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
> index 509dfbfb0e96..5fb4d1370049 100644
> --- a/drivers/net/ethernet/huawei/hinic3/Makefile
> +++ b/drivers/net/ethernet/huawei/hinic3/Makefile
> @@ -4,6 +4,7 @@
> obj-$(CONFIG_HINIC3) += hinic3.o
>
> hinic3-objs := hinic3_common.o \
> + hinic3_eqs.o \
> hinic3_hw_cfg.o \
> hinic3_hw_comm.o \
> hinic3_hwdev.o \
[ ... ]
> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> new file mode 100644
> index 000000000000..15b1345f7e48
> --- /dev/null
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
[ ... ]
> +static void remove_eq(struct hinic3_eq *eq)
> +{
> + hinic3_set_msix_state(eq->hwdev, eq->msix_entry_idx,
> + HINIC3_MSIX_DISABLE);
> + free_irq(eq->irq_id, eq);
[ ... ]
> +void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
> +{
> + struct hinic3_aeqs *aeqs = hwdev->aeqs;
> + enum hinic3_aeq_type aeq_event;
> + struct hinic3_eq *eq;
> + u16 q_id;
> +
> + for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
> + eq = aeqs->aeq + q_id;
> + remove_eq(eq);
> + hinic3_free_irq(hwdev, eq->irq_id);
Can this cause a double-free of the IRQ? The remove_eq() function calls
free_irq() on eq->irq_id, then hinic3_aeqs_free() immediately calls
hinic3_free_irq() on the same eq->irq_id. This could trigger a kernel
warning or crash when free_irq() is called twice on the same interrupt.
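If hinic3_free_irq() does end up releasing the same vector that
remove_eq() already passed to free_irq(), one possible shape is to keep
IRQ release in exactly one place (a sketch assuming remove_eq() is that
place):

	for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
		eq = aeqs->aeq + q_id;
		remove_eq(eq);	/* already does free_irq(eq->irq_id, eq) */
	}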
> + }
> +
> + for (aeq_event = 0; aeq_event < HINIC3_MAX_AEQ_EVENTS; aeq_event++)
> + hinic3_aeq_unregister_cb(hwdev, aeq_event);
> +
> + destroy_workqueue(aeqs->workq);
> +
> + kfree(aeqs);
> +}
[ ... ]
commit bcb28bee987a1e161eaa5cc4cf2fb0e21306d4a7
Author: Ujwal Kundur <ujwal.kundur@gmail.com>
rds: Fix endianness annotations for RDS extension headers
Per the RDS 3.1 spec, RDS extension headers EXTHDR_NPATHS and
EXTHDR_GEN_NUM are be16 and be32 values respectively, exchanged during
normal operations over-the-wire (RDS Ping/Pong). This patch fixes the
annotations across occurrences.
Link: https://patch.msgid.link/20250820175550.498-5-ujwal.kundur@gmail.com
> diff --git a/net/rds/message.c b/net/rds/message.c
> index 7af59d2443e5..199a899a43e9 100644
> --- a/net/rds/message.c
> +++ b/net/rds/message.c
> @@ -44,8 +44,8 @@ static unsigned int rds_exthdr_size[__RDS_EXTHDR_MAX] = {
> [RDS_EXTHDR_VERSION] = sizeof(struct rds_ext_header_version),
> [RDS_EXTHDR_RDMA] = sizeof(struct rds_ext_header_rdma),
> [RDS_EXTHDR_RDMA_DEST] = sizeof(struct rds_ext_header_rdma_dest),
> -[RDS_EXTHDR_NPATHS] = sizeof(u16),
> -[RDS_EXTHDR_GEN_NUM] = sizeof(u32),
> +[RDS_EXTHDR_NPATHS] = sizeof(__be16),
> +[RDS_EXTHDR_GEN_NUM] = sizeof(__be32),
> };
>
> void rds_message_addref(struct rds_message *rm)
> diff --git a/net/rds/recv.c b/net/rds/recv.c
> index 5627f80013f8..66205d6924bf 100644
> --- a/net/rds/recv.c
> +++ b/net/rds/recv.c
> @@ -202,8 +202,8 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
> unsigned int pos = 0, type, len;
> union {
> struct rds_ext_header_version version;
> - u16 rds_npaths;
> - u32 rds_gen_num;
> + __be16 rds_npaths;
> + __be32 rds_gen_num;
> } buffer;
> u32 new_peer_gen_num = 0;
>
> diff --git a/net/rds/send.c b/net/rds/send.c
> index 42d991bc8543..0b3d0ef2f008 100644
> --- a/net/rds/send.c
> +++ b/net/rds/send.c
> @@ -1454,8 +1454,8 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
>
> if (RDS_HS_PROBE(be16_to_cpu(sport), be16_to_cpu(dport)) &&
> cp->cp_conn->c_trans->t_mp_capable) {
> - u16 npaths = cpu_to_be16(RDS_MPATH_WORKERS);
> - u32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
> + __be16 npaths = cpu_to_be16(RDS_MPATH_WORKERS);
> + __be32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
>
> rds_message_add_extension(&rm->m_inc.i_hdr,
> RDS_EXTHDR_NPATHS, &npaths,
> @@ -1459,7 +1459,7 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
> rds_message_add_extension(&rm->m_inc.i_hdr,
> RDS_EXTHDR_GEN_NUM,
> &my_gen_num,
> sizeof(u32));
Will this validation fail in rds_message_add_extension()? The
rds_exthdr_size array now expects sizeof(__be32) for RDS_EXTHDR_GEN_NUM,
but this call passes sizeof(u32). The validation check compares
len != rds_exthdr_size[type] and returns 0 on mismatch, which could
prevent the generation number extension from being added to probe messages.
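Note that sizeof(u32) and sizeof(__be32) are the same width, so the
length check itself cannot fail on this; still, sizing from the variable
keeps the call and the table trivially in lockstep (a cosmetic sketch):

	rds_message_add_extension(&rm->m_inc.i_hdr,
				  RDS_EXTHDR_GEN_NUM,
				  &my_gen_num, sizeof(my_gen_num));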
commit 5a4cf42322a0260c7391a3e64d288861e43de673
Author: Shruti Parab <shruti.parab@broadcom.com>
bnxt_en: Add pcie_ctx_v2 support for ethtool -d
Add support to dump the expanded v2 struct that contains PCIE read/write
latency and credit histogram data.
Link: https://patch.msgid.link/20250819163919.104075-5-michael.chan@broadcom.com
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> index abb895fb1a9c..2830a2b17a27 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> @@ -2088,14 +2088,16 @@ __bnxt_hwrm_pcie_qstats(struct bnxt *bp, struct hwrm_pcie_qstats_input *req)
> }
>
> #define BNXT_PCIE_32B_ENTRY(start, end) \
> - { offsetof(struct pcie_ctx_hw_stats, start), \
> - offsetof(struct pcie_ctx_hw_stats, end) }
> + { offsetof(struct pcie_ctx_hw_stats_v2, start),\
> + offsetof(struct pcie_ctx_hw_stats_v2, end) }
>
> static const struct {
> u16 start;
> u16 end;
> } bnxt_pcie_32b_entries[] = {
> BNXT_PCIE_32B_ENTRY(pcie_ltssm_histogram[0], pcie_ltssm_histogram[3]),
> + BNXT_PCIE_32B_ENTRY(pcie_tl_credit_nph_histogram[0], unused_1),
> + BNXT_PCIE_32B_ENTRY(pcie_rd_latency_histogram[0], unused_2),
> };
The bnxt_pcie_32b_entries[] array now contains offsets from struct
pcie_ctx_hw_stats_v2, but bnxt_get_regs() still uses the old struct type.
>
> static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
> @@ -2123,7 +2125,13 @@ static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
> int i, j, len;
>
> len = min(bp->pcie_stat_len, le16_to_cpu(resp->pcie_stat_size));
> - regs->version = 1;
> + if (len <= sizeof(struct pcie_ctx_hw_stats))
> + regs->version = 1;
> + else if (len < sizeof(struct pcie_ctx_hw_stats_v2))
> + regs->version = 2;
> + else
> + regs->version = 3;
> +
> for (i = 0, j = 0; i < len; ) {
> if (i >= bnxt_pcie_32b_entries[j].start &&
> i <= bnxt_pcie_32b_entries[j].end) {
Can this access memory beyond the allocated buffer? The function still
declares hw_pcie_stats as "struct pcie_ctx_hw_stats *" and allocates
sizeof(*hw_pcie_stats) bytes, but bnxt_pcie_32b_entries[] now contains
offsets from the larger pcie_ctx_hw_stats_v2 struct.
When the loop accesses bnxt_pcie_32b_entries[j].start for the new entries
(pcie_tl_credit_nph_histogram or pcie_rd_latency_histogram), those offsets
will be beyond the allocated buffer size since they exist only in the v2
struct.
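If the v2 offsets are to be used unconditionally, a minimal sketch of
keeping the lookups in bounds is to size the buffer by the v2 struct
(the exact allocation mechanics in this driver are an assumption here):

	struct pcie_ctx_hw_stats_v2 *hw_pcie_stats;

	hw_pcie_stats = kzalloc(sizeof(*hw_pcie_stats), GFP_KERNEL);
	if (!hw_pcie_stats)
		return;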
commit e2afa83296bbac40829624b508492b562a21e4d4
Author: Kuniyuki Iwashima <kuniyu@google.com>
tcp: Simplify error path in inet_csk_accept().
The patch simplifies the error handling path in inet_csk_accept() by
removing redundant label jumping and NULL assignments. However, it also
removes NULL checks that may be needed in certain code paths.
Link: https://patch.msgid.link/20250815201712.1745332-4-kuniyu@google.com
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 1e2df51427fe..724bd9ed6cd4 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -706,9 +706,9 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
> spin_unlock_bh(&queue->fastopenq.lock);
> }
>
> -out:
> release_sock(sk);
> - if (newsk && mem_cgroup_sockets_enabled) {
> +
> + if (mem_cgroup_sockets_enabled) {
> gfp_t gfp = GFP_KERNEL | __GFP_NOFAIL;
> int amt = 0;
>
> @@ -732,18 +732,17 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
>
> release_sock(newsk);
> }
> +
> if (req)
> reqsk_put(req);
>
> - if (newsk)
> - inet_init_csk_locks(newsk);
> -
> + inet_init_csk_locks(newsk);
> return newsk;
Can inet_init_csk_locks() be called with a NULL newsk? Looking at the TCP
Fast Open path earlier in the function, there's this code:
if (sk->sk_protocol == IPPROTO_TCP &&
tcp_rsk(req)->tfo_listener) {
spin_lock_bh(&queue->fastopenq.lock);
if (tcp_rsk(req)->tfo_listener) {
req->sk = NULL; // Sets to NULL
req = NULL;
}
spin_unlock_bh(&queue->fastopenq.lock);
}
Since newsk = req->sk is set earlier, this would make newsk NULL when the
Fast Open path sets req->sk = NULL. The original code checked "if (newsk)"
before calling inet_init_csk_locks(), but the patch removes this check.
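A minimal sketch of the guard the patch dropped, assuming the TFO path
above can still hand back a NULL newsk:

	if (newsk)
		inet_init_csk_locks(newsk);
	return newsk;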
> +
> out_err:
> - newsk = NULL;
> - req = NULL;
> + release_sock(sk);
> arg->err = error;
> - goto out;
> + return NULL;
> }
> EXPORT_SYMBOL(inet_csk_accept);
commit e97e6a1830ddb5885ba312e56b6fa3aa39b5f47e
Author: Stanislav Fomichev <sdf@fomichev.me>
net: Switch to skb_dstref_steal/skb_dstref_restore for ip_route_input callers
Going forward skb_dst_set will assert that skb dst_entry is empty during
skb_dst_set. This patch converts icmp_route_lookup and ip_options_rcv_srr
to use new helper functions that properly handle dst reference stealing
and restoration.
Link: https://patch.msgid.link/20250818154032.3173645-5-sdf@fomichev.me
> diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
> index 2ffe73ea644f..91765057aa1d 100644
> --- a/net/ipv4/icmp.c
> +++ b/net/ipv4/icmp.c
> @@ -544,14 +544,15 @@ static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4,
> goto relookup_failed;
> }
> /* Ugh! */
> - orefdst = skb_in->_skb_refdst; /* save old refdst */
> - skb_dst_set(skb_in, NULL);
> + orefdst = skb_dstref_steal(skb_in);
> err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr,
> dscp, rt2->dst.dev) ? -EINVAL : 0;
>
> dst_release(&rt2->dst);
> rt2 = skb_rtable(skb_in);
> - skb_in->_skb_refdst = orefdst; /* restore old refdst */
> + /* steal dst entry from skb_in, don't drop refcnt */
> + skb_dstref_steal(skb_in);
> + skb_dstref_restore(skb_in, orefdst);
Can this leak the dst entry from ip_route_input()? The sequence appears to:
1. Save original dst via skb_dstref_steal(skb_in) -> orefdst
2. Call ip_route_input() which sets a new dst in skb_in
3. Call skb_dstref_steal(skb_in) again, stealing the new dst but not
storing or releasing it
4. Restore the original dst via skb_dstref_restore(skb_in, orefdst)
The dst entry created by ip_route_input() seems to be abandoned without
proper cleanup, while the original dst is restored.
> }
>
> if (err)
> diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
> index e3321932bec0..be8815ce3ac2 100644
> --- a/net/ipv4/ip_options.c
> +++ b/net/ipv4/ip_options.c
> @@ -615,14 +615,13 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
> }
> memcpy(&nexthop, &optptr[srrptr-1], 4);
>
> - orefdst = skb->_skb_refdst;
> - skb_dst_set(skb, NULL);
> + orefdst = skb_dstref_steal(skb);
> err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph),
> dev) ? -EINVAL : 0;
> rt2 = skb_rtable(skb);
> if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
> skb_dst_drop(skb);
> - skb->_skb_refdst = orefdst;
> + skb_dstref_restore(skb, orefdst);
> return -EINVAL;
> }
> refdst_drop(orefdst);
commit da114122b83149d1f1db0586b1d67947b651aa20
Author: Chaoyi Chen <chaoyi.chen@rock-chips.com>
net: ethernet: stmmac: dwmac-rk: Make the clk_phy could be used for external phy
This patch modifies rk_gmac_clk_init() to allow clk_phy to be used by
external PHYs. Previously, clk_phy was only obtained for integrated PHYs,
but now it can be obtained for any PHY with a phy_node.
Link: https://patch.msgid.link/20250815023515.114-1-kernel@airkyi.com
> diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> index 7c898768d544..9fc41207cc45 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> @@ -1412,12 +1412,15 @@ static int rk_gmac_clk_init(struct plat_stmmacenet_data *plat)
> clk_set_rate(plat->stmmac_clk, 50000000);
> }
>
> - if (plat->phy_node && bsp_priv->integrated_phy) {
> + if (plat->phy_node) {
> bsp_priv->clk_phy = of_clk_get(plat->phy_node, 0);
> ret = PTR_ERR_OR_ZERO(bsp_priv->clk_phy);
> - if (ret)
> - return dev_err_probe(dev, ret, "Cannot get PHY clock\n");
> - clk_set_rate(bsp_priv->clk_phy, 50000000);
> + /* If it is not integrated_phy, clk_phy is optional */
> + if (bsp_priv->integrated_phy) {
> + if (ret)
> + return dev_err_probe(dev, ret, "Cannot get PHY clock\n");
> + clk_set_rate(bsp_priv->clk_phy, 50000000);
> + }
> }
>
> return 0;
Can this leak the clock reference for external PHYs? With this change,
of_clk_get() is called for both integrated and external PHYs, but the
cleanup path in rk_gmac_remove() only calls clk_put() when integrated_phy
is true:
if (priv->plat->phy_node && bsp_priv->integrated_phy)
clk_put(bsp_priv->clk_phy);
If an external PHY successfully gets a clock reference via of_clk_get(),
that reference will never be released with clk_put().
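One possible cleanup shape, sketched on the assumption that clk_phy may
now hold a clock, NULL, or an error pointer depending on the PHY type:

	if (priv->plat->phy_node && !IS_ERR_OR_NULL(bsp_priv->clk_phy))
		clk_put(bsp_priv->clk_phy);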
commit 9064ab485f04df40e7f0838245849e2e4c5159d9
Author: Mohsin Bashir <mohsin.bashr@gmail.com>
eth: fbnic: Prefetch packet headers on Rx
Issue a prefetch for the start of the buffer on Rx to try to avoid cache
miss on packet headers.
Link: https://patch.msgid.link/20250813221319.3367670-5-mohsin.bashr@gmail.com
> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> index 2adbe175ac09..65d1e40addec 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> @@ -888,7 +888,7 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
>
> /* Build frame around buffer */
> hdr_start = page_address(page) + hdr_pg_start;
> -
> + net_prefetch(pkt->buff.data);
Can this access uninitialized memory? Looking at the function sequence in
fbnic_pkt_prepare():
1. xdp_init_buff(&pkt->buff, frame_sz, NULL) only sets frame_sz, rxq,
and flags fields
2. net_prefetch(pkt->buff.data) accesses the data field before it's set
3. xdp_prepare_buff() initializes pkt->buff.data = hard_start + headroom
The data field appears to be accessed before xdp_prepare_buff() initializes
it to point to the actual packet buffer.
> xdp_prepare_buff(&pkt->buff, hdr_start, headroom,
> len - FBNIC_RX_PAD, true);
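A minimal sketch of an ordering that only prefetches an initialized
pointer, assuming the prefetch is still early enough there to be useful:

	xdp_prepare_buff(&pkt->buff, hdr_start, headroom,
			 len - FBNIC_RX_PAD, true);
	net_prefetch(pkt->buff.data);	/* now hdr_start + headroom */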
commit 61f9a066c3099264f40737d134c7921567f85072
Author: Mohsin Bashir <mohsin.bashr@gmail.com>
eth: fbnic: Use shinfo to track frags state on Rx
This patch removes local fields that track fragment state and instead
stores this information directly in the shinfo struct by using standard
XDP buffer management functions.
Link: https://patch.msgid.link/20250813221319.3367670-4-mohsin.bashr@gmail.com
> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> index 7c69f6381d9e..2adbe175ac09 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> @@ -892,9 +892,8 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
> xdp_prepare_buff(&pkt->buff, hdr_start, headroom,
> len - FBNIC_RX_PAD, true);
>
> - pkt->data_truesize = 0;
> - pkt->data_len = 0;
> - pkt->nr_frags = 0;
> + pkt->hwtstamp = 0;
> + pkt->add_frag_failed = false;
> }
>
> static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
> @@ -905,8 +904,8 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
> unsigned int pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
> unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
> struct page *page = fbnic_page_pool_get(&qt->sub1, pg_idx);
> - struct skb_shared_info *shinfo;
> unsigned int truesize;
> + bool added;
>
> truesize = FIELD_GET(FBNIC_RCD_AL_PAGE_FIN, rcd) ?
> FBNIC_BD_FRAG_SIZE - pg_off : ALIGN(len, 128);
> @@ -918,34 +917,34 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
> dma_sync_single_range_for_cpu(nv->dev, page_pool_get_dma_addr(page),
> pg_off, truesize, DMA_BIDIRECTIONAL);
>
> - /* Add page to xdp shared info */
> - shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> -
> - /* We use gso_segs to store truesize */
> - pkt->data_truesize += truesize;
> -
> - __skb_fill_page_desc_noacc(shinfo, pkt->nr_frags++, page, pg_off, len);
> -
> - /* Store data_len in gso_size */
> - pkt->data_len += len;
> + added = xdp_buff_add_frag(&pkt->buff, page_to_netmem(page), pg_off, len,
> + truesize);
> + if (unlikely(!added)) {
> + pkt->add_frag_failed = true;
> + netdev_err_once(nv->napi.dev,
> + "Failed to add fragment to xdp_buff\n");
> + }
Can this leak the page when xdp_buff_add_frag() fails? The page was
obtained from fbnic_page_pool_get() and synchronized via DMA, but if
xdp_buff_add_frag() returns false (when nr_frags reaches MAX_SKB_FRAGS),
the page is never returned to the pool. The old code using
__skb_fill_page_desc_noacc() never failed, but xdp_buff_add_frag() can
fail and appears to leave the allocated page unhandled.
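If xdp_buff_add_frag() can fail here, a sketch of handing the page back
to the pool in that path (assuming nothing else holds a reference):

	if (unlikely(!added)) {
		pkt->add_frag_failed = true;
		/* hypothetical: recycle the page we just pulled */
		page_pool_put_full_page(nv->page_pool, page, false);
		netdev_err_once(nv->napi.dev,
				"Failed to add fragment to xdp_buff\n");
	}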
> }
>
> static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
> struct fbnic_pkt_buff *pkt, int budget)
> {
> - struct skb_shared_info *shinfo;
> struct page *page;
> - int nr_frags;
>
> if (!pkt->buff.data_hard_start)
> return;
>
> - shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> - nr_frags = pkt->nr_frags;
> + if (xdp_buff_has_frags(&pkt->buff)) {
> + struct skb_shared_info *shinfo;
> + int nr_frags;
>
> - while (nr_frags--) {
> - page = skb_frag_page(&shinfo->frags[nr_frags]);
> - page_pool_put_full_page(nv->page_pool, page, !!budget);
> + shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> + nr_frags = shinfo->nr_frags;
> +
> + while (nr_frags--) {
> + page = skb_frag_page(&shinfo->frags[nr_frags]);
> + page_pool_put_full_page(nv->page_pool, page, !!budget);
> + }
> }
>
> page = virt_to_page(pkt->buff.data_hard_start);
[ ... ]
> @@ -1094,7 +1062,9 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
> /* We currently ignore the action table index */
> break;
> case FBNIC_RCD_TYPE_META:
> - if (likely(!fbnic_rcd_metadata_err(rcd)))
> + if (unlikely(pkt->add_frag_failed))
> + skb = NULL;
> + else if (likely(!fbnic_rcd_metadata_err(rcd)))
> skb = fbnic_build_skb(nv, pkt);
>
> /* Populate skb and invalidate XDP */
[ ... ]
commit 520369ef43a8504f9d54ee219bb6c692d2e40028
Author: Daniel Jurgens <danielj@nvidia.com>
net/mlx5: Support disabling host PFs
Some devices support disabling the physical function on the host. When
this is configured the vports for the host functions do not exist. This
patch checks if host functions are enabled before trying to access their
vports.
Link: https://patch.msgid.link/1755112796-467444-3-git-send-email-tariqt@nvidia.com
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
> index 31059fff30ec..3d533061311b 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
[ ... ]
> @@ -1769,21 +1775,23 @@ static int mlx5_esw_vports_init(struct mlx5_eswitch *esw)
>
> xa_init(&esw->vports);
>
> - err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_PF);
> - if (err)
> - goto err;
> - if (esw->first_host_vport == MLX5_VPORT_PF)
> - xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> - idx++;
> -
> - for (i = 0; i < mlx5_core_max_vfs(dev); i++) {
> - err = mlx5_esw_vport_alloc(esw, idx, idx);
> + if (mlx5_esw_host_functions_enabled(dev)) {
> + err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_PF);
> if (err)
> goto err;
> - xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_VF);
> - xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> + if (esw->first_host_vport == MLX5_VPORT_PF)
> + xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> idx++;
> + for (i = 0; i < mlx5_core_max_vfs(dev); i++) {
> + err = mlx5_esw_vport_alloc(esw, idx, idx);
> + if (err)
> + goto err;
> + xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_VF);
> + xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> + idx++;
> + }
> }
Is mlx5_esw_host_functions_enabled() accessing initialized data here?
Looking at the call sequence in mlx5_eswitch_init():
- Line 1893: dev->priv.eswitch = esw (moved earlier by this patch)
- Line 1901: mlx5_esw_vports_init() calls mlx5_esw_host_functions_enabled()
- Line 1905: mlx5_esw_host_functions_enabled_query() initializes
esw->esw_funcs.host_funcs_disabled
The mlx5_esw_host_functions_enabled() function accesses
dev->priv.eswitch->esw_funcs.host_funcs_disabled, but this field gets
initialized by mlx5_esw_host_functions_enabled_query() which runs after
mlx5_esw_vports_init().
> +
> base_sf_num = mlx5_sf_start_function_id(dev);
> for (i = 0; i < mlx5_sf_max_functions(dev); i++) {
> err = mlx5_esw_vport_alloc(esw, idx, base_sf_num + i);
[ ... ]
> @@ -1883,6 +1891,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
> goto free_esw;
>
> esw->dev = dev;
> + dev->priv.eswitch = esw;
> esw->manager_vport = mlx5_eswitch_manager_vport(dev);
> esw->first_host_vport = mlx5_eswitch_first_host_vport_num(dev);
>
> @@ -1901,7 +1910,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
> if (err)
> goto abort;
>
> - dev->priv.eswitch = esw;
> err = esw_offloads_init(esw);
> if (err)
> goto reps_err;
Moving the dev->priv.eswitch assignment earlier allows
mlx5_esw_vports_init()->mlx5_esw_host_functions_enabled() to access the
eswitch struct, but the esw_funcs.host_funcs_disabled field remains
uninitialized until mlx5_esw_host_functions_enabled_query() runs later
in the function.
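If that reading is right, one possible fix shape is to query the
host-function state before the vports are created (a sketch; the
argument list of the query helper is an assumption):

	esw->dev = dev;
	dev->priv.eswitch = esw;

	/* populate esw_funcs.host_funcs_disabled first */
	mlx5_esw_host_functions_enabled_query(esw);

	err = mlx5_esw_vports_init(esw);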
[ ... ]
> @@ -2433,3 +2441,11 @@ void mlx5_eswitch_unblock_ipsec(struct mlx5_core_dev *dev)
> dev->num_ipsec_offloads--;
> mutex_unlock(&esw->state_lock);
> }
> +
> +bool mlx5_esw_host_functions_enabled(const struct mlx5_core_dev *dev)
> +{
> + if (!dev->priv.eswitch)
> + return true;
> +
> + return !dev->priv.eswitch->esw_funcs.host_funcs_disabled;
> +}
commit c6f68f69416d0950965e5744489382ccebdc72b4
Author: Dan Carpenter <dan.carpenter@linaro.org>
nfc: pn533: Delete an unnecessary check
This patch removes what the author believes is an unnecessary check for
rc < 0 since rc is set to PTR_ERR(resp) which is always negative when
IS_ERR(resp) is true.
Link: https://patch.msgid.link/aJwn2ox5g9WsD2Vx@stanley.mountain
> diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
> index 14661249c690..2b043a9f9533 100644
> --- a/drivers/nfc/pn533/pn533.c
> +++ b/drivers/nfc/pn533/pn533.c
> @@ -1412,11 +1412,9 @@ static int pn533_autopoll_complete(struct pn533 *dev, void *arg,
> if (dev->poll_mod_count != 0)
> return rc;
> goto stop_poll;
> - } else if (rc < 0) {
> - nfc_err(dev->dev,
> - "Error %d when running autopoll\n", rc);
> - goto stop_poll;
> }
> + nfc_err(dev->dev, "Error %d when running autopoll\n", rc);
> + goto stop_poll;
> }
Can this break the success path in pn533_autopoll_complete()? When
IS_ERR(resp) is false, the function continues to the success path, but
now the error logging and goto stop_poll execute unconditionally. The
variable rc would be uninitialized in this case, leading to logging of
garbage values and incorrect error handling for successful operations.
>
> nbtg = resp->data[0];
> @@ -1505,11 +1503,9 @@ static int pn533_poll_complete(struct pn533 *dev, void *arg,
> if (dev->poll_mod_count != 0)
> return rc;
> goto stop_poll;
> - } else if (rc < 0) {
> - nfc_err(dev->dev,
> - "Error %d when running poll\n", rc);
> - goto stop_poll;
> }
> + nfc_err(dev->dev, "Error %d when running poll\n", rc);
> + goto stop_poll;
> }
The same issue appears in pn533_poll_complete() - does this also
prevent successful poll operations from continuing to the normal
processing code below?
commit 355b82c54c122e59487c52c084a146101bedc2c8
Author: Jijie Shao <shaojijie@huawei.com>
net: phy: motorcomm: Add support for PHY LEDs on YT8521
Add minimal LED controller driver supporting the most common uses with
the 'netdev' trigger.
Link: https://patch.msgid.link/20250813124542.3450447-1-shaojijie@huawei.com
> diff --git a/drivers/net/phy/motorcomm.c b/drivers/net/phy/motorcomm.c
> index 0e91f5d1a4fd..a3593e663059 100644
> --- a/drivers/net/phy/motorcomm.c
> +++ b/drivers/net/phy/motorcomm.c
[ ... ]
> +static int yt8521_led_hw_control_set(struct phy_device *phydev, u8 index,
> + unsigned long rules)
> +{
> + u16 val = 0;
> +
> + if (index >= YT8521_MAX_LEDS)
> + return -EINVAL;
> +
> + if (test_bit(TRIGGER_NETDEV_LINK, &rules)) {
> + val |= YT8521_LED_10_ON_EN;
> + val |= YT8521_LED_100_ON_EN;
> + val |= YT8521_LED_1000_ON_EN;
> + }
> +
> + if (test_bit(TRIGGER_NETDEV_LINK_10, &rules))
> + val |= YT8521_LED_10_ON_EN;
> +
> + if (test_bit(TRIGGER_NETDEV_LINK_100, &rules))
> + val |= YT8521_LED_100_ON_EN;
> +
> + if (test_bit(TRIGGER_NETDEV_LINK_1000, &rules))
> + val |= YT8521_LED_1000_ON_EN;
> +
> + if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules))
> + val |= YT8521_LED_HDX_ON_EN;
> +
> + if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules))
> + val |= YT8521_LED_FDX_ON_EN;
^^^^^^^^^^^^^^^^^^^^^
Are the duplex bit assignments swapped here? The TRIGGER_NETDEV_FULL_DUPLEX
trigger sets the YT8521_LED_HDX_ON_EN bit, while TRIGGER_NETDEV_HALF_DUPLEX
sets YT8521_LED_FDX_ON_EN. This appears inconsistent with the
yt8521_led_hw_control_get function which maps YT8521_LED_FDX_ON_EN to
TRIGGER_NETDEV_FULL_DUPLEX and YT8521_LED_HDX_ON_EN to
TRIGGER_NETDEV_HALF_DUPLEX.
> +
> + if (test_bit(TRIGGER_NETDEV_TX, &rules) ||
> + test_bit(TRIGGER_NETDEV_RX, &rules))
> + val |= YT8521_LED_ACT_BLK_IND;
> +
> + if (test_bit(TRIGGER_NETDEV_TX, &rules))
> + val |= YT8521_LED_TXACT_BLK_EN;
> +
> + if (test_bit(TRIGGER_NETDEV_RX, &rules))
> + val |= YT8521_LED_RXACT_BLK_EN;
> +
> + return ytphy_write_ext(phydev, YT8521_LED0_CFG_REG + index, val);
> +}
> +
> +static int yt8521_led_hw_control_get(struct phy_device *phydev, u8 index,
> + unsigned long *rules)
> +{
> + int val;
> +
> + if (index >= YT8521_MAX_LEDS)
> + return -EINVAL;
> +
> + val = ytphy_read_ext(phydev, YT8521_LED0_CFG_REG + index);
> + if (val < 0)
> + return val;
> +
> + if (val & YT8521_LED_TXACT_BLK_EN || val & YT8521_LED_ACT_BLK_IND)
> + __set_bit(TRIGGER_NETDEV_TX, rules);
> +
> + if (val & YT8521_LED_RXACT_BLK_EN || val & YT8521_LED_ACT_BLK_IND)
> + __set_bit(TRIGGER_NETDEV_RX, rules);
> +
> + if (val & YT8521_LED_FDX_ON_EN)
> + __set_bit(TRIGGER_NETDEV_FULL_DUPLEX, rules);
> +
> + if (val & YT8521_LED_HDX_ON_EN)
> + __set_bit(TRIGGER_NETDEV_HALF_DUPLEX, rules);
The get function correctly maps YT8521_LED_FDX_ON_EN to
TRIGGER_NETDEV_FULL_DUPLEX and YT8521_LED_HDX_ON_EN to
TRIGGER_NETDEV_HALF_DUPLEX, which confirms the set function has the
assignments reversed.
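A minimal sketch of the set side matching the get side:

	if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules))
		val |= YT8521_LED_FDX_ON_EN;

	if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules))
		val |= YT8521_LED_HDX_ON_EN;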
> +
> + if (val & YT8521_LED_1000_ON_EN)
> + __set_bit(TRIGGER_NETDEV_LINK_1000, rules);
> +
> + if (val & YT8521_LED_100_ON_EN)
> + __set_bit(TRIGGER_NETDEV_LINK_100, rules);
> +
> + if (val & YT8521_LED_10_ON_EN)
> + __set_bit(TRIGGER_NETDEV_LINK_10, rules);
> +
> + return 0;
> +}
[ ... ]
commit 07bbbfe7addf5b032e04f3c38f0b183d067a3f0d
Author: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
net: stmmac: add suspend()/resume() platform ops
Add suspend/resume platform operations, which, when populated, override
the init/exit platform operations when we suspend and resume. These
suspend()/resume() methods are called by core code, and thus are
designed to support any struct device, not just platform devices.
Link: https://patch.msgid.link/E1ulXbX-008gqZ-Bb@rmk-PC.armlinux.org.uk
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index f1abf4242cd2..2da4f7bb2899 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -7879,6 +7879,9 @@ int stmmac_suspend(struct device *dev)
> if (stmmac_fpe_supported(priv))
> ethtool_mmsv_stop(&priv->fpe_cfg.mmsv);
>
> + if (priv->plat->suspend)
> + return priv->plat->suspend(dev, priv->plat->bsp_priv);
> +
When priv->plat->suspend exists, does this bypass all the network device
cleanup that happens in the rest of stmmac_suspend()? The function returns
early here, skipping netif_device_detach(), stmmac_disable_all_queues(),
timer cancellations, DMA stopping, power management setup, and phylink
suspension. Can platform drivers be expected to handle all this network
device state management correctly?
> return 0;
> }
> EXPORT_SYMBOL_GPL(stmmac_suspend);
> @@ -7931,6 +7934,12 @@ int stmmac_resume(struct device *dev)
> struct stmmac_priv *priv = netdev_priv(ndev);
> int ret;
>
> + if (priv->plat->resume) {
> + ret = priv->plat->resume(dev, priv->plat->bsp_priv);
> + if (ret)
> + return ret;
> + }
> +
If priv->plat->resume() fails here, will the device remain in a partially
suspended state? The function returns without calling stmmac_hw_setup(),
phylink_resume(), or netif_device_attach(). What happens to network
functionality when the platform resume succeeds but the core network
device restoration never occurs?
> if (!netif_running(ndev))
> return 0;
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> index 030fcf1b5993..21df052eeed0 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> @@ -901,7 +901,9 @@ static int __maybe_unused stmmac_pltfr_suspend(struct device *dev)
> struct platform_device *pdev = to_platform_device(dev);
>
> ret = stmmac_suspend(dev);
> - stmmac_pltfr_exit(pdev, priv->plat);
> +
> + if (!priv->plat->suspend)
> + stmmac_pltfr_exit(pdev, priv->plat);
Is there an ordering issue here? When platform suspend is used,
stmmac_suspend() calls priv->plat->suspend() and returns immediately.
This means the platform suspend function executes before the platform
exit is skipped here. Does the platform suspend function need to handle
both device-specific suspension AND what stmmac_pltfr_exit() normally does?
>
> return ret;
> }
> @@ -920,9 +922,11 @@ static int __maybe_unused stmmac_pltfr_resume(struct device *dev)
> struct platform_device *pdev = to_platform_device(dev);
> int ret;
>
> - ret = stmmac_pltfr_init(pdev, priv->plat);
> - if (ret)
> - return ret;
> + if (!priv->plat->resume) {
> + ret = stmmac_pltfr_init(pdev, priv->plat);
> + if (ret)
> + return ret;
> + }
>
> return stmmac_resume(dev);
> }
When platform resume is used, stmmac_pltfr_init() is skipped but
stmmac_resume() is still called. However, if priv->plat->resume is
provided, stmmac_resume() will call it first before doing the core
network setup. Should the platform resume function handle both what
stmmac_pltfr_init() normally does AND ensure proper ordering with the
subsequent core network restoration?
> diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
> index 26ddf95d23f9..22c24dacbc65 100644
> --- a/include/linux/stmmac.h
> +++ b/include/linux/stmmac.h
> @@ -248,6 +248,8 @@ struct plat_stmmacenet_data {
> void (*ptp_clk_freq_config)(struct stmmac_priv *priv);
> int (*init)(struct platform_device *pdev, void *priv);
> void (*exit)(struct platform_device *pdev, void *priv);
> + int (*suspend)(struct device *dev, void *priv);
> + int (*resume)(struct device *dev, void *priv);
The new suspend/resume callbacks take struct device* while init/exit take
struct platform_device*. When these platform suspend/resume functions
are used, do they need to handle all the network device state management
that the core functions normally perform? Is there documentation about
what platform drivers must implement when providing these callbacks?
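For illustration, a hypothetical platform user of the new hooks (all
names below are invented):

	static int myplat_suspend(struct device *dev, void *bsp_priv)
	{
		/* platform-side clock/power handling only; the rest of
		 * stmmac_suspend() is skipped when this path is taken
		 */
		return 0;
	}

	...
	plat_dat->suspend = myplat_suspend;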
commit 60cb69214148fbe7fc50239c28e4d052eec6ae61
Author: Breno Leitao <leitao@debian.org>
netconsole: use netpoll_parse_ip_addr in local_ip_store
Replace manual IP address parsing with a call to netpoll_parse_ip_addr
in local_ip_store(), simplifying the code and reducing the chance of
errors. Also, remove the pr_err() if the user enters an invalid value in
configfs entries.
> diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
> index 2919522d963e..a9b30b5891d7 100644
> --- a/drivers/net/netconsole.c
> +++ b/drivers/net/netconsole.c
> @@ -757,6 +757,7 @@ static ssize_t local_ip_store(struct config_item *item, const char *buf,
> {
> struct netconsole_target *nt = to_target(item);
> ssize_t ret = -EINVAL;
> + int ipv6;
>
> mutex_lock(&dynamic_netconsole_mutex);
> if (nt->enabled) {
> @@ -765,23 +766,10 @@ static ssize_t local_ip_store(struct config_item *item, const char *buf,
> goto out_unlock;
> }
>
> - if (strnchr(buf, count, ':')) {
> - const char *end;
> -
> - if (in6_pton(buf, count, nt->np.local_ip.in6.s6_addr, -1, &end) > 0) {
> - if (*end && *end != '\n') {
> - pr_err("invalid IPv6 address at: <%c>\n", *end);
> - goto out_unlock;
> - }
> - nt->np.ipv6 = true;
> - } else
> - goto out_unlock;
> - } else {
> - if (!nt->np.ipv6)
> - nt->np.local_ip.ip = in_aton(buf);
> - else
> - goto out_unlock;
> - }
> + ipv6 = netpoll_parse_ip_addr(buf, &nt->np.local_ip);
> + if (ipv6 == -1)
> + goto out_unlock;
> + nt->np.ipv6 = !!ipv6;
Can netpoll_parse_ip_addr() read beyond the intended buffer? The original
code used in6_pton(buf, count, ...) which respects the count parameter,
but netpoll_parse_ip_addr() uses strlen(str) and could read beyond count
bytes if buf is not null-terminated. Other configfs store functions like
dev_name_store() use strscpy() with explicit size limits.
Does this change the IPv4/IPv6 switching behavior? The original code
rejected IPv4 addresses when nt->np.ipv6 was true (the "else goto
out_unlock" path), but the new code allows switching from IPv6 back to
IPv4 mode by always updating nt->np.ipv6 based on the parsed result.
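If the bounded-parse concern is real, one mitigation is to copy into a
NUL-terminated scratch buffer first, as other configfs stores do; a sketch
(the buffer name and the assumption that netpoll_parse_ip_addr() expects a
plain NUL-terminated string are mine):

	/* bounded copy: never read past count, always NUL-terminate */
	char addr[INET6_ADDRSTRLEN];
	size_t len = min(count, sizeof(addr) - 1);

	memcpy(addr, buf, len);
	addr[len] = '\0';
	addr[strcspn(addr, "\n")] = '\0';	/* drop trailing newline */
	ipv6 = netpoll_parse_ip_addr(addr, &nt->np.local_ip);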
>
> ret = strnlen(buf, count);
> out_unlock:
[-- Attachment #3: linux-next.txt --]
[-- Type: text/plain, Size: 35538 bytes --]
commit dc32c8d4875fdd27b43d5c295853860f4d647055
Author: Bijan Tabatabai <bijantabatab@micron.com>
mm/damon/core: skip needless update of damon_attrs in damon_commit_ctx()
This patch adds an optimization to avoid calling damon_set_attrs() when
attributes haven't changed, preventing unnecessary resets of timing fields
that can delay aggregation and ops update operations.
Link: https://lkml.kernel.org/r/20250807001924.76275-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20250806234254.10572-1-bijan311@gmail.com
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 106ee8b0f2d5..52ecc3a4426f 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -570,6 +570,23 @@ void damon_destroy_ctx(struct damon_ctx *ctx)
> kfree(ctx);
> }
>
> +static bool damon_attrs_equals(const struct damon_attrs *attrs1,
> + const struct damon_attrs *attrs2)
> +{
> + const struct damon_intervals_goal *ig1 = &attrs1->intervals_goal;
> + const struct damon_intervals_goal *ig2 = &attrs2->intervals_goal;
> +
> + return attrs1->sample_interval == attrs2->sample_interval &&
> + attrs1->aggr_interval == attrs2->aggr_interval &&
> + attrs1->ops_update_interval == attrs2->ops_update_interval &&
> + attrs1->min_nr_regions == attrs2->min_nr_regions &&
> + attrs1->max_nr_regions == attrs2->max_nr_regions &&
> + ig1->access_bp == ig2->access_bp &&
> + ig1->aggrs == ig2->aggrs &&
> + ig1->min_sample_us == ig2->min_sample_us &&
> + ig1->max_sample_us == ig2->max_sample_us;
> +}
Does this comparison miss the aggr_samples field? The damon_attrs struct
contains an aggr_samples field that is used by damon_set_attrs() and
kdamond_tune_intervals() for timing calculations. If only aggr_samples
differs between the two attrs structs, this function would return true
when the attributes are actually different, causing damon_set_attrs() to
be incorrectly skipped.
--clm false positive, SJ explained there's a comment explaining this, but
claude didn't quite parse it right.
> +
> static unsigned int damon_age_for_new_attrs(unsigned int age,
> struct damon_attrs *old_attrs, struct damon_attrs *new_attrs)
> {
> @@ -1222,9 +1239,11 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
> * 2. ops update should be done after pid handling is done (target
> * committing require putting pids).
> */
> - err = damon_set_attrs(dst, &src->attrs);
> - if (err)
> - return err;
> + if (!damon_attrs_equals(&dst->attrs, &src->attrs)) {
> + err = damon_set_attrs(dst, &src->attrs);
> + if (err)
> + return err;
> + }
> dst->ops = src->ops;
>
> return 0;
commit 7663e963a51122792811811c8119fd55c9ab254a
Author: Jeff Layton <jlayton@kernel.org>
nfsd: track original timestamps in nfs4_delegation
This commit stores file timestamps in the nfs4_delegation structure at
delegation grant time to support RFC 9754 requirements. It adds three
timespec64 fields and modifies delegation logic to capture timestamps.
Link: https://lore.kernel.org/linux-nfs/47a4e40310e797f21b5137e847b06bb203d99e66.camel@kernel.org/
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 77eea2ad93cc..8737b721daf3 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -6157,7 +6157,8 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
> path.dentry = file_dentry(nf->nf_file);
>
> rc = vfs_getattr(&path, stat,
> - (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
> + STATX_MODE | STATX_SIZE | STATX_ATIME |
> + STATX_MTIME | STATX_CTIME | STATX_CHANGE_COOKIE,
> AT_STATX_SYNC_AS_STAT);
>
> nfsd_file_put(nf);
> @@ -6274,10 +6275,14 @@ nfs4_open_delegation(struct svc_rqst *rqstp, struct nfsd4_open *open,
> OPEN_DELEGATE_WRITE;
> dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
> dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat);
> + dp->dl_atime = stat.atime;
> + dp->dl_ctime = stat.ctime;
> + dp->dl_mtime = stat.mtime;
> trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
> } else {
> - open->op_delegate_type = deleg_ts ? OPEN_DELEGATE_READ_ATTRS_DELEG :
> - OPEN_DELEGATE_READ;
> + open->op_delegate_type = deleg_ts && nfs4_delegation_stat(dp, currentfh, &stat) ?
> + OPEN_DELEGATE_READ_ATTRS_DELEG : OPEN_DELEGATE_READ;
> + dp->dl_atime = stat.atime;
When nfs4_delegation_stat() fails in the read delegation path, can this
assignment use uninitialized data from the stat structure? The stat
variable is declared on the stack and if vfs_getattr() fails within
nfs4_delegation_stat(), the atime field would contain undefined values.
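A hedged sketch of one way to keep the timestamp assignment tied to a
successful stat (a restructuring for illustration, not the author's fix):

	if (deleg_ts && nfs4_delegation_stat(dp, currentfh, &stat)) {
		open->op_delegate_type = OPEN_DELEGATE_READ_ATTRS_DELEG;
		dp->dl_atime = stat.atime;	/* only valid on success */
	} else {
		open->op_delegate_type = OPEN_DELEGATE_READ;
	}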
> trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid);
> }
> nfs4_put_stid(&dp->dl_stid);
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index 8adc2550129e..ce7c0d129ba3 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -224,6 +224,11 @@ struct nfs4_delegation {
>
> /* for CB_GETATTR */
> struct nfs4_cb_fattr dl_cb_fattr;
> +
> + /* For delegated timestamps */
> + struct timespec64 dl_atime;
> + struct timespec64 dl_mtime;
> + struct timespec64 dl_ctime;
> };
>
> static inline bool deleg_is_read(u32 dl_type)
commit b93128f29733af5d427a335978a19884c2c230e2
Author: Trond Myklebust <trond.myklebust@hammerspace.com>
NFSv4.2: Serialise O_DIRECT i/o and fallocate()
This patch adds a call to nfs_file_block_o_direct() before nfs_sync_inode()
in nfs42_proc_fallocate() to ensure all O_DIRECT reads and writes complete
before calling fallocate operations, preventing races with attribute
updates.
> diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
> index e2fea37c5348..1a169372ca16 100644
> --- a/fs/nfs/nfs42proc.c
> +++ b/fs/nfs/nfs42proc.c
> @@ -114,6 +114,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
> exception.inode = inode;
> exception.state = lock->open_context->state;
>
> + nfs_file_block_o_direct(NFS_I(inode));
Can nfs_file_block_o_direct() be called here without holding inode->i_rwsem?
The function definition in fs/nfs/internal.h has this comment:
/* Must be called with exclusively locked inode->i_rwsem */
static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi)
{
if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
inode_dio_wait(&nfsi->vfs_inode);
}
}
All other callers in the NFS code acquire the lock first. For example,
nfs_start_io_write() calls down_write_killable(&inode->i_rwsem) before
calling nfs_file_block_o_direct(), and nfs_start_io_read() does the same.
The VFS fallocate path (vfs_fallocate()->nfs42_fallocate()->nfs42_proc_
fallocate()) only calls file_start_write() which acquires sb->s_writers
but not inode->i_rwsem.
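A hedged sketch of how the documented locking rule could be satisfied here
(how far the critical section should extend is a judgment call):

	inode_lock(inode);	/* take i_rwsem exclusively */
	nfs_file_block_o_direct(NFS_I(inode));
	err = nfs_sync_inode(inode);
	inode_unlock(inode);
	if (err)
		goto out;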
> err = nfs_sync_inode(inode);
> if (err)
> goto out;
commit d442670c0f63c46b7f348f68fb2002af597708f2
Author: Jonathan Curley <jcurley@purestorage.com>
NFSv4/flexfiles: Add data structure support for striped layouts
Adds a new struct nfs4_ff_layout_ds_stripe that represents a data
server stripe within a layout. A new dynamically allocated array of
this type has been added to nfs4_ff_layout_mirror and per stripe
configuration information has been moved from the mirror type to the
stripe based on the RFC.
> diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
> index 2ad371faab98..debdc8952ab0 100644
> --- a/fs/nfs/flexfilelayout/flexfilelayout.c
> +++ b/fs/nfs/flexfilelayout/flexfilelayout.c
[ ... ]
> @@ -452,10 +459,13 @@ ff_layout_alloc_lseg(struct pnfs_layout_hdr *lh,
> goto out_err_free;
> }
>
> - fls->mirror_array[i]->ds_count = ds_count;
> + fls->mirror_array[i]->dss_count = dss_count;
> + fls->mirror_array[i]->dss =
> + kcalloc(dss_count, sizeof(struct nfs4_ff_layout_ds_stripe),
> + gfp_flags);
>
> /* deviceid */
> - rc = decode_deviceid(&stream, &fls->mirror_array[i]->devid);
> + rc = decode_deviceid(&stream, &fls->mirror_array[i]->dss[dss_id].devid);
Can this dereference NULL in ff_layout_alloc_lseg()? The kcalloc() call
above can return NULL under memory pressure, but there's no check before
accessing fls->mirror_array[i]->dss[dss_id].devid in the decode_deviceid()
call. All other memory allocations in this function check for NULL and
jump to out_err_free on failure.
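A sketch of the check that would match the function's existing error
handling (whether rc must be set to -ENOMEM first depends on the
out_err_free label, which isn't shown in the hunk):

	fls->mirror_array[i]->dss =
		kcalloc(dss_count, sizeof(struct nfs4_ff_layout_ds_stripe),
			gfp_flags);
	if (!fls->mirror_array[i]->dss)
		goto out_err_free;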
> if (rc)
> goto out_err_free;
>
[ ... ]
commit 8c48e1c7520321cc87ff651e96093e2f412785fb
Author: Stefan Metzmacher <metze@samba.org>
smb: client: don't wait for info->send_pending == 0 on error
This commit addresses an issue where smbd_destroy() would wait
indefinitely for send_pending to reach zero after ib_drain_qp() is
called. The patch removes the problematic wait and modifies smbd_send()
to exit early when the socket is disconnected.
> diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
> index c628e91c328b..02d6db431fd4 100644
> --- a/fs/smb/client/smbdirect.c
> +++ b/fs/smb/client/smbdirect.c
> @@ -1337,10 +1337,6 @@ void smbd_destroy(struct TCP_Server_Info *server)
> log_rdma_event(INFO, "cancelling idle timer\n");
> cancel_delayed_work_sync(&info->idle_timer_work);
>
> - log_rdma_event(INFO, "wait for all send posted to IB to finish\n");
> - wait_event(info->wait_send_pending,
> - atomic_read(&info->send_pending) == 0);
> -
Can smbd_destroy() proceed safely without waiting for send operations to
complete? The send_done() callback accesses the info structure after
this point:
In send_done(), when wc->status != IB_WC_SUCCESS (which happens after
ib_drain_qp() above), the function calls smbd_disconnect_rdma_connection()
but skips the atomic_dec_and_test(&info->send_pending) call. However,
when the operation succeeds, send_done() does call:
if (atomic_dec_and_test(&info->send_pending))
wake_up(&info->wait_send_pending);
After ib_drain_qp() triggers completions with IB_WC_WR_FLUSH_ERR status,
these send_done() callbacks can execute after smbd_destroy() has called
kfree(info). When send_done() then tries to access info->send_pending,
does this create a use-after-free condition?
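If the analysis holds, one hedged option is to decrement the counter for
flushed completions too, so it still drains before teardown (a sketch based
on the send_done() behavior described above, not on the full source):

	/* in send_done(), for every completion regardless of status */
	if (atomic_dec_and_test(&info->send_pending))
		wake_up(&info->wait_send_pending);
	if (wc->status != IB_WC_SUCCESS)
		smbd_disconnect_rdma_connection(info);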
> /* It's not possible for upper layer to get to reassembly */
> log_rdma_event(INFO, "drain the reassembly queue\n");
> do {
> @@ -1986,7 +1982,11 @@ int smbd_send(struct TCP_Server_Info *server,
> */
>
> wait_event(info->wait_send_pending,
> - atomic_read(&info->send_pending) == 0);
> + atomic_read(&info->send_pending) == 0 ||
> + sc->status != SMBDIRECT_SOCKET_CONNECTED);
> +
> + if (sc->status != SMBDIRECT_SOCKET_CONNECTED && rc == 0)
> + rc = -EAGAIN;
>
> return rc;
> }
commit 1f7168b28f667c0920fd83151ae35ce593950663
Author: André Almeida <andrealmeid@igalia.com>
ovl: Ensure that all layers have the same encoding
This patch adds encoding consistency validation to overlayfs when casefold
is enabled. It introduces ovl_set_encoding() to copy encoding settings from
the first layer and validates all layers have matching encoding.
> diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> index df85a76597e9..548ea9b1a48d 100644
> --- a/fs/overlayfs/super.c
> +++ b/fs/overlayfs/super.c
> @@ -991,6 +991,25 @@ static int ovl_get_data_fsid(struct ovl_fs *ofs)
> return ofs->numfs;
> }
>
> +/*
> + * Set the ovl sb encoding as the same one used by the first layer
> + */
> +static int ovl_set_encoding(struct super_block *sb, struct super_block *fs_sb)
> +{
> + if (!sb_has_encoding(fs_sb))
> + return 0;
> +
> +#if IS_ENABLED(CONFIG_UNICODE)
> + if (sb_has_strict_encoding(fs_sb)) {
> + pr_err("strict encoding not supported\n");
> + return -EINVAL;
> + }
> +
> + sb->s_encoding = fs_sb->s_encoding;
> + sb->s_encoding_flags = fs_sb->s_encoding_flags;
^^^^
Will this leak the unicode_map on overlayfs unmount? Other filesystems
like shmem and ext4 call utf8_unload(sb->s_encoding) in their put_super
operations, but ovl_put_super()->ovl_free_fs() has no utf8_unload() call.
The unicode_map appears to be reference counted via utf8_load()/utf8_unload()
pairs in other parts of the kernel, but this code just copies the pointer
without taking a proper reference.
> +#endif
> + return 0;
> +}
>
> static int ovl_get_layers(struct super_block *sb, struct ovl_fs *ofs,
> struct ovl_fs_context *ctx, struct ovl_layer *layers)
commit 8004ea01cf6338298e0c6ab055bc3ec659ce381b
Author: Babu Moger <babu.moger@amd.com>
fs/resctrl: Introduce the interface to switch between monitor modes
This patch introduces mbm_assign_mode resctrl file to switch between
mbm_event and default monitoring modes. It also resets MBM event
counters automatically when the mode changes.
Link: https://lore.kernel.org/cover.1757108044.git.babu.moger@amd.com
> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
> index f60f6a96cb6b..006d23af66e1 100644
> --- a/Documentation/filesystems/resctrl.rst
> +++ b/Documentation/filesystems/resctrl.rst
[ ... ]
> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index f388dbcdbdcd..50c24460d992 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c
[ ... ]
> +ssize_t resctrl_mbm_assign_mode_write(struct kernfs_open_file *of, char *buf,
> + size_t nbytes, loff_t off)
> +{
> + struct rdt_resource *r = rdt_kn_parent_priv(of->kn);
> + struct rdt_mon_domain *d;
> + int ret = 0;
> + bool enable;
> +
> + /* Valid input requires a trailing newline */
> + if (nbytes == 0 || buf[nbytes - 1] != '\n')
> + return -EINVAL;
> +
> + buf[nbytes - 1] = '\0';
> +
> + cpus_read_lock();
> + mutex_lock(&rdtgroup_mutex);
> +
> + rdt_last_cmd_clear();
> +
> + if (!strcmp(buf, "default")) {
> + enable = 0;
> + } else if (!strcmp(buf, "mbm_event")) {
> + if (r->mon.mbm_cntr_assignable) {
> + enable = 1;
> + } else {
> + ret = -EINVAL;
> + rdt_last_cmd_puts("mbm_event mode is not supported\n");
> + goto out_unlock;
> + }
> + } else {
> + ret = -EINVAL;
> + rdt_last_cmd_puts("Unsupported assign mode\n");
> + goto out_unlock;
> + }
> +
> + if (enable != resctrl_arch_mbm_cntr_assign_enabled(r)) {
> + ret = resctrl_arch_mbm_cntr_assign_set(r, enable);
> + if (ret)
> + goto out_unlock;
> +
> + /* Update the visibility of BMEC related files */
> + resctrl_bmec_files_show(r, NULL, !enable);
> +
> + /*
> + * Initialize the default memory transaction values for
> + * total and local events.
> + */
> + if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> + mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask;
> + if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> + mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask &
> + (READS_TO_LOCAL_MEM |
> + READS_TO_LOCAL_S_MEM |
> + NON_TEMP_WRITE_TO_LOCAL_MEM);
> + /* Enable auto assignment when switching to "mbm_event" mode */
> + if (enable)
> + r->mon.mbm_assign_on_mkdir = true;
> + /*
> + * Reset all the non-achitectural RMID state and assignable counters.
> + */
> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
> + mbm_cntr_free_all(r, d);
> + resctrl_reset_rmid_all(r, d);
> + }
> + }
> +
> +out_unlock:
> + mutex_lock(&rdtgroup_mutex);
^^^^
Can this cause a deadlock? The function already acquired rdtgroup_mutex
at the beginning, so attempting to lock it again here would block
indefinitely. Should this be mutex_unlock(&rdtgroup_mutex) instead?
> + cpus_read_unlock();
> +
> + return ret ?: nbytes;
> +}
[ ... ]
commit aab2c5088cdb26e80d51ffbe72d24ab23fa1533e
Author: Babu Moger <babu.moger@amd.com>
fs/resctrl: Add the functionality to unassign MBM events
This patch adds counter cleanup functionality for MBM events by
implementing rdtgroup_unassign_cntrs() and supporting functions to free
assigned counters when resource groups are deleted.
Link: https://lore.kernel.org/cover.1757108044.git.babu.moger@amd.com
> diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
> index 762705d7eb8d..c6b66d4a6a37 100644
> --- a/fs/resctrl/internal.h
> +++ b/fs/resctrl/internal.h
> @@ -398,6 +398,8 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_open_file *of, struct seq_fil
>
> void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp);
>
> +void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp);
> +
The rdtgroup_unassign_cntrs() function is declared but where should it be
called? Looking at rdtgroup_assign_cntrs() which is called during group
creation, shouldn't rdtgroup_unassign_cntrs() be called during group
deletion in rdtgroup_rmdir_ctrl() and rdtgroup_rmdir_mon()?
> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index 106e9bdb8a9d..2ed29ae831a4 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c
> @@ -405,6 +405,14 @@ static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_mon_domain *d,
> return -ENOSPC;
> }
>
> +/*
> + * mbm_cntr_free() - Clear the counter ID configuration details in the domain @d.
> + */
> +static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id)
> +{
> + memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg));
> +}
> +
[ ... ]
> @@ -1043,6 +1051,64 @@ void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
> &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
> }
>
> +/*
> + * rdtgroup_free_unassign_cntr() - Unassign and reset the counter ID configuration
> + * for the event pointed to by @mevt within the domain @d and resctrl group @rdtgrp.
> + */
> +static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
> + struct rdtgroup *rdtgrp, struct mon_evt *mevt)
> +{
> + int cntr_id;
> +
> + cntr_id = mbm_cntr_get(r, d, rdtgrp, mevt->evtid);
> +
> + /* If there is no cntr_id assigned, nothing to do */
> + if (cntr_id < 0)
> + return;
> +
> + rdtgroup_assign_cntr(r, d, mevt->evtid, rdtgrp->mon.rmid, rdtgrp->closid, cntr_id, false);
> +
> + mbm_cntr_free(d, cntr_id);
> +}
> +
> +/*
> + * rdtgroup_unassign_cntr_event() - Unassign a hardware counter associated with
> + * the event structure @mevt from the domain @d and the group @rdtgrp. Unassign
> + * the counters from all the domains if @d is NULL else unassign from @d.
> + */
> +static void rdtgroup_unassign_cntr_event(struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
> + struct mon_evt *mevt)
> +{
> + struct rdt_resource *r = resctrl_arch_get_resource(mevt->rid);
> +
> + if (!d) {
> + list_for_each_entry(d, &r->mon_domains, hdr.list)
> + rdtgroup_free_unassign_cntr(r, d, rdtgrp, mevt);
> + } else {
> + rdtgroup_free_unassign_cntr(r, d, rdtgrp, mevt);
> + }
> +}
> +
> +/*
> + * rdtgroup_unassign_cntrs() - Unassign the counters associated with MBM events.
> + * Called when a group is deleted.
> + */
> +void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp)
> +{
> + struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
> +
> + if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r))
> + return;
> +
> + if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> + rdtgroup_unassign_cntr_event(NULL, rdtgrp,
> + &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
> +
> + if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> + rdtgroup_unassign_cntr_event(NULL, rdtgrp,
> + &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
> +}
> +
> int resctrl_mbm_assign_mode_show(struct kernfs_open_file *of,
> struct seq_file *s, void *v)
> {
The comment states "Called when a group is deleted" but the function has
no callers. Won't this create a counter leak when groups are deleted
since counters assigned during group creation via rdtgroup_assign_cntrs()
->rdtgroup_assign_cntr_event()->rdtgroup_alloc_assign_cntr() won't be
cleaned up during deletion?
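If the function is indeed meant to be wired into the deletion paths, the
call sites would presumably look something like this (the exact rmdir
functions are an assumption based on the symmetry with group creation):

	/* in rdtgroup_rmdir_ctrl() / rdtgroup_rmdir_mon(), before the
	 * group's RMID is released
	 */
	rdtgroup_unassign_cntrs(rdtgrp);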
commit 2d8bd8049e89efe42a5397de4effd899e8dd2249
Author: Pankaj Raghav <p.raghav@samsung.com>
mm: add persistent huge zero folio
This patch introduces a config option CONFIG_PERSISTENT_HUGE_ZERO_FOLIO
that changes huge zero folio lifecycle from dynamic allocation/deallocation
to persistent allocation during boot. When enabled, the shrinker is
disabled and the folio is never freed.
Link: https://lkml.kernel.org/r/20250811084113.647267-4-kernel@pankajraghav.com
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7748489fde1b..bd547857c6c1 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -495,6 +495,17 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
> struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
> void mm_put_huge_zero_folio(struct mm_struct *mm);
>
> +static inline struct folio *get_persistent_huge_zero_folio(void)
> +{
> + if (!IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return NULL;
> +
> + if (unlikely(!huge_zero_folio))
> + return NULL;
> +
> + return huge_zero_folio;
> +}
> +
> static inline bool thp_migration_supported(void)
> {
> return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
[ ... ]
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2801ce9bbde9..b8bb078a1a34 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -248,6 +248,9 @@ static void put_huge_zero_folio(void)
>
> struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
> {
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return huge_zero_folio;
> +
> if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
> return READ_ONCE(huge_zero_folio);
>
> @@ -262,6 +265,9 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
>
> void mm_put_huge_zero_folio(struct mm_struct *mm)
> {
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return;
> +
> if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
> put_huge_zero_folio();
> }
[ ... ]
> @@ -849,16 +855,34 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
>
> static int __init thp_shrinker_init(void)
> {
> - huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> - if (!huge_zero_folio_shrinker)
> - return -ENOMEM;
> -
> deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
> SHRINKER_MEMCG_AWARE |
> SHRINKER_NONSLAB,
> "thp-deferred_split");
> - if (!deferred_split_shrinker) {
> - shrinker_free(huge_zero_folio_shrinker);
> + if (!deferred_split_shrinker)
> + return -ENOMEM;
> +
> + deferred_split_shrinker->count_objects = deferred_split_count;
> + deferred_split_shrinker->scan_objects = deferred_split_scan;
> + shrinker_register(deferred_split_shrinker);
> +
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO)) {
> + /*
> + * Bump the reference of the huge_zero_folio and do not
> + * initialize the shrinker.
> + *
> + * huge_zero_folio will always be NULL on failure. We assume
> + * that get_huge_zero_folio() will most likely not fail as
> + * thp_shrinker_init() is invoked early on during boot.
> + */
> + if (!get_huge_zero_folio())
> + pr_warn("Allocating persistent huge zero folio failed\n");
> + return 0;
> + }
Should thp_shrinker_init() return an error code when get_huge_zero_folio()
fails in persistent mode? Currently it returns 0 (success) even when the
allocation fails, which means hugepage_init() will believe initialization
succeeded. This leaves the deferred_split_shrinker allocated but the
persistent huge zero folio unavailable.
If the system is supposed to continue working without the persistent huge
zero folio, the current behavior might be intentional. However, callers
like hugepage_init() have no way to know that persistent allocation failed.
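If the allocation failure should instead be treated as fatal, a sketch of
the stricter variant (assuming the caller can cope with -ENOMEM here):

	if (!get_huge_zero_folio()) {
		pr_warn("Allocating persistent huge zero folio failed\n");
		shrinker_free(deferred_split_shrinker);
		return -ENOMEM;
	}
	return 0;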
> +
> + huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> + if (!huge_zero_folio_shrinker) {
> + shrinker_free(deferred_split_shrinker);
> return -ENOMEM;
> }
>
> @@ -866,10 +890,6 @@ static int __init thp_shrinker_init(void)
> huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
> shrinker_register(huge_zero_folio_shrinker);
>
> - deferred_split_shrinker->count_objects = deferred_split_count;
> - deferred_split_shrinker->scan_objects = deferred_split_scan;
> - shrinker_register(deferred_split_shrinker);
> -
> return 0;
> }
commit 12e423ba4eaed7b1561b677d32e6599f932d03db
Author: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
mm: convert core mm to mm_flags_*() accessors
This patch converts existing users of direct mm_struct->flags bit
operations to use the new mm_flags_*() accessor functions as part of
migrating mm->flags to a bitmap field. The commit states that no
functional changes are intended.
Link: https://lkml.kernel.org/r/1eb2266f4408798a55bda00cb04545a3203aa572.1755012943.git.lorenzo.stoakes@oracle.com
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 14d424830fa8..84b7eebe0d68 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -327,7 +327,7 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
> * example, s390 kvm.
> */
> return (vm_flags & VM_NOHUGEPAGE) ||
> - test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
> + mm_flags_test(MMF_DISABLE_THP, vma->vm_mm);
> }
>
> static inline bool thp_disabled_by_hw(void)
[ ... ]
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25923cfec9c6..17650f0b516e 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
[ ... ]
> @@ -1251,7 +1251,7 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
> * Check MMF_OOM_SKIP again under mmap_read_lock protection to ensure
> * possible change in exit_mmap is seen
> */
> - if (!test_bit(MMF_OOM_SKIP, &mm->flags) && !__oom_reap_task_mm(mm))
> + if (mm_flags_test(MMF_OOM_SKIP, mm) && !__oom_reap_task_mm(mm))
> ret = -EAGAIN;
> mmap_read_unlock(mm);
Does this condition invert the logic? The original code checked if
MMF_OOM_SKIP was NOT set, but the new code checks if it IS set. This
appears to change when process_mrelease()->__oom_reap_task_mm() gets
called and when -EAGAIN is returned to userspace.
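For comparison, the form that would preserve the semantics of the original
test_bit() check:

	if (!mm_flags_test(MMF_OOM_SKIP, mm) && !__oom_reap_task_mm(mm))
		ret = -EAGAIN;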
commit 7c80f3c72467b180a9d7b3852c74b7540a5a6ce9
Author: Mike Rapoport (Microsoft) <rppt@kernel.org>
kho: add support for preserving vmalloc allocations
A vmalloc allocation is preserved using a binary structure similar to the
global KHO memory tracker: a linked list of pages where each page is an
array of physical addresses of the pages in the vmalloc area.
Link: https://lkml.kernel.org/r/20250921054458.4043761-4-rppt@kernel.org
> diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
> index cec663b39861..1b3c41ab7a0b 100644
> --- a/include/linux/kexec_handover.h
> +++ b/include/linux/kexec_handover.h
[ ... ]
> diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
> index d30c2a99f3ea..57336e510f3b 100644
> --- a/kernel/kexec_handover.c
> +++ b/kernel/kexec_handover.c
[ ... ]
> +void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
> +{
> + struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
> + unsigned int align, order, shift, vm_flags;
> + unsigned long total_pages, contig_pages;
> + unsigned long addr, size;
> + struct vm_struct *area;
> + struct page **pages;
> + unsigned int idx = 0;
> + int err;
> +
> + vm_flags = kho_flags_to_vmalloc(preservation->flags);
> + if (vm_flags & ~KHO_VMALLOC_SUPPORTED_FLAGS)
> + return NULL;
> +
> + total_pages = preservation->total_pages;
> + pages = kvmalloc_array(total_pages, sizeof(*pages), GFP_KERNEL);
> + if (!pages)
> + return NULL;
> + order = preservation->order;
> + contig_pages = (1 << order);
> + shift = PAGE_SHIFT + order;
> + align = 1 << shift;
> +
> + while (chunk) {
> + struct page *page;
> +
> + for (int i = 0; chunk->phys[i]; i++) {
> + phys_addr_t phys = chunk->phys[i];
> +
> + if (idx + contig_pages > total_pages)
> + goto err_free_pages_array;
> +
> + page = kho_restore_pages(phys, contig_pages);
> + if (!page)
> + goto err_free_pages_array;
> +
> + for (int j = 0; j < contig_pages; j++)
> + pages[idx++] = page;
^^^^^
Can this page array handling work correctly for higher-order pages?
When contig_pages > 1, the loop stores the same compound page head
pointer multiple times instead of individual page pointers (page + j).
The vmap_pages_range() function expects individual page pointers for
each PAGE_SIZE page, but this provides duplicate pointers to the
compound head.
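If the concern is real, the per-page form would presumably be (sketch):

	for (int j = 0; j < contig_pages; j++)
		pages[idx++] = page + j;	/* each PAGE_SIZE subpage */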
> +
> + phys += contig_pages * PAGE_SIZE;
> + }
> +
> + page = kho_restore_pages(virt_to_phys(chunk), 1);
> + if (!page)
> + goto err_free_pages_array;
> + chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
> + __free_page(page);
> + }
> +
> + if (idx != total_pages)
> + goto err_free_pages_array;
> +
> + area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
> + vm_flags, VMALLOC_START, VMALLOC_END,
> + NUMA_NO_NODE, GFP_KERNEL,
> + __builtin_return_address(0));
> + if (!area)
> + goto err_free_pages_array;
> +
> + addr = (unsigned long)area->addr;
> + size = get_vm_area_size(area);
> + err = vmap_pages_range(addr, addr + size, PAGE_KERNEL, pages, shift);
> + if (err)
> + goto err_free_vm_area;
> +
> + area->nr_pages = total_pages;
> + area->pages = pages;
> +
> + return area->addr;
> +
> +err_free_vm_area:
> + free_vm_area(area);
> +err_free_pages_array:
> + kvfree(pages);
> + return NULL;
> +}
commit 2191b3471df53d87a46179b0c302c77c9ccfec11
Author: Stefan Metzmacher <metze@samba.org>
smb: smbdirect: introduce smbdirect_socket.send_io.credits.{count,wait_queue}
This patch adds new credit management fields to the smbdirect_socket
structure that will be shared between client and server soon. The
patch initializes these fields in smbdirect_socket_init().
> diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
> index 1df43b7a87f4..7c2147a0244a 100644
> --- a/fs/smb/common/smbdirect/smbdirect_socket.h
> +++ b/fs/smb/common/smbdirect/smbdirect_socket.h
> @@ -78,6 +78,14 @@ struct smbdirect_socket {
> mempool_t *pool;
> } mem;
>
> + /*
> + * The credit state for the send side
> + */
> + struct {
> + atomic_t count;
> + wait_queue_head_t wait_queue;
> + } credits;
> +
Are these new fields properly initialized when smbdirect_socket
structures are created? Looking at _smbd_get_connection() in
fs/smb/client/smbdirect.c, the smbd_connection structure (which
contains smbdirect_socket as its first member) is allocated with
kzalloc() but smbdirect_socket_init() is never called.
> /*
> * The state about posted/pending sends
> */
> @@ -163,6 +171,9 @@ static __always_inline void smbdirect_socket_init(struct smbdirect_socket *sc)
>
> init_waitqueue_head(&sc->status_wait);
>
> + atomic_set(&sc->send_io.credits.count, 0);
> + init_waitqueue_head(&sc->send_io.credits.wait_queue);
> +
The initialization code looks correct, but can this function be reached?
A grep through the fs/smb/ directory shows smbdirect_socket_init() is
defined but never called. The actual initialization in _smbd_get_connection()
only calls init_waitqueue_head() for some specific wait queues but not
for the new credits.wait_queue.
Without init_waitqueue_head() being called, any subsequent wait_event()
or wake_up() operations on sc->send_io.credits.wait_queue will access
uninitialized spinlocks and list heads, leading to kernel crashes.
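If smbdirect_socket_init() is the intended initializer, wiring it up could
look something like this (a sketch; the member name and allocation details
in _smbd_get_connection() are assumed from the description above):

	info = kzalloc(sizeof(*info), GFP_KERNEL);
	if (!info)
		return NULL;
	sc = &info->socket;
	smbdirect_socket_init(sc);	/* also inits credits.wait_queue */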
> atomic_set(&sc->send_io.pending.count, 0);
> init_waitqueue_head(&sc->send_io.pending.dec_wait_queue);
> init_waitqueue_head(&sc->send_io.pending.zero_wait_queue);
commit 371d3ab51ce44c8a7c0f9b835b74b7340a922059
Author: Stefan Metzmacher <metze@samba.org>
smb: smbdirect: introduce smbdirect_socket.status_wait
This patch adds a wait_queue_head_t status_wait field to the
smbdirect_socket struct for future use by server and client components.
> diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
> index 8484c6f09317..a69b9649b88e 100644
> --- a/fs/smb/common/smbdirect/smbdirect_socket.h
> +++ b/fs/smb/common/smbdirect/smbdirect_socket.h
> @@ -41,6 +41,7 @@ const char *smbdirect_socket_status_string(enum smbdirect_socket_status status)
>
> struct smbdirect_socket {
> enum smbdirect_socket_status status;
> + wait_queue_head_t status_wait;
>
> /* RDMA related */
> struct {
Is the new status_wait field being properly initialized? Looking at
_smbd_get_connection() in fs/smb/client/smbdirect.c, the smbdirect_socket
struct is allocated via kzalloc() which zeros the memory, but wait queue
heads require explicit initialization via init_waitqueue_head() before
use. The existing info->status_wait is correctly initialized, but I don't
see where sc->status_wait gets initialized.
Can uninitialized wait queue operations cause memory corruption when this
field is eventually used?
commit 603b4416232524dafde8e2cf859788dae786dea1
Author: KP Singh <kpsingh@kernel.org>
bpf: Update the bpf_prog_calc_tag to use SHA256
This patch changes bpf_prog_calc_tag() to use SHA256 instead of SHA1
and modifies the bpf_prog structure to store the full SHA256 digest.
The digest and tag fields are placed in a union to maintain compatibility.
Link: https://lore.kernel.org/r/20250914215141.15144-2-kpsingh@kernel.org
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 41f776071ff5..d75902074bd1 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -31,6 +31,7 @@
> #include <linux/memcontrol.h>
> #include <linux/cfi.h>
> #include <asm/rqspinlock.h>
> +#include <crypto/sha2.h>
>
> struct bpf_verifier_env;
> struct bpf_verifier_log;
> @@ -1717,7 +1718,10 @@ struct bpf_prog {
> enum bpf_attach_type expected_attach_type; /* For some prog types */
> u32 len; /* Number of filter blocks */
> u32 jited_len; /* Size of jited insns in bytes */
> - u8 tag[BPF_TAG_SIZE];
> + union {
> + u8 digest[SHA256_DIGEST_SIZE];
> + u8 tag[BPF_TAG_SIZE];
> + };
Can this union create memory access issues? The digest array is 32 bytes
(SHA256_DIGEST_SIZE) while tag is 8 bytes (BPF_TAG_SIZE). Code accessing
prog->tag after bpf_prog_calc_tag() writes to prog->digest might read
incorrect data from the union.
--clm false positive, but not sure I want to teach claude to like unions
> struct bpf_prog_stats __percpu *stats;
> int __percpu *active;
> unsigned int (*bpf_func)(const void *ctx,
[ ... ]
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 1cda2589d4b3..9b64674df16b 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -39,6 +39,7 @@
> #include <linux/bpf_mem_alloc.h>
> #include <linux/memcontrol.h>
> #include <linux/execmem.h>
> +#include <crypto/sha2.h>
>
> #include <asm/barrier.h>
> #include <linux/unaligned.h>
> @@ -296,7 +297,6 @@ void __bpf_prog_free(struct bpf_prog *fp)
> int bpf_prog_calc_tag(struct bpf_prog *fp)
> {
> size_t size = bpf_prog_insn_size(fp);
> - u8 digest[SHA1_DIGEST_SIZE];
> struct bpf_insn *dst;
> bool was_ld_map;
> u32 i;
> @@ -327,8 +327,7 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
> was_ld_map = false;
> }
> }
> - sha1((const u8 *)dst, size, digest);
> - memcpy(fp->tag, digest, sizeof(fp->tag));
> + sha256((u8 *)dst, size, fp->digest);
Are prog->tag users broken by this change? The function now writes to
fp->digest but existing code accesses fp->tag. For example:
- perf_event_bpf_event() calls memcpy(bpf_event.event_id.tag, prog->tag, BPF_TAG_SIZE)
- bpf_prog_get_info_by_fd() calls memcpy(info.tag, prog->tag, sizeof(prog->tag))
- bpf_prog_get_info_by_fd() also calls copy_to_user(user_prog_tags[0], prog->tag, BPF_TAG_SIZE)
Since fp->digest gets written but fp->tag gets read, will these functions
access uninitialized memory in the tag field?
> vfree(dst);
> return 0;
> }
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
@ 2025-10-08 17:20 ` Konstantin Ryabitsev
2025-10-08 18:11 ` Sasha Levin
2025-10-08 18:35 ` Chris Mason
2025-10-08 17:57 ` Bart Van Assche
` (5 subsequent siblings)
6 siblings, 2 replies; 68+ messages in thread
From: Konstantin Ryabitsev @ 2025-10-08 17:20 UTC (permalink / raw)
To: Chris Mason; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
> My goal for KS/MS is to discuss how to enable maintainers to use review
> automation tools to lower their workload. I don't want to build new CI
> here, so the goal would be enabling integration with existing CI.
>
> My question for everyone is what would it take to make all of this
> useful?
I am generally of the opinion that it's more useful to talk about it than to
ignore it. Clearly, this is a controversial subject -- many will liken
introducing proprietary AI tooling to what happened with bitkeeper.
I've been working on incorporating series summarization with b4, but I'm
trying to make it work with ollama/gemma3 so as not to introduce a proprietary
dependency. My results are probably a lot more hit-and-miss than with Claude
4.5 Sonnet -- but I find it hard to judge because the summaries *look*
reasonably good to someone who is not a maintainer of that particular
subsystem.
Maybe it's more of a BoF session material?
-K
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
2025-10-08 17:20 ` Konstantin Ryabitsev
@ 2025-10-08 17:57 ` Bart Van Assche
2025-10-08 18:04 ` Chris Mason
2025-10-08 18:33 ` Sasha Levin
` (4 subsequent siblings)
6 siblings, 1 reply; 68+ messages in thread
From: Bart Van Assche @ 2025-10-08 17:57 UTC (permalink / raw)
To: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 10:04 AM, Chris Mason wrote:
> 2) A code indexing tool with MCP server that Claude can use to find
> functions, types, and call chains more effectively. This makes it more
> likely Claude can trace complex relationships in the code:
Different kernel developers may prefer different AI systems. As an
example, my employer expects me to use Gemini and I have seen it
producing interesting code reviews. So I would prefer that any AI
code review prompts that are developed for kernel developers support at
least the most widely used AI systems.
Thanks,
Bart.
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:57 ` Bart Van Assche
@ 2025-10-08 18:04 ` Chris Mason
2025-10-08 18:14 ` Bart Van Assche
2025-10-08 21:08 ` Kees Cook
0 siblings, 2 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-08 18:04 UTC (permalink / raw)
To: Bart Van Assche, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 1:57 PM, Bart Van Assche wrote:
> On 10/8/25 10:04 AM, Chris Mason wrote:
>> 2) A code indexing tool with MCP server that Claude can use to find
>> functions, types, and call chains more effectively. This makes it more
>> likely Claude can trace complex relationships in the code:
>
> Different kernel developers may prefer different AI systems. As an
> example, my employer expects me to use Gemini and I have seen it
> producing interesting code reviews. So I would prefer that any AI
> code review prompts that are developed for kernel developers support at
> least the most widely used AI systems.
The prompts aren't specific to any AI, but so far I've only tested with
claude. I'd gladly take patches that enable other tools, but if you
read through things, you can see it's really just a basic framework
around gathering context about the kernel code.
Claude really needs aggressive use of checklists to get it to stay
focused; other AIs may need other tricks.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:20 ` Konstantin Ryabitsev
@ 2025-10-08 18:11 ` Sasha Levin
2025-10-08 18:35 ` Chris Mason
1 sibling, 0 replies; 68+ messages in thread
From: Sasha Levin @ 2025-10-08 18:11 UTC (permalink / raw)
To: Konstantin Ryabitsev
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 01:20:21PM -0400, Konstantin Ryabitsev wrote:
>On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
>> My goal for KS/MS is to discuss how to enable maintainers to use review
>> automation tools to lower their workload. I don't want to build new CI
>> here, so the goal would be enabling integration with existing CI.
>>
>> My question for everyone is what would it take to make all of this
>> useful?
>
>I am generally of the opinion that it's more useful to talk about it than to
>ignore it. Clearly, this is a controversial subject -- many will liken
>introducing proprietary AI tooling to what happened with bitkeeper.
>
>I've been working on incorporating series summarization with b4, but I'm
>trying to make it work with ollama/gemma3 so as not to introduce a proprietary
>dependency. My results are probably a lot more hit-and-miss than with Claude
>4.5 Sonnet -- but I find it hard to judge because the summaries *look*
>reasonably good to someone who is not a maintainer of that particular
>subsystem.
The approach I've been trying to take with my tools is that instead of
embedding the AI API calling logic into the tool itself (like I did initially
with the AUTOSEL rewrite), I just provided a "hook" in the form of a script
that takes the prompt as input and outputs the response from whatever LLM was
used behind the scenes.
It makes it easier both in the sense that I can easily switch between providers
just by calling a different script, and also I don't get locked into any single
provider and can decide if I want to use one of the open models or a
proprietary one.
We can provide a library of these scripts to make it easy for users and give
them the freedom to choose which model they want to use.
--
Thanks,
Sasha
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 18:04 ` Chris Mason
@ 2025-10-08 18:14 ` Bart Van Assche
2025-10-08 18:42 ` Chris Mason
2025-10-08 21:08 ` Kees Cook
1 sibling, 1 reply; 68+ messages in thread
From: Bart Van Assche @ 2025-10-08 18:14 UTC (permalink / raw)
To: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 11:04 AM, Chris Mason wrote:
> Claude really needs aggressive use of checklists to get it to stay
> focused; other AIs may need other tricks.
Would that be a reason to avoid Claude for code reviews? All I need to
make ChatGPT or Gemini come up with a code review is to tell it that it
should review a kernel patch pretending that it is an experienced kernel
developer and also what issues it should look for. The AI kernel patch
review prompts I have seen from my colleagues are about 30-40 lines
long. However, I do not know what copyright license applies to these
prompts, so I cannot share them.
Thanks,
Bart.
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
2025-10-08 17:20 ` Konstantin Ryabitsev
2025-10-08 17:57 ` Bart Van Assche
@ 2025-10-08 18:33 ` Sasha Levin
2025-10-09 1:43 ` Chris Mason
2025-10-08 19:08 ` Andrew Lunn
` (3 subsequent siblings)
6 siblings, 1 reply; 68+ messages in thread
From: Sasha Levin @ 2025-10-08 18:33 UTC (permalink / raw)
To: Chris Mason; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
>Hi everyone,
>
>Depending on how you look at things, this is potentially a topic for
>either MS or KS.
>
>One way to lower the load on maintainers is to make it easier for
>contributors to send higher quality patches, and to catch errors before
>they land in various git trees.
>
>Along those lines, when the AI code submission thread started over the
>summer, I decided to see if it was possible to get reasonable code
>reviews out of AI.
This is a really great subject to discuss.
IMO when the AI topic initially came up, the concerns around AI drowned out
discussions of finding places where AI could be useful and provide support for
maintainers struggling under the load.
>There are certainly false positives, but Alexei and the BPF developers
>wired up my prompts into the BPF CI, and you can find the results in
>their github CI. Everything in red is a bug the AI review found:
>
>https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
>
>My goal for KS/MS is to discuss how to enable maintainers to use review
>automation tools to lower their workload. I don't want to build new CI
>here, so the goal would be enabling integration with existing CI.
This is great to see!
I think that AUTOSEL is already a large-scale kernel patch review workflow and
I'd love to work more towards a unified kernel patch AI review story that
community members could use.
I've gotten good feedback[1] from Linus about the AI AUTOSEL workflow and his
idea around expanding our AI tooling to help with maintainer workflows sounds
in line with what you're describing here.
>My question for everyone is what would it take to make all of this
>useful? I'm working on funding for API access, so hopefully that part
>won't be a problem.
I've been playing with extending[2] b4 (unofficially) with AI workflows that
are useful for myself. The patches themselves aren't too interesting, but what
I found exciting is expanding b4 with tooling that maintainers can run locally
and which will save them a good amount of time/effort.
Something along the lines of "b4 ai review [commit]" or "b4 ai style [patch]".
>There's definitely overlap between the bugs I'm finding and the bugs Dan
>Carpenter finds, so I'm hoping he and I can team up as well.
>
>In terms of actual review details, the reviews have two parts:
>
>1) The review prompts. These are stand alone and can just work on any
>kernel tree. This is what BPF CI is currently using:
>
>https://github.com/masoncl/review-prompts/
I think I'll borrow some of these :)
It was interesting to read through some of the prompts and see where you had
good and bad experience with the AI ("You're not smart enough to understand
smp_mb()" :D )
>These prompts can also debug oopsen or syzbot reports (with varying
>success).
>
>2) A code indexing tool with MCP server that Claude can use to find
>functions, types, and call chains more effectively. This makes it more
>likely Claude can trace complex relationships in the code:
>
>https://github.com/facebookexperimental/semcode
>
>Asking claude to produce a callgraph for btrfs_search_slot() consumes
>~444K tokens. With semcode installed, the same query produces better
>results and uses 25K tokens. (btrfs_search_slot() has a huge callchain)
Also very interesting! AUTOSEL is a token-eating machine. Reducing token counts
while still getting the same (or better) quality of outputs is really great.
[1] https://lore.kernel.org/all/CAHk-=wgX-2+hH5fM344_wKYNZS5CSf4ispS4X+s5xkma_Mdu_Q@mail.gmail.com/
[2] https://github.com/sashalevin/b4-ai
--
Thanks,
Sasha
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:20 ` Konstantin Ryabitsev
2025-10-08 18:11 ` Sasha Levin
@ 2025-10-08 18:35 ` Chris Mason
1 sibling, 0 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-08 18:35 UTC (permalink / raw)
To: Konstantin Ryabitsev; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 1:20 PM, Konstantin Ryabitsev wrote:
> On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
>> My goal for KS/MS is to discuss how to enable maintainers to use review
>> automation tools to lower their workload. I don't want to build new CI
>> here, so the goal would be enabling integration with existing CI.
>>
>> My question for everyone is what would it take to make all of this
>> useful?
>
> I am generally of the opinion that it's more useful to talk about it than to
> ignore it. Clearly, this is a controversial subject -- many will liken
> introducing proprietary AI tooling to what happened with bitkeeper.
>
Yeah, I understand this concern.
> I've been working on incorporating series summarization with b4, but I'm
> trying to make it work with ollama/gemma3 so as not to introduce a proprietary
> dependency. My results are probably a lot more hit-and-miss than with Claude
> 4.5 Sonnet -- but I find it hard to judge because the summaries *look*
> reasonably good to someone who is not a maintainer of that particular
> subsystem.
>
You know way more about the variety of kernel developer workflows than
me, but since everyone will want something different, I think any
reasonable and open default, along with a way people can plug in their
own API and prompt, is fantastic.
For the code reviews, I don't want to exclude someone using an open
tool, and I'm actually really curious how close they can get. At some
point I'm going to collect example bugs of the major categories and add
them to an automated test setup for the prompts, and we can also use
that against alternative AIs.
> Maybe it's more of a BoF session material?
>
I'm up for anything.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 18:14 ` Bart Van Assche
@ 2025-10-08 18:42 ` Chris Mason
0 siblings, 0 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-08 18:42 UTC (permalink / raw)
To: Bart Van Assche, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 2:14 PM, Bart Van Assche wrote:
> On 10/8/25 11:04 AM, Chris Mason wrote:
>> Claude really needs aggressive use of checklists to get it to stay
>> focused; other AIs may need other tricks.
>
> Would that be a reason to avoid Claude for code reviews? All I need to
> make ChatGPT or Gemini come up with a code review is to tell it that it
> should review a kernel patch pretending that it is an experienced kernel
> developer and also what issues it should look for. The AI kernel patch
> review prompts I have seen from my colleagues are about 30-40 lines
> long. However, I do not know what copyright license applies to these
> prompts so I cannot share these prompts.
>
Absolutely. If something else is better I'd gladly switch. The
discussion around how we decide what's better, and how we make it more
broadly consumable, is the part I find most interesting.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
` (2 preceding siblings ...)
2025-10-08 18:33 ` Sasha Levin
@ 2025-10-08 19:08 ` Andrew Lunn
2025-10-08 19:28 ` Arnaldo Carvalho de Melo
2025-10-08 19:29 ` Laurent Pinchart
2025-10-09 14:52 ` Paul E. McKenney
` (2 subsequent siblings)
6 siblings, 2 replies; 68+ messages in thread
From: Andrew Lunn @ 2025-10-08 19:08 UTC (permalink / raw)
To: Chris Mason; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
> My goal for KS/MS is to discuss how to enable maintainers to use review
> automation tools to lower their workload.
Maintainers will want to use these tools, if they prove to be
useful. But ideally, we want the developers to use these tools and fix
the issues before they post code for review. That reduces the
maintainers' workload even more. So maintainers just need to run the
tools to prove that the developers have run the tools and have already
fixed the problems.
So I'm not sure your goal is the correct long-term goal. It should be
a tool for everybody, not just maintainers.
Andrew
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:08 ` Andrew Lunn
@ 2025-10-08 19:28 ` Arnaldo Carvalho de Melo
2025-10-08 19:33 ` Laurent Pinchart
2025-10-08 19:29 ` Laurent Pinchart
1 sibling, 1 reply; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-08 19:28 UTC (permalink / raw)
To: Andrew Lunn; +Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > My goal for KS/MS is to discuss how to enable maintainers to use review
> > automation tools to lower their workload.
> Maintainers will want to use these tools, if they prove to be useful.
Right.
> But ideally, we want the developers to use these tools and fix
> the issues before they post code for review.
Sure, as before, people should try to follow the best practices before
sending pull requests; it's in the best interest of everybody.
But if they do so, and I guess most will, there will be more patches
flowing upstream, thus Chris's effort, I think, right?
> That reduces the maintainers' workload even more. So maintainers just
> need to run the tools to prove that the developers have run the tools
> and have already fixed the problems.
I think tools evolve and competing tools may provide different results,
so the more tools that are used, at every step, the greater the
chance of catching things _before_ maintainers have to look at
something.
As b4 does all sorts of checks, making sure that the patch was vetted by
whatever automation before it hits their inboxes seems to reduce the
number of steps maintainers will have to perform.
- Arnaldo
> So I'm not sure your goal is the correct long-term goal. It should be
> a tool for everybody, not just maintainers.
>
> Andrew
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:08 ` Andrew Lunn
2025-10-08 19:28 ` Arnaldo Carvalho de Melo
@ 2025-10-08 19:29 ` Laurent Pinchart
2025-10-08 19:50 ` Bird, Tim
2025-10-08 20:37 ` Andrew Lunn
1 sibling, 2 replies; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-08 19:29 UTC (permalink / raw)
To: Andrew Lunn
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > My goal for KS/MS is to discuss how to enable maintainers to use review
> > automation tools to lower their workload.
>
> Maintainers will want to use these tools, if they prove to be
> useful. But ideally, we want the developers to use these tools and fix
> the issues before they post code for review. That reduces the
> maintainers workload even more. So Maintainers just need to run the
> tools to prove that the developers have run the tools and have already
> fixed the problems.
>
> So i'm not sure your goal is the correct long term goal. It should be
> a tool for everybody, not just maintainers.
This raises the interesting and important question of how to get patch
submitters to follow a recommended workflow. We routinely get patches
that produce checkpatch errors that are clearly not false positives.
Rob Herring implemented a bot to run checks on device tree bindings and
device tree sources because lots of patches fail those checks. I'm sure
there are lots of other examples that have led maintainers to automate
checks on the receiver's side, through various types of standard CIs or
hand-made solutions. Submitters should run more tests; how to get them
to do so is a broader question.
--
Regards,
Laurent Pinchart
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:28 ` Arnaldo Carvalho de Melo
@ 2025-10-08 19:33 ` Laurent Pinchart
2025-10-08 19:39 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-08 19:33 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Andrew Lunn, Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 04:28:14PM -0300, Arnaldo Carvalho de Melo wrote:
> On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > My goal for KS/MS is to discuss how to enable maintainers to use review
> > > automation tools to lower their workload.
>
> > Maintainers will want to use these tools, if they prove to be useful.
>
> Right.
>
> > But ideally, we want the developers to use these tools and fix
> > the issues before they post code for review.
>
> Sure, as before, people should try to follow the best practices before
> sending pull requests; it's in the best interest of everybody.
>
> But if they do so, and I guess most will, there will be more patches
> > flowing upstream, thus Chris's effort, I think, right?
I'd argue there will be fewer patches flowing upstream: lots of v1s
(and sometimes subsequent versions) where maintainers point out
obvious mistakes will be avoided. The new v1 that ends up on the list
will take more time to review than the old v1, but that's just because
the new v1 will be the old v2.
To make this happen, though, maintainers will need to be reasonably
confident that obvious mistakes will have already been fixed.
> > That reduces the maintainers workload even more. So Maintainers just
> > need to run the tools to prove that the developers have run the tools
> > and have already fixed the problems.
>
> I think tools evolve and competing tools may provide different results,
> so the more tools that are used, in all steps, there will be a greater
> chance of catching things _before_ maintainers have to look at
> something.
>
> As b4 does all sorts of checks, making sure that the patch was vetted by
> whatever automation before it hits their inboxes seems to reduce the
> number of steps maintainers will have to perform.
>
> - Arnaldo
>
> > So i'm not sure your goal is the correct long term goal. It should be
> > a tool for everybody, not just maintainers.
--
Regards,
Laurent Pinchart
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:33 ` Laurent Pinchart
@ 2025-10-08 19:39 ` Arnaldo Carvalho de Melo
2025-10-08 20:29 ` Andrew Lunn
2025-10-09 9:37 ` Laurent Pinchart
0 siblings, 2 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-08 19:39 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Andrew Lunn, Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 10:33:49PM +0300, Laurent Pinchart wrote:
> On Wed, Oct 08, 2025 at 04:28:14PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > But ideally, we want the developers to use these tools and fix
> > > the issues before they post code for review.
> > Sure, as before, people should try to follow the best practices before
> > sending pull requests, its in the best interest of everybody.
> > But if they do so, and I guess most will, there will be more patches
> > flowing upstream, thus Chris effort, I think, right?
> I'd argue there will be less patches flowing upstream, lots of v1 (and
> sometimes subsequent versions) where maintainers point obvious mistakes
> will be avoided. The new v1 that would end up on the list will take more
> time to review than the old v1, but that's just because the new v1 will
> be the old v2.
Hopefully, but then all the time contributors had to spend on writing
multiple versions of the same patch could be used to send tons of good
v1 patches, leading to more features, or to dealing with the tech debt
most people have. :-)
> To make this happen, though, maintainers will need to be reasonably
> confident that obvious mistakes will have already been fixed.
I think maintainers can't take anything for granted, even when dealing
with contributors who have posted tons of patches before :-/
And as you said, we can't count on contributors running existing tests,
or using things like linters, checkpatch, you name it, let alone AI
assistants.
- Arnaldo
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:29 ` Laurent Pinchart
@ 2025-10-08 19:50 ` Bird, Tim
2025-10-08 20:30 ` Sasha Levin
2025-10-08 20:30 ` James Bottomley
2025-10-08 20:37 ` Andrew Lunn
1 sibling, 2 replies; 68+ messages in thread
From: Bird, Tim @ 2025-10-08 19:50 UTC (permalink / raw)
To: laurent.pinchart, Andrew Lunn
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
> -----Original Message-----
> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > My goal for KS/MS is to discuss how to enable maintainers to use review
> > > automation tools to lower their workload.
> >
> > Maintainers will want to use these tools, if they prove to be
> > useful. But ideally, we want the developers to use these tools and fix
> > the issues before they post code for review. That reduces the
> > maintainers workload even more. So Maintainers just need to run the
> > tools to prove that the developers have run the tools and have already
> > fixed the problems.
> >
> > So i'm not sure your goal is the correct long term goal. It should be
> > a tool for everybody, not just maintainers.
>
> This raises the interesting and important question of how to get patch
> submitters to follow a recommended workflow. We routinely get patches
> that produce checkpatch errors that are clearly not false positives.
> Rob Herring implemented a bot to run checks on device tree bindings and
> device tree sources because lots of patches fail those checks. I'm sure
> there are lots of other examples that have led maintainers to automate
> checks on the receiver's side, through various types of standard CIs or
> hand-made solutions. Submitters should run more tests, how to get them
> to do so is a broader question.
Maybe it would be worthwhile to annotate patch submissions with tags
indicating what tools have been run on them. I know we're trying to
avoid overuse of commit tags, but maybe we could automate this a bit,
and/or reuse the 'Reviewed-by:' tag in the commit message. I could
envision some future workflow utopia where a missing 'Reviewed-by:
checkpatch.pl AND claude AI review' would be grounds for requesting
these before human review.
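For illustration, a sketch of what such trailers might look like (the
tool names and version strings here are made up for the example, not a
proposed standard):

  Reviewed-by: checkpatch.pl --strict (clean)
  Reviewed-by: claude-ai-review (prompt bundle v3, no open findings)

Tooling could then both add and verify tags like these automatically.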
-- Tim
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:39 ` Arnaldo Carvalho de Melo
@ 2025-10-08 20:29 ` Andrew Lunn
2025-10-08 20:53 ` Mark Brown
2025-10-09 9:37 ` Laurent Pinchart
1 sibling, 1 reply; 68+ messages in thread
From: Andrew Lunn @ 2025-10-08 20:29 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Laurent Pinchart, Chris Mason, ksummit, Dan Carpenter,
Alexei Starovoitov
> I think maintainers can't take anything for sure, even when dealing with
> contributors that posted tons of patches before :-/
>
> And as you said, we can't count on contributors running existing tests,
> or using things like linters, checkpatch, you name it, let alone AI
> assistants.
I agree. Maintainers run these tests as well just to confirm the
developer has run the test. And Maintainers probably take less notice
of the output, and look more at the passed/failed exit status. Why it
failed is generally not too important; the fact it failed just needs
to be bounced back to the submitter so they can investigate and fix
the issue.
But this also requires that these tests are reproducible, and that is
one question I have. checkpatch, sparse, kdoc, Coccinelle are all
reproducible. They give the same answer every time. If it passes for
the developer, it should also pass for the Maintainer. But is this
true of AI tools? Are they reproducible?
Andrew
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:50 ` Bird, Tim
@ 2025-10-08 20:30 ` Sasha Levin
2025-10-09 12:32 ` Arnaldo Carvalho de Melo
2025-10-08 20:30 ` James Bottomley
1 sibling, 1 reply; 68+ messages in thread
From: Sasha Levin @ 2025-10-08 20:30 UTC (permalink / raw)
To: Bird, Tim
Cc: laurent.pinchart, Andrew Lunn, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, Oct 08, 2025 at 07:50:32PM +0000, Bird, Tim wrote:
>
>
>> -----Original Message-----
>> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>> On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
>> > > My goal for KS/MS is to discuss how to enable maintainers to use review
>> > > automation tools to lower their workload.
>> >
>> > Maintainers will want to use these tools, if they prove to be
>> > useful. But ideally, we want the developers to use these tools and fix
>> > the issues before they post code for review. That reduces the
>> > maintainers workload even more. So Maintainers just need to run the
>> > tools to prove that the developers have run the tools and have already
>> > fixed the problems.
>> >
>> > So i'm not sure your goal is the correct long term goal. It should be
>> > a tool for everybody, not just maintainers.
>>
>> This raises the interesting and important question of how to get patch
>> submitters to follow a recommended workflow. We routinely get patches
>> that produce checkpatch errors that are clearly not false positives.
>> Rob Herring implemented a bot to run checks on device tree bindings and
>> device tree sources because lots of patches fail those checks. I'm sure
>> there are lots of other examples that have led maintainers to automate
>> checks on the receiver's side, through various types of standard CIs or
>> hand-made solutions. Submitters should run more tests, how to get them
>> to do so is a broader question.
>
>Maybe it would be worthwhile to annotate patch submissions with tags
>indicating what tools have been run on them. I know we're trying to avoid
>overuse of commit tags, but maybe we could automate this a bit, and/or'
>reuse the 'Reviewed-by:' tag in the commit message. I could envision, in some
>future workflow utopia, where a missing 'Reviewed-by: checkpatch.pl AND claude AI review'
>would be grounds for requesting these before human review.
This is similar to what was proposed in the last round[1] of discussions around
disclosing (AI) tool usage.
From the cover letter:
Assisted-by: Claude-claude-3-opus-20240229 checkpatch
At which point maintainers can set their own policies for their subsystem and
automate workflows based on those policies.
[1] https://lore.kernel.org/all/20250809234008.1540324-1-sashal@kernel.org/
--
Thanks,
Sasha
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:50 ` Bird, Tim
2025-10-08 20:30 ` Sasha Levin
@ 2025-10-08 20:30 ` James Bottomley
2025-10-08 20:38 ` Bird, Tim
2025-10-09 1:15 ` Chris Mason
1 sibling, 2 replies; 68+ messages in thread
From: James Bottomley @ 2025-10-08 20:30 UTC (permalink / raw)
To: Bird, Tim, laurent.pinchart, Andrew Lunn
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, 2025-10-08 at 19:50 +0000, Bird, Tim wrote:
>
>
> > -----Original Message-----
> > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > My goal for KS/MS is to discuss how to enable maintainers to
> > > > use review automation tools to lower their workload.
> > >
> > > Maintainers will want to use these tools, if they prove to be
> > > useful. But ideally, we want the developers to use these tools
> > > and fix the issues before they post code for review. That reduces
> > > the maintainers workload even more. So Maintainers just need to
> > > run the tools to prove that the developers have run the tools and
> > > have already fixed the problems.
> > >
> > > So i'm not sure your goal is the correct long term goal. It
> > > should be a tool for everybody, not just maintainers.
> >
> > This raises the interesting and important question of how to get
> > patch submitters to follow a recommended workflow. We routinely get
> > patches that produce checkpatch errors that are clearly not false
> > positives. Rob Herring implemented a bot to run checks on device
> > tree bindings and device tree sources because lots of patches fail
> > those checks. I'm sure there are lots of other examples that have
> > led maintainers to automate checks on the receiver's side, through
> > various types of standard CIs or hand-made solutions. Submitters
> > should run more tests, how to get them to do so is a broader
> > question.
>
> Maybe it would be worthwhile to annotate patch submissions with tags
> indicating what tools have been run on them. I know we're trying to
> avoid overuse of commit tags, but maybe we could automate this a bit,
> and/or' reuse the 'Reviewed-by:' tag in the commit message. I could
> envision, in some future workflow utopia, where a missing 'Reviewed-
> by: checkpatch.pl AND claude AI review' would be grounds for
> requesting these before human review.
Realistically, we can't even get some submitters to run checkpatch, so
I don't think the additional tag would help. What does help is having
the 0day bot take a feed of the mailing list, select the [PATCH] tag,
and run checkpatch.pl as part of the tests, so someone could do the
same with whatever AI acceptance tests are agreed on.
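The checkpatch leg of that is a one-liner once the bot has pulled a
patch off the list (the file name here is illustrative):

  ./scripts/checkpatch.pl --strict 0001-example.patch

The AI step would slot into the same place in the pipeline.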
Although the problem with AI acceptance testing is that these models
and the inferencing required to run them don't come for free, so
someone is going to end up paying for all this AI ... unless we can
get some cloud company to donate it, of course ...
Regards,
James
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:29 ` Laurent Pinchart
2025-10-08 19:50 ` Bird, Tim
@ 2025-10-08 20:37 ` Andrew Lunn
2025-10-09 12:40 ` Arnaldo Carvalho de Melo
1 sibling, 1 reply; 68+ messages in thread
From: Andrew Lunn @ 2025-10-08 20:37 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
> This raises the interesting and important question of how to get patch
> submitters to follow a recommended workflow. We routinely get patches
> that produce checkpatch errors that are clearly not false positives.
> Rob Herring implemented a bot to run checks on device tree bindings and
> device tree sources because lots of patches fail those checks. I'm sure
> there are lots of other examples that have led maintainers to automate
> checks on the receiver's side, through various types of standard CIs or
> hand-made solutions. Submitters should run more tests, how to get them
> to do so is a broader question.
The netdev CI tooling is available from github, and you can run it as
a docker image. So it is possible for a netdev developer to run the
same tests as netdev Maintainers do. Maybe we need more subsystems to
make their CI tooling available to their developers, so the tests can
be run locally?
It has also been pointed out elsewhere that b4 is gaining more testing
capabilities. We should keep building this out, making subsystem
tooling more subsystem specific, while b4 does all the standard stuff
we expect all developers to do before submission.
Andrew
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:30 ` James Bottomley
@ 2025-10-08 20:38 ` Bird, Tim
2025-10-08 22:21 ` Jiri Kosina
2025-10-09 9:14 ` Laurent Pinchart
2025-10-09 1:15 ` Chris Mason
1 sibling, 2 replies; 68+ messages in thread
From: Bird, Tim @ 2025-10-08 20:38 UTC (permalink / raw)
To: James Bottomley, laurent.pinchart, Andrew Lunn
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
> -----Original Message-----
> From: James Bottomley <James.Bottomley@HansenPartnership.com>
> On Wed, 2025-10-08 at 19:50 +0000, Bird, Tim wrote:
> >
> >
> > > -----Original Message-----
> > > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > > My goal for KS/MS is to discuss how to enable maintainers to
> > > > > use review automation tools to lower their workload.
> > > >
> > > > Maintainers will want to use these tools, if they prove to be
> > > > useful. But ideally, we want the developers to use these tools
> > > > and fix the issues before they post code for review. That reduces
> > > > the maintainers workload even more. So Maintainers just need to
> > > > run the tools to prove that the developers have run the tools and
> > > > have already fixed the problems.
> > > >
> > > > So i'm not sure your goal is the correct long term goal. It
> > > > should be a tool for everybody, not just maintainers.
> > >
> > > This raises the interesting and important question of how to get
> > > patch submitters to follow a recommended workflow. We routinely get
> > > patches that produce checkpatch errors that are clearly not false
> > > positives. Rob Herring implemented a bot to run checks on device
> > > tree bindings and device tree sources because lots of patches fail
> > > those checks. I'm sure there are lots of other examples that have
> > > led maintainers to automate checks on the receiver's side, through
> > > various types of standard CIs or hand-made solutions. Submitters
> > > should run more tests, how to get them to do so is a broader
> > > question.
> >
> > Maybe it would be worthwhile to annotate patch submissions with tags
> > indicating what tools have been run on them. I know we're trying to
> > avoid overuse of commit tags, but maybe we could automate this a bit,
> > and/or' reuse the 'Reviewed-by:' tag in the commit message. I could
> > envision, in some future workflow utopia, where a missing 'Reviewed-
> > by: checkpatch.pl AND claude AI review' would be grounds for
> > requesting these before human review.
>
> Realistically, we can't even get some submitters to run checkpatch, so
> I don't think the additional tag would help. What does help is having
> the 0day bot take a feed of the mailing list, select the [PATCH] tag
> and run checkpatch.pl as part of the tests, so someone could do the
> same with whatever AI acceptance tests are agreed.
There's no question that 0day automation of checkpatch.pl feedback
has been a great thing. I suspect that more submitters would run
checkpatch before sending their patches, if failure to do so resulted
in automatic rejection of the patch. This is more of a process backbone
issue than anything else.
>
> Although the problem with AI acceptance testing is that these models
> and the inferencing required to run them doesn't come for free so
> someone is going to end up paying for all this AI ... unless we can get
> some cloud company to donate it, of course ...
Indeed. All the more reason to enforce it at the source. It then becomes
a cost for the contributor instead of the upstream community, which is
going to scale better.
-- Tim
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:29 ` Andrew Lunn
@ 2025-10-08 20:53 ` Mark Brown
0 siblings, 0 replies; 68+ messages in thread
From: Mark Brown @ 2025-10-08 20:53 UTC (permalink / raw)
To: Andrew Lunn
Cc: Arnaldo Carvalho de Melo, Laurent Pinchart, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov
[-- Attachment #1: Type: text/plain, Size: 557 bytes --]
On Wed, Oct 08, 2025 at 10:29:11PM +0200, Andrew Lunn wrote:
> But this also requires these tests are reproducible, and that is one
> question i have. checkpatch, sparse, kdoc, Coccinelle are all
> reproducible. They give the same answer every time. If it passes for
> the developer, it should also pass for the Maintainer. But is this
> true of AI tools? Are they reproducible?
Even for the more stable tools there are issues with people running
different versions of the tools, or, for something like DT validation,
running with a different schema version.
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 18:04 ` Chris Mason
2025-10-08 18:14 ` Bart Van Assche
@ 2025-10-08 21:08 ` Kees Cook
2025-10-09 1:37 ` Chris Mason
1 sibling, 1 reply; 68+ messages in thread
From: Kees Cook @ 2025-10-08 21:08 UTC (permalink / raw)
To: Chris Mason; +Cc: Bart Van Assche, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 02:04:26PM -0400, Chris Mason wrote:
> Claude really needs aggressive use of checklists to get it to stay
> focused, other AIs may need other tricks.
So very very true. "I did the high priority stuff; I'm finished now".
Your prompts are very detailed! Have you been able to use a feedback loop
at all to have it update its own prompts to help you clarify its results?
I've found that pretty helpful for development-oriented prompting, e.g.
"Compare the foo.md file to what you now know about the foo process
and update it to reflect anything that needs to be added, removed,
or changed."
Additionally, have you been able to build any patterns from existing
bugs? The "Pattern ID" stuff feels like you built this from specific
instances?
I'm really curious whether the "CONTEXT MANAGEMENT" section actually
works. I never considered telling it to discard specific context like
you have: "Discard non-essential details after each phase to manage
token limits". I'm going to try that immediately -- I just have to
figure out how to give it, uh, context about the context I want
dropped.
And the semcode MCP sounds very interesting! I look forward to finding
out more about that. :)
Awesome!
-Kees
--
Kees Cook
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:38 ` Bird, Tim
@ 2025-10-08 22:21 ` Jiri Kosina
2025-10-09 9:14 ` Laurent Pinchart
1 sibling, 0 replies; 68+ messages in thread
From: Jiri Kosina @ 2025-10-08 22:21 UTC (permalink / raw)
To: Bird, Tim
Cc: James Bottomley, laurent.pinchart, Andrew Lunn, Chris Mason,
ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, 8 Oct 2025, Bird, Tim wrote:
> > Realistically, we can't even get some submitters to run checkpatch, so
> > I don't think the additional tag would help.
Agreed.
> > What does help is having the 0day bot take a feed of the mailing list,
> > select the [PATCH] tag and run checkpatch.pl as part of the tests, so
> > someone could do the same with whatever AI acceptance tests are
> > agreed.
>
> There's no question that 0day automation of checkpatch.pl feedback
> has been a great thing. I suspect that more submitters would run
> checkpatch before sending their patches, if failure to do so resulted
> in automatic rejection of the patch. This is more of a process backbone
> issue than anything else.
Quite often the checkpatch-reported warnings are so negligible/trivial
that it takes less energy to either (a) take the patch as-is, or (b)
modify it before application, than to go through the whole
resubmission round (*), so there is no incentive for the submitter to
do better next time.
I am not saying what is right or wrong; that's just the way it
(sometimes) is.
(*) Especially if the patch is actually a good and needed one, and
you'd need to keep track of whether the resubmission really happened,
etc.
Thanks,
--
Jiri Kosina
SUSE Labs
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:30 ` James Bottomley
2025-10-08 20:38 ` Bird, Tim
@ 2025-10-09 1:15 ` Chris Mason
1 sibling, 0 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-09 1:15 UTC (permalink / raw)
To: James Bottomley, Bird, Tim, laurent.pinchart, Andrew Lunn
Cc: ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On 10/8/25 4:30 PM, James Bottomley wrote:
> On Wed, 2025-10-08 at 19:50 +0000, Bird, Tim wrote:
>>
>>
>>> -----Original Message-----
>>> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>>> On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
>>>>> My goal for KS/MS is to discuss how to enable maintainers to
>>>>> use review automation tools to lower their workload.
>>>>
>>>> Maintainers will want to use these tools, if they prove to be
>>>> useful. But ideally, we want the developers to use these tools
>>>> and fix the issues before they post code for review. That reduces
>>>> the maintainers workload even more. So Maintainers just need to
>>>> run the tools to prove that the developers have run the tools and
>>>> have already fixed the problems.
>>>>
>>>> So i'm not sure your goal is the correct long term goal. It
>>>> should be a tool for everybody, not just maintainers.
>>>
>>> This raises the interesting and important question of how to get
>>> patch submitters to follow a recommended workflow. We routinely get
>>> patches that produce checkpatch errors that are clearly not false
>>> positives. Rob Herring implemented a bot to run checks on device
>>> tree bindings and device tree sources because lots of patches fail
>>> those checks. I'm sure there are lots of other examples that have
>>> led maintainers to automate checks on the receiver's side, through
>>> various types of standard CIs or hand-made solutions. Submitters
>>> should run more tests, how to get them to do so is a broader
>>> question.
>>
>> Maybe it would be worthwhile to annotate patch submissions with tags
>> indicating what tools have been run on them. I know we're trying to
>> avoid overuse of commit tags, but maybe we could automate this a bit,
>> and/or' reuse the 'Reviewed-by:' tag in the commit message. I could
>> envision, in some future workflow utopia, where a missing 'Reviewed-
>> by: checkpatch.pl AND claude AI review' would be grounds for
>> requesting these before human review.
>
> Realistically, we can't even get some submitters to run checkpatch, so
> I don't think the additional tag would help. What does help is having
> the 0day bot take a feed of the mailing list, select the [PATCH] tag
> and run checkpatch.pl as part of the tests, so someone could do the
> same with whatever AI acceptance tests are agreed.
>
> Although the problem with AI acceptance testing is that these models
> and the inferencing required to run them doesn't come for free so
> someone is going to end up paying for all this AI ... unless we can get
> some cloud company to donate it, of course ...
>
I agree with James here on both points. Maintainers really benefit from
having some automation consistently running the tools they find most
valuable. It lets them know a basic baseline has been met before they
engage, and it fits into the workflow in a predictable way.
When everyone else is able to run those tools as well, great. Some
people will and some people won't, and that's ok.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 21:08 ` Kees Cook
@ 2025-10-09 1:37 ` Chris Mason
0 siblings, 0 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-09 1:37 UTC (permalink / raw)
To: Kees Cook; +Cc: Bart Van Assche, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 5:08 PM, Kees Cook wrote:
> On Wed, Oct 08, 2025 at 02:04:26PM -0400, Chris Mason wrote:
>> Claude really needs aggressive use of checklists to get it to stay
>> focused, other AIs may need other tricks.
>
> So very very true. "I did the high priority stuff; I'm finished now".
>
> Your prompts are very detailed! Have you been able to use a feedback loop
> at all to have it update its own prompts to help you clarify its results?
Yeah, I started with a very basic prompt and ran it through ~300
Fixes:-tagged patches with known bugs. So I had the buggy patch and
the fix, and I'd have AI review the bad patch and then ask it to
compare the review against the known regression.
If the review failed, I'd ask it to fix the prompt in a generic way to
find the bug. The prompts would grow and grow, and then I'd ask AI to
streamline them. I did those iterations a few times and fixed things
up as I went.
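In rough pseudo-code, the iteration looked something like this -- a
sketch only, where every helper function stands in for a manual Claude
session rather than any real API:

  # Sketch of the prompt-refinement loop; all helpers are placeholders.
  def run_ai_review(prompt: str, patch: str) -> str:
      return "review text"  # stand-in: ask the AI to review `patch`

  def review_found_bug(review: str, known_fix: str) -> bool:
      return False  # stand-in: compare the review against the known fix

  def grow_prompt(prompt: str, patch: str, known_fix: str) -> str:
      # stand-in: ask the AI to extend the prompt generically enough
      # to catch this class of bug, not just this one instance
      return prompt + "\n- new rule derived from the missed bug"

  def streamline_prompt(prompt: str) -> str:
      return prompt  # stand-in: ask the AI to shorten the grown prompt

  def refine(prompt: str, pairs: list[tuple[str, str]]) -> str:
      """pairs: (buggy patch, known fix) mined from Fixes: tags."""
      for buggy, fix in pairs:
          review = run_ai_review(prompt, buggy)
          if not review_found_bug(review, fix):
              prompt = grow_prompt(prompt, buggy, fix)
      return streamline_prompt(prompt)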
The false positive detection was more manual: that was mostly me
asking claude which part of the prompt made it flag something, and
then either fixing that part of the prompt or adding to the false
positive guide.
> I've found that pretty helpful for development-oriented prompting. i.e.
> "Compare the foo.md file to what you now know about the foo process
> and update it to reflect anything that needs to be added, removed,
> or changed."
>
> Additionally, have you been able to build any patterns from existing
> bugs? The "Pattern ID" stuff feels like you built this from specific
> instances?
>
> I'm really curious if the "CONTEXT MANAGEMENT" section actually works? I
> never considered tell it to discard specific context like you have,
> "Discard non-essential details after each phase to manage token limits"
> I'm going to try that immediately -- I just have to figure out how to
> give it, uh, context about the context I want dropped.
I don't actually know...I asked claude if it helped and it said yes, but
claude blows a lot of sunshine about how smart your prompts are.
>
> And the semcode MCP sounds very interesting! I look forward to finding
> out more about that. :)
>
> Awesome!
Glad it looks useful, please let me know what bugs you find!
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 18:33 ` Sasha Levin
@ 2025-10-09 1:43 ` Chris Mason
2025-10-09 14:49 ` Paul E. McKenney
0 siblings, 1 reply; 68+ messages in thread
From: Chris Mason @ 2025-10-09 1:43 UTC (permalink / raw)
To: Sasha Levin; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
On 10/8/25 2:33 PM, Sasha Levin wrote:
> On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
>> Hi everyone,
>>
>> Depending on how you look at things, this is potentially a topic for
>> either MS or KS.
>>
>> One way to lower the load on maintainers is to make it easier for
>> contributors to send higher quality patches, and to catch errors before
>> they land in various git trees.
>>
>> Along those lines, when the AI code submission thread started over the
>> summer, I decided to see if it was possible to get reasonable code
>> reviews out of AI.
>
> This is a really great subject to discuss.
>
> IMO when the AI topic initially came up the concerns around AI drowned
> discussions of finding places where AI could be useful and provide
> support for maintainers struggling under the load.
>
>> There are certainly false positives, but Alexei and the BPF developers
>> wired up my prompts into the BPF CI, and you can find the results in
>> their github CI. Everything in red is a bug the AI review found:
>>
>> https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
>> My goal for KS/MS is to discuss how to enable maintainers to use review
>> automation tools to lower their workload. I don't want to build new CI
>> here, so the goal would be enabling integration with existing CI.
>
> This is great to see!
>
> I think that AUTOSEL is already a large scale kernel patch review
> workflow and I'd love to work more towards a unified kernel patch AI
> review story that community members could use.
>
> I've gotten good feedback[1] from Linus about the AI AUTOSEL workflow
> and his idea around expanding our AI tooling to help with maintainer
> workflows sounds in line with what you're describing here.
Fantastic.
>
>> My question for everyone is what would it take to make all of this
>> useful? I'm working on funding for API access, so hopefully that part
>> won't be a problem.
>
> I've been playing with extending[2] b4 (unofficially) with AI workflows
> that are useful for myself. The patches themselves aren't too
> interesting, but what I found exciting is expanding b4 with tooling
> that maintainers can run locally and which will save them a good
> amount of time/effort.
>
> Something along the lines of "b4 ai review [commit]" or "b4 ai style
> [patch]".
Yeah, I think that'll really help as well.
>
>> There's definitely overlap between the bugs I'm finding and the bugs Dan
>> Carpenter finds, so I'm hoping he and I can team up as well.
>>
>> In terms of actual review details, the reviews have two parts:
>>
>> 1) The review prompts. These are stand alone and can just work on any
>> kernel tree. This is what BPF CI is currently using:
>>
>> https://github.com/masoncl/review-prompts/
>
> I think I'll borrow some of these :)
>
> It was interesting to read through some of the prompts and see where
> you had good and bad experience with the AI ("You're not smart enough
> to understand smp_mb()" :D )
If I'm being honest, I'm probably not smart enough to teach AI about
smp_mb(), but that's a different problem.
>
>> These prompts can also debug oopsen or syzbot reports (with varying
>> success).
>>
>> 2) A code indexing tool with MCP server that Claude can use to find
>> functions, types, and call chains more effectively. This makes it more
>> likely Claude can trace complex relationships in the code:
>>
>> https://github.com/facebookexperimental/semcode
>> Asking claude to produce a callgraph for btrfs_search_slot() consumes
>> ~444K tokens. With semcode installed, the same query produces better
>> results and uses 25K tokens. (btrfs_search_slot() has a huge callchain)
>
> Also very interesting! AUTOSEL is a token eating machine. Reducing
> token counts while still getting the same (or better) quality of
> outputs is really great.
I think/hope the extra context will make a huge difference, and I want
to add indexing for git history and lore archives as well.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:38 ` Bird, Tim
2025-10-08 22:21 ` Jiri Kosina
@ 2025-10-09 9:14 ` Laurent Pinchart
2025-10-09 10:03 ` Chris Mason
` (2 more replies)
1 sibling, 3 replies; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-09 9:14 UTC (permalink / raw)
To: Bird, Tim
Cc: James Bottomley, Andrew Lunn, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, Oct 08, 2025 at 08:38:45PM +0000, Bird, Tim wrote:
> > -----Original Message-----
> > From: James Bottomley <James.Bottomley@HansenPartnership.com>
> > On Wed, 2025-10-08 at 19:50 +0000, Bird, Tim wrote:
> > > > -----Original Message-----
> > > > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > > > My goal for KS/MS is to discuss how to enable maintainers to
> > > > > > use review automation tools to lower their workload.
> > > > >
> > > > > Maintainers will want to use these tools, if they prove to be
> > > > > useful. But ideally, we want the developers to use these tools
> > > > > and fix the issues before they post code for review. That reduces
> > > > > the maintainers workload even more. So Maintainers just need to
> > > > > run the tools to prove that the developers have run the tools and
> > > > > have already fixed the problems.
> > > > >
> > > > > So i'm not sure your goal is the correct long term goal. It
> > > > > should be a tool for everybody, not just maintainers.
> > > >
> > > > This raises the interesting and important question of how to get
> > > > patch submitters to follow a recommended workflow. We routinely get
> > > > patches that produce checkpatch errors that are clearly not false
> > > > positives. Rob Herring implemented a bot to run checks on device
> > > > tree bindings and device tree sources because lots of patches fail
> > > > those checks. I'm sure there are lots of other examples that have
> > > > led maintainers to automate checks on the receiver's side, through
> > > > various types of standard CIs or hand-made solutions. Submitters
> > > > should run more tests, how to get them to do so is a broader
> > > > question.
> > >
> > > Maybe it would be worthwhile to annotate patch submissions with tags
> > > indicating what tools have been run on them. I know we're trying to
> > > avoid overuse of commit tags, but maybe we could automate this a bit,
> > > and/or' reuse the 'Reviewed-by:' tag in the commit message. I could
> > > envision, in some future workflow utopia, where a missing 'Reviewed-
> > > by: checkpatch.pl AND claude AI review' would be grounds for
> > > requesting these before human review.
> >
> > Realistically, we can't even get some submitters to run checkpatch, so
> > I don't think the additional tag would help. What does help is having
> > the 0day bot take a feed of the mailing list, select the [PATCH] tag
> > and run checkpatch.pl as part of the tests, so someone could do the
> > same with whatever AI acceptance tests are agreed.
>
> There's no question that 0day automation of checkpatch.pl feedback
> has been a great thing. I suspect that more submitters would run
> checkpatch before sending their patches, if failure to do so resulted
> in automatic rejection of the patch. This is more of a process backbone
> issue than anything else.
>
> > Although the problem with AI acceptance testing is that these models
> > and the inferencing required to run them doesn't come for free so
> > someone is going to end up paying for all this AI ... unless we can get
> > some cloud company to donate it, of course ...
>
> Indeed. All the more reason to enforce it at the source. It then becomes
> a cost for the contributor instead of the upstream community, which is
> going to scale better.
Forcing contributors to pay for access to proprietary tools is not
acceptable. Forcing contributors to even run proprietary tools is not
acceptable. If maintainers want contributions to go through any
proprietary tooling before submission, then this has to run on the
maintainer side (be it on a maintainer's machine, in some form of CI, or
somewhere else).
You're right that cost would then be a problem. I can certainly imagine
$popular_ai_company sponsoring this short term, until we're
vendor-locked and they stop sponsorship. I don't think rushing in that
direction is a good idea.
--
Regards,
Laurent Pinchart
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 19:39 ` Arnaldo Carvalho de Melo
2025-10-08 20:29 ` Andrew Lunn
@ 2025-10-09 9:37 ` Laurent Pinchart
2025-10-09 12:48 ` Arnaldo Carvalho de Melo
1 sibling, 1 reply; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-09 9:37 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Andrew Lunn, Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 04:39:17PM -0300, Arnaldo Carvalho de Melo wrote:
> On Wed, Oct 08, 2025 at 10:33:49PM +0300, Laurent Pinchart wrote:
> > On Wed, Oct 08, 2025 at 04:28:14PM -0300, Arnaldo Carvalho de Melo wrote:
> > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > But ideally, we want the developers to use these tools and fix
> > > > the issues before they post code for review.
> > >
> > > Sure, as before, people should try to follow the best practices before
> > > sending pull requests, its in the best interest of everybody.
> > >
> > > But if they do so, and I guess most will, there will be more patches
> > > flowing upstream, thus Chris effort, I think, right?
> >
> > I'd argue there will be less patches flowing upstream, lots of v1 (and
> > sometimes subsequent versions) where maintainers point obvious mistakes
> > will be avoided. The new v1 that would end up on the list will take more
> > time to review than the old v1, but that's just because the new v1 will
> > be the old v2.
>
> Hopefully, but then all that time the contributors had to spend on
> writing multiple versions for the same patch could be used to send tons
> of good v1 patches, leading to more features, or dealing with lots of tech
> debt most people have. :-)
One can always dream :-)
If you think of a series that goes from v1 to v10 today, we will just
not see v1 on the list and v2 to v10 will be renamed v1 to v9, but the
submitter will still make a v1, run it through tests, and fix issues
before submitting. This will likely take less time than waiting for a
review on v1, but will still take development time.
> > To make this happen, though, maintainers will need to be reasonably
> > confident that obvious mistakes will have already been fixed.
>
> I think maintainers can't take anything for sure, even when dealing with
> contributors that posted tons of patches before :-/
>
> And as you said, we can't count on contributors running existing tests,
> or using things like linters, checkpatch, you name it, let alone AI
> assistants.
A big difference is that we can complain about submitters not running
checkpatch, but we can't insist they run proprietary tools.
--
Regards,
Laurent Pinchart
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 9:14 ` Laurent Pinchart
@ 2025-10-09 10:03 ` Chris Mason
2025-10-10 7:54 ` Laurent Pinchart
2025-10-09 14:30 ` Steven Rostedt
2025-10-09 14:47 ` Bird, Tim
2 siblings, 1 reply; 68+ messages in thread
From: Chris Mason @ 2025-10-09 10:03 UTC (permalink / raw)
To: Laurent Pinchart, Bird, Tim
Cc: James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On 10/9/25 5:14 AM, Laurent Pinchart wrote:
>> Indeed. All the more reason to enforce it at the source. It then becomes
>> a cost for the contributor instead of the upstream community, which is
>> going to scale better.
>
> Forcing contributors to pay for access to proprietary tools is not
> acceptable. Forcing contributors to even run proprietary tools is not
> acceptable. If maintainers want contributions to go through any
> proprietary tooling before submission, then this has to run on the
> maintainer side (be it on a maintainer's machine, in some form of CI, or
> somewhere else).
I strongly agree with Laurent here.
>
> You're right that cost would then be a problem. I can certainly imagine
> $popular_ai_company sponsoring this short term, until we're
> vendor-locked and they stop sponsorship. I don't think rushing in that
> direction is a good idea.
>
I don't think vendor lock-in is a problem though. We're in a phase
where there is a great deal of competition and innovation in the
space, and the prompts themselves are just English text against a
general-purpose AI.
-chris
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:30 ` Sasha Levin
@ 2025-10-09 12:32 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 12:32 UTC (permalink / raw)
To: Sasha Levin
Cc: Bird, Tim, laurent.pinchart, Andrew Lunn, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Wed, Oct 08, 2025 at 04:30:32PM -0400, Sasha Levin wrote:
> On Wed, Oct 08, 2025 at 07:50:32PM +0000, Bird, Tim wrote:
> > > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > > My goal for KS/MS is to discuss how to enable maintainers to use review
> > > > > automation tools to lower their workload.
> > > > Maintainers will want to use these tools, if they prove to be
> > > > useful. But ideally, we want the developers to use these tools and fix
> > > > the issues before they post code for review. That reduces the
> > > > maintainers workload even more. So Maintainers just need to run the
> > > > tools to prove that the developers have run the tools and have already
> > > > fixed the problems.
> > > > So i'm not sure your goal is the correct long term goal. It should be
> > > > a tool for everybody, not just maintainers.
> > > This raises the interesting and important question of how to get patch
> > > submitters to follow a recommended workflow. We routinely get patches
> > > that produce checkpatch errors that are clearly not false positives.
> > > Rob Herring implemented a bot to run checks on device tree bindings and
> > > device tree sources because lots of patches fail those checks. I'm sure
> > > there are lots of other examples that have led maintainers to automate
> > > checks on the receiver's side, through various types of standard CIs or
> > > hand-made solutions. Submitters should run more tests, how to get them
> > > to do so is a broader question.
> > Maybe it would be worthwhile to annotate patch submissions with tags
> > indicating what tools have been run on them. I know we're trying to avoid
> > overuse of commit tags, but maybe we could automate this a bit, and/or'
> > reuse the 'Reviewed-by:' tag in the commit message. I could envision, in some
> > future workflow utopia, where a missing 'Reviewed-by: checkpatch.pl AND claude AI review'
> > would be grounds for requesting these before human review.
> This is similar to what was proposed in the last round[1] of discussions around
> disclosing (AI) tool usage.
> From the cover letter:
> Assisted-by: Claude-claude-3-opus-20240229 checkpatch
>
> At which point maintainers can set their own policies for their subsystem and
> automate workflows based on those policies.
But this is just a disclosure about the tools supposedly used. What
about the version, specific model, checkpatch version, etc.?
It's good info that can help explain why some parts of a patch came
into being, but it shouldn't preclude the maintainer from using their
best judgement and extra tooling.
- Arnaldo
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 20:37 ` Andrew Lunn
@ 2025-10-09 12:40 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 12:40 UTC (permalink / raw)
To: Andrew Lunn
Cc: Laurent Pinchart, Chris Mason, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Wed, Oct 08, 2025 at 10:37:51PM +0200, Andrew Lunn wrote:
> > This raises the interesting and important question of how to get patch
> > submitters to follow a recommended workflow. We routinely get patches
> > that produce checkpatch errors that are clearly not false positives.
> > Rob Herring implemented a bot to run checks on device tree bindings and
> > device tree sources because lots of patches fail those checks. I'm sure
> > there are lots of other examples that have led maintainers to automate
> > checks on the receiver's side, through various types of standard CIs or
> > hand-made solutions. Submitters should run more tests, how to get them
> > to do so is a broader question.
> The netdev CI tooling is available from github. You can run it as a
> docker image. So it is possible for a netdev developer to run the same
> tests as netdev Maintainers do. Maybe we need more subsystems to make
> their CI tooling available to their developers so they can be run
> locally?
> It has also been pointed out elsewhere, b4 is gaining more testing
> capabilities. We should keep building this out, making subsystem
> tooling more subsystem specific, while b4 can do all the standard
> stuff we expect all developers to do before submission.
Yeah, if b4 would run these, then the process would be streamlined --
great.
But the good thing would be to have a CI see patches;
test/LLM/lint/kitchen-sink them; and sign them, giving them a "seal of
approval", or some brownie points, that would then make procmail (or
equivalent) move them to a "special folder" -- the first one
maintainers would read in the morning.
Maintainers would receive everything, as today, all of lkml/etc, but
some classification would be done so that they would look first at the
"good stuff".
I'd really love it if running 'make -C tools/perf build', 'perf test',
and https://github.com/acmel/linux-tools-container-builds after
applying a series would complete successfully. One can dream :-)
Even if all of this was done, I think b4 should do as you suggest,
checking one last time.
- Arnaldo
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 9:37 ` Laurent Pinchart
@ 2025-10-09 12:48 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 12:48 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Andrew Lunn, Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Thu, Oct 09, 2025 at 12:37:50PM +0300, Laurent Pinchart wrote:
> On Wed, Oct 08, 2025 at 04:39:17PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Wed, Oct 08, 2025 at 10:33:49PM +0300, Laurent Pinchart wrote:
> > > On Wed, Oct 08, 2025 at 04:28:14PM -0300, Arnaldo Carvalho de Melo wrote:
> > > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > > But ideally, we want the developers to use these tools and fix
> > > > > the issues before they post code for review.
> > > > Sure, as before, people should try to follow the best practices before
> > > > sending pull requests, its in the best interest of everybody.
> > > > But if they do so, and I guess most will, there will be more patches
> > > > flowing upstream, thus Chris effort, I think, right?
> > > I'd argue there will be less patches flowing upstream, lots of v1 (and
> > > sometimes subsequent versions) where maintainers point obvious mistakes
> > > will be avoided. The new v1 that would end up on the list will take more
> > > time to review than the old v1, but that's just because the new v1 will
> > > be the old v2.
> >
> > Hopefully, but then all that time the contributors had to spend on
> > writing multiple versions for the same patch could be used to send tons
> > of good v1 patches, leading to more features, or dealing with lots of tech
> > debt most people have. :-)
> One can always dream :-)
:-)
> If you think of a series that goes from v1 to v10 today, we will just
> not see v1 on the list and v2 to v10 will be renamed v1 to v9, but the
> submitter will still make a v1, run it through tests, and fix issues
> before submitting. This will likely take less time than waiting for a
> review on v1, but will still take development time.
Right, using AI to do pre-review should be encouraged. Some people
will not want to use certain tools while finding others more
acceptable, and some will not use any and will ask a friend. More
effort before sending patches is what maintainers need.
> > > To make this happen, though, maintainers will need to be reasonably
> > > confident that obvious mistakes will have already been fixed.
> > I think maintainers can't take anything for sure, even when dealing with
> > contributors that posted tons of patches before :-/
> > And as you said, we can't count on contributors running existing tests,
> > or using things like linters, checkpatch, you name it, let alone AI
> > assistants.
> A big difference is that we can complain about submitters not running
> checkpatch, but we can't insist they run proprietary tools.
I agree that we shouldn't make it a requirement that a proprietary
tool be used, making development dependent on it; I'd say we shouldn't
even require something that doesn't run locally.
But this is no bitkeeper, I think, or shouldn't be -- i.e. not unless
you completely stop thinking and trust just LLMs to code and review
for you...
Supposedly we can stop using it at any time and go back to our old
ways, no? :-)
Oh, it may turn out to be a bitkeeper after all: so convenient, who
would want to use just patches....
- Arnaldo
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 9:14 ` Laurent Pinchart
2025-10-09 10:03 ` Chris Mason
@ 2025-10-09 14:30 ` Steven Rostedt
2025-10-09 14:51 ` Andrew Lunn
2025-10-09 16:31 ` Chris Mason
2025-10-09 14:47 ` Bird, Tim
2 siblings, 2 replies; 68+ messages in thread
From: Steven Rostedt @ 2025-10-09 14:30 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Bird, Tim, James Bottomley, Andrew Lunn, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Thu, 9 Oct 2025 12:14:05 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> Forcing contributors to pay for access to proprietary tools is not
> acceptable. Forcing contributors to even run proprietary tools is not
> acceptable. If maintainers want contributions to go through any
> proprietary tooling before submission, then this has to run on the
> maintainer side (be it on a maintainer's machine, in some form of CI, or
> somewhere else).
One way I see this working is to attach it to patchwork. Sending a
patch to the BPF mailing list has their patchwork instance trigger a
bunch of tests, and it will tell you whether they passed or failed.
I'm assuming that if a test fails, the patch doesn't get added to
patchwork and the maintainers will ignore it.
Attaching AI to patchwork could be useful as well. But this would run
on some server that someone will have to pay for -- and it will not be
the submitter.
I've been thinking of adding tests to run when people submit to the tracing
mailing list, but I don't want to waste my electricity on it ;-) I have
solar now, so perhaps I should.
>
> You're right that cost would then be a problem. I can certainly imagine
> $popular_ai_company sponsoring this short term, until we're
> vendor-locked and they stop sponsorship. I don't think rushing in that
> direction is a good idea.
I don't see lock-in being too much of an issue, unless the server that
is going to run this adds a lot of scripts built on one
vendor-specific API. If anything, you just lose the service if it
becomes too expensive and you can't find an alternative.
-- Steve
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 9:14 ` Laurent Pinchart
2025-10-09 10:03 ` Chris Mason
2025-10-09 14:30 ` Steven Rostedt
@ 2025-10-09 14:47 ` Bird, Tim
2025-10-09 15:11 ` Andrew Lunn
2025-10-09 17:58 ` Mark Brown
2 siblings, 2 replies; 68+ messages in thread
From: Bird, Tim @ 2025-10-09 14:47 UTC (permalink / raw)
To: laurent.pinchart
Cc: James Bottomley, Andrew Lunn, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
> -----Original Message-----
> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>
> On Wed, Oct 08, 2025 at 08:38:45PM +0000, Bird, Tim wrote:
> > > -----Original Message-----
> > > From: James Bottomley <James.Bottomley@HansenPartnership.com>
> > > On Wed, 2025-10-08 at 19:50 +0000, Bird, Tim wrote:
> > > > > -----Original Message-----
> > > > > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > > > On Wed, Oct 08, 2025 at 09:08:33PM +0200, Andrew Lunn wrote:
> > > > > > > My goal for KS/MS is to discuss how to enable maintainers to
> > > > > > > use review automation tools to lower their workload.
> > > > > >
> > > > > > Maintainers will want to use these tools, if they prove to be
> > > > > > useful. But ideally, we want the developers to use these tools
> > > > > > and fix the issues before they post code for review. That reduces
> > > > > > the maintainers workload even more. So Maintainers just need to
> > > > > > run the tools to prove that the developers have run the tools and
> > > > > > have already fixed the problems.
> > > > > >
> > > > > > So i'm not sure your goal is the correct long term goal. It
> > > > > > should be a tool for everybody, not just maintainers.
> > > > >
> > > > > This raises the interesting and important question of how to get
> > > > > patch submitters to follow a recommended workflow. We routinely get
> > > > > patches that produce checkpatch errors that are clearly not false
> > > > > positives. Rob Herring implemented a bot to run checks on device
> > > > > tree bindings and device tree sources because lots of patches fail
> > > > > those checks. I'm sure there are lots of other examples that have
> > > > > led maintainers to automate checks on the receiver's side, through
> > > > > various types of standard CIs or hand-made solutions. Submitters
> > > > > should run more tests, how to get them to do so is a broader
> > > > > question.
> > > >
> > > > Maybe it would be worthwhile to annotate patch submissions with tags
> > > > indicating what tools have been run on them. I know we're trying to
> > > > avoid overuse of commit tags, but maybe we could automate this a bit,
> > > > and/or' reuse the 'Reviewed-by:' tag in the commit message. I could
> > > > envision, in some future workflow utopia, where a missing 'Reviewed-
> > > > by: checkpatch.pl AND claude AI review' would be grounds for
> > > > requesting these before human review.
> > >
> > > Realistically, we can't even get some submitters to run checkpatch, so
> > > I don't think the additional tag would help. What does help is having
> > > the 0day bot take a feed of the mailing list, select the [PATCH] tag
> > > and run checkpatch.pl as part of the tests, so someone could do the
> > > same with whatever AI acceptance tests are agreed.
> >
> > There's no question that 0day automation of checkpatch.pl feedback
> > has been a great thing. I suspect that more submitters would run
> > checkpatch before sending their patches, if failure to do so resulted
> > in automatic rejection of the patch. This is more of a process backbone
> > issue than anything else.
> >
> > > Although the problem with AI acceptance testing is that these models
> > > and the inferencing required to run them doesn't come for free so
> > > someone is going to end up paying for all this AI ... unless we can get
> > > some cloud company to donate it, of course ...
> >
> > Indeed. All the more reason to enforce it at the source. It then becomes
> > a cost for the contributor instead of the upstream community, which is
> > going to scale better.
>
> Forcing contributors to pay for access to proprietary tools is not
> acceptable. Forcing contributors to even run proprietary tools is not
> acceptable. If maintainers want contributions to go through any
> proprietary tooling before submission, then this has to run on the
> maintainer side (be it on a maintainer's machine, in some form of CI, or
> somewhere else).
I agree.
We don't even have the resolve to get contributors to run checkpatch.pl
(a freely available and eminently accessible tool).
I think it would be instructive to figure out what the barriers are to that, or what a proper
incentive structure would be (and why we're missing it), before tackling the AI review issue.
Then we have to deal with the cost/proprietary tool issue for AI review.
There are open-source, locally runnable models out there. They currently don't yield
results that are as good as the large proprietary models, but they'll get better over time.
I did say that this was in the context of a future vision of a utopian workflow.
What bugs me about the current discussion is the implication that we're willing to just
add more cost to maintainers or the workflow infrastructure, when the cost should,
IMHO, be borne by contributors.
>
> You're right that cost would then be a problem. I can certainly imagine
> $popular_ai_company sponsoring this short term, until we're
> vendor-locked and they stop sponsorship. I don't think rushing in that
> direction is a good idea.
Good point about needing to avoid lockin.
-- Tim
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 1:43 ` Chris Mason
@ 2025-10-09 14:49 ` Paul E. McKenney
0 siblings, 0 replies; 68+ messages in thread
From: Paul E. McKenney @ 2025-10-09 14:49 UTC (permalink / raw)
To: Chris Mason; +Cc: Sasha Levin, ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 09:43:48PM -0400, Chris Mason wrote:
> On 10/8/25 2:33 PM, Sasha Levin wrote:
> > On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
[ . . . ]
> >> There's definitely overlap between the bugs I'm finding and the bugs Dan
> >> Carpenter finds, so I'm hoping he and I can team up as well.
> >>
> >> In terms of actual review details, the reviews have two parts:
> >>
> >> 1) The review prompts. These are stand alone and can just work on any
> >> kernel tree. This is what BPF CI is currently using:
> >>
> >> https://github.com/masoncl/review-prompts/
> >
> > I think I'll borrow some of these :)
> >
> > It was interesting to read through some of the prompts and see where you
> > had
> > good and bad experience with the AI ("You're not smart enough to understand
> > smp_mb()" :D )
>
> If I'm being honest, I'm probably not smart enough to teach AI about
> smp_mb(), but that's a different problem.
Ask the AI to read tools/memory-model and Documentation/memory-barriers.txt?
More seriously, yes, the AI might need considerable background information
to make sense of the .bell and .cat files.
Thanx, Paul
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:30 ` Steven Rostedt
@ 2025-10-09 14:51 ` Andrew Lunn
2025-10-09 15:05 ` Steven Rostedt
2025-10-10 7:59 ` Laurent Pinchart
2025-10-09 16:31 ` Chris Mason
1 sibling, 2 replies; 68+ messages in thread
From: Andrew Lunn @ 2025-10-09 14:51 UTC (permalink / raw)
To: Steven Rostedt
Cc: Laurent Pinchart, Bird, Tim, James Bottomley, Chris Mason,
ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Thu, Oct 09, 2025 at 10:30:19AM -0400, Steven Rostedt wrote:
> On Thu, 9 Oct 2025 12:14:05 +0300
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
>
> > Forcing contributors to pay for access to proprietary tools is not
> > acceptable. Forcing contributors to even run proprietary tools is not
> > acceptable. If maintainers want contributions to go through any
> > proprietary tooling before submission, then this has to run on the
> > maintainer side (be it on a maintainer's machine, in some form of CI, or
> > somewhere else).
>
> One way I see this working is to attach it to patchwork. Sending a patch to
> the BPF mailing list has their patchwork trigger a bunch of tests and it
> will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> add it to patchwork and the maintainers will ignore it.
Not quite, at least not for netdev, which shares some infrastructure
with BPF. Patches get ingested into patchwork, and then some time
later, the CI system will get them from patchwork and run the
tests. The results are then added to patchwork, both as red/yellow/green
traffic lights, and clickable links to the test results. Failing tests
don't automatically cause patchwork to change the status to "Change
Requested", a human is needed to look at the actual failures and
decide if the patch should be rejected or not.
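For anyone unfamiliar with the mechanics, the glue is roughly the
following; a minimal sketch assuming a Patchwork 1.2-style REST API,
with the instance URL, project name and token as placeholders (the
real netdev CI is more involved):

  import requests

  API = "https://patchwork.example.org/api/1.2"      # placeholder instance
  HDRS = {"Authorization": "Token <ci-bot-token>"}   # placeholder token

  def new_patches(project):
      # Fetch patches the CI has not yet tested.
      r = requests.get(f"{API}/patches/",
                       params={"project": project, "state": "new"},
                       headers=HDRS)
      r.raise_for_status()
      return r.json()

  def post_check(patch_id, ok, results_url):
      # Post a red/green "check" back to patchwork; a human still
      # decides whether the patch state becomes "Changes Requested".
      r = requests.post(f"{API}/patches/{patch_id}/checks/",
                        headers=HDRS,
                        data={"state": "success" if ok else "fail",
                              "context": "ai-review",
                              "description": "AI review findings",
                              "target_url": results_url})
      r.raise_for_status()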
We also make it clear, this is not a public CI system anybody can
submit patches to for testing. We expect patches to be "submission
quality", not WIP. Anybody abusing that will get spoken to.
> Attaching AI to patchwork could be useful as well. But this would run on
> some server that someone will have to pay for. But it will not be the
> submitter.
The netdev CI does not run on the patchwork instance. It just uses the
API patchwork offers to get patches from it, and tell it about test
results. But you are correct, somebody pays for the netdev CI, and it
is not the submitter.
Andrew
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
` (3 preceding siblings ...)
2025-10-08 19:08 ` Andrew Lunn
@ 2025-10-09 14:52 ` Paul E. McKenney
2025-10-10 3:08 ` Krzysztof Kozlowski
2025-10-14 7:16 ` Dan Carpenter
6 siblings, 0 replies; 68+ messages in thread
From: Paul E. McKenney @ 2025-10-09 14:52 UTC (permalink / raw)
To: Chris Mason; +Cc: ksummit, Dan Carpenter, Alexei Starovoitov
On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
> Hi everyone,
>
> Depending on how you look at things, this is potentially a topic for
> either MS or KS.
>
> One way to lower the load on maintainers is to make it easier for
> contributors to send higher quality patches, and to catch errors before
> they land in various git trees.
>
> Along those lines, when the AI code submission thread started over the
> summer, I decided to see if it was possible to get reasonable code
> reviews out of AI.
>
> There are certainly false positives, but Alexei and the BPF developers
> wired up my prompts into the BPF CI, and you can find the results in
> their github CI. Everything in red is a bug the AI review found:
>
> https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
>
> My goal for KS/MS is to discuss how to enable maintainers to use review
> automation tools to lower their workload. I don't want to build new CI
> here, so the goal would be enabling integration with existing CI.
>
> My question for everyone is what would it take to make all of this
> useful? I'm working on funding for API access, so hopefully that part
> won't be a problem.
>
> There's definitely overlap between the bugs I'm finding and the bugs Dan
> Carpenter finds, so I'm hoping he and I can team up as well.
Using AI to analyze code and patches seems much safer than using it
to generate actual code. I asked this AI a couple of RCU questions,
and although the answers were not perfect, they were not half bad.
Especially not at all bad compared to what you would likely get out of
human beings on their first trawl through that code. ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:51 ` Andrew Lunn
@ 2025-10-09 15:05 ` Steven Rostedt
2025-10-10 7:59 ` Laurent Pinchart
1 sibling, 0 replies; 68+ messages in thread
From: Steven Rostedt @ 2025-10-09 15:05 UTC (permalink / raw)
To: Andrew Lunn
Cc: Laurent Pinchart, Bird, Tim, James Bottomley, Chris Mason,
ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Thu, 9 Oct 2025 16:51:39 +0200
Andrew Lunn <andrew@lunn.ch> wrote:
> > One way I see this working is to attach it to patchwork. Sending a patch to
> > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > add it to patchwork and the maintainers will ignore it.
>
> Not quite, at least not for netdev, which shares some infrastructure
> with BPF. Patches get ingested into patchwork, and then some time
> later, the CI system will get them from patchwork and run the
> tests. The results are then added to patchwork, both as red/yellow/green
> traffic lights, and clickable links to the test results. Failing tests
> don't automatically cause patchwork to change the status to "Change
> Requested", a human is needed to look at the actual failures and
> decide if the patch should be rejected or not.
Thanks for the clarification. I'm just a submitter to the system and see
the feedback it gives, which looks nice (even though it's mostly irrelevant
to me as I simply Cc the bpf mailing list to let them know what I'm doing
on the tracing side).
-- Steve
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:47 ` Bird, Tim
@ 2025-10-09 15:11 ` Andrew Lunn
2025-10-09 17:58 ` Mark Brown
1 sibling, 0 replies; 68+ messages in thread
From: Andrew Lunn @ 2025-10-09 15:11 UTC (permalink / raw)
To: Bird, Tim
Cc: laurent.pinchart, James Bottomley, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
> We don't even have the resolve to get contributors to run checkpatch.pl
> (a freely available and eminently accessible tool).
> I think it would be instructive to figure out what the barriers are to that, or what a proper
> incentive structure would be (and why we're missing it), before tackling the AI review issue.
The netdev CI does run checkpatch, and makes it obvious to netdev
Maintainers when it fails. And this results in push back to submitters:
we reject the patches pointing out checkpatch issues, and don't
bother looking at the patch any deeper. If you hang out around netdev,
you learn it is a waste of time posting patches which are checkpatch
unclean; you are not going to get any meaningful review other than
"sort out all the trivial stuff".
> What bugs me about the current discussion is the implication that we're willing to just
> add more cost to maintainers or the workflow infrastructure, when the cost should,
> IMHO, be borne by contributors.
I partially disagree with this. If you do push back as a Maintainer,
initially only pointing out the trivial stuff in a low-overhead way,
those developers who are going to be around a while learn to run all
the tools themselves. It makes their workflow more productive to not
get immediate rejects with nothing 'useful' other than "fix
checkpatch".
But I do agree with you: Maintainers need to also run all these tools,
just to know the developers have run the tools and fixed all the issues
before submission. So there are costs for everybody. But for big
subsystems, it is mostly automated, so it does not cost too much human
time for Maintainers.
Andrew
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:30 ` Steven Rostedt
2025-10-09 14:51 ` Andrew Lunn
@ 2025-10-09 16:31 ` Chris Mason
2025-10-09 17:19 ` Arnaldo Carvalho de Melo
1 sibling, 1 reply; 68+ messages in thread
From: Chris Mason @ 2025-10-09 16:31 UTC (permalink / raw)
To: Steven Rostedt, Laurent Pinchart
Cc: Bird, Tim, James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On 10/9/25 10:30 AM, Steven Rostedt wrote:
> On Thu, 9 Oct 2025 12:14:05 +0300
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
>
>> Forcing contributors to pay for access to proprietary tools is not
>> acceptable. Forcing contributors to even run proprietary tools is not
>> acceptable. If maintainers want contributions to go through any
>> proprietary tooling before submission, then this has to run on the
>> maintainer side (be it on a maintainer's machine, in some form of CI, or
>> somewhere else).
>
> One way I see this working is to attach it to patchwork. Sending a patch to
> the BPF mailing list has their patchwork trigger a bunch of tests and it
> will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> add it to patchwork and the maintainers will ignore it.
>
> Attaching AI to patchwork could be useful as well. But this would run on
> some server that someone will have to pay for. But it will not be the
> submitter.
Just to clarify, that's what already happens with BPF today.
Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
review prompts.
-chris
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 16:31 ` Chris Mason
@ 2025-10-09 17:19 ` Arnaldo Carvalho de Melo
2025-10-09 17:24 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 17:19 UTC (permalink / raw)
To: Chris Mason
Cc: Steven Rostedt, Laurent Pinchart, Bird, Tim, James Bottomley,
Andrew Lunn, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On Thu, Oct 09, 2025 at 12:31:48PM -0400, Chris Mason wrote:
> On 10/9/25 10:30 AM, Steven Rostedt wrote:
> > On Thu, 9 Oct 2025 12:14:05 +0300 Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> >> Forcing contributors to pay for access to proprietary tools is not
> >> acceptable. Forcing contributors to even run proprietary tools is not
> >> acceptable. If maintainers want contributions to go through any
> >> proprietary tooling before submission, then this has to run on the
> >> maintainer side (be it on a maintainer's machine, in some form of CI, or
> >> somewhere else).
>
> > One way I see this working is to attach it to patchwork. Sending a patch to
> > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > add it to patchwork and the maintainers will ignore it.
> > Attaching AI to patchwork could be useful as well. But this would run on
> > some server that someone will have to pay for. But it will not be the
> > submitter.
> Just to clarify, that's what already happens with BPF today.
> Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
> review prompts.
This almost relieves me of the guilt of not having reviewed that series
from Alan ;-\
- Arnaldo
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 17:19 ` Arnaldo Carvalho de Melo
@ 2025-10-09 17:24 ` Arnaldo Carvalho de Melo
2025-10-09 17:31 ` Alexei Starovoitov
2025-10-09 18:42 ` Chris Mason
0 siblings, 2 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 17:24 UTC (permalink / raw)
To: Chris Mason
Cc: Steven Rostedt, Laurent Pinchart, Bird, Tim, James Bottomley,
Andrew Lunn, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On Thu, Oct 09, 2025 at 02:19:58PM -0300, Arnaldo Carvalho de Melo wrote:
> On Thu, Oct 09, 2025 at 12:31:48PM -0400, Chris Mason wrote:
> > On 10/9/25 10:30 AM, Steven Rostedt wrote:
> > > One way I see this working is to attach it to patchwork. Sending a patch to
> > > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > > add it to patchwork and the maintainers will ignore it.
> > > Attaching AI to patchwork could be useful as well. But this would run on
> > > some server that someone will have to pay for. But it will not be the
> > > submitter.
> > Just to clarify, that's what already happens with BPF today.
> > Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
> > review prompts.
> This almost relieves me of the guilt of not having reviewed that series
> from Alan ;-\
But this goes back to "developers should run these tools before
submitting upstream", which would provide them with reviewing comments
that would improve the quality of their pull requests by using all the
money that is being dreamed into AI, and would save all of us from
looking at github, etc, before AI is satisfied with the quality of the
submitter's work?
It's all about what should distract maintainers (humans?), no?
- Arnaldo
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 17:24 ` Arnaldo Carvalho de Melo
@ 2025-10-09 17:31 ` Alexei Starovoitov
2025-10-09 17:47 ` Arnaldo Carvalho de Melo
2025-10-09 18:42 ` Chris Mason
1 sibling, 1 reply; 68+ messages in thread
From: Alexei Starovoitov @ 2025-10-09 17:31 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Chris Mason, Steven Rostedt, Laurent Pinchart, Bird, Tim,
James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Thu, Oct 9, 2025 at 10:24 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> On Thu, Oct 09, 2025 at 02:19:58PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Thu, Oct 09, 2025 at 12:31:48PM -0400, Chris Mason wrote:
> > > On 10/9/25 10:30 AM, Steven Rostedt wrote:
> > > > One way I see this working is to attach it to patchwork. Sending a patch to
> > > > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > > > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > > > add it to patchwork and the maintainers will ignore it.
>
> > > > Attaching AI to patchwork could be useful as well. But this would run on
> > > > some server that someone will have to pay for. But it will not be the
> > > > submitter.
>
> > > Just to clarify, that's what already happens with BPF today.
>
> > > Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
> > > review prompts.
>
> > This almost relieves me of the guilt of not having reviewed that series
> > from Alan ;-\
>
> But this goes back to "developers should run these tools before
> submitting upstream", which would provide them with reviewing comments
> that would improve the quality of their pull requests by using all the
> money that is being dreamed into AI, and would save all of us from
> looking at github, etc, before AI is satisfied with the quality of the
> submitter's work?
>
> It's all about what should distract maintainers (humans?), no?
Our next step is to send them as plain text emails, so that reviews
will blend into the natural kernel development process:
a submitter sends patches, the AI and/or a human replies, the submitter
replies and insists that their code is correct and the AI/human is
wrong, or admits the bug and fixes it in the next respin. And so on.
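The delivery side of that is mechanically simple; a minimal sketch
(bot address and SMTP relay are placeholders) of threading a review
under the original submission so it lands like any other list reply:

  import smtplib
  from email.message import EmailMessage

  def send_review(orig_msgid, subject, body, to_addrs):
      # In-Reply-To/References thread the review under the original
      # patch mail, exactly like a human reply on the list.
      msg = EmailMessage()
      msg["Subject"] = "Re: " + subject
      msg["From"] = "ai-review@example.org"       # placeholder address
      msg["To"] = ", ".join(to_addrs)
      msg["In-Reply-To"] = orig_msgid
      msg["References"] = orig_msgid
      msg.set_content(body)                       # plain text only
      with smtplib.SMTP("localhost") as smtp:     # placeholder relay
          smtp.send_message(msg)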
Chris did a tremendous job in reducing false positives.
These reviews have proven to be quite accurate and have spotted
bugs that maintainers didn't. In a few cases the maintainer
found a bug and pointed it out, but the AI explained the bug better.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 17:31 ` Alexei Starovoitov
@ 2025-10-09 17:47 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-09 17:47 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Chris Mason, Steven Rostedt, Laurent Pinchart, Bird, Tim,
James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Thu, Oct 09, 2025 at 10:31:10AM -0700, Alexei Starovoitov wrote:
> On Thu, Oct 9, 2025 at 10:24 AM Arnaldo Carvalho de Melo <acme@kernel.org> wrote:
> > On Thu, Oct 09, 2025 at 02:19:58PM -0300, Arnaldo Carvalho de Melo wrote:
> > > On Thu, Oct 09, 2025 at 12:31:48PM -0400, Chris Mason wrote:
> > > > On 10/9/25 10:30 AM, Steven Rostedt wrote:
> > > > > One way I see this working is to attach it to patchwork. Sending a patch to
> > > > > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > > > > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > > > > add it to patchwork and the maintainers will ignore it.
> > > > > Attaching AI to patchwork could be useful as well. But this would run on
> > > > > some server that someone will have to pay for. But it will not be the
> > > > > submitter.
> > > > Just to clarify, that's what already happens with BPF today.
> > > > Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
> > > > review prompts.
> > > This almost relieves me of the guilt of not having reviewed that series
> > > from Alan ;-\
> > But this goes back to "developers should run these tools before
> > submitting upstream", which would provide them with reviewing comments
> > that would improve the quality of their pull requests by using all the
> > money that is being dreamed into AI, and would save all of us from
> > looking at github, etc, before AI is satisfied with the quality of the
> > submitter's work?
> > It's all about what should distract maintainers (humans?), no?
> Our next step is to send them as plain text emails, so that reviews
Great!
> will blend into the natural kernel development process:
> a submitter sends patches, the AI and/or a human replies, the submitter
> replies and insists that their code is correct and the AI/human is
> wrong, or admits the bug and fixes it in the next respin. And so on.
Right, the thing I see keeping lots of people from giving AI a chance
as a helping hand is that they make some one-off attempt while crossing
their fingers, wishing not to get a good answer. When they do get a
good answer, they think they are "cheating".
This is a knee-jerk reaction; focus instead on how you can get rid of
mundane work.
> Chris did a tremendous job in reducing false positives.
I have not tested it, but people on my team have and gave me great
feedback; we're on it!
> These reviews have proven to be quite accurate and have spotted
> bugs that maintainers didn't. In a few cases the maintainer
> found a bug and pointed it out, but the AI explained the bug better.
I'm a believer; there is help to be obtained from all this, let's reap
it.
- Arnaldo
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:47 ` Bird, Tim
2025-10-09 15:11 ` Andrew Lunn
@ 2025-10-09 17:58 ` Mark Brown
1 sibling, 0 replies; 68+ messages in thread
From: Mark Brown @ 2025-10-09 17:58 UTC (permalink / raw)
To: Bird, Tim
Cc: laurent.pinchart, James Bottomley, Andrew Lunn, Chris Mason,
ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
[-- Attachment #1: Type: text/plain, Size: 1081 bytes --]
On Thu, Oct 09, 2025 at 02:47:52PM +0000, Bird, Tim wrote:
> I agree.
> We don't even have the resolve to get contributors to run checkpatch.pl
> (a freely available and eminently accessible tool).
> I think it would be instructive to figure out what the barriers are to that, or what a proper
> incentive structure would be (and why we're missing it), before tackling the AI review issue.
I think anyone who cares already does get contributors to run whatever
tools by virtue of pushing back when that doesn't happen; if people
aren't running checkpatch it's because the maintainers they're sending
things to don't care. For example I don't really care if people run
checkpatch but I do get a bit grumpy at people for missing some of the
audio specific tests, and the flow people are describing with the
networking CI is similar.
> There are open-source, locally runnable models out there. They currently don't yield
> results that are as good as the large proprietary models, but they'll get better over time.
I think you mean open weights rather than open source there.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 17:24 ` Arnaldo Carvalho de Melo
2025-10-09 17:31 ` Alexei Starovoitov
@ 2025-10-09 18:42 ` Chris Mason
2025-10-09 18:56 ` Linus Torvalds
1 sibling, 1 reply; 68+ messages in thread
From: Chris Mason @ 2025-10-09 18:42 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Steven Rostedt, Laurent Pinchart, Bird, Tim, James Bottomley,
Andrew Lunn, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On 10/9/25 1:24 PM, Arnaldo Carvalho de Melo wrote:
> On Thu, Oct 09, 2025 at 02:19:58PM -0300, Arnaldo Carvalho de Melo wrote:
>> On Thu, Oct 09, 2025 at 12:31:48PM -0400, Chris Mason wrote:
>>> On 10/9/25 10:30 AM, Steven Rostedt wrote:
>>>> One way I see this working is to attach it to patchwork. Sending a patch to
>>>> the BPF mailing list has their patchwork trigger a bunch of tests and it
>>>> will tell you if it passed or failed. I'm assuming if it failed, it doesn't
>>>> add it to patchwork and the maintainers will ignore it.
>
>>>> Attaching AI to patchwork could be useful as well. But this would run on
>>>> some server that someone will have to pay for. But it will not be the
>>>> submitter.
>
>>> Just to clarify, that's what already happens with BPF today.
>
>>> Ex: https://github.com/kernel-patches/bpf/pull/9962 are all from the
>>> review prompts.
>
>> This almost relieves me from the guilt not to have reviewed that series
>> from Alan ;-\
>
> But this goes back to "developers should run these tools before
> submitting upstream",
I'd replace "should" with "can". Developers are coming to us with
different levels of experience and resources, and we want to help them
expand the size of the problem they can tackle by giving them access to
these tools. (syzbot, 0day, perf, lockdep and all the different
analysis tools in the kernel apply here too.)
But I also want to recognize that some people just aren't going to get
there. We'll spend a lot more energy trying to force them than we would
letting them use the CI.
> which would provide them with reviewing comments
> that would improve the quality of their pull requests by using all the
> money that is being dreamed into AI, and would save all of us from
> looking at github, etc, before AI is satisfied with the quality of the
> submitter's work?
>
I think it's also important to remember that AI is sometimes wildly
wrong. Having the reviews show up on a list where more established
developers can call bullshit definitely helps protect against wasting
people's time.
-chris
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 18:42 ` Chris Mason
@ 2025-10-09 18:56 ` Linus Torvalds
2025-10-10 15:52 ` Mauro Carvalho Chehab
0 siblings, 1 reply; 68+ messages in thread
From: Linus Torvalds @ 2025-10-09 18:56 UTC (permalink / raw)
To: Chris Mason
Cc: Arnaldo Carvalho de Melo, Steven Rostedt, Laurent Pinchart, Bird,
Tim, James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Thu, 9 Oct 2025 at 11:43, Chris Mason <clm@meta.com> wrote:
>
> I think it's also important to remember that AI is sometimes wildly
> wrong. Having the reviews show up on a list where more established
> developers can call bullshit definitely helps protect against wasting
> people's time.
I really want any AI effort to be primarily geared towards
maintainers, not end developers.
And this is a large part of the reason. I think we've all seen the
garbage end of AI, and how it can generate more work rather than less.
We see the downsides on the security list where people use AI to
generate questionable reports, we see it in various general bug
handling, and we've seen it on the infrastructure where the AI tools
cause huge issues.
Fairly recently Jens gave up maintaining his own git server just
because of the horror that is AI scraping tools.
Yes, that scraping issue is a "secondary" kind of "more work rather than
less", but it's very much a real issue nonetheless. Just the
infrastructure load is a real thing.
Some long term goal may well be to help end developers, but I really
think it should not be seen as any kind of primary goal for now and
shouldn't even be on anybody's radar as a design issue.
Exactly because I think we need that "established developers can call
bullshit" without being overwhelmed by said BS from external sources.
So I think that only once any AI tools are actively helping
maintainers in a day-to-day workflow should people even *look* at
having non-maintainers use them.
And I say that as somebody who does think it's going to be a big help.
I know there are lots of less optimistic people around.
Linus
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
` (4 preceding siblings ...)
2025-10-09 14:52 ` Paul E. McKenney
@ 2025-10-10 3:08 ` Krzysztof Kozlowski
2025-10-10 14:12 ` Chris Mason
2025-10-31 16:51 ` Stephen Hemminger
2025-10-14 7:16 ` Dan Carpenter
6 siblings, 2 replies; 68+ messages in thread
From: Krzysztof Kozlowski @ 2025-10-10 3:08 UTC (permalink / raw)
To: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On 08/10/2025 19:04, Chris Mason wrote:
> Hi everyone,
>
> Depending on how you look at things, this is potentially a topic for
> either MS or KS.
>
> One way to lower the load on maintainers is to make it easier for
> contributors to send higher quality patches, and to catch errors before
> they land in various git trees.
>
> Along those lines, when the AI code submission thread started over the
> summer, I decided to see if it was possible to get reasonable code
> reviews out of AI.
>
> There are certainly false positives, but Alexei and the BPF developers
> wired up my prompts into the BPF CI, and you can find the results in
> their github CI. Everything in red is a bug the AI review found:
>
> https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
>
> My goal for KS/MS is to discuss how to enable maintainers to use review
> automation tools to lower their workload. I don't want to build new CI
> here, so the goal would be enabling integration with existing CI.
>
> My question for everyone is what would it take to make all of this
> useful? I'm working on funding for API access, so hopefully that part
> won't be a problem.
>
> There's definitely overlap between the bugs I'm finding and the bugs Dan
> Carpenter finds, so I'm hoping he and I can team up as well.
>
> In terms of actual review details, the reviews have two parts:
>
> 1) The review prompts. These are stand alone and can just work on any
> kernel tree. This is what BPF CI is currently using:
>
> https://github.com/masoncl/review-prompts/
>
> These prompts can also debug oopsen or syzbot reports (with varying
> success).
In general, I like this entire idea a lot, because I believe it could
drop many style or trivial review points, including obsolete/older code
patterns.
Qualcomm is trying to do something similar internally and they published
their code as well:
https://github.com/qualcomm/PatchWise/tree/main/patchwise/patch_review/ai_review
Different AI engines can be plugged in, which addresses some of the
concerns in this thread that some contributors are expected to use
their employer's AI.
They run an instance of that bot internally on all patches BEFORE
posting upstream; however, that bot does not yet have AI review
enabled, maybe because of too many false positives?
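The pluggable-engine layer is roughly this shape; a hypothetical
sketch for illustration, not PatchWise's actual interface:

  from abc import ABC, abstractmethod

  class ReviewEngine(ABC):
      # One narrow interface per AI backend, so a vendor-mandated or
      # locally hosted model can be swapped in without touching the bot.
      @abstractmethod
      def review(self, patch_text: str) -> list[str]:
          """Return a list of human-readable findings."""

  class LocalModelEngine(ReviewEngine):
      def review(self, patch_text: str) -> list[str]:
          # e.g. query an open-weights model served on the local network
          return []

  def review_patch(engine: ReviewEngine, patch_text: str) -> list[str]:
      return engine.review(patch_text)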
I also think this might be a very useful tool for beginners to get
accustomed to the kernel style of commit msgs and how a patch is
supposed to look.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 10:03 ` Chris Mason
@ 2025-10-10 7:54 ` Laurent Pinchart
2025-10-10 11:40 ` James Bottomley
0 siblings, 1 reply; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-10 7:54 UTC (permalink / raw)
To: Chris Mason
Cc: Bird, Tim, James Bottomley, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Thu, Oct 09, 2025 at 06:03:45AM -0400, Chris Mason wrote:
> On 10/9/25 5:14 AM, Laurent Pinchart wrote:
>
> >> Indeed. All the more reason to enforce it at the source. It then becomes
> >> a cost for the contributor instead of the upstream community, which is
> >> going to scale better.
> >
> > Forcing contributors to pay for access to proprietary tools is not
> > acceptable. Forcing contributors to even run proprietary tools is not
> > acceptable. If maintainers want contributions to go through any
> > proprietary tooling before submission, then this has to run on the
> > maintainer side (be it on a maintainer's machine, in some form of CI, or
> > somewhere else).
>
> I strongly agree with Laurent here.
>
> > You're right that cost would then be a problem. I can certainly imagine
> > $popular_ai_company sponsoring this short term, until we're
> > vendor-locked and they stop sponsorship. I don't think rushing in that
> > direction is a good idea.
>
> I don't think vendor lock-in is a problem though. We're in a phase
> where there is a great deal of competition and innovation in the space,
> and the prompts themselves are just English text against a general
> purpose AI.
I'm not that concerned about being locked to a specific vendor, but more
about being locked to a proprietary technology. Today I'm sure that we
could get sponsorship from different LLM vendors. They're competing to
attract more users, so it would be in their interest. Tomorrow, when the
market calms down, the story will be different.
I liken this to building a dam to contain the patch flood. It could give
us some room to breathe, and we may then decide to ignore that the water
rises behind the flood gates. When that happens, we won't be able to go
back to where we were before if we later realize we can't have
FOSS-compatible flood gates.
If we were to push the burden of running LLM-based review to
contributors this wouldn't affect us that much, but if we run it on the
maintainers' side (be it on the server side with bots that get patches
from mailing lists, CI systems that feed from patchwork, or on the
client side) the risk of vendor lock-in is higher. Maybe proprietary
technology lock-in would be a better description as this isn't about a
single vendor.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 14:51 ` Andrew Lunn
2025-10-09 15:05 ` Steven Rostedt
@ 2025-10-10 7:59 ` Laurent Pinchart
2025-10-10 14:15 ` Bird, Tim
1 sibling, 1 reply; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-10 7:59 UTC (permalink / raw)
To: Andrew Lunn
Cc: Steven Rostedt, Bird, Tim, James Bottomley, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Thu, Oct 09, 2025 at 04:51:39PM +0200, Andrew Lunn wrote:
> On Thu, Oct 09, 2025 at 10:30:19AM -0400, Steven Rostedt wrote:
> > On Thu, 9 Oct 2025 12:14:05 +0300 Laurent Pinchart wrote:
> >
> > > Forcing contributors to pay for access to proprietary tools is not
> > > acceptable. Forcing contributors to even run proprietary tools is not
> > > acceptable. If maintainers want contributions to go through any
> > > proprietary tooling before submission, then this has to run on the
> > > maintainer side (be it on a maintainer's machine, in some form of CI, or
> > > somewhere else).
> >
> > One way I see this working is to attach it to patchwork. Sending a patch to
> > the BPF mailing list has their patchwork trigger a bunch of tests and it
> > will tell you if it passed or failed. I'm assuming if it failed, it doesn't
> > add it to patchwork and the maintainers will ignore it.
>
> Not quite, at least not for netdev, which shares some infrastructure
> with BPF. Patches get ingested into patchwork, and then some time
> later, the CI system will get them from patchwork and run the
> tests. The results are then added to patchwork, both as red/yellow/green
> traffic lights, and clickable links to the test results. Failing tests
> don't automatically cause patchwork to change the status to "Change
> Requested", a human is needed to look at the actual failures and
> decide if the patch should be rejected or not.
The Linux media subsystem CI does pretty much the same. It's been
working quite OK, although dealing with false positives is not easy.
checkpatch.pl for instance returns lots of false positives (or rather
real violations of the coding style that are fine to ignore), and
contributors get an automated e-mail to tell them to fix and resubmit.
> We also make it clear, this is not a public CI system anybody can
> submit patches to for testing. We expect patches to be "submission
> quality", not WIP. Anybody abusing that will get spoken to.
>
> > Attaching AI to patchwork could be useful as well. But this would run on
> > some server that someone will have to pay for. But it will not be the
> > submitter.
>
> The netdev CI does not run on the patchwork instance. It just uses the
> API patchwork offers to get patches from it, and tell it about test
> results. But you are correct, somebody pays for the netdev CI, and it
> is not the submitter.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 7:54 ` Laurent Pinchart
@ 2025-10-10 11:40 ` James Bottomley
2025-10-10 11:53 ` Laurent Pinchart
2025-10-10 14:35 ` Bird, Tim
0 siblings, 2 replies; 68+ messages in thread
From: James Bottomley @ 2025-10-10 11:40 UTC (permalink / raw)
To: Laurent Pinchart, Chris Mason
Cc: Bird, Tim, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Fri, 2025-10-10 at 10:54 +0300, Laurent Pinchart wrote:
> On Thu, Oct 09, 2025 at 06:03:45AM -0400, Chris Mason wrote:
[...]
> I'm not that concerned about being locked to a specific vendor, but
> more about being locked to a proprietary technology. Today I'm sure
> that we could get sponsorship from different LLM vendors. They're
> competing to attract more users, so it would be in their interest.
> Tomorrow, when the market calms down, the story will be
> different.
>
> I liken this to building a dam to contain the patch flood. It could
> give us some room to breathe, and we may then decide to ignore that
> the water rises behind the flood gates. When that happens, we won't
> be able to go back to where we were before if we later realize we
> can't have FOSS-compatible flood gates.
This is pre-supposing that AI will always be proprietary. Given the
vast expansion of open source over the last couple of decades I
wouldn't take that bet.
> If we were to push the burden of running LLM-based review to
> contributors this wouldn't affect us that much, but if we run it on
> the maintainers' side (be it on the server side with bots that get
> patches from mailing lists, CI systems that feed from patchwork, or
> on the client side) the risk of vendor lock-in is higher.
Pushing the burden on to contributors always causes more friction than
building it into the CI. Plus if there's a cost involved you're making
contribution pay for play which really doesn't work out well.
> Maybe proprietary technology lock-in would be a better description
> as this isn't about a single vendor.
Well, firstly we've had this before: us with bitkeeper and most
recently Kubernetes with Slack and everyone has survived. But secondly
the far more likely scenario is that the AI stock bubble bursts and the
investment dries up, changing the way AI is done (definitely killing the
"who can buy power stations and huge hardware installations" model of
today) and who the players are (probably the point at which open source
becomes more prevalent).
However, regardless of what happens in future, it will be way easier to
cope with if we've got AI in the CI rather than trying to push it into
contributor tooling.
Regards,
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 11:40 ` James Bottomley
@ 2025-10-10 11:53 ` Laurent Pinchart
2025-10-10 14:21 ` Steven Rostedt
2025-10-10 14:35 ` Bird, Tim
1 sibling, 1 reply; 68+ messages in thread
From: Laurent Pinchart @ 2025-10-10 11:53 UTC (permalink / raw)
To: James Bottomley
Cc: Chris Mason, Bird, Tim, Andrew Lunn, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
On Fri, Oct 10, 2025 at 07:40:45AM -0400, James Bottomley wrote:
> On Fri, 2025-10-10 at 10:54 +0300, Laurent Pinchart wrote:
> > On Thu, Oct 09, 2025 at 06:03:45AM -0400, Chris Mason wrote:
> [...]
> > I'm not that concerned about being locked to a specific vendor, but
> > more about being locked to a proprietary technology. Today I'm sure
> > that we could get sponsorship from different LLM vendors. They're
> > competing to attract more users, so it would be in their interest.
> > Tomorrow, when the market calms down, the story will be
> > different.
> >
> > I liken this to building a dam to contain the patch flood. It could
> > give us some room to breathe, and we may then decide to ignore that
> > the water rises behind the flood gates. When that happens, we won't
> > be able to go back to where we were before if we later realize we
> > can't have FOSS-compatible flood gates.
>
> This is pre-supposing that AI will always be proprietary. Given the
> vast expansion of open source over the last couple of decades I
> wouldn't take that bet.
I wouldn't bet either way, but I tend to play it safe. I wouldn't want
to go in the direction of relying on proprietary technologies based
solely on the hope that at some point in the future we will get free
alternatives. I'm absolutely fine if people want to start experimenting
now based on that assumption, as long as we make it clear it's too early
to cross a point of no return.
> > If we were to push the burden of running LLM-based review to
> > contributors this wouldn't affect us that much, but if we run it on
> > the maintainers' side (be it on the server side with bots that get
> > patches from mailing lists, CI systems that feed from patchwork, or
> > on the client side) the risk of vendor lock-in is higher.
>
> Pushing the burden on to contributors always causes more friction than
> building it into the CI. Plus if there's a cost involved you're making
> contribution pay for play which really doesn't work out well.
Neither side is ideal :-/ Given all the arguments made in this thread, I
think we would need to run checks on the maintainer side in any case,
even if we were to try and push them to the developer side. For that
reason it makes sense to first focus on the maintainer side, and try to
solve the technical, financial and freedom issues there as we'll have to
anyway.
As for pay-for-play, let's discuss a tax on upstream contributions
around a bottle of nihonshu at LPC. That topic requires not being sober
:-)
> > Maybe proprietary technology lock-in would be a better description
> > as this isn't about a single vendor.
>
> Well, firstly we've had this before: us with bitkeeper and most
> recently Kubernetes with Slack and everyone has survived. But secondly
> the far more likely scenario is that the AI stock bubble bursts and the
> investment dries up changing the way AI is done (definitely killing the
> who can by powerstations and huge hardware installations model of
> today) and who the players are (probably the point at which open source
> becomes more prevalent).
I'd love to fast-track to that and skip the current tragicomedy.
> However, regardless of what happens in future, it will be way easier to
> cope with if we've got AI in the CI rather than trying to push it into
> contributor tooling.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 3:08 ` Krzysztof Kozlowski
@ 2025-10-10 14:12 ` Chris Mason
2025-10-31 16:51 ` Stephen Hemminger
1 sibling, 0 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-10 14:12 UTC (permalink / raw)
To: Krzysztof Kozlowski, ksummit, Dan Carpenter, Alexei Starovoitov
On 10/9/25 11:08 PM, Krzysztof Kozlowski wrote:
> On 08/10/2025 19:04, Chris Mason wrote:
>> Hi everyone,
[ ... ]
>>
>> https://github.com/masoncl/review-prompts/
>>
>> These prompts can also debug oopsen or syzbot reports (with varying
>> success).
>
>
> In general, I like this entire idea a lot, because I believe it could
> drop many style or trivial review points, including obsolete/older code
> patterns.
>
> Qualcomm is trying to do something similar internally and they published
> their code as well:
> https://github.com/qualcomm/PatchWise/tree/main/patchwise/patch_review/ai_review
> Different AI engines can be plugged, which solves some of the concerns
> in this thread that some are expected to use employer's AI.
>
> They run that instance of bot internally on all patches BEFORE posting
> upstream, however that bot does not have yet AI-review enabled, maybe
> because of too many false positives?
>
> I also think this might be very useful tool for beginners to get
> accustomed to kernel style of commit msgs and how the patch is supposed
> to look like.
I really agree with Linus's comments about how much we can and can't
trust AI output, and I've tried to focus my effort for now on high
signal tools that maintainers would be able to use in CI without too
much frustration.
My first prompts told the AI to assume the patches had bugs, and it would
consistently just invent bugs. That's not the end of the world, but the
explanations are always convincing enough that you'd waste a bunch of
time tracking them down.
This is why the prompts spend so many words telling the AI how to prove
a problem is real, and framing the review to assume the patch author is
already an expert. There's still a bunch of work to do there, but
hopefully it's small subsystem-specific rules, or short explanations
that rule out large classes of problems.
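For a flavor of what that framing looks like, a guardrail in that
spirit might read as follows (an illustrative paraphrase, not a quote
from the repository):

  Assume the patch author is an expert. Before reporting a bug you
  must: (1) quote the exact lines at fault, (2) trace a call path
  that actually reaches them, and (3) describe the concrete state
  (locks held, pointer values, refcounts) that triggers the failure.
  If any of these steps fails, drop the finding instead of reporting it.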
At some point, I really want to be able to go back to telling the AI to
assume the patch is wrong, but it feels really far off.
I hadn't seen qualcomm's tools, but I think it's a really interesting
and slightly different use case.
If you're standing up internal gatekeepers to make sure early reviews
are done and internal process is followed, it's a chance to shift the
review bias toward teaching relatively new contributors about the kernel
and helping them research alternative approaches. The gatekeepers still
need a person in the loop, but this does fit with the maintainer use case
I've been targeting; it's just someone that might not be getting
external credit for their work.
-chris
^ permalink raw reply [flat|nested] 68+ messages in thread
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 7:59 ` Laurent Pinchart
@ 2025-10-10 14:15 ` Bird, Tim
2025-10-10 15:07 ` Joe Perches
0 siblings, 1 reply; 68+ messages in thread
From: Bird, Tim @ 2025-10-10 14:15 UTC (permalink / raw)
To: laurent.pinchart, Andrew Lunn
Cc: Steven Rostedt, James Bottomley, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
> -----Original Message-----
> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> On Thu, Oct 09, 2025 at 04:51:39PM +0200, Andrew Lunn wrote:
> > On Thu, Oct 09, 2025 at 10:30:19AM -0400, Steven Rostedt wrote:
> > > On Thu, 9 Oct 2025 12:14:05 +0300 Laurent Pinchart wrote:
> > >
...
> The Linux media subsystem CI does pretty much the same. It's been
> working quite OK, although dealing with false positives is not easy.
> checkpatch.pl for instance returns lots of false positives (or rather
> real violations of the coding style that are fine to ignore), and
> contributors get an automated e-mail to tell them to fix and resubmit.
False positives are one of the issues we should address. See below for
some ideas to address them.
In my understanding, if developers are not running
checkpatch.pl before sending things in, it creates some churn. I think
our current way of handling this is just to automate handling the churn
instead of actually addressing the problem at the time of development.
Ostensibly, people are not ignoring errors from gcc before sending patches in.
Maybe if some of the checks that checkpatch.pl does (the non-false-positive ones)
were integrated into the build, it would result in those items having immediate
visibility to the developer, at the time of code creation. And they'd get fixed
before the patch was ever created.
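As a strawman for what "integrated into the build" could mean, here
is a small wrapper (the ignore list and paths are placeholders) that
feeds the uncommitted diff to checkpatch.pl, so complaints surface
while the code is being written rather than after the patch is posted:

  import subprocess, sys

  def checkpatch_working_tree(ignore=("FILE_PATH_CHANGES",)):
      # Diff the working tree and feed it to checkpatch.pl on stdin,
      # mirroring the common `git diff | scripts/checkpatch.pl -` idiom.
      diff = subprocess.run(["git", "diff", "HEAD"],
                            capture_output=True, text=True).stdout
      if not diff.strip():
          return 0
      res = subprocess.run(["scripts/checkpatch.pl", "--no-signoff",
                            "--ignore", ",".join(ignore), "-"],
                           input=diff, text=True)
      return res.returncode

  if __name__ == "__main__":
      sys.exit(checkpatch_working_tree())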
One issue with checkpatch.pl is that some items it complains about are low
importance or even ignorable, depending on the taste of the maintainer.
This means different maintainers will have different views of what constitutes
a false positive. This makes it difficult to be strict about running the tool or
addressing the issues it raises. Also, it is run too late in development, IMHO
(see above).
I have ideas to address the false positive rate, based on features that checkpatch.pl
already has, as well as ideas for handling some of the concerns that running
checkpatch.pl (or an equivalent) at build time would raise. Some of these might
apply to AI review as well. Let me know if you want me to elaborate, or if we
should just discuss in Tokyo.
-- Tim
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 11:53 ` Laurent Pinchart
@ 2025-10-10 14:21 ` Steven Rostedt
0 siblings, 0 replies; 68+ messages in thread
From: Steven Rostedt @ 2025-10-10 14:21 UTC (permalink / raw)
To: Laurent Pinchart
Cc: James Bottomley, Chris Mason, Bird, Tim, Andrew Lunn, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Fri, 10 Oct 2025 14:53:34 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> >
> > This is pre-supposing that AI will always be proprietary. Given the
> > vast expansion of open source over the last couple of decades I
> > wouldn't take that bet.
>
> I wouldn't bet either way, but I tend to play it safe. I wouldn't want
> to go in the direction of relying on proprietary technologies based
> solely on the hope that at some point in the future we will get free
> alternatives. I'm absolutely fine if people want to start experimenting
> now based on that assumption, as long as we make it clear it's too early
> to cross a point of no return.
If it's just in CI, I don't think we need to worry too much about proprietary
lock-in. It's currently a "nice to have". That is, we can catch issues in
submissions faster and hopefully maintainers do not need to waste time on
reviewing bugs in a patch that an AI caught. If it sends the report to the
submitter and also Cc's the maintainer, it would benefit both. I find
reading a review from someone else (and verifying it) much easier than
doing the review myself ;-)
>
> > > If we were to push the burden of running LLM-based review to
> > > contributors this wouldn't affect us that much, but if we run it on
> > > the maintainers' side (be it on the server side with bots that get
> > > patches from mailing lists, CI systems that feed from patchwork, or
> > > on the client side) the risk of vendor lock-in is higher.
> >
> > Pushing the burden on to contributors always causes more friction than
> > building it into the CI. Plus if there's a cost involved you're making
> > contribution pay for play which really doesn't work out well.
>
> Neither side is ideal :-/ Given all the arguments given in this thread I
> think we would need to run checks on the maintainer side in any case,
> even if we were to try and push them to the developer side. For that
> reason it makes sense to first focus on the maintainer side, and try to
> solve the technical, financial and freedom issues there as we'll have to
> anyway.
>
> As for pay-for-play, let's discuss a tax on upstream contributions
> around a bottle of nihonshu at LPC. That topic requires not being sober
> :-)
I'd like to be part of this "discussion" ;-)
>
> > > Maybe proprietary technology lock-in would be a better description
> > > as this isn't about a single vendor.
> >
> > Well, firstly we've had this before: us with bitkeeper and most
> > recently Kubernetes with Slack and everyone has survived. But secondly
> > the far more likely scenario is that the AI stock bubble bursts and the
> > investment dries up, changing the way AI is done (definitely killing the
> > "who can buy power stations and huge hardware installations" model of
> > today) and who the players are (probably the point at which open source
> > becomes more prevalent).
>
> I'd love to fast-track to that and skip the current tragicomedy.
>
> > However, regardless of what happens in future, it will be way easier to
> > cope with if we've got AI in the CI rather than trying to push it into
> > contributor tooling.
100% agree
-- Steve
^ permalink raw reply [flat|nested] 68+ messages in thread
* RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 11:40 ` James Bottomley
2025-10-10 11:53 ` Laurent Pinchart
@ 2025-10-10 14:35 ` Bird, Tim
1 sibling, 0 replies; 68+ messages in thread
From: Bird, Tim @ 2025-10-10 14:35 UTC (permalink / raw)
To: James Bottomley, laurent.pinchart, Chris Mason
Cc: Andrew Lunn, ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
> -----Original Message-----
> From: James Bottomley <James.Bottomley@HansenPartnership.com>
>
> On Fri, 2025-10-10 at 10:54 +0300, Laurent Pinchart wrote:
> > On Thu, Oct 09, 2025 at 06:03:45AM -0400, Chris Mason wrote:
> [...]
[...]
> > If we were to push the burden of running LLM-based review to
> > contributors this wouldn't affect us that much, but if we run it on
> > the maintainers' side (be it on the server side with bots that get
> > patches from mailing lists, CI systems that feed from patchwork, or
> > on the client side) the risk of vendor lock-in is higher.
>
> Pushing the burden on to contributors always causes more friction than
> building it into the CI.
This is true, but unfortunately this doesn't do anything to help contributors
perform this step themselves. It actually, IMO, disincentivizes it.
> Plus if there's a cost involved you're making
> contribution pay for play which really doesn't work out well.
>
> > Maybe proprietary technology lock-in would be a better description
> > as this isn't about a single vendor.
>
> Well, firstly we've had this before: us with bitkeeper and most
> recently Kubernetes with Slack and everyone has survived. But secondly
> the far more likely scenario is that the AI stock bubble bursts and the
> investment dries up, changing the way AI is done (definitely killing the
> "who can buy power stations and huge hardware installations" model of
> today) and who the players are (probably the point at which open source
> becomes more prevalent).
>
> However, regardless of what happens in future, it will be way easier to
> cope with if we've got AI in the CI rather than trying to push it into
> contributor tooling.
I think if you put AI review into contributor tooling (like b4, or as a target in
the kernel build), it still allows central control over the mechanisms and
policies for AI review, so they can adapt to changing AI considerations.
It should also make integration into CI easier.
But it lowers the barriers (e.g. the AI learning curve) for contributors to do
this themselves, and to migrate to a model where these bugs are caught earlier
in the development cycle, before the patches are created, ultimately reducing
the "submit-feedback-fix" churn I mention elsewhere.
We use a "build-feedback-fix" cycle for compilation errors. It would be nice if we could
move to a similar model (instead of a "submit-feedback-fix" cycle) for coding style
and semantic errors as well.
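As a strawman for the contributor-tooling side, even a stock git hook gets
most of the way there. A rough sketch (it assumes the in-tree
scripts/checkpatch.pl; the hook itself is hypothetical, not something I'm
proposing for the tree):

  #!/bin/sh
  # .git/hooks/post-commit - run checkpatch on the commit just created,
  # so style issues surface immediately, like compiler errors, instead
  # of after the patch hits the list.
  exec ./scripts/checkpatch.pl --git HEAD --terse

b4 or a kbuild target could wrap the same invocation, with per-subsystem
policy layered on top.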
Regards,
-- Tim
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 14:15 ` Bird, Tim
@ 2025-10-10 15:07 ` Joe Perches
2025-10-10 16:01 ` checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools) Bird, Tim
2025-10-10 16:11 ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
0 siblings, 2 replies; 68+ messages in thread
From: Joe Perches @ 2025-10-10 15:07 UTC (permalink / raw)
To: Bird, Tim, laurent.pinchart, Andrew Lunn
Cc: Steven Rostedt, James Bottomley, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring
On Fri, 2025-10-10 at 14:15 +0000, Bird, Tim wrote:
> I have ideas to address the false positive rate, based on features that checkpatch.pl
> already has, as well as ideas for handling some of the concerns that running
> checkpatch.pl (or an equivalent) at build time would raise. Some of these might
> apply to AI review as well. Let me know if you want me to elaborate, or if we
> should just discuss in Tokyo.
Please elaborate.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-09 18:56 ` Linus Torvalds
@ 2025-10-10 15:52 ` Mauro Carvalho Chehab
0 siblings, 0 replies; 68+ messages in thread
From: Mauro Carvalho Chehab @ 2025-10-10 15:52 UTC (permalink / raw)
To: Linus Torvalds
Cc: Chris Mason, Arnaldo Carvalho de Melo, Steven Rostedt,
Laurent Pinchart, Bird, Tim, James Bottomley, Andrew Lunn,
ksummit, Dan Carpenter, Alexei Starovoitov, Rob Herring
On Thu, 9 Oct 2025 11:56:23 -0700,
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Thu, 9 Oct 2025 at 11:43, Chris Mason <clm@meta.com> wrote:
> >
> > I think it's also important to remember that AI is sometimes wildly
> > wrong. Having the reviews show up on a list where more established
> > developers can call bullshit definitely helps protect against wasting
> > people's time.
>
> I really want any AI effort to be primarily geared towards
> maintainers, not end developers.
>
> And this is a large part of the reason. I think we've all seen the
> garbage end of AI, and how it can generate more work rather than less.
>
> We see the downsides on the security list where people use AI to
> generate questionable reports, we see it in various general bug
> handling, and we've seen it on the infrastructure where the AI tools
> cause huge issues.
I recently read an article about LLM Needle-In-a-Haystack issues
with current models(*). The problem is that, even when they do support
big input windows (and the input size is proportional to the number of
statements in the program code), current AI models usually lose needles
(for instance, a piece of bad code) once the context grows beyond just
a small percentage of the maximum(*).
(*) I didn't save the link to the article I read, but this one
https://arxiv.org/html/2406.10149v2 is somewhat similar.
Basically, even when the model supports large contexts,
the probability of losing a needle in the haystack increases
with the input size, to the point that even a relatively small
input can cause false negatives. See this comment about figure
1:
"GPT-4 effectively uses only about 10% of the full 128K window."
That's only ~13K effectively-used tokens out of 128K, which a large
patch plus the surrounding code it touches can easily exceed.
So, yeah, if one uses AI to review patches, there is a high risk
of mistakes, meaning a maintainer or an experienced developer is needed
to review "AI reviews".
> Fairly recently Jens gave up maintaining his own git server just
> because of the horror that is AI scraping tools.
This is a different issue. I needed to solve such issues on linuxTV.org
as well. The problem there doesn't seem to be due to someone giving an AI
some prompt, but, instead, due to bad automation used for LLM training
that behaves like a DDoS attack, trying to retrieve all possible
existing and previously-existing pages continuously, using
multiple connections from different IP addresses.
>
> Yes, that scraping issue a "secondary" kind of "more work rather than
> less", but it's very much a real issue nonetheless. Just the
> infrastructure load is a real thing.
>
> Some long term goal may well be to help end developers, but I really
> think it should not be seen as any kind of primary goal for now and
> shouldn't even be on anybody's radar as a design issue.
>
> Exactly because I think we need that "established developers can call
> bullshit" without being overwhelmed by said BS from external sources.
>
> So I think that only once any AI tools are actively helping
> maintainers in a day-to-day workflow should people even *look* at
> having non-maintainers use them.
For bug reviews, agreed. AI can still be helpful for developers to
do things like writing unit tests and more trivial tasks, provided
the results are reviewed/fixed by the developer using such tools.
>
> And I say that as somebody who does think it's going to be a big help.
> I know there are lots of less optimistic people around.
>
> Linus
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools)
2025-10-10 15:07 ` Joe Perches
@ 2025-10-10 16:01 ` Bird, Tim
2025-10-10 17:11 ` Rob Herring
2025-10-10 16:11 ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
1 sibling, 1 reply; 68+ messages in thread
From: Bird, Tim @ 2025-10-10 16:01 UTC (permalink / raw)
To: Joe Perches, laurent.pinchart, Andrew Lunn
Cc: Steven Rostedt, James Bottomley, Chris Mason, ksummit,
Dan Carpenter, Alexei Starovoitov, Rob Herring, Knut.omang
> -----Original Message-----
> From: Joe Perches <joe@perches.com>
> On Fri, 2025-10-10 at 14:15 +0000, Bird, Tim wrote:
> > I have ideas to address the false positive rate, based on features that checkpatch.pl
> > already has, as well as ideas for handling some of the concerns that running
> > checkpatch.pl (or an equivalent) at build time would raise. Some of these might
> > apply to AI review as well. Let me know if you want me to elaborate, or if we
> > should just discuss in Tokyo.
>
> Please elaborate.
Before I rattled off my ideas, I did a quick check to see if anything like what I had
in mind had already been suggested, and I found this:
https://lore.kernel.org/all/cover.3dd1d0c1f15a7a7c418f20ebafc9719afc1c2704.1510840787.git-series.knut.omang@oracle.com/
That's a submission by Knut Omang from 2017 that adds support for running
checkpatch.pl on the whole kernel source, integrated into kbuild.
It looks like it was not adopted.
That patchset allows:
1) fine-tuning the set of items that checkpatch checks for
2) using a config file to control the fine tuning
3) allowing selection of the groups of checks (so that preferences of
individual maintainers or sub-systems could be adhered to)
My ideas for tackling false positives were along those same lines.
1) determine a list of "must-have" items, that everyone supports as true-positives
2) having a way to designate groups of items that were deemed as true-positives
- by creating a profile of items, specific to a subsystem or maintainer
- having the 'true-positives' profile under the control of the maintainer or subsystem
3) creating a mechanism to tell checkpatch which profile to use
In terms of runtime issues, there are two ways to "encourage (/force?)" users
to run the checks, while still managing overhead:
1) add a checkpatch.pl invocation sometime during a regular build (either
before or after the compiler invocation)
- My idea to avoid overhead here is to add a command-line option to the kernel
build indicating the number of commits to include in patch review (defaulting to 1), and only
run checkpatch on either a) each commit itself, or b) the files touched by each commit.
- OR, only run checkpatch on source files that differ from the git index
(rough sketch after this list).
2) alternatively, add a gcc plugin that performs some set of checks similar to checkpatch.pl,
maybe calling checkpatch itself, but maybe just directly checking for definite true-positives
itself. This would involve migrating the true-positives check from checkpatch into a gcc plugin.
- then dealing with clang builds
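To make the second variant of 1) concrete, a rough sketch (the checkpatch
options are existing ones; the kbuild wiring around it is the hypothetical
part):

  # Check only the source files that differ from the git index,
  # instead of the whole tree.
  git diff --name-only -- '*.c' '*.h' |
      xargs -r ./scripts/checkpatch.pl --terse --file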
I think there would be too much noise, and too much latency, to run
checkpatch on the entire kernel source on every build.
But my goal here is to have checkpatch issues show up like gcc errors - early in development,
as the errors are made, so they never make it into patch emails.
I've been working recently with gcc plugins, and they aren't as scary to write
as I initially thought. But converting all the checkpatch checks to C code would still
be quite a bit of work.
-- Tim
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 15:07 ` Joe Perches
2025-10-10 16:01 ` checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools) Bird, Tim
@ 2025-10-10 16:11 ` Steven Rostedt
2025-10-10 16:47 ` Joe Perches
2025-10-11 10:28 ` Mark Brown
1 sibling, 2 replies; 68+ messages in thread
From: Steven Rostedt @ 2025-10-10 16:11 UTC (permalink / raw)
To: Joe Perches
Cc: Bird, Tim, laurent.pinchart, Andrew Lunn, James Bottomley,
Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On Fri, 10 Oct 2025 08:07:39 -0700
Joe Perches <joe@perches.com> wrote:
> On Fri, 2025-10-10 at 14:15 +0000, Bird, Tim wrote:
> > I have ideas to address the false positive rate, based on features that checkpatch.pl
> > already has, as well as ideas for handling some of the concerns that running
> > checkpatch.pl (or an equivalent) at build time would raise. Some of these might
> > apply to AI review as well. Let me know if you want me to elaborate, or if we
> > should just discuss in Tokyo.
>
> Please elaborate.
Note, checkpatch.pl fails on pretty much anything in include/trace, as
those files contain complex macros that are basically their own language.
Anything that tries to make them follow kernel conventions simply makes
the output uglier and harder to read.
-- Steve
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 16:11 ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
@ 2025-10-10 16:47 ` Joe Perches
2025-10-10 17:42 ` Steven Rostedt
2025-10-11 10:28 ` Mark Brown
1 sibling, 1 reply; 68+ messages in thread
From: Joe Perches @ 2025-10-10 16:47 UTC (permalink / raw)
To: Steven Rostedt
Cc: Bird, Tim, laurent.pinchart, Andrew Lunn, James Bottomley,
Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On Fri, 2025-10-10 at 12:11 -0400, Steven Rostedt wrote:
> On Fri, 10 Oct 2025 08:07:39 -0700
> Joe Perches <joe@perches.com> wrote:
>
> > On Fri, 2025-10-10 at 14:15 +0000, Bird, Tim wrote:
> > > I have ideas to address the false positive rate, based on features that checkpatch.pl
> > > already has, as well as ideas for handling some of the concerns that running
> > > checkpatch.pl (or an equivalent) at build time would raise. Some of these might
> > > apply to AI review as well. Let me know if you want me to elaborate, or if we
> > > should just discuss in Tokyo.
> >
> > Please elaborate.
>
> Note, checkpatch.pl fails on pretty much anything in include/trace, as
> those files contain complex macros that are basically their own language.
> Anything that tries to make them follow kernel conventions simply makes
> the output uglier and harder to read.
I recall suggesting something about that once.
Basically, ignore the files.
https://lore.kernel.org/lkml/a587dac9e02cfde669743fd54ab41a3c6014c5e9.camel@perches.com/
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools)
2025-10-10 16:01 ` checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools) Bird, Tim
@ 2025-10-10 17:11 ` Rob Herring
2025-10-10 17:33 ` Arnaldo Carvalho de Melo
2025-10-10 19:21 ` Joe Perches
0 siblings, 2 replies; 68+ messages in thread
From: Rob Herring @ 2025-10-10 17:11 UTC (permalink / raw)
To: Bird, Tim
Cc: Joe Perches, laurent.pinchart, Andrew Lunn, Steven Rostedt,
James Bottomley, Chris Mason, ksummit, Dan Carpenter,
Alexei Starovoitov, Knut.omang
On Fri, Oct 10, 2025 at 11:03 AM Bird, Tim <Tim.Bird@sony.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Joe Perches <joe@perches.com>
> > On Fri, 2025-10-10 at 14:15 +0000, Bird, Tim wrote:
> > > I have ideas to address the false positive rate, based on features that checkpatch.pl
> > > already has, as well as ideas for handling some of the concerns that running
> > > checkpatch.pl (or an equivalent) at build time would raise. Some of these might
> > > apply to AI review as well. Let me know if you want me to elaborate, or if we
> > > should just discuss in Tokyo.
> >
> > Please elaborate.
>
> Before I rattled off my ideas, I did a quick check to see if anything like what I had
> in mind had already been suggested, and I found this:
> https://lore.kernel.org/all/cover.3dd1d0c1f15a7a7c418f20ebafc9719afc1c2704.1510840787.git-series.knut.omang@oracle.com/
>
> That's a submission by Knut Omang from 2017 that adds support for running
> checkpatch.pl on the whole kernel source, integrated into kbuild.
>
> It looks like it was not adopted.
>
> That patchset allows:
> 1) fine-tuning the set of items that checkpatch checks for
> 2) using a config file to control the fine tuning
> 3) allowing selection of the groups of checks (so that preferences of
> individual maintainers or sub-systems could be adhered to)
>
> My ideas for tackling false-positives was along those same lines.
> 1) determine a list of "must-have" items, that everyone supports as true-positives
> 2) having a way to designate groups of items that were deemed as true-positives
> - by creating a profile of items, specific to a subsystem or maintainer
> - having the 'true-positives' profile under the control of the maintainer or subsystem
> 3) creating a mechanism to tell checkpatch which profile to use
>
> In terms of runtime issues, there are two ways to "encourage (/force?)" users
> to run the checks, while still managing overhead:
> 1) add a checkpatch.pl invocation sometime during a regular build (either
> before or after the compiler invocation)
> - My idea to avoid overhead here is to add a command-line option to the kernel
> build, to indicate the number of commits to include in patch review (default to 1), and only
> run checkpatch on either a) each commit itself, or b) the files touched by each commit.
> - OR, only run checkpatch on source files that differ from the git index.
> 2) alternatively, add a gcc plugin that performs some set of checks similar to checkpatch.pl,
> maybe calling checkpatch itself, but maybe just directly checking for definite true-positives
> itself. This would involve migrating the true-positives check from checkpatch into a gcc plugin.
> - then dealing with clang builds
>
> I think there would be too much noise, and too much latency, to run
> checkpatch on the entire kernel source on every build.
>
> But my goal here is to have checkpatch issues show up like gcc errors - early in development,
> as the errors are made, so they never make it into patch emails.
Running on the source wouldn't expose things which only show up in the
patch: missing or other S-o-b errors, a missing commit msg, using 'This
patch/commit...', etc. I think it would be good if we ran checkpatch
on every patch, but started with only the checks that have no false
positives and that every maintainer agrees on. Then we can add to it
from there, perhaps enabling per-subsystem checks.
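As a first cut, something like this rough sketch (the type names are from
memory - double-check them against scripts/checkpatch.pl --list-types; the
branch point is just an example):

  # Run checkpatch per commit, restricted to checks believed to
  # have no false positives.
  for c in $(git rev-list --reverse origin/master..HEAD); do
      ./scripts/checkpatch.pl --git "$c" --terse \
          --types MISSING_SIGN_OFF,COMMIT_MESSAGE
  done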
Rob
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools)
2025-10-10 17:11 ` Rob Herring
@ 2025-10-10 17:33 ` Arnaldo Carvalho de Melo
2025-10-10 19:21 ` Joe Perches
1 sibling, 0 replies; 68+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-10 17:33 UTC (permalink / raw)
To: Rob Herring, Bird, Tim
Cc: Joe Perches, laurent.pinchart, Andrew Lunn, Steven Rostedt,
James Bottomley, Chris Mason, ksummit, Dan Carpenter,
Alexei Starovoitov, Knut.omang
On October 10, 2025 2:11:43 PM GMT-03:00, Rob Herring <robh@kernel.org> wrote:
>On Fri, Oct 10, 2025 at 11:03 AM Bird, Tim <Tim.Bird@sony.com> wrote:
>> > From: Joe Perches <joe@perches.com>
>> But my goal here is to have checkpatch issues show up like gcc errors - early in development,
>> as the errors are made, so they never make it into patch emails.
>
>Running on the source wouldn't expose things which only show up in the
>patch. Missing or other S-o-b errors, missing commit msg, using 'This
>patch/commit...', etc. I think it would be good if we ran checkpatch
>on every patch, but start with only checks with no false positives and
>ones every maintainer agrees on. Then we can add to it from there
>perhaps enabling per subsystem checks.
There are many simple things that don't need AI, just automation, that could be done to signal to people how to prepare their contribution in a way that increases the odds it'll be accepted. But since this started as an AI review thread and this message is about tags...
Recently a member of our team showed me the review comments from an LLM, Claude IIRC, and they included:
Submission Guidelines: ✓ COMPLIANT
- Has Signed-off-by
- Has mailing list link
- Follows kernel patch conventions
But that "Has mailing list link" probably isn't following Linus' latest fatwa that Link: tags should only be used if they add some useful info, not just point at the original posting of the patch.
To check whether that is the case, AI may be needed, I think.
- Arnaldo
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 16:47 ` Joe Perches
@ 2025-10-10 17:42 ` Steven Rostedt
0 siblings, 0 replies; 68+ messages in thread
From: Steven Rostedt @ 2025-10-10 17:42 UTC (permalink / raw)
To: Joe Perches
Cc: Bird, Tim, laurent.pinchart, Andrew Lunn, James Bottomley,
Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov,
Rob Herring
On Fri, 10 Oct 2025 09:47:23 -0700
Joe Perches <joe@perches.com> wrote:
> I recall suggesting something about that once.
> Basically, ignore the files.
>
> https://lore.kernel.org/lkml/a587dac9e02cfde669743fd54ab41a3c6014c5e9.camel@perches.com/
Something like that would be nice.
-- Steve
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools)
2025-10-10 17:11 ` Rob Herring
2025-10-10 17:33 ` Arnaldo Carvalho de Melo
@ 2025-10-10 19:21 ` Joe Perches
1 sibling, 0 replies; 68+ messages in thread
From: Joe Perches @ 2025-10-10 19:21 UTC (permalink / raw)
To: Rob Herring, Bird, Tim
Cc: laurent.pinchart, Andrew Lunn, Steven Rostedt, James Bottomley,
Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov,
Knut.omang
On Fri, 2025-10-10 at 12:11 -0500, Rob Herring wrote:
> On Fri, Oct 10, 2025 at 11:03 AM Bird, Tim <Tim.Bird@sony.com> wrote:
> > Before I rattled off my ideas, I did a quick check to see if anything like what I had
> > in mind had already been suggested, and I found this:
> > https://lore.kernel.org/all/cover.3dd1d0c1f15a7a7c418f20ebafc9719afc1c2704.1510840787.git-series.knut.omang@oracle.com/
> >
> > That's a submission by Knut Omang from 2017 that adds support for running
> > checkpatch.pl on the whole kernel source, integrated into kbuild.
> >
> > It looks like it was not adopted.
> >
> > That patchset allows:
> > 1) fine-tuning the set of items that checkpatch checks for
> > 2) using a config file to control the fine tuning
> > 3) allowing selection of the groups of checks (so that preferences of
> > individual maintainers or sub-systems could be adhered to)
> >
> > My ideas for tackling false-positives was along those same lines.
> > 1) determine a list of "must-have" items, that everyone supports as true-positives
> > 2) having a way to designate groups of items that were deemed as true-positives
> > - by creating a profile of items, specific to a subsystem or maintainer
> > - having the 'true-positives' profile under the control of the maintainer or subsystem
> > 3) creating a mechanism to tell checkpatch which profile to use
[]
> I think it would be good if we ran checkpatch
> on every patch, but start with only checks with no false positives and
> ones every maintainer agrees on. Then we can add to it from there
> perhaps enabling per subsystem checks.
checkpatch is stupid.
checkpatch has very few checks with no false positives.
Maybe have something like a per-subdirectory/subsystem
.checkpatch.conf file and have checkpatch walk up the directory chain
when checking patches or files.
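checkpatch already reads default options from a .checkpatch.conf in the
directory it's run from, so a per-subsystem file could reuse the existing
option syntax. A hypothetical example (real option names, made-up policy):

  # drivers/gpu/.checkpatch.conf - hypothetical subsystem profile
  --ignore CAMELCASE,LONG_LINE
  --max-line-length=100

The new part would just be teaching checkpatch to walk up from the files
being checked and merge whatever it finds.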
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 16:11 ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
2025-10-10 16:47 ` Joe Perches
@ 2025-10-11 10:28 ` Mark Brown
1 sibling, 0 replies; 68+ messages in thread
From: Mark Brown @ 2025-10-11 10:28 UTC (permalink / raw)
To: Steven Rostedt
Cc: Joe Perches, Bird, Tim, laurent.pinchart, Andrew Lunn,
James Bottomley, Chris Mason, ksummit, Dan Carpenter,
Alexei Starovoitov, Rob Herring
[-- Attachment #1: Type: text/plain, Size: 459 bytes --]
On Fri, Oct 10, 2025 at 12:11:27PM -0400, Steven Rostedt wrote:
> Note, checkpatch.pl fails on pretty much anything in include/trace, as
> those files contain complex macros that are basically their own language.
> Anything that tries to make them follow kernel conventions simply makes
> the output uglier and harder to read.
Some of the arch code is prone to those issues too, and IIRC inline
assembly, as well as the macros, is also prone to upsetting things.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
` (5 preceding siblings ...)
2025-10-10 3:08 ` Krzysztof Kozlowski
@ 2025-10-14 7:16 ` Dan Carpenter
6 siblings, 0 replies; 68+ messages in thread
From: Dan Carpenter @ 2025-10-14 7:16 UTC (permalink / raw)
To: Chris Mason; +Cc: ksummit, Alexei Starovoitov
On Wed, Oct 08, 2025 at 01:04:54PM -0400, Chris Mason wrote:
> There's definitely overlap between the bugs I'm finding and the bugs Dan
> Carpenter finds, so I'm hoping he and I can team up as well.
Sure!
In my experience, when there are two tools that produce similar output,
the frustrating thing is duplicate bug reports. For
example, Clang generates uninitialized variable warnings and so does
Smatch. Smatch is a bit more ambitious so it catches more bugs but
also has more false positives. I always search on lore for the
function name before reporting uninitialized warnings.
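Something as simple as this sketch is usually enough (the helper script is
hypothetical; the query is just lore's normal search):

  #!/bin/sh
  # lore-dupes.sh <function> - open a lore search for a function name
  # before mailing a report, to spot earlier reports of the same bug.
  xdg-open "https://lore.kernel.org/all/?q=$1"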
> 2) A code indexing tool with MCP server that Claude can use to find
> functions, types, and call chains more effectively. This makes it more
> likely Claude can trace complex relationships in the code:
>
> https://github.com/facebookexperimental/semcode
>
> Asking claude to produce a callgraph for btrfs_search_slot() consumes
> ~444K tokens. With semcode installed, the same query produces better
> results and uses 25K tokens. (btrfs_search_slot() has a huge callchain)
>
Smatch has better call trees than any other tool. The UI might not be
the best, but the data is there in the cross-function database. Where
Smatch falls down is when functions are called from assembly, which isn't
often.
regards,
dan carpenter
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
2025-10-10 3:08 ` Krzysztof Kozlowski
2025-10-10 14:12 ` Chris Mason
@ 2025-10-31 16:51 ` Stephen Hemminger
1 sibling, 0 replies; 68+ messages in thread
From: Stephen Hemminger @ 2025-10-31 16:51 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: Chris Mason, ksummit, Dan Carpenter, Alexei Starovoitov
On Fri, 10 Oct 2025 05:08:47 +0200
Krzysztof Kozlowski <krzk@kernel.org> wrote:
> On 08/10/2025 19:04, Chris Mason wrote:
> > Hi everyone,
> >
> > Depending on how you look at things, this is potentially a topic for
> > either MS or KS.
> >
> > One way to lower the load on maintainers is to make it easier for
> > contributors to send higher quality patches, and to catch errors before
> > they land in various git trees.
> >
> > Along those lines, when the AI code submission thread started over the
> > summer, I decided to see if it was possible to get reasonable code
> > reviews out of AI.
> >
> > There are certainly false positives, but Alexei and the BPF developers
> > wired up my prompts into the BPF CI, and you can find the results in
> > their github CI. Everything in red is a bug the AI review found:
> >
> > https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml
> >
> > My goal for KS/MS is to discuss how to enable maintainers to use review
> > automation tools to lower their workload. I don't want to build new CI
> > here, so the goal would be enabling integration with existing CI.
> >
> > My question for everyone is what would it take to make all of this
> > useful? I'm working on funding for API access, so hopefully that part
> > won't be a problem.
> >
> > There's definitely overlap between the bugs I'm finding and the bugs Dan
> > Carpenter finds, so I'm hoping he and I can team up as well.
> >
> > In terms of actual review details, the reviews have two parts:
> >
> > 1) The review prompts. These are stand alone and can just work on any
> > kernel tree. This is what BPF CI is currently using:
> >
> > https://github.com/masoncl/review-prompts/
> >
> > These prompts can also debug oopsen or syzbot reports (with varying
> > success).
>
>
> In general, I like this entire idea a lot, because I believe it could
> drop many style or trivial review points, including obsolete/older code
> patterns.
>
> Qualcomm is trying to do something similar internally and they published
> their code as well:
> https://github.com/qualcomm/PatchWise/tree/main/patchwise/patch_review/ai_review
> Different AI engines can be plugged in, which solves some of the concerns
> in this thread that some are expected to use their employer's AI.
>
> They run that instance of the bot internally on all patches BEFORE posting
> upstream; however, that bot does not yet have AI review enabled, maybe
> because of too many false positives?
>
> I also think this might be a very useful tool for beginners to get
> accustomed to the kernel style of commit msgs and how a patch is supposed
> to look.
>
> Best regards,
> Krzysztof
>
Has anyone tried asking AI to use existing mailing list reviews of previous patches
to generate its own future prompts?
If so, how did it go and what worked?
^ permalink raw reply [flat|nested] 68+ messages in thread
end of thread, other threads:[~2025-10-31 16:51 UTC | newest]
Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
2025-10-08 17:20 ` Konstantin Ryabitsev
2025-10-08 18:11 ` Sasha Levin
2025-10-08 18:35 ` Chris Mason
2025-10-08 17:57 ` Bart Van Assche
2025-10-08 18:04 ` Chris Mason
2025-10-08 18:14 ` Bart Van Assche
2025-10-08 18:42 ` Chris Mason
2025-10-08 21:08 ` Kees Cook
2025-10-09 1:37 ` Chris Mason
2025-10-08 18:33 ` Sasha Levin
2025-10-09 1:43 ` Chris Mason
2025-10-09 14:49 ` Paul E. McKenney
2025-10-08 19:08 ` Andrew Lunn
2025-10-08 19:28 ` Arnaldo Carvalho de Melo
2025-10-08 19:33 ` Laurent Pinchart
2025-10-08 19:39 ` Arnaldo Carvalho de Melo
2025-10-08 20:29 ` Andrew Lunn
2025-10-08 20:53 ` Mark Brown
2025-10-09 9:37 ` Laurent Pinchart
2025-10-09 12:48 ` Arnaldo Carvalho de Melo
2025-10-08 19:29 ` Laurent Pinchart
2025-10-08 19:50 ` Bird, Tim
2025-10-08 20:30 ` Sasha Levin
2025-10-09 12:32 ` Arnaldo Carvalho de Melo
2025-10-08 20:30 ` James Bottomley
2025-10-08 20:38 ` Bird, Tim
2025-10-08 22:21 ` Jiri Kosina
2025-10-09 9:14 ` Laurent Pinchart
2025-10-09 10:03 ` Chris Mason
2025-10-10 7:54 ` Laurent Pinchart
2025-10-10 11:40 ` James Bottomley
2025-10-10 11:53 ` Laurent Pinchart
2025-10-10 14:21 ` Steven Rostedt
2025-10-10 14:35 ` Bird, Tim
2025-10-09 14:30 ` Steven Rostedt
2025-10-09 14:51 ` Andrew Lunn
2025-10-09 15:05 ` Steven Rostedt
2025-10-10 7:59 ` Laurent Pinchart
2025-10-10 14:15 ` Bird, Tim
2025-10-10 15:07 ` Joe Perches
2025-10-10 16:01 ` checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools) Bird, Tim
2025-10-10 17:11 ` Rob Herring
2025-10-10 17:33 ` Arnaldo Carvalho de Melo
2025-10-10 19:21 ` Joe Perches
2025-10-10 16:11 ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
2025-10-10 16:47 ` Joe Perches
2025-10-10 17:42 ` Steven Rostedt
2025-10-11 10:28 ` Mark Brown
2025-10-09 16:31 ` Chris Mason
2025-10-09 17:19 ` Arnaldo Carvalho de Melo
2025-10-09 17:24 ` Arnaldo Carvalho de Melo
2025-10-09 17:31 ` Alexei Starovoitov
2025-10-09 17:47 ` Arnaldo Carvalho de Melo
2025-10-09 18:42 ` Chris Mason
2025-10-09 18:56 ` Linus Torvalds
2025-10-10 15:52 ` Mauro Carvalho Chehab
2025-10-09 14:47 ` Bird, Tim
2025-10-09 15:11 ` Andrew Lunn
2025-10-09 17:58 ` Mark Brown
2025-10-09 1:15 ` Chris Mason
2025-10-08 20:37 ` Andrew Lunn
2025-10-09 12:40 ` Arnaldo Carvalho de Melo
2025-10-09 14:52 ` Paul E. McKenney
2025-10-10 3:08 ` Krzysztof Kozlowski
2025-10-10 14:12 ` Chris Mason
2025-10-31 16:51 ` Stephen Hemminger
2025-10-14 7:16 ` Dan Carpenter
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox