ksummit.lists.linux.dev archive mirror
 help / color / mirror / Atom feed
* [MAINTAINERS / KERNEL SUMMIT] AI patch review tools
@ 2025-10-08 17:04 Chris Mason
  2025-10-08 17:20 ` Konstantin Ryabitsev
                   ` (6 more replies)
  0 siblings, 7 replies; 68+ messages in thread
From: Chris Mason @ 2025-10-08 17:04 UTC (permalink / raw)
  To: ksummit, Dan Carpenter, Alexei Starovoitov

[-- Attachment #1: Type: text/plain, Size: 2485 bytes --]

Hi everyone,

Depending on how you look at things, this is potentially a topic for
either the Maintainers Summit (MS) or the Kernel Summit (KS).

One way to lower the load on maintainers is to make it easier for
contributors to send higher quality patches, and to catch errors before
they land in various git trees.

Along those lines, when the AI code submission thread started over the
summer, I decided to see if it was possible to get reasonable code
reviews out of AI.

There are certainly false positives, but Alexei and the BPF developers
wired up my prompts into the BPF CI, and you can find the results in
their GitHub CI.  Everything in red is a bug the AI review found:

https://github.com/kernel-patches/bpf/actions/workflows/ai-code-review.yml

My goal for KS/MS is to discuss how to enable maintainers to use review
automation tools to lower their workload.  I don't want to build new CI
here, so the goal would be enabling integration with existing CI.

My question for everyone is what would it take to make all of this
useful?  I'm working on funding for API access, so hopefully that part
won't be a problem.

There's definitely overlap between the bugs I'm finding and the bugs Dan
Carpenter finds, so I'm hoping he and I can team up as well.

In terms of actual review details, the reviews have two parts:

1) The review prompts.  These are standalone and work on any kernel
tree.  This is what BPF CI is currently using:

https://github.com/masoncl/review-prompts/

These prompts can also debug oopsen or syzbot reports (with varying
success).

2) A code indexing tool with an MCP server that Claude can use to find
functions, types, and call chains more effectively.  This makes it more
likely that Claude can trace complex relationships in the code:

https://github.com/facebookexperimental/semcode

Asking Claude to produce a call graph for btrfs_search_slot() consumes
~444K tokens.  With semcode installed, the same query produces better
results and uses 25K tokens.  (btrfs_search_slot() has a huge call chain.)

I don't think BPF CI is using this yet, but we'll move to it and compare
the review results if not.

The reviews are meant to look like emails on lkml, and even when wildly
wrong they definitely succeed there.  I've attached the results of a run
against 600 random commits in linux-next, and the last 400 commits of
net-next (on Oct 2nd).

There are both real bugs and false positives in there, so it gives a
good idea of the mix of right and wrong that is common in these reviews.

-chris

[-- Attachment #2: net-next.txt --]
[-- Type: text/plain, Size: 56893 bytes --]

commit 2ee5c8c0c28e0e3ccfdb842a7b3bd2f98ee7eaf7
Author: Mohsin Bashir <mohsin.bashr@gmail.com>

eth: fbnic: Move hw_stats_lock out of fbnic_dev

Move hw_stats_lock out of fbnic_dev to a more appropriate struct
fbnic_hw_stats since the only use of this lock is to protect access to
the hardware stats. While at it, enclose the lock and stats
initialization in a single init call.

Link: https://patch.msgid.link/20250825200206.2357713-2-kuba@kernel.org

> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic.h b/drivers/net/ethernet/meta/fbnic/fbnic.h
> index c376e06880c9..311c7dda911a 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic.h
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic.h
> @@ -84,9 +84,6 @@ struct fbnic_dev {
> 	/* Local copy of hardware statistics */
> 	struct fbnic_hw_stats hw_stats;
>
> -	/* Lock protecting access to hw_stats */
> -	spinlock_t hw_stats_lock;
> -
> 	struct fbnic_fw_log fw_log;
> };

[ ... ]

> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h b/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> index 4fe239717497..2fc25074a5e6 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_hw_stats.h
> @@ -5,6 +5,7 @@
> #define _FBNIC_HW_STATS_H_
>
> #include <linux/ethtool.h>
> +#include <linux/spinlock.h>
>
> #include "fbnic_csr.h"
>
> @@ -122,11 +123,15 @@ struct fbnic_hw_stats {
> 	struct fbnic_rxb_stats rxb;
> 	struct fbnic_hw_q_stats hw_q[FBNIC_MAX_QUEUES];
> 	struct fbnic_pcie_stats pcie;
> +
> +	/* Lock protecting the access to hw stats */
> +	spinlock_t lock;
> };
>
> u64 fbnic_stat_rd64(struct fbnic_dev *fbd, u32 reg, u32 offset);
>
> void fbnic_reset_hw_stats(struct fbnic_dev *fbd);
> +void fbnic_init_hw_stats(struct fbnic_dev *fbd);
> void fbnic_get_hw_q_stats(struct fbnic_dev *fbd,
> 			  struct fbnic_hw_q_stats *hw_q);
> void fbnic_get_hw_stats32(struct fbnic_dev *fbd);

[ ... ]

> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> index b70e4cadb37b..8190f49e1426 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
> @@ -304,10 +304,9 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>
> 	fbnic_devlink_register(fbd);
> 	fbnic_dbg_fbd_init(fbd);
> -	spin_lock_init(&fbd->hw_stats_lock);
>
> 	/* Capture snapshot of hardware stats so netdev can calculate delta */
> -	fbnic_reset_hw_stats(fbd);
> +	fbnic_init_hw_stats(fbd);
>
> 	fbnic_hwmon_register(fbd);

Can this create a race window during device initialization? The
fbnic_dbg_fbd_init() call creates debugfs entries including pcie_stats,
which allows immediate userspace access to fbnic_dbg_pcie_stats_show().
This function calls fbnic_get_hw_stats()->spin_lock(&fbd->hw_stats.lock),
but the lock initialization has been moved to fbnic_init_hw_stats() which
is called after fbnic_dbg_fbd_init().

If userspace accesses the debugfs pcie_stats file between debugfs creation
and lock initialization, would this cause undefined behavior with an
uninitialized spinlock?

commit 330355191a2d9a59137455b774b9a66dd6d068d4
Author: Andre Przywara <andre.przywara@arm.com>

net: stmmac: sun8i: drop unneeded default syscon value

This patch removes the default_syscon_value field from emac_variant
structures and changes the syscon register configuration to start from
zero instead of using read-modify-write operations. The cleanup function
is also modified to only reset registers for internal PHY variants.

Link: https://patch.msgid.link/20250825172055.19794-1-andre.przywara@arm.com

> diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
> index 2796dc426943..690f3650f84e 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c

[ ... ]

> @@ -1039,9 +1000,9 @@ static int sun8i_dwmac_set_syscon(struct device *dev,
>
>  static void sun8i_dwmac_unset_syscon(struct sunxi_priv_data *gmac)
>  {
> -	u32 reg = gmac->variant->default_syscon_value;
> -
> -	regmap_field_write(gmac->regmap_field, reg);
> +	if (gmac->variant->soc_has_internal_phy)
> +		regmap_field_write(gmac->regmap_field,
> +				   (H3_EPHY_SHUTDOWN | H3_EPHY_SELECT));
>  }

Can this leave the syscon register in an uncleaned state for variants
without internal PHY? The sun8i_dwmac_remove()->sun8i_dwmac_unset_syscon()
path will now skip register cleanup entirely for A83T, A64, and R40
variants where soc_has_internal_phy is false. The previous code would
restore the default syscon value for all variants during device removal.

commit 3061d214eead8a6fb652bf69135525f394a40e52
Author: Przemek Kitszel <przemyslaw.kitszel@intel.com>

ice: extract virt/queues.c: cleanup - p2

This patch removes ice_vc_cfg_promiscuous_mode_msg and ice_vc_get_stats_msg
functions from virt/queues.c as part of a cleanup series to move content
to virtchnl.c.

> diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
> index 7765ac50a6cd..c1da10aa2151 100644
> --- a/drivers/net/ethernet/intel/ice/virt/queues.c
> +++ b/drivers/net/ethernet/intel/ice/virt/queues.c
> @@ -161,187 +161,6 @@ static int ice_vf_cfg_q_quanta_profile(struct ice_vf *vf, u16 quanta_size,
>  	return 0;
>  }
>
> -/**
> - * ice_vc_cfg_promiscuous_mode_msg
> - * @vf: pointer to the VF info
> - * @msg: pointer to the msg buffer
> - *
> - * called from the VF to configure VF VSIs promiscuous mode
> - */
> -static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg)
> -{

Can this cause compilation errors? The ice_vc_cfg_promiscuous_mode_msg
function is being removed here, but it's still referenced in the
ice_virtchnl_dflt_ops structure later in this same file. The function
exists as static in virtchnl.c, which means it's not externally visible
for linking.

[ ... ]

> -/**
> - * ice_vc_get_stats_msg
> - * @vf: pointer to the VF info
> - * @msg: pointer to the msg buffer
> - *
> - * called from the VF to get VSI stats
> - */
> -static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)
> -{

Similarly, ice_vc_get_stats_msg is being removed but is still referenced
in both ice_virtchnl_dflt_ops and ice_virtchnl_repr_ops structures in
this file. Since the functions in virtchnl.c are declared static, won't
this result in undefined symbol errors during linking?

The function table assignments that still reference these removed functions:

static const struct ice_virtchnl_ops ice_virtchnl_dflt_ops = {
	...
	.get_stats_msg = ice_vc_get_stats_msg,
	.cfg_promiscuous_mode_msg = ice_vc_cfg_promiscuous_mode_msg,
	...
};

static const struct ice_virtchnl_ops ice_virtchnl_repr_ops = {
	...
	.get_stats_msg = ice_vc_get_stats_msg,
	.cfg_promiscuous_mode_msg = ice_vc_repr_cfg_promiscuous_mode,
	...
};

commit 1abe21ef1adf0c5b6dbb5878c2fa4573df8d29fc
Author: Christian Marangi <ansuelsmth@gmail.com>

net: phy: introduce phy_id_compare_vendor() PHY ID helper

This patch introduces a new phy_id_compare_vendor() helper function for
comparing PHY IDs with vendor masks. It also refactors the existing
PHY_ID_MATCH macros to use dedicated mask constants.

Link: https://patch.msgid.link/20250823134431.4854-1-ansuelsmth@gmail.com

> diff --git a/include/linux/phy.h b/include/linux/phy.h
> index 4c2b8b6e7187..b67079796402 100644
> --- a/include/linux/phy.h
> +++ b/include/linux/phy.h
> @@ -1268,9 +1268,13 @@ struct phy_driver {
>  #define to_phy_driver(d) container_of_const(to_mdio_common_driver(d),		\
>  				      struct phy_driver, mdiodrv)
>
> -#define PHY_ID_MATCH_EXACT(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 0)
> -#define PHY_ID_MATCH_MODEL(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 4)
> -#define PHY_ID_MATCH_VENDOR(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 10)
> +#define PHY_ID_MATCH_EXTACT_MASK GENMASK(31, 0)
                            ^^^^^^

Is there a typo in PHY_ID_MATCH_EXTACT_MASK? Should this be
PHY_ID_MATCH_EXACT_MASK instead?

> +#define PHY_ID_MATCH_MODEL_MASK GENMASK(31, 4)
> +#define PHY_ID_MATCH_VENDOR_MASK GENMASK(31, 10)
> +
> +#define PHY_ID_MATCH_EXACT(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_EXTACT_MASK

The PHY_ID_MATCH_EXACT macro references the misspelled constant name.
This will cause compilation failures when PHY drivers try to use this
macro.

> +#define PHY_ID_MATCH_MODEL(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_MODEL_MASK
> +#define PHY_ID_MATCH_VENDOR(id) .phy_id = (id), .phy_id_mask = PHY_ID_MATCH_VENDOR_MASK
>
>  /**
>   * phy_id_compare - compare @id1 with @id2 taking account of @mask
> @@ -1286,6 +1290,19 @@ static inline bool phy_id_compare(u32 id1, u32 id2, u32 mask)
>  	return !((id1 ^ id2) & mask);
>  }
>
> +/**
> + * phy_id_compare_vendor - compare @id with @vendor mask
> + * @id: PHY ID
> + * @vendor_mask: PHY Vendor mask
> + *
> + * Return: true if the bits from @id match @vendor using the
> + *	   generic PHY Vendor mask.
> + */
> +static inline bool phy_id_compare_vendor(u32 id, u32 vendor_mask)
> +{
> +	return phy_id_compare(id, vendor_mask, PHY_ID_MATCH_VENDOR_MASK);
> +}
> +
>  /**
>   * phydev_id_compare - compare @id with the PHY's Clause 22 ID
>   * @phydev: the PHY device

commit 4367000c0e33e0bf24e2b872ccb49527a20f87e5
Author: Po-Hao Huang <phhuang@realtek.com>

wifi: rtw89: 8852a: report per-channel noise level by get_survey ops

This patch adds per-channel noise level reporting for the RTW89 8852a
WiFi driver to optimize roaming decisions. It implements NHM (Noise
Histogram Measurement) functionality with memory allocation for
per-channel noise history tracking.

Link: https://patch.msgid.link/20250819034428.26307-2-pkshih@realtek.com

> diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
> index 0ad7562632a5..2b658ee89bb6 100644
> --- a/drivers/net/wireless/realtek/rtw89/core.c
> +++ b/drivers/net/wireless/realtek/rtw89/core.c

[ ... ]

> diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
> index 06598723074e..c3181a301f7c 100644
> --- a/drivers/net/wireless/realtek/rtw89/phy.c
> +++ b/drivers/net/wireless/realtek/rtw89/phy.c

[ ... ]

> @@ -6934,6 +7190,43 @@ void rtw89_phy_dm_reinit(struct rtw89_dev *rtwdev)
>  	rtw89_physts_parsing_init(rtwdev);
>  }
>
> +static void __rtw89_phy_dm_init_data(struct rtw89_dev *rtwdev, struct rtw89_bb_ctx *bb)
> +{
> +	struct rtw89_env_monitor_info *env = &bb->env_monitor;
> +	const struct rtw89_chip_info *chip = rtwdev->chip;
> +	struct ieee80211_supported_band *sband;
> +	enum rtw89_band hw_band;
> +	enum nl80211_band band;
> +	u8 idx;
> +
> +	if (!chip->support_noise)
> +		return;
> +
> +	for (band = 0; band < NUM_NL80211_BANDS; band++) {
> +		sband = rtwdev->hw->wiphy->bands[band];
> +		if (!sband)
> +			continue;
> +
> +		hw_band = rtw89_nl80211_to_hw_band(band);
> +		env->nhm_his[hw_band] =
> +			devm_kcalloc(rtwdev->dev, sband->n_channels,
> +				     sizeof(*env->nhm_his[0]), GFP_KERNEL);

Can this cause a NULL pointer dereference? The devm_kcalloc() call can
fail and return NULL, but there's no error checking here. Later in
__rtw89_phy_nhm_get_result()->rtw89_ops_get_survey(), the code accesses
env->nhm_his[hw_band][chan_idx] which would dereference NULL if the
allocation failed.

> +
> +		for (idx = 0; idx < sband->n_channels; idx++)
> +			INIT_LIST_HEAD(&env->nhm_his[hw_band][idx].list);
> +
> +		INIT_LIST_HEAD(&env->nhm_rpt_list);
> +	}
> +}

[ ... ]

commit 3d1b3f4ffc0aca15a0ebce0c71163b42a87efff2
Author: Daniel Golle <daniel@makrotopia.org>

net: phy: mxl-86110: add basic support for MxL86111 PHY

Add basic support for the MxL86111 PHY which in addition to the features
of the MxL86110 also comes with an SGMII interface. The patch refactors
the existing MxL86110 code and adds new functions for MxL86111 support.

Link: https://patch.msgid.link/707fd83ec0e11ea620d37f2125a394e9dd1b27fa.1755884175.git.daniel@makrotopia.org

> diff --git a/drivers/net/phy/mxl-86110.c b/drivers/net/phy/mxl-86110.c
> index ba25d5b01780..e5d137a37a1d 100644
> --- a/drivers/net/phy/mxl-86110.c
> +++ b/drivers/net/phy/mxl-86110.c

[ ... ]

> @@ -585,22 +652,15 @@ static int mxl86110_enable_led_activity_blink(struct phy_device *phydev)
>  }
>
>  /**
> - * mxl86110_config_init() - initialize the PHY
> + * mxl86110_config_rgmii_delay() - configure RGMII delays
>   * @phydev: pointer to the phy_device
>   *
>   * Return: 0 or negative errno code
>   */
> -static int mxl86110_config_init(struct phy_device *phydev)
> +static int mxl86110_config_rgmii_delay(struct phy_device *phydev)
>  {
> -	u16 val = 0;
>  	int ret;
> -
> -	phy_lock_mdio_bus(phydev);
> -
> -	/* configure syncE / clk output */
> -	ret = mxl86110_synce_clk_cfg(phydev);
> -	if (ret < 0)
> -		goto out;
> +	u16 val;
>
>  	switch (phydev->interface) {
>  	case PHY_INTERFACE_MODE_RGMII:
> @@ -642,17 +702,237 @@ static int mxl86110_config_init(struct phy_device *phydev)
>  	if (ret < 0)
>  		goto out;
>
> +out:
> +	return ret;
> +}
> +
> +/**
> + * mxl86110_config_init() - initialize the MXL86110 PHY
> + * @phydev: pointer to the phy_device
> + *
> + * Return: 0 or negative errno code
> + */
> +static int mxl86110_config_init(struct phy_device *phydev)
> +{
> +	int ret;
> +
> +	phy_lock_mdio_bus(phydev);
> +
> +	/* configure syncE / clk output */
> +	ret = mxl86110_synce_clk_cfg(phydev);
> +	if (ret < 0)
> +		goto out;
> +
> +	ret = mxl86110_config_rgmii_delay(phydev);
> +	if (ret < 0)
> +		goto out;
> +
> +	ret = mxl86110_enable_led_activity_blink(phydev);
> +	if (ret < 0)
> +		goto out;
> +
> +	ret = mxl86110_broadcast_cfg(phydev);
> +
> +out:
> +	phy_unlock_mdio_bus(phydev);
> +	return ret;
> +}

[ ... ]

> +/**
> + * mxl86111_config_init() - initialize the MXL86111 PHY
> + * @phydev: pointer to the phy_device
> + *
> + * Return: 0 or negative errno code
> + */
> +static int mxl86111_config_init(struct phy_device *phydev)
> +{
> +	int ret;
> +
> +	phy_lock_mdio_bus(phydev);
> +
> +	/* configure syncE / clk output */
> +	ret = mxl86110_synce_clk_cfg(phydev);
> +	if (ret < 0)
> +		goto out;
> +
> +	switch (phydev->interface) {
> +	case PHY_INTERFACE_MODE_100BASEX:
> +		ret = __mxl86110_modify_extended_reg(phydev,
> +						     MXL86111_EXT_MISC_CONFIG_REG,
> +						     MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL,
> +						     MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL_100BX);
> +		if (ret < 0)
> +			goto out;
> +		break;
> +	case PHY_INTERFACE_MODE_1000BASEX:
> +	case PHY_INTERFACE_MODE_SGMII:
> +		ret = __mxl86110_modify_extended_reg(phydev,
> +						     MXL86111_EXT_MISC_CONFIG_REG,
> +						     MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL,
> +						     MXL86111_EXT_MISC_CONFIG_FIB_SPEED_SEL_1000BX);
> +		if (ret < 0)
> +			goto out;
> +		break;
> +	default:
> +		/* RGMII modes */
> +		ret = mxl86110_config_rgmii_delay(phydev);
> +		if (ret < 0)
> +			goto out;
> +		ret = __mxl86110_modify_extended_reg(phydev, MXL86110_EXT_RGMII_CFG1_REG,
> +						     MXL86110_EXT_RGMII_CFG1_FULL_MASK, ret);
                                                                                         ^^^

Can this write the wrong delay configuration to the RGMII register? The
mxl86110_config_rgmii_delay() function returns 0 on success, but here
that return value (ret=0) is being used as the register value instead of
the calculated RGMII delay settings.

Looking at mxl86110_config_rgmii_delay(), it calculates proper delay
values like MXL86110_EXT_RGMII_CFG1_RX_DELAY_1950PS based on the
interface mode, but those values are consumed internally and the function
returns only an error code. Using ret=0 as the register value would
disable all RGMII delays regardless of the interface requirements.

> +
> +		/* PL P1 requires optimized RGMII timing for 1.8V RGMII voltage
> +		 */
> +		ret = __mxl86110_read_extended_reg(phydev, 0xf);
> +		if (ret < 0)
> +			goto out;
> +
> +		if (ret == MXL86111_PL_P1) {
> +			ret = __mxl86110_read_extended_reg(phydev, MXL86110_EXT_CHIP_CFG_REG);
> +			if (ret < 0)
> +				goto out;
> +
> +			/* check if LDO is in 1.8V mode */
> +			switch (FIELD_GET(MXL86111_EXT_CHIP_CFG_CLDO_MASK, ret)) {
> +			case MXL86111_EXT_CHIP_CFG_CLDO_1V8_3:
> +			case MXL86111_EXT_CHIP_CFG_CLDO_1V8_2:
> +				ret = __mxl86110_write_extended_reg(phydev, 0xa010, 0xabff);
> +				if (ret < 0)
> +					goto out;
> +				break;
> +			default:
> +				break;
> +			}
> +		}
> +		break;
> +	}
> +
> +	ret = mxl86110_enable_led_activity_blink(phydev);
> +	if (ret < 0)
> +		goto out;
> +
> +	ret = mxl86110_broadcast_cfg(phydev);
> +out:
> +	phy_unlock_mdio_bus(phydev);
> +
> +	return ret;
> +}

[ ... ]

commit db03a1ced61c4b9297996c67cc4b8ade9fdb7cd1
Author: Fan Gong <gongfan1@huawei.com>

hinic3: Command Queue framework

Add Command Queue framework initialization. It is used to set the related
table items of the driver and obtain the HW configuration.

Link: https://patch.msgid.link/1aeed56de39078bde8fff4597d7aa22d350058fc.1755673097.git.zhuyikai1@h-partners.com

> diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
> index 5fb4d1370049..2a0ed8e2c63e 100644
> --- a/drivers/net/ethernet/huawei/hinic3/Makefile
> +++ b/drivers/net/ethernet/huawei/hinic3/Makefile
> @@ -3,7 +3,8 @@
>
>  obj-$(CONFIG_HINIC3) += hinic3.o
>
> -hinic3-objs := hinic3_common.o \
> +hinic3-objs := hinic3_cmdq.o \
> +	       hinic3_common.o \
>  	       hinic3_eqs.o \
>  	       hinic3_hw_cfg.o \
>  	       hinic3_hw_comm.o \

[ ... ]

> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c
> new file mode 100644
> index 000000000000..f1e61a212f2a
> --- /dev/null
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_cmdq.c

[ ... ]

> +int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
> +{
> +	struct hinic3_cmdqs *cmdqs;
> +	void __iomem *db_base;
> +	u8 cmdq_type;
> +	int err;
> +
> +	err = init_cmdqs(hwdev);
> +	if (err)
> +		goto err_out;
> +
> +	cmdqs = hwdev->cmdqs;
> +	err = create_cmdq_wq(hwdev, cmdqs);
> +	if (err)
> +		goto err_free_cmdqs;
> +
> +	err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
> +	if (err) {
> +		dev_err(hwdev->dev, "Failed to allocate doorbell address\n");
> +		goto err_destroy_cmdq_wq;
> +	}
> +	cmdqs->cmdqs_db_base = db_base;
> +
> +	for (cmdq_type = 0; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
> +		err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, cmdq_type);
> +		if (err) {
> +			dev_err(hwdev->dev,
> +				"Failed to initialize cmdq type : %d\n",
> +				cmdq_type);
> +			goto err_free_cmd_infos;
> +		}
> +	}
> +
> +	err = hinic3_set_cmdq_ctxts(hwdev);
> +	if (err)
> +		goto err_free_cmd_infos;
> +
> +	return 0;
> +
> +err_free_cmd_infos:
> +	while (cmdq_type > 0) {
> +		cmdq_type--;
> +		kfree(cmdqs->cmdq[cmdq_type].cmd_infos);
> +	}

Can hinic3_set_cmdq_ctxts() failure leak cmd_infos? When hinic3_set_cmdq_ctxts()
fails, all cmdq_type indices from 0 to cmdq_num-1 have been successfully
initialized by init_cmdq(), but the cleanup loop only frees indices 0 to
cmdq_type-1, missing the current cmdq_type.

> +
> +	hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base);
> +
> +err_destroy_cmdq_wq:
> +	destroy_cmdq_wq(hwdev, cmdqs);
> +
> +err_free_cmdqs:
> +	dma_pool_destroy(cmdqs->cmd_buf_pool);
> +	kfree(cmdqs);
> +
> +err_out:
> +	return err;
> +}

[ ... ]

commit c4bbfd9b0d3241ab4a0b6b0095659037ad136d46
Author: Fan Gong <gongfan1@huawei.com>

hinic3: Complete Event Queue interfaces

Add complete event queue interfaces initialization. It informs that
driver should handle the messages from HW.

Link: https://patch.msgid.link/837837f13b96c7155644428a329d5d47b7242153.1755673097.git.zhuyikai1@h-partners.com

> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h b/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h
> index 39e15fbf0ed7..e7417e8efa99 100644
> --- a/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_csr.h

[ ... ]

> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> index 15b1345f7e48..01686472985b 100644
> --- a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c

[ ... ]

> @@ -495,3 +713,64 @@ void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
>
>  	kfree(aeqs);
>  }
> +
> +int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
> +		     struct msix_entry *msix_entries)
> +{
> +	struct hinic3_ceqs *ceqs;
> +	u16 q_id;
> +	int err;
> +
> +	ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
> +	if (!ceqs)
> +		return -ENOMEM;
> +
> +	hwdev->ceqs = ceqs;
> +	ceqs->hwdev = hwdev;
> +	ceqs->num_ceqs = num_ceqs;
> +
> +	for (q_id = 0; q_id < num_ceqs; q_id++) {
> +		err = init_eq(&ceqs->ceq[q_id], hwdev, q_id,
> +			      HINIC3_DEFAULT_CEQ_LEN, HINIC3_CEQ,
> +			      &msix_entries[q_id]);
> +		if (err) {
> +			dev_err(hwdev->dev, "Failed to init ceq %u\n",
> +				q_id);
> +			goto err_free_ceqs;
> +		}
> +	}
> +	for (q_id = 0; q_id < num_ceqs; q_id++)
> +		hinic3_set_msix_state(hwdev, ceqs->ceq[q_id].msix_entry_idx,
> +				      HINIC3_MSIX_ENABLE);
> +
> +	return 0;
> +
> +err_free_ceqs:
> +	while (q_id > 0) {
> +		q_id--;
> +		remove_eq(&ceqs->ceq[q_id]);
> +	}
> +
> +	kfree(ceqs);
> +
> +	return err;
> +}
> +
> +void hinic3_ceqs_free(struct hinic3_hwdev *hwdev)
> +{
> +	struct hinic3_ceqs *ceqs = hwdev->ceqs;
> +	enum hinic3_ceq_event ceq_event;
> +	struct hinic3_eq *eq;
> +	u16 q_id;
> +
> +	for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
> +		eq = ceqs->ceq + q_id;
> +		remove_eq(eq);
> +		hinic3_free_irq(hwdev, eq->irq_id);
> +	}

Can this double-free the IRQ resource? The remove_eq() function calls
free_irq() on eq->irq_id, and then hinic3_free_irq() is called on the
same IRQ ID. This would attempt to free the same IRQ twice during CEQ
cleanup.

> +
> +	for (ceq_event = 0; ceq_event < HINIC3_MAX_CEQ_EVENTS; ceq_event++)
> +		hinic3_ceq_unregister_cb(hwdev, ceq_event);
> +
> +	kfree(ceqs);
> +}

[ ... ]

commit a4511307be8659c482e792feefe671e891cff59d
Author: Fan Gong <gongfan1@huawei.com>

hinic3: Async Event Queue interfaces

Add async event queue interfaces initialization. It allows driver to
handle async events reported by HW.

Link: https://patch.msgid.link/553ebd562b61cd854a2beb25c3d4d98ad3073db0.1755673097.git.zhuyikai1@h-partners.com

> diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
> index 509dfbfb0e96..5fb4d1370049 100644
> --- a/drivers/net/ethernet/huawei/hinic3/Makefile
> +++ b/drivers/net/ethernet/huawei/hinic3/Makefile
> @@ -4,6 +4,7 @@
>  obj-$(CONFIG_HINIC3) += hinic3.o
>
>  hinic3-objs := hinic3_common.o \
> +	       hinic3_eqs.o \
>  	       hinic3_hw_cfg.o \
>  	       hinic3_hw_comm.o \
>  	       hinic3_hwdev.o \

[ ... ]

> diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c
> new file mode 100644
> index 000000000000..15b1345f7e48
> --- /dev/null
> +++ b/drivers/net/ethernet/huawei/hinic3/hinic3_eqs.c

[ ... ]

> +static void remove_eq(struct hinic3_eq *eq)
> +{
> +	hinic3_set_msix_state(eq->hwdev, eq->msix_entry_idx,
> +			      HINIC3_MSIX_DISABLE);
> +	free_irq(eq->irq_id, eq);

[ ... ]

> +void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
> +{
> +	struct hinic3_aeqs *aeqs = hwdev->aeqs;
> +	enum hinic3_aeq_type aeq_event;
> +	struct hinic3_eq *eq;
> +	u16 q_id;
> +
> +	for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
> +		eq = aeqs->aeq + q_id;
> +		remove_eq(eq);
> +		hinic3_free_irq(hwdev, eq->irq_id);

Can this cause a double-free of the IRQ? The remove_eq() function calls
free_irq() on eq->irq_id, then hinic3_aeqs_free() immediately calls
hinic3_free_irq() on the same eq->irq_id. This could trigger a kernel
warning or crash when free_irq() is called twice on the same interrupt.

> +	}
> +
> +	for (aeq_event = 0; aeq_event < HINIC3_MAX_AEQ_EVENTS; aeq_event++)
> +		hinic3_aeq_unregister_cb(hwdev, aeq_event);
> +
> +	destroy_workqueue(aeqs->workq);
> +
> +	kfree(aeqs);
> +}

[ ... ]

commit bcb28bee987a1e161eaa5cc4cf2fb0e21306d4a7
Author: Ujwal Kundur <ujwal.kundur@gmail.com>

rds: Fix endianness annotations for RDS extension headers

Per the RDS 3.1 spec, RDS extension headers EXTHDR_NPATHS and
EXTHDR_GEN_NUM are be16 and be32 values respectively, exchanged during
normal operations over-the-wire (RDS Ping/Pong). This patch fixes the
annotations across occurrences.

Link: https://patch.msgid.link/20250820175550.498-5-ujwal.kundur@gmail.com

> diff --git a/net/rds/message.c b/net/rds/message.c
> index 7af59d2443e5..199a899a43e9 100644
> --- a/net/rds/message.c
> +++ b/net/rds/message.c
> @@ -44,8 +44,8 @@ static unsigned int	rds_exthdr_size[__RDS_EXTHDR_MAX] = {
>  [RDS_EXTHDR_VERSION]	= sizeof(struct rds_ext_header_version),
>  [RDS_EXTHDR_RDMA]	= sizeof(struct rds_ext_header_rdma),
>  [RDS_EXTHDR_RDMA_DEST]	= sizeof(struct rds_ext_header_rdma_dest),
> -[RDS_EXTHDR_NPATHS]	= sizeof(u16),
> -[RDS_EXTHDR_GEN_NUM]	= sizeof(u32),
> +[RDS_EXTHDR_NPATHS]	= sizeof(__be16),
> +[RDS_EXTHDR_GEN_NUM]	= sizeof(__be32),
>  };
>
>  void rds_message_addref(struct rds_message *rm)
> diff --git a/net/rds/recv.c b/net/rds/recv.c
> index 5627f80013f8..66205d6924bf 100644
> --- a/net/rds/recv.c
> +++ b/net/rds/recv.c
> @@ -202,8 +202,8 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
>  	unsigned int pos = 0, type, len;
>  	union {
>  		struct rds_ext_header_version version;
> -		u16 rds_npaths;
> -		u32 rds_gen_num;
> +		__be16 rds_npaths;
> +		__be32 rds_gen_num;
>  	} buffer;
>  	u32 new_peer_gen_num = 0;
>
> diff --git a/net/rds/send.c b/net/rds/send.c
> index 42d991bc8543..0b3d0ef2f008 100644
> --- a/net/rds/send.c
> +++ b/net/rds/send.c
> @@ -1454,8 +1454,8 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
>
>  	if (RDS_HS_PROBE(be16_to_cpu(sport), be16_to_cpu(dport)) &&
>  	    cp->cp_conn->c_trans->t_mp_capable) {
> -		u16 npaths = cpu_to_be16(RDS_MPATH_WORKERS);
> -		u32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
> +		__be16 npaths = cpu_to_be16(RDS_MPATH_WORKERS);
> +		__be32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
>
>  		rds_message_add_extension(&rm->m_inc.i_hdr,
>  					  RDS_EXTHDR_NPATHS, &npaths,
> @@ -1459,7 +1459,7 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
>  		rds_message_add_extension(&rm->m_inc.i_hdr,
>  					  RDS_EXTHDR_GEN_NUM,
>  					  &my_gen_num,
>  					  sizeof(u32));

Will this validation fail in rds_message_add_extension()? The
rds_exthdr_size array now expects sizeof(__be32) for RDS_EXTHDR_GEN_NUM,
but this call passes sizeof(u32). The validation check compares
len != rds_exthdr_size[type] and returns 0 on mismatch, which could
prevent the generation number extension from being added to probe messages.

commit 5a4cf42322a0260c7391a3e64d288861e43de673
Author: Shruti Parab <shruti.parab@broadcom.com>

bnxt_en: Add pcie_ctx_v2 support for ethtool -d

Add support to dump the expanded v2 struct that contains PCIE read/write
latency and credit histogram data.

Link: https://patch.msgid.link/20250819163919.104075-5-michael.chan@broadcom.com

> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> index abb895fb1a9c..2830a2b17a27 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
> @@ -2088,14 +2088,16 @@ __bnxt_hwrm_pcie_qstats(struct bnxt *bp, struct hwrm_pcie_qstats_input *req)
>  }
>
>  #define BNXT_PCIE_32B_ENTRY(start, end)			\
> -	 { offsetof(struct pcie_ctx_hw_stats, start),	\
> -	   offsetof(struct pcie_ctx_hw_stats, end) }
> +	 { offsetof(struct pcie_ctx_hw_stats_v2, start),\
> +	   offsetof(struct pcie_ctx_hw_stats_v2, end) }
>
>  static const struct {
>  	u16 start;
>  	u16 end;
>  } bnxt_pcie_32b_entries[] = {
>  	BNXT_PCIE_32B_ENTRY(pcie_ltssm_histogram[0], pcie_ltssm_histogram[3]),
> +	BNXT_PCIE_32B_ENTRY(pcie_tl_credit_nph_histogram[0], unused_1),
> +	BNXT_PCIE_32B_ENTRY(pcie_rd_latency_histogram[0], unused_2),
>  };

The bnxt_pcie_32b_entries[] array now contains offsets from struct
pcie_ctx_hw_stats_v2, but bnxt_get_regs() still uses the old struct type.

>
>  static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
> @@ -2123,7 +2125,13 @@ static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
>  		int i, j, len;
>
>  		len = min(bp->pcie_stat_len, le16_to_cpu(resp->pcie_stat_size));
> -		regs->version = 1;
> +		if (len <= sizeof(struct pcie_ctx_hw_stats))
> +			regs->version = 1;
> +		else if (len < sizeof(struct pcie_ctx_hw_stats_v2))
> +			regs->version = 2;
> +		else
> +			regs->version = 3;
> +
>  		for (i = 0, j = 0; i < len; ) {
>  			if (i >= bnxt_pcie_32b_entries[j].start &&
>  			    i <= bnxt_pcie_32b_entries[j].end) {

Can this access memory beyond the allocated buffer? The function still
declares hw_pcie_stats as "struct pcie_ctx_hw_stats *" and allocates
sizeof(*hw_pcie_stats) bytes, but bnxt_pcie_32b_entries[] now contains
offsets from the larger pcie_ctx_hw_stats_v2 struct.

When the loop accesses bnxt_pcie_32b_entries[j].start for the new entries
(pcie_tl_credit_nph_histogram or pcie_rd_latency_histogram), those offsets
will be beyond the allocated buffer size since they exist only in the v2
struct.

commit e2afa83296bbac40829624b508492b562a21e4d4
Author: Kuniyuki Iwashima <kuniyu@google.com>

tcp: Simplify error path in inet_csk_accept().

The patch simplifies the error handling path in inet_csk_accept() by
removing redundant label jumping and NULL assignments. However, it also
removes NULL checks that may be needed in certain code paths.

Link: https://patch.msgid.link/20250815201712.1745332-4-kuniyu@google.com

> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 1e2df51427fe..724bd9ed6cd4 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -706,9 +706,9 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
>  		spin_unlock_bh(&queue->fastopenq.lock);
>  	}
>
> -out:
>  	release_sock(sk);
> -	if (newsk && mem_cgroup_sockets_enabled) {
> +
> +	if (mem_cgroup_sockets_enabled) {
>  		gfp_t gfp = GFP_KERNEL | __GFP_NOFAIL;
>  		int amt = 0;
>
> @@ -732,18 +732,17 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
>
>  		release_sock(newsk);
>  	}
> +
>  	if (req)
>  		reqsk_put(req);
>
> -	if (newsk)
> -		inet_init_csk_locks(newsk);
> -
> +	inet_init_csk_locks(newsk);
>  	return newsk;

Can inet_init_csk_locks() be called with a NULL newsk? Looking at the TCP
Fast Open path earlier in the function, there's this code:

	if (sk->sk_protocol == IPPROTO_TCP &&
	    tcp_rsk(req)->tfo_listener) {
		spin_lock_bh(&queue->fastopenq.lock);
		if (tcp_rsk(req)->tfo_listener) {
			req->sk = NULL;  // Sets to NULL
			req = NULL;
		}
		spin_unlock_bh(&queue->fastopenq.lock);
	}

Since newsk = req->sk is set earlier, this would make newsk NULL when the
Fast Open path sets req->sk = NULL. The original code checked "if (newsk)"
before calling inet_init_csk_locks(), but the patch removes this check.

> +
>  out_err:
> -	newsk = NULL;
> -	req = NULL;
> +	release_sock(sk);
>  	arg->err = error;
> -	goto out;
> +	return NULL;
>  }
>  EXPORT_SYMBOL(inet_csk_accept);

commit e97e6a1830ddb5885ba312e56b6fa3aa39b5f47e
Author: Stanislav Fomichev <sdf@fomichev.me>

net: Switch to skb_dstref_steal/skb_dstref_restore for ip_route_input callers

Going forward skb_dst_set will assert that skb dst_entry is empty during
skb_dst_set. This patch converts icmp_route_lookup and ip_options_rcv_srr
to use new helper functions that properly handle dst reference stealing
and restoration.

Link: https://patch.msgid.link/20250818154032.3173645-5-sdf@fomichev.me

> diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
> index 2ffe73ea644f..91765057aa1d 100644
> --- a/net/ipv4/icmp.c
> +++ b/net/ipv4/icmp.c
> @@ -544,14 +544,15 @@ static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4,
>  			goto relookup_failed;
>  		}
>  		/* Ugh! */
> -		orefdst = skb_in->_skb_refdst; /* save old refdst */
> -		skb_dst_set(skb_in, NULL);
> +		orefdst = skb_dstref_steal(skb_in);
>  		err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr,
>  				     dscp, rt2->dst.dev) ? -EINVAL : 0;
>
>  		dst_release(&rt2->dst);
>  		rt2 = skb_rtable(skb_in);
> -		skb_in->_skb_refdst = orefdst; /* restore old refdst */
> +		/* steal dst entry from skb_in, don't drop refcnt */
> +		skb_dstref_steal(skb_in);
> +		skb_dstref_restore(skb_in, orefdst);

Can this leak the dst entry from ip_route_input()? The sequence appears to:
1. Save original dst via skb_dstref_steal(skb_in) -> orefdst
2. Call ip_route_input() which sets a new dst in skb_in
3. Call skb_dstref_steal(skb_in) again, stealing the new dst but not
   storing or releasing it
4. Restore the original dst via skb_dstref_restore(skb_in, orefdst)

The dst entry created by ip_route_input() seems to be abandoned without
proper cleanup, while the original dst is restored.

>  	}
>
>  	if (err)
> diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
> index e3321932bec0..be8815ce3ac2 100644
> --- a/net/ipv4/ip_options.c
> +++ b/net/ipv4/ip_options.c
> @@ -615,14 +615,13 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
>  		}
>  		memcpy(&nexthop, &optptr[srrptr-1], 4);
>
> -		orefdst = skb->_skb_refdst;
> -		skb_dst_set(skb, NULL);
> +		orefdst = skb_dstref_steal(skb);
>  		err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph),
>  				     dev) ? -EINVAL : 0;
>  		rt2 = skb_rtable(skb);
>  		if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
>  			skb_dst_drop(skb);
> -			skb->_skb_refdst = orefdst;
> +			skb_dstref_restore(skb, orefdst);
>  			return -EINVAL;
>  		}
>  		refdst_drop(orefdst);

commit da114122b83149d1f1db0586b1d67947b651aa20
Author: Chaoyi Chen <chaoyi.chen@rock-chips.com>

net: ethernet: stmmac: dwmac-rk: Make the clk_phy could be used for external phy

This patch modifies rk_gmac_clk_init() to allow clk_phy to be used by
external PHYs. Previously, clk_phy was only obtained for integrated PHYs,
but now it can be obtained for any PHY with a phy_node.

Link: https://patch.msgid.link/20250815023515.114-1-kernel@airkyi.com

> diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> index 7c898768d544..9fc41207cc45 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> @@ -1412,12 +1412,15 @@ static int rk_gmac_clk_init(struct plat_stmmacenet_data *plat)
>  		clk_set_rate(plat->stmmac_clk, 50000000);
>  	}
>
> -	if (plat->phy_node && bsp_priv->integrated_phy) {
> +	if (plat->phy_node) {
>  		bsp_priv->clk_phy = of_clk_get(plat->phy_node, 0);
>  		ret = PTR_ERR_OR_ZERO(bsp_priv->clk_phy);
> -		if (ret)
> -			return dev_err_probe(dev, ret, "Cannot get PHY clock\n");
> -		clk_set_rate(bsp_priv->clk_phy, 50000000);
> +		/* If it is not integrated_phy, clk_phy is optional */
> +		if (bsp_priv->integrated_phy) {
> +			if (ret)
> +				return dev_err_probe(dev, ret, "Cannot get PHY clock\n");
> +			clk_set_rate(bsp_priv->clk_phy, 50000000);
> +		}
>  	}
>
>  	return 0;

Can this leak the clock reference for external PHYs? With this change,
of_clk_get() is called for both integrated and external PHYs, but the
cleanup path in rk_gmac_remove() only calls clk_put() when integrated_phy
is true:

	if (priv->plat->phy_node && bsp_priv->integrated_phy)
		clk_put(bsp_priv->clk_phy);

If an external PHY successfully gets a clock reference via of_clk_get(),
that reference will never be released with clk_put().

commit 9064ab485f04df40e7f0838245849e2e4c5159d9
Author: Mohsin Bashir <mohsin.bashr@gmail.com>

eth: fbnic: Prefetch packet headers on Rx

Issue a prefetch for the start of the buffer on Rx to try to avoid cache
miss on packet headers.

Link: https://patch.msgid.link/20250813221319.3367670-5-mohsin.bashr@gmail.com

> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> index 2adbe175ac09..65d1e40addec 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> @@ -888,7 +888,7 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
>
>  	/* Build frame around buffer */
>  	hdr_start = page_address(page) + hdr_pg_start;
> -
> +	net_prefetch(pkt->buff.data);

Can this access uninitialized memory? Looking at the function sequence in
fbnic_pkt_prepare():

1. xdp_init_buff(&pkt->buff, frame_sz, NULL) only sets frame_sz, rxq,
   and flags fields
2. net_prefetch(pkt->buff.data) accesses the data field before it's set
3. xdp_prepare_buff() initializes pkt->buff.data = hard_start + headroom

The data field appears to be accessed before xdp_prepare_buff() initializes
it to point to the actual packet buffer.

>  	xdp_prepare_buff(&pkt->buff, hdr_start, headroom,
>  			 len - FBNIC_RX_PAD, true);

commit 61f9a066c3099264f40737d134c7921567f85072
Author: Mohsin Bashir <mohsin.bashr@gmail.com>

eth: fbnic: Use shinfo to track frags state on Rx

This patch removes local fields that track fragment state and instead
stores this information directly in the shinfo struct by using standard
XDP buffer management functions.

Link: https://patch.msgid.link/20250813221319.3367670-4-mohsin.bashr@gmail.com

> diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> index 7c69f6381d9e..2adbe175ac09 100644
> --- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> +++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
> @@ -892,9 +892,8 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
>  	xdp_prepare_buff(&pkt->buff, hdr_start, headroom,
>  			 len - FBNIC_RX_PAD, true);
>
> -	pkt->data_truesize = 0;
> -	pkt->data_len = 0;
> -	pkt->nr_frags = 0;
> +	pkt->hwtstamp = 0;
> +	pkt->add_frag_failed = false;
>  }
>
>  static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
> @@ -905,8 +904,8 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
>  	unsigned int pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
>  	unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
>  	struct page *page = fbnic_page_pool_get(&qt->sub1, pg_idx);
> -	struct skb_shared_info *shinfo;
>  	unsigned int truesize;
> +	bool added;
>
>  	truesize = FIELD_GET(FBNIC_RCD_AL_PAGE_FIN, rcd) ?
>  		   FBNIC_BD_FRAG_SIZE - pg_off : ALIGN(len, 128);
> @@ -918,34 +917,34 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
>  	dma_sync_single_range_for_cpu(nv->dev, page_pool_get_dma_addr(page),
>  				      pg_off, truesize, DMA_BIDIRECTIONAL);
>
> -	/* Add page to xdp shared info */
> -	shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> -
> -	/* We use gso_segs to store truesize */
> -	pkt->data_truesize += truesize;
> -
> -	__skb_fill_page_desc_noacc(shinfo, pkt->nr_frags++, page, pg_off, len);
> -
> -	/* Store data_len in gso_size */
> -	pkt->data_len += len;
> +	added = xdp_buff_add_frag(&pkt->buff, page_to_netmem(page), pg_off, len,
> +				  truesize);
> +	if (unlikely(!added)) {
> +		pkt->add_frag_failed = true;
> +		netdev_err_once(nv->napi.dev,
> +				"Failed to add fragment to xdp_buff\n");
> +	}

Can this leak the page when xdp_buff_add_frag() fails? The page was
obtained from fbnic_page_pool_get() and synchronized via DMA, but if
xdp_buff_add_frag() returns false (when nr_frags reaches MAX_SKB_FRAGS),
the page is never returned to the pool. The old code using
__skb_fill_page_desc_noacc() never failed, but xdp_buff_add_frag() can
fail and appears to leave the allocated page unhandled.

>  }
>
>  static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
>  			       struct fbnic_pkt_buff *pkt, int budget)
>  {
> -	struct skb_shared_info *shinfo;
>  	struct page *page;
> -	int nr_frags;
>
>  	if (!pkt->buff.data_hard_start)
>  		return;
>
> -	shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> -	nr_frags = pkt->nr_frags;
> +	if (xdp_buff_has_frags(&pkt->buff)) {
> +		struct skb_shared_info *shinfo;
> +		int nr_frags;
>
> -	while (nr_frags--) {
> -		page = skb_frag_page(&shinfo->frags[nr_frags]);
> -		page_pool_put_full_page(nv->page_pool, page, !!budget);
> +		shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
> +		nr_frags = shinfo->nr_frags;
> +
> +		while (nr_frags--) {
> +			page = skb_frag_page(&shinfo->frags[nr_frags]);
> +			page_pool_put_full_page(nv->page_pool, page, !!budget);
> +		}
>  	}
>
>  	page = virt_to_page(pkt->buff.data_hard_start);

[ ... ]

> @@ -1094,7 +1062,9 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
>  			/* We currently ignore the action table index */
>  			break;
>  		case FBNIC_RCD_TYPE_META:
> -			if (likely(!fbnic_rcd_metadata_err(rcd)))
> +			if (unlikely(pkt->add_frag_failed))
> +				skb = NULL;
> +			else if (likely(!fbnic_rcd_metadata_err(rcd)))
>  				skb = fbnic_build_skb(nv, pkt);
>
>  			/* Populate skb and invalidate XDP */

[ ... ]

commit 520369ef43a8504f9d54ee219bb6c692d2e40028
Author: Daniel Jurgens <danielj@nvidia.com>

net/mlx5: Support disabling host PFs

Some devices support disabling the physical function on the host. When
this is configured the vports for the host functions do not exist. This
patch checks if host functions are enabled before trying to access their
vports.

Link: https://patch.msgid.link/1755112796-467444-3-git-send-email-tariqt@nvidia.com

> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
> index 31059fff30ec..3d533061311b 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c

[ ... ]

> @@ -1769,21 +1775,23 @@ static int mlx5_esw_vports_init(struct mlx5_eswitch *esw)
>
>  	xa_init(&esw->vports);
>
> -	err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_PF);
> -	if (err)
> -		goto err;
> -	if (esw->first_host_vport == MLX5_VPORT_PF)
> -		xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> -	idx++;
> -
> -	for (i = 0; i < mlx5_core_max_vfs(dev); i++) {
> -		err = mlx5_esw_vport_alloc(esw, idx, idx);
> +	if (mlx5_esw_host_functions_enabled(dev)) {
> +		err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_PF);
>  		if (err)
>  			goto err;
> -		xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_VF);
> -		xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> +		if (esw->first_host_vport == MLX5_VPORT_PF)
> +			xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
>  		idx++;
> +		for (i = 0; i < mlx5_core_max_vfs(dev); i++) {
> +			err = mlx5_esw_vport_alloc(esw, idx, idx);
> +			if (err)
> +				goto err;
> +			xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_VF);
> +			xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_HOST_FN);
> +			idx++;
> +		}
>  	}

Is mlx5_esw_host_functions_enabled() accessing initialized data here?

Looking at the call sequence in mlx5_eswitch_init():
- Line 1893: dev->priv.eswitch = esw (moved earlier by this patch)
- Line 1901: mlx5_esw_vports_init() calls mlx5_esw_host_functions_enabled()
- Line 1905: mlx5_esw_host_functions_enabled_query() initializes
  esw->esw_funcs.host_funcs_disabled

The mlx5_esw_host_functions_enabled() function accesses
dev->priv.eswitch->esw_funcs.host_funcs_disabled, but this field gets
initialized by mlx5_esw_host_functions_enabled_query() which runs after
mlx5_esw_vports_init().

> +
>  	base_sf_num = mlx5_sf_start_function_id(dev);
>  	for (i = 0; i < mlx5_sf_max_functions(dev); i++) {
>  		err = mlx5_esw_vport_alloc(esw, idx, base_sf_num + i);

[ ... ]

> @@ -1883,6 +1891,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
>  		goto free_esw;
>
>  	esw->dev = dev;
> +	dev->priv.eswitch = esw;
>  	esw->manager_vport = mlx5_eswitch_manager_vport(dev);
>  	esw->first_host_vport = mlx5_eswitch_first_host_vport_num(dev);
>
> @@ -1901,7 +1910,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
>  	if (err)
>  		goto abort;
>
> -	dev->priv.eswitch = esw;
>  	err = esw_offloads_init(esw);
>  	if (err)
>  		goto reps_err;

Moving the dev->priv.eswitch assignment earlier allows
mlx5_esw_vports_init()->mlx5_esw_host_functions_enabled() to access the
eswitch struct, but the esw_funcs.host_funcs_disabled field remains
uninitialized until mlx5_esw_host_functions_enabled_query() runs later
in the function.

[ ... ]

> @@ -2433,3 +2441,11 @@ void mlx5_eswitch_unblock_ipsec(struct mlx5_core_dev *dev)
>  	dev->num_ipsec_offloads--;
>  	mutex_unlock(&esw->state_lock);
>  }
> +
> +bool mlx5_esw_host_functions_enabled(const struct mlx5_core_dev *dev)
> +{
> +	if (!dev->priv.eswitch)
> +		return true;
> +
> +	return !dev->priv.eswitch->esw_funcs.host_funcs_disabled;
> +}

commit c6f68f69416d0950965e5744489382ccebdc72b4
Author: Dan Carpenter <dan.carpenter@linaro.org>

nfc: pn533: Delete an unnecessary check

This patch removes what the author believes is an unnecessary check for
rc < 0, since rc is set to PTR_ERR(resp), which is always negative when
IS_ERR(resp) is true.

Link: https://patch.msgid.link/aJwn2ox5g9WsD2Vx@stanley.mountain

> diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
> index 14661249c690..2b043a9f9533 100644
> --- a/drivers/nfc/pn533/pn533.c
> +++ b/drivers/nfc/pn533/pn533.c
> @@ -1412,11 +1412,9 @@ static int pn533_autopoll_complete(struct pn533 *dev, void *arg,
>  		if (dev->poll_mod_count != 0)
>  			return rc;
>  		goto stop_poll;
> -	} else if (rc < 0) {
> -		nfc_err(dev->dev,
> -			"Error %d when running autopoll\n", rc);
> -		goto stop_poll;
>  	}
> +	nfc_err(dev->dev, "Error %d when running autopoll\n", rc);
> +	goto stop_poll;
>  }

Can this break the success path in pn533_autopoll_complete()? When
IS_ERR(resp) is false, the function continues to the success path, but
now the error logging and goto stop_poll execute unconditionally. The
variable rc would be uninitialized in this case, leading to logging of
garbage values and incorrect error handling for successful operations.

>
>  nbtg = resp->data[0];
> @@ -1505,11 +1503,9 @@ static int pn533_poll_complete(struct pn533 *dev, void *arg,
>  		if (dev->poll_mod_count != 0)
>  			return rc;
>  		goto stop_poll;
> -	} else if (rc < 0) {
> -		nfc_err(dev->dev,
> -			"Error %d when running poll\n", rc);
> -		goto stop_poll;
>  	}
> +	nfc_err(dev->dev, "Error %d when running poll\n", rc);
> +	goto stop_poll;
>  }

The same issue appears in pn533_poll_complete() - does this also
prevent successful poll operations from continuing to the normal
processing code below?

commit 355b82c54c122e59487c52c084a146101bedc2c8
Author: Jijie Shao <shaojijie@huawei.com>

net: phy: motorcomm: Add support for PHY LEDs on YT8521

Add minimal LED controller driver supporting the most common uses with
the 'netdev' trigger.

Link: https://patch.msgid.link/20250813124542.3450447-1-shaojijie@huawei.com

> diff --git a/drivers/net/phy/motorcomm.c b/drivers/net/phy/motorcomm.c
> index 0e91f5d1a4fd..a3593e663059 100644
> --- a/drivers/net/phy/motorcomm.c
> +++ b/drivers/net/phy/motorcomm.c

[ ... ]

> +static int yt8521_led_hw_control_set(struct phy_device *phydev, u8 index,
> +				     unsigned long rules)
> +{
> +	u16 val = 0;
> +
> +	if (index >= YT8521_MAX_LEDS)
> +		return -EINVAL;
> +
> +	if (test_bit(TRIGGER_NETDEV_LINK, &rules)) {
> +		val |= YT8521_LED_10_ON_EN;
> +		val |= YT8521_LED_100_ON_EN;
> +		val |= YT8521_LED_1000_ON_EN;
> +	}
> +
> +	if (test_bit(TRIGGER_NETDEV_LINK_10, &rules))
> +		val |= YT8521_LED_10_ON_EN;
> +
> +	if (test_bit(TRIGGER_NETDEV_LINK_100, &rules))
> +		val |= YT8521_LED_100_ON_EN;
> +
> +	if (test_bit(TRIGGER_NETDEV_LINK_1000, &rules))
> +		val |= YT8521_LED_1000_ON_EN;
> +
> +	if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules))
> +		val |= YT8521_LED_HDX_ON_EN;
> +
> +	if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules))
> +		val |= YT8521_LED_FDX_ON_EN;
                       ^^^^^^^^^^^^^^^^^^^^^

Are the duplex bit assignments swapped here? The TRIGGER_NETDEV_FULL_DUPLEX
trigger sets the YT8521_LED_HDX_ON_EN bit, while TRIGGER_NETDEV_HALF_DUPLEX
sets YT8521_LED_FDX_ON_EN. This appears inconsistent with the
yt8521_led_hw_control_get function which maps YT8521_LED_FDX_ON_EN to
TRIGGER_NETDEV_FULL_DUPLEX and YT8521_LED_HDX_ON_EN to
TRIGGER_NETDEV_HALF_DUPLEX.

> +
> +	if (test_bit(TRIGGER_NETDEV_TX, &rules) ||
> +	    test_bit(TRIGGER_NETDEV_RX, &rules))
> +		val |= YT8521_LED_ACT_BLK_IND;
> +
> +	if (test_bit(TRIGGER_NETDEV_TX, &rules))
> +		val |= YT8521_LED_TXACT_BLK_EN;
> +
> +	if (test_bit(TRIGGER_NETDEV_RX, &rules))
> +		val |= YT8521_LED_RXACT_BLK_EN;
> +
> +	return ytphy_write_ext(phydev, YT8521_LED0_CFG_REG + index, val);
> +}
> +
> +static int yt8521_led_hw_control_get(struct phy_device *phydev, u8 index,
> +				     unsigned long *rules)
> +{
> +	int val;
> +
> +	if (index >= YT8521_MAX_LEDS)
> +		return -EINVAL;
> +
> +	val = ytphy_read_ext(phydev, YT8521_LED0_CFG_REG + index);
> +	if (val < 0)
> +		return val;
> +
> +	if (val & YT8521_LED_TXACT_BLK_EN || val & YT8521_LED_ACT_BLK_IND)
> +		__set_bit(TRIGGER_NETDEV_TX, rules);
> +
> +	if (val & YT8521_LED_RXACT_BLK_EN || val & YT8521_LED_ACT_BLK_IND)
> +		__set_bit(TRIGGER_NETDEV_RX, rules);
> +
> +	if (val & YT8521_LED_FDX_ON_EN)
> +		__set_bit(TRIGGER_NETDEV_FULL_DUPLEX, rules);
> +
> +	if (val & YT8521_LED_HDX_ON_EN)
> +		__set_bit(TRIGGER_NETDEV_HALF_DUPLEX, rules);

The get function correctly maps YT8521_LED_FDX_ON_EN to
TRIGGER_NETDEV_FULL_DUPLEX and YT8521_LED_HDX_ON_EN to
TRIGGER_NETDEV_HALF_DUPLEX, which confirms the set function has the
assignments reversed.

> +
> +	if (val & YT8521_LED_1000_ON_EN)
> +		__set_bit(TRIGGER_NETDEV_LINK_1000, rules);
> +
> +	if (val & YT8521_LED_100_ON_EN)
> +		__set_bit(TRIGGER_NETDEV_LINK_100, rules);
> +
> +	if (val & YT8521_LED_10_ON_EN)
> +		__set_bit(TRIGGER_NETDEV_LINK_10, rules);
> +
> +	return 0;
> +}

[ ... ]

commit 07bbbfe7addf5b032e04f3c38f0b183d067a3f0d
Author: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

net: stmmac: add suspend()/resume() platform ops

Add suspend/resume platform operations, which, when populated, override
the init/exit platform operations when we suspend and resume. These
suspend()/resume() methods are called by core code, and thus are
designed to support any struct device, not just platform devices.

Link: https://patch.msgid.link/E1ulXbX-008gqZ-Bb@rmk-PC.armlinux.org.uk

> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index f1abf4242cd2..2da4f7bb2899 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -7879,6 +7879,9 @@ int stmmac_suspend(struct device *dev)
>  	if (stmmac_fpe_supported(priv))
>  		ethtool_mmsv_stop(&priv->fpe_cfg.mmsv);
>
> +	if (priv->plat->suspend)
> +		return priv->plat->suspend(dev, priv->plat->bsp_priv);
> +

When priv->plat->suspend exists, does this bypass all the network device
cleanup that happens in the rest of stmmac_suspend()? The function returns
early here, skipping netif_device_detach(), stmmac_disable_all_queues(),
timer cancellations, DMA stopping, power management setup, and phylink
suspension. Can platform drivers be expected to handle all this network
device state management correctly?

>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(stmmac_suspend);
> @@ -7931,6 +7934,12 @@ int stmmac_resume(struct device *dev)
>  	struct stmmac_priv *priv = netdev_priv(ndev);
>  	int ret;
>
> +	if (priv->plat->resume) {
> +		ret = priv->plat->resume(dev, priv->plat->bsp_priv);
> +		if (ret)
> +			return ret;
> +	}
> +

If priv->plat->resume() fails here, will the device remain in a partially
suspended state? The function returns without calling stmmac_hw_setup(),
phylink_resume(), or netif_device_attach(). What happens to network
functionality when the platform resume succeeds but the core network
device restoration never occurs?

>  	if (!netif_running(ndev))
>  		return 0;
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> index 030fcf1b5993..21df052eeed0 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
> @@ -901,7 +901,9 @@ static int __maybe_unused stmmac_pltfr_suspend(struct device *dev)
>  	struct platform_device *pdev = to_platform_device(dev);
>
>  	ret = stmmac_suspend(dev);
> -	stmmac_pltfr_exit(pdev, priv->plat);
> +
> +	if (!priv->plat->suspend)
> +		stmmac_pltfr_exit(pdev, priv->plat);

Is there an ordering issue here? When platform suspend is used,
stmmac_suspend() calls priv->plat->suspend() and returns immediately.
This means the platform suspend function executes before the platform
exit is skipped here. Does the platform suspend function need to handle
both device-specific suspension AND what stmmac_pltfr_exit() normally does?

>
>  	return ret;
>  }
> @@ -920,9 +922,11 @@ static int __maybe_unused stmmac_pltfr_resume(struct device *dev)
>  	struct platform_device *pdev = to_platform_device(dev);
>  	int ret;
>
> -	ret = stmmac_pltfr_init(pdev, priv->plat);
> -	if (ret)
> -		return ret;
> +	if (!priv->plat->resume) {
> +		ret = stmmac_pltfr_init(pdev, priv->plat);
> +		if (ret)
> +			return ret;
> +	}
>
>  	return stmmac_resume(dev);
>  }

When platform resume is used, stmmac_pltfr_init() is skipped but
stmmac_resume() is still called. However, if priv->plat->resume is
provided, stmmac_resume() will call it first before doing the core
network setup. Should the platform resume function handle both what
stmmac_pltfr_init() normally does AND ensure proper ordering with the
subsequent core network restoration?

> diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
> index 26ddf95d23f9..22c24dacbc65 100644
> --- a/include/linux/stmmac.h
> +++ b/include/linux/stmmac.h
> @@ -248,6 +248,8 @@ struct plat_stmmacenet_data {
>  	void (*ptp_clk_freq_config)(struct stmmac_priv *priv);
>  	int (*init)(struct platform_device *pdev, void *priv);
>  	void (*exit)(struct platform_device *pdev, void *priv);
> +	int (*suspend)(struct device *dev, void *priv);
> +	int (*resume)(struct device *dev, void *priv);

The new suspend/resume callbacks take struct device* while init/exit take
struct platform_device*. When these platform suspend/resume functions
are used, do they need to handle all the network device state management
that the core functions normally perform? Is there documentation about
what platform drivers must implement when providing these callbacks?

commit 60cb69214148fbe7fc50239c28e4d052eec6ae61
Author: Breno Leitao <leitao@debian.org>

netconsole: use netpoll_parse_ip_addr in local_ip_store

Replace manual IP address parsing with a call to netpoll_parse_ip_addr
in local_ip_store(), simplifying the code and reducing the chance of
errors. Also, remove the pr_err() if the user enters an invalid value in
configfs entries.

> diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
> index 2919522d963e..a9b30b5891d7 100644
> --- a/drivers/net/netconsole.c
> +++ b/drivers/net/netconsole.c
> @@ -757,6 +757,7 @@ static ssize_t local_ip_store(struct config_item *item, const char *buf,
>  {
>  	struct netconsole_target *nt = to_target(item);
>  	ssize_t ret = -EINVAL;
> +	int ipv6;
>
>  	mutex_lock(&dynamic_netconsole_mutex);
>  	if (nt->enabled) {
> @@ -765,23 +766,10 @@ static ssize_t local_ip_store(struct config_item *item, const char *buf,
>  		goto out_unlock;
>  	}
>
> -	if (strnchr(buf, count, ':')) {
> -		const char *end;
> -
> -		if (in6_pton(buf, count, nt->np.local_ip.in6.s6_addr, -1, &end) > 0) {
> -			if (*end && *end != '\n') {
> -				pr_err("invalid IPv6 address at: <%c>\n", *end);
> -				goto out_unlock;
> -			}
> -			nt->np.ipv6 = true;
> -		} else
> -			goto out_unlock;
> -	} else {
> -		if (!nt->np.ipv6)
> -			nt->np.local_ip.ip = in_aton(buf);
> -		else
> -			goto out_unlock;
> -	}
> +	ipv6 = netpoll_parse_ip_addr(buf, &nt->np.local_ip);
> +	if (ipv6 == -1)
> +		goto out_unlock;
> +	nt->np.ipv6 = !!ipv6;

Can netpoll_parse_ip_addr() read beyond the intended buffer? The original
code used in6_pton(buf, count, ...) which respects the count parameter,
but netpoll_parse_ip_addr() uses strlen(str) and could read beyond count
bytes if buf is not null-terminated. Other configfs store functions like
dev_name_store() use strscpy() with explicit size limits.

Does this change the IPv4/IPv6 switching behavior? The original code
rejected IPv4 addresses when nt->np.ipv6 was true (the "else goto
out_unlock" path), but the new code allows switching from IPv6 back to
IPv4 mode by always updating nt->np.ipv6 based on the parsed result.

>
>  	ret = strnlen(buf, count);
>  out_unlock:


[-- Attachment #3: linux-next.txt --]
[-- Type: text/plain, Size: 35538 bytes --]

commit dc32c8d4875fdd27b43d5c295853860f4d647055
Author: Bijan Tabatabai <bijantabatab@micron.com>

mm/damon/core: skip needless update of damon_attrs in damon_commit_ctx()

This patch adds an optimization to avoid calling damon_set_attrs() when
attributes haven't changed, preventing unnecessary resets of timing fields
that can delay aggregation and ops update operations.

Link: https://lkml.kernel.org/r/20250807001924.76275-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20250806234254.10572-1-bijan311@gmail.com

> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 106ee8b0f2d5..52ecc3a4426f 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -570,6 +570,23 @@ void damon_destroy_ctx(struct damon_ctx *ctx)
>  	kfree(ctx);
>  }
>
> +static bool damon_attrs_equals(const struct damon_attrs *attrs1,
> +		const struct damon_attrs *attrs2)
> +{
> +	const struct damon_intervals_goal *ig1 = &attrs1->intervals_goal;
> +	const struct damon_intervals_goal *ig2 = &attrs2->intervals_goal;
> +
> +	return attrs1->sample_interval == attrs2->sample_interval &&
> +		attrs1->aggr_interval == attrs2->aggr_interval &&
> +		attrs1->ops_update_interval == attrs2->ops_update_interval &&
> +		attrs1->min_nr_regions == attrs2->min_nr_regions &&
> +		attrs1->max_nr_regions == attrs2->max_nr_regions &&
> +		ig1->access_bp == ig2->access_bp &&
> +		ig1->aggrs == ig2->aggrs &&
> +		ig1->min_sample_us == ig2->min_sample_us &&
> +		ig1->max_sample_us == ig2->max_sample_us;
> +}

Does this comparison miss the aggr_samples field? The damon_attrs struct
contains an aggr_samples field that is used by damon_set_attrs() and
kdamond_tune_intervals() for timing calculations. If only aggr_samples
differs between the two attrs structs, this function would return true
when the attributes are actually different, causing damon_set_attrs() to
be incorrectly skipped.

--clm false positive, SJ explained there's a comment explaining this, but
claude didn't quite parse it right.

> +
>  static unsigned int damon_age_for_new_attrs(unsigned int age,
>  		struct damon_attrs *old_attrs, struct damon_attrs *new_attrs)
>  {
> @@ -1222,9 +1239,11 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
>  	 * 2. ops update should be done after pid handling is done (target
>  	 *    committing require putting pids).
>  	 */
> -	err = damon_set_attrs(dst, &src->attrs);
> -	if (err)
> -		return err;
> +	if (!damon_attrs_equals(&dst->attrs, &src->attrs)) {
> +		err = damon_set_attrs(dst, &src->attrs);
> +		if (err)
> +			return err;
> +	}
>  	dst->ops = src->ops;
>
>  	return 0;

commit 7663e963a51122792811811c8119fd55c9ab254a
Author: Jeff Layton <jlayton@kernel.org>

nfsd: track original timestamps in nfs4_delegation

This commit stores file timestamps in the nfs4_delegation structure at
delegation grant time to support RFC 9754 requirements. It adds three
timespec64 fields and modifies delegation logic to capture timestamps.

Link: https://lore.kernel.org/linux-nfs/47a4e40310e797f21b5137e847b06bb203d99e66.camel@kernel.org/

> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 77eea2ad93cc..8737b721daf3 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -6157,7 +6157,8 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
>  	path.dentry = file_dentry(nf->nf_file);
>
>  	rc = vfs_getattr(&path, stat,
> -			 (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
> +			 STATX_MODE | STATX_SIZE | STATX_ATIME |
> +			 STATX_MTIME | STATX_CTIME | STATX_CHANGE_COOKIE,
>  			 AT_STATX_SYNC_AS_STAT);
>
>  	nfsd_file_put(nf);
> @@ -6274,10 +6275,14 @@ nfs4_open_delegation(struct svc_rqst *rqstp, struct nfsd4_open *open,
>  						    OPEN_DELEGATE_WRITE;
>  		dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
>  		dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat);
> +		dp->dl_atime = stat.atime;
> +		dp->dl_ctime = stat.ctime;
> +		dp->dl_mtime = stat.mtime;
>  		trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
>  	} else {
> -		open->op_delegate_type = deleg_ts ? OPEN_DELEGATE_READ_ATTRS_DELEG :
> -						    OPEN_DELEGATE_READ;
> +		open->op_delegate_type = deleg_ts && nfs4_delegation_stat(dp, currentfh, &stat) ?
> +					 OPEN_DELEGATE_READ_ATTRS_DELEG : OPEN_DELEGATE_READ;
> +		dp->dl_atime = stat.atime;

When nfs4_delegation_stat() fails in the read delegation path, can this
assignment use uninitialized data from the stat structure? The stat
variable is declared on the stack, and if vfs_getattr() fails inside
nfs4_delegation_stat(), the atime field contains undefined values.

>  		trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid);
>  	}
>  	nfs4_put_stid(&dp->dl_stid);
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index 8adc2550129e..ce7c0d129ba3 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -224,6 +224,11 @@ struct nfs4_delegation {
>
>  	/* for CB_GETATTR */
>  	struct nfs4_cb_fattr    dl_cb_fattr;
> +
> +	/* For delegated timestamps */
> +	struct timespec64	dl_atime;
> +	struct timespec64	dl_mtime;
> +	struct timespec64	dl_ctime;
>  };
>
>  static inline bool deleg_is_read(u32 dl_type)
commit b93128f29733af5d427a335978a19884c2c230e2
Author: Trond Myklebust <trond.myklebust@hammerspace.com>

NFSv4.2: Serialise O_DIRECT i/o and fallocate()

This patch adds a call to nfs_file_block_o_direct() before nfs_sync_inode()
in nfs42_proc_fallocate() to ensure all O_DIRECT reads and writes complete
before calling fallocate operations, preventing races with attribute
updates.

> diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
> index e2fea37c5348..1a169372ca16 100644
> --- a/fs/nfs/nfs42proc.c
> +++ b/fs/nfs/nfs42proc.c
> @@ -114,6 +114,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
>  	exception.inode = inode;
>  	exception.state = lock->open_context->state;
>
> +	nfs_file_block_o_direct(NFS_I(inode));

Can nfs_file_block_o_direct() be called here without holding inode->i_rwsem?

The function definition in fs/nfs/internal.h has this comment:

/* Must be called with exclusively locked inode->i_rwsem */
static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi)
{
        if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
                clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
                inode_dio_wait(&nfsi->vfs_inode);
        }
}

All other callers in the NFS code acquire the lock first. For example,
nfs_start_io_write() calls down_write_killable(&inode->i_rwsem) before
calling nfs_file_block_o_direct(), and nfs_start_io_read() does the same.

The VFS fallocate path (vfs_fallocate()->nfs42_fallocate()->
nfs42_proc_fallocate()) only calls file_start_write(), which acquires
sb->s_writers but not inode->i_rwsem.

>  	err = nfs_sync_inode(inode);
>  	if (err)
>  		goto out;

commit d442670c0f63c46b7f348f68fb2002af597708f2
Author: Jonathan Curley <jcurley@purestorage.com>

NFSv4/flexfiles: Add data structure support for striped layouts

Adds a new struct nfs4_ff_layout_ds_stripe that represents a data
server stripe within a layout. A new dynamically allocated array of
this type has been added to nfs4_ff_layout_mirror and per stripe
configuration information has been moved from the mirror type to the
stripe based on the RFC.

> diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
> index 2ad371faab98..debdc8952ab0 100644
> --- a/fs/nfs/flexfilelayout/flexfilelayout.c
> +++ b/fs/nfs/flexfilelayout/flexfilelayout.c

[ ... ]

> @@ -452,10 +459,13 @@ ff_layout_alloc_lseg(struct pnfs_layout_hdr *lh,
>  			goto out_err_free;
>  		}
>
> -		fls->mirror_array[i]->ds_count = ds_count;
> +		fls->mirror_array[i]->dss_count = dss_count;
> +		fls->mirror_array[i]->dss =
> +		    kcalloc(dss_count, sizeof(struct nfs4_ff_layout_ds_stripe),
> +			    gfp_flags);
>
>  		/* deviceid */
> -		rc = decode_deviceid(&stream, &fls->mirror_array[i]->devid);
> +		rc = decode_deviceid(&stream, &fls->mirror_array[i]->dss[dss_id].devid);

Can this dereference NULL in ff_layout_alloc_lseg()? The kcalloc() call
above can return NULL under memory pressure, but there's no check before
accessing fls->mirror_array[i]->dss[dss_id].devid in the decode_deviceid()
call. All other memory allocations in this function check for NULL and
jump to out_err_free on failure.

>  		if (rc)
>  			goto out_err_free;
>

[ ... ]

commit 8c48e1c7520321cc87ff651e96093e2f412785fb
Author: Stefan Metzmacher <metze@samba.org>

smb: client: don't wait for info->send_pending == 0 on error

This commit addresses an issue where smbd_destroy() would wait
indefinitely for send_pending to reach zero after ib_drain_qp() is
called. The patch removes the problematic wait and modifies smbd_send()
to exit early when the socket is disconnected.

> diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
> index c628e91c328b..02d6db431fd4 100644
> --- a/fs/smb/client/smbdirect.c
> +++ b/fs/smb/client/smbdirect.c
> @@ -1337,10 +1337,6 @@ void smbd_destroy(struct TCP_Server_Info *server)
>  	log_rdma_event(INFO, "cancelling idle timer\n");
>  	cancel_delayed_work_sync(&info->idle_timer_work);
>
> -	log_rdma_event(INFO, "wait for all send posted to IB to finish\n");
> -	wait_event(info->wait_send_pending,
> -		atomic_read(&info->send_pending) == 0);
> -

Can smbd_destroy() proceed safely without waiting for send operations to
complete? The send_done() callback accesses the info structure after
this point:

In send_done(), when wc->status != IB_WC_SUCCESS (which happens after
ib_drain_qp() above), the function calls smbd_disconnect_rdma_connection()
but skips the atomic_dec_and_test(&info->send_pending) call. However,
when the operation succeeds, send_done() does call:

    if (atomic_dec_and_test(&info->send_pending))
        wake_up(&info->wait_send_pending);

After ib_drain_qp() triggers completions with IB_WC_WR_FLUSH_ERR status,
these send_done() callbacks can execute after smbd_destroy() has called
kfree(info). When send_done() then tries to access info->send_pending,
does this create a use-after-free condition?

>  	/* It's not possible for upper layer to get to reassembly */
>  	log_rdma_event(INFO, "drain the reassembly queue\n");
>  	do {
> @@ -1986,7 +1982,11 @@ int smbd_send(struct TCP_Server_Info *server,
>  	 */
>
>  	wait_event(info->wait_send_pending,
> -		atomic_read(&info->send_pending) == 0);
> +		atomic_read(&info->send_pending) == 0 ||
> +		sc->status != SMBDIRECT_SOCKET_CONNECTED);
> +
> +	if (sc->status != SMBDIRECT_SOCKET_CONNECTED && rc == 0)
> +		rc = -EAGAIN;
>
>  	return rc;
>  }

commit 1f7168b28f667c0920fd83151ae35ce593950663
Author: André Almeida <andrealmeid@igalia.com>

ovl: Ensure that all layers have the same encoding

This patch adds encoding consistency validation to overlayfs when casefold
is enabled. It introduces ovl_set_encoding() to copy encoding settings from
the first layer and validates all layers have matching encoding.

> diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> index df85a76597e9..548ea9b1a48d 100644
> --- a/fs/overlayfs/super.c
> +++ b/fs/overlayfs/super.c
> @@ -991,6 +991,25 @@ static int ovl_get_data_fsid(struct ovl_fs *ofs)
>  	return ofs->numfs;
>  }
>
> +/*
> + * Set the ovl sb encoding as the same one used by the first layer
> + */
> +static int ovl_set_encoding(struct super_block *sb, struct super_block *fs_sb)
> +{
> +	if (!sb_has_encoding(fs_sb))
> +		return 0;
> +
> +#if IS_ENABLED(CONFIG_UNICODE)
> +	if (sb_has_strict_encoding(fs_sb)) {
> +		pr_err("strict encoding not supported\n");
> +		return -EINVAL;
> +	}
> +
> +	sb->s_encoding = fs_sb->s_encoding;
> +	sb->s_encoding_flags = fs_sb->s_encoding_flags;
                         ^^^^

Will this leak the unicode_map on overlayfs unmount? Other filesystems
like shmem and ext4 call utf8_unload(sb->s_encoding) in their put_super
operations, but ovl_put_super()->ovl_free_fs() has no utf8_unload() call.

The unicode_map appears to be reference counted via utf8_load()/utf8_unload()
pairs in other parts of the kernel, but this code just copies the pointer
without taking a proper reference.

> +#endif
> +	return 0;
> +}
>
>  static int ovl_get_layers(struct super_block *sb, struct ovl_fs *ofs,
>  			  struct ovl_fs_context *ctx, struct ovl_layer *layers)

commit 8004ea01cf6338298e0c6ab055bc3ec659ce381b
Author: Babu Moger <babu.moger@amd.com>

fs/resctrl: Introduce the interface to switch between monitor modes

This patch introduces mbm_assign_mode resctrl file to switch between
mbm_event and default monitoring modes. It also resets MBM event
counters automatically when the mode changes.

Link: https://lore.kernel.org/cover.1757108044.git.babu.moger@amd.com

> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
> index f60f6a96cb6b..006d23af66e1 100644
> --- a/Documentation/filesystems/resctrl.rst
> +++ b/Documentation/filesystems/resctrl.rst

[ ... ]

> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index f388dbcdbdcd..50c24460d992 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c

[ ... ]

> +ssize_t resctrl_mbm_assign_mode_write(struct kernfs_open_file *of, char *buf,
> +				      size_t nbytes, loff_t off)
> +{
> +	struct rdt_resource *r = rdt_kn_parent_priv(of->kn);
> +	struct rdt_mon_domain *d;
> +	int ret = 0;
> +	bool enable;
> +
> +	/* Valid input requires a trailing newline */
> +	if (nbytes == 0 || buf[nbytes - 1] != '\n')
> +		return -EINVAL;
> +
> +	buf[nbytes - 1] = '\0';
> +
> +	cpus_read_lock();
> +	mutex_lock(&rdtgroup_mutex);
> +
> +	rdt_last_cmd_clear();
> +
> +	if (!strcmp(buf, "default")) {
> +		enable = 0;
> +	} else if (!strcmp(buf, "mbm_event")) {
> +		if (r->mon.mbm_cntr_assignable) {
> +			enable = 1;
> +		} else {
> +			ret = -EINVAL;
> +			rdt_last_cmd_puts("mbm_event mode is not supported\n");
> +			goto out_unlock;
> +		}
> +	} else {
> +		ret = -EINVAL;
> +		rdt_last_cmd_puts("Unsupported assign mode\n");
> +		goto out_unlock;
> +	}
> +
> +	if (enable != resctrl_arch_mbm_cntr_assign_enabled(r)) {
> +		ret = resctrl_arch_mbm_cntr_assign_set(r, enable);
> +		if (ret)
> +			goto out_unlock;
> +
> +		/* Update the visibility of BMEC related files */
> +		resctrl_bmec_files_show(r, NULL, !enable);
> +
> +		/*
> +		 * Initialize the default memory transaction values for
> +		 * total and local events.
> +		 */
> +		if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> +			mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask;
> +		if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> +			mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask &
> +									   (READS_TO_LOCAL_MEM |
> +									    READS_TO_LOCAL_S_MEM |
> +									    NON_TEMP_WRITE_TO_LOCAL_MEM);
> +		/* Enable auto assignment when switching to "mbm_event" mode */
> +		if (enable)
> +			r->mon.mbm_assign_on_mkdir = true;
> +		/*
> +		 * Reset all the non-achitectural RMID state and assignable counters.
> +		 */
> +		list_for_each_entry(d, &r->mon_domains, hdr.list) {
> +			mbm_cntr_free_all(r, d);
> +			resctrl_reset_rmid_all(r, d);
> +		}
> +	}
> +
> +out_unlock:
> +	mutex_lock(&rdtgroup_mutex);
            ^^^^

Can this cause a deadlock? The function already acquired rdtgroup_mutex
at the beginning, so attempting to lock it again here would block
indefinitely. Should this be mutex_unlock(&rdtgroup_mutex) instead?

> +	cpus_read_unlock();
> +
> +	return ret ?: nbytes;
> +}

[ ... ]

commit aab2c5088cdb26e80d51ffbe72d24ab23fa1533e
Author: Babu Moger <babu.moger@amd.com>

fs/resctrl: Add the functionality to unassign MBM events

This patch adds counter cleanup functionality for MBM events by
implementing rdtgroup_unassign_cntrs() and supporting functions to free
assigned counters when resource groups are deleted.

Link: https://lore.kernel.org/cover.1757108044.git.babu.moger@amd.com

> diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
> index 762705d7eb8d..c6b66d4a6a37 100644
> --- a/fs/resctrl/internal.h
> +++ b/fs/resctrl/internal.h
> @@ -398,6 +398,8 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_open_file *of, struct seq_fil
>
>  void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp);
>
> +void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp);
> +

The rdtgroup_unassign_cntrs() function is declared but where should it be
called? Looking at rdtgroup_assign_cntrs() which is called during group
creation, shouldn't rdtgroup_unassign_cntrs() be called during group
deletion in rdtgroup_rmdir_ctrl() and rdtgroup_rmdir_mon()?

> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index 106e9bdb8a9d..2ed29ae831a4 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c
> @@ -405,6 +405,14 @@ static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_mon_domain *d,
>  	return -ENOSPC;
>  }
>
> +/*
> + * mbm_cntr_free() - Clear the counter ID configuration details in the domain @d.
> + */
> +static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id)
> +{
> +	memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg));
> +}
> +

[ ... ]

> @@ -1043,6 +1051,64 @@ void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
>  					   &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
>  }
>
> +/*
> + * rdtgroup_free_unassign_cntr() - Unassign and reset the counter ID configuration
> + * for the event pointed to by @mevt within the domain @d and resctrl group @rdtgrp.
> + */
> +static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
> +					struct rdtgroup *rdtgrp, struct mon_evt *mevt)
> +{
> +	int cntr_id;
> +
> +	cntr_id = mbm_cntr_get(r, d, rdtgrp, mevt->evtid);
> +
> +	/* If there is no cntr_id assigned, nothing to do */
> +	if (cntr_id < 0)
> +		return;
> +
> +	rdtgroup_assign_cntr(r, d, mevt->evtid, rdtgrp->mon.rmid, rdtgrp->closid, cntr_id, false);
> +
> +	mbm_cntr_free(d, cntr_id);
> +}
> +
> +/*
> + * rdtgroup_unassign_cntr_event() - Unassign a hardware counter associated with
> + * the event structure @mevt from the domain @d and the group @rdtgrp. Unassign
> + * the counters from all the domains if @d is NULL else unassign from @d.
> + */
> +static void rdtgroup_unassign_cntr_event(struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
> +					 struct mon_evt *mevt)
> +{
> +	struct rdt_resource *r = resctrl_arch_get_resource(mevt->rid);
> +
> +	if (!d) {
> +		list_for_each_entry(d, &r->mon_domains, hdr.list)
> +			rdtgroup_free_unassign_cntr(r, d, rdtgrp, mevt);
> +	} else {
> +		rdtgroup_free_unassign_cntr(r, d, rdtgrp, mevt);
> +	}
> +}
> +
> +/*
> + * rdtgroup_unassign_cntrs() - Unassign the counters associated with MBM events.
> + *			       Called when a group is deleted.
> + */
> +void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp)
> +{
> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
> +
> +	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r))
> +		return;
> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> +		rdtgroup_unassign_cntr_event(NULL, rdtgrp,
> +					     &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> +		rdtgroup_unassign_cntr_event(NULL, rdtgrp,
> +					     &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
> +}
> +
>  int resctrl_mbm_assign_mode_show(struct kernfs_open_file *of,
>  				 struct seq_file *s, void *v)
>  {

The comment states "Called when a group is deleted", but the function has
no callers. Won't this leak counters when groups are deleted, since the
counters assigned during group creation via rdtgroup_assign_cntrs()->
rdtgroup_assign_cntr_event()->rdtgroup_alloc_assign_cntr() are never
cleaned up?

commit 2d8bd8049e89efe42a5397de4effd899e8dd2249
Author: Pankaj Raghav <p.raghav@samsung.com>

mm: add persistent huge zero folio

This patch introduces a config option CONFIG_PERSISTENT_HUGE_ZERO_FOLIO
that changes the huge zero folio lifecycle from dynamic
allocation/deallocation to a persistent allocation made during boot. When
enabled, the shrinker is disabled and the folio is never freed.

Link: https://lkml.kernel.org/r/20250811084113.647267-4-kernel@pankajraghav.com

> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7748489fde1b..bd547857c6c1 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -495,6 +495,17 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
>  struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
>  void mm_put_huge_zero_folio(struct mm_struct *mm);
>
> +static inline struct folio *get_persistent_huge_zero_folio(void)
> +{
> +	if (!IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> +		return NULL;
> +
> +	if (unlikely(!huge_zero_folio))
> +		return NULL;
> +
> +	return huge_zero_folio;
> +}
> +
>  static inline bool thp_migration_supported(void)
>  {
>  	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);

[ ... ]

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2801ce9bbde9..b8bb078a1a34 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -248,6 +248,9 @@ static void put_huge_zero_folio(void)
>
>  struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
>  {
> +	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> +		return huge_zero_folio;
> +
>  	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
>  		return READ_ONCE(huge_zero_folio);
>
> @@ -262,6 +265,9 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
>
>  void mm_put_huge_zero_folio(struct mm_struct *mm)
>  {
> +	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> +		return;
> +
>  	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
>  		put_huge_zero_folio();
>  }

[ ... ]

> @@ -849,16 +855,34 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
>
>  static int __init thp_shrinker_init(void)
>  {
> -	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> -	if (!huge_zero_folio_shrinker)
> -		return -ENOMEM;
> -
>  	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
>  						 SHRINKER_MEMCG_AWARE |
>  						 SHRINKER_NONSLAB,
>  						 "thp-deferred_split");
> -	if (!deferred_split_shrinker) {
> -		shrinker_free(huge_zero_folio_shrinker);
> +	if (!deferred_split_shrinker)
> +		return -ENOMEM;
> +
> +	deferred_split_shrinker->count_objects = deferred_split_count;
> +	deferred_split_shrinker->scan_objects = deferred_split_scan;
> +	shrinker_register(deferred_split_shrinker);
> +
> +	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO)) {
> +		/*
> +		 * Bump the reference of the huge_zero_folio and do not
> +		 * initialize the shrinker.
> +		 *
> +		 * huge_zero_folio will always be NULL on failure. We assume
> +		 * that get_huge_zero_folio() will most likely not fail as
> +		 * thp_shrinker_init() is invoked early on during boot.
> +		 */
> +		if (!get_huge_zero_folio())
> +			pr_warn("Allocating persistent huge zero folio failed\n");
> +		return 0;
> +	}

Should thp_shrinker_init() return an error code when get_huge_zero_folio()
fails in persistent mode? Currently it returns 0 (success) even when the
allocation fails, which means hugepage_init() will believe initialization
succeeded. This leaves the deferred_split_shrinker allocated but the
persistent huge zero folio unavailable.

If the system is supposed to continue working without the persistent huge
zero folio, the current behavior might be intentional. However, callers
like hugepage_init() have no way to know that persistent allocation failed.

> +
> +	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> +	if (!huge_zero_folio_shrinker) {
> +		shrinker_free(deferred_split_shrinker);
>  		return -ENOMEM;
>  	}
>
> @@ -866,10 +890,6 @@ static int __init thp_shrinker_init(void)
>  	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
>  	shrinker_register(huge_zero_folio_shrinker);
>
> -	deferred_split_shrinker->count_objects = deferred_split_count;
> -	deferred_split_shrinker->scan_objects = deferred_split_scan;
> -	shrinker_register(deferred_split_shrinker);
> -
>  	return 0;
>  }

commit 12e423ba4eaed7b1561b677d32e6599f932d03db
Author: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

mm: convert core mm to mm_flags_*() accessors

This patch converts existing users of direct mm_struct->flags bit
operations to use the new mm_flags_*() accessor functions as part of
migrating mm->flags to a bitmap field. The commit states that no
functional changes are intended.

Link: https://lkml.kernel.org/r/1eb2266f4408798a55bda00cb04545a3203aa572.1755012943.git.lorenzo.stoakes@oracle.com

> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 14d424830fa8..84b7eebe0d68 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -327,7 +327,7 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
>  	 * example, s390 kvm.
>  	 */
>  	return (vm_flags & VM_NOHUGEPAGE) ||
> -	       test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
> +	       mm_flags_test(MMF_DISABLE_THP, vma->vm_mm);
>  }
>
>  static inline bool thp_disabled_by_hw(void)

[ ... ]

> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25923cfec9c6..17650f0b516e 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c

[ ... ]

> @@ -1251,7 +1251,7 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
>  	 * Check MMF_OOM_SKIP again under mmap_read_lock protection to ensure
>  	 * possible change in exit_mmap is seen
>  	 */
> -	if (!test_bit(MMF_OOM_SKIP, &mm->flags) && !__oom_reap_task_mm(mm))
> +	if (mm_flags_test(MMF_OOM_SKIP, mm) && !__oom_reap_task_mm(mm))
>  		ret = -EAGAIN;
>  	mmap_read_unlock(mm);

Does this condition invert the logic? The original code checked if
MMF_OOM_SKIP was NOT set, but the new code checks if it IS set. This
appears to change when process_mrelease()->__oom_reap_task_mm() gets
called and when -EAGAIN is returned to userspace.

commit 7c80f3c72467b180a9d7b3852c74b7540a5a6ce9
Author: Mike Rapoport (Microsoft) <rppt@kernel.org>

kho: add support for preserving vmalloc allocations

A vmalloc allocation is preserved using a binary structure similar to the
global KHO memory tracker: a linked list of pages where each page is an
array of physical addresses of the pages in the vmalloc area.
Link: https://lkml.kernel.org/r/20250921054458.4043761-4-rppt@kernel.org

> diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
> index cec663b39861..1b3c41ab7a0b 100644
> --- a/include/linux/kexec_handover.h
> +++ b/include/linux/kexec_handover.h

[ ... ]

> diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
> index d30c2a99f3ea..57336e510f3b 100644
> --- a/kernel/kexec_handover.c
> +++ b/kernel/kexec_handover.c

[ ... ]

> +void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
> +{
> +	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
> +	unsigned int align, order, shift, vm_flags;
> +	unsigned long total_pages, contig_pages;
> +	unsigned long addr, size;
> +	struct vm_struct *area;
> +	struct page **pages;
> +	unsigned int idx = 0;
> +	int err;
> +
> +	vm_flags = kho_flags_to_vmalloc(preservation->flags);
> +	if (vm_flags & ~KHO_VMALLOC_SUPPORTED_FLAGS)
> +		return NULL;
> +
> +	total_pages = preservation->total_pages;
> +	pages = kvmalloc_array(total_pages, sizeof(*pages), GFP_KERNEL);
> +	if (!pages)
> +		return NULL;
> +	order = preservation->order;
> +	contig_pages = (1 << order);
> +	shift = PAGE_SHIFT + order;
> +	align = 1 << shift;
> +
> +	while (chunk) {
> +		struct page *page;
> +
> +		for (int i = 0; chunk->phys[i]; i++) {
> +			phys_addr_t phys = chunk->phys[i];
> +
> +			if (idx + contig_pages > total_pages)
> +				goto err_free_pages_array;
> +
> +			page = kho_restore_pages(phys, contig_pages);
> +			if (!page)
> +				goto err_free_pages_array;
> +
> +			for (int j = 0; j < contig_pages; j++)
> +				pages[idx++] = page;
                                ^^^^^
Can this page array handling work correctly for higher-order pages?
When contig_pages > 1, the loop stores the same compound page head
pointer multiple times instead of individual page pointers (page + j).
The vmap_pages_range() function expects individual page pointers for
each PAGE_SIZE page, but this provides duplicate pointers to the
compound head.

> +
> +			phys += contig_pages * PAGE_SIZE;
> +		}
> +
> +		page = kho_restore_pages(virt_to_phys(chunk), 1);
> +		if (!page)
> +			goto err_free_pages_array;
> +		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
> +		__free_page(page);
> +	}
> +
> +	if (idx != total_pages)
> +		goto err_free_pages_array;
> +
> +	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
> +				  vm_flags, VMALLOC_START, VMALLOC_END,
> +				  NUMA_NO_NODE, GFP_KERNEL,
> +				  __builtin_return_address(0));
> +	if (!area)
> +		goto err_free_pages_array;
> +
> +	addr = (unsigned long)area->addr;
> +	size = get_vm_area_size(area);
> +	err = vmap_pages_range(addr, addr + size, PAGE_KERNEL, pages, shift);
> +	if (err)
> +		goto err_free_vm_area;
> +
> +	area->nr_pages = total_pages;
> +	area->pages = pages;
> +
> +	return area->addr;
> +
> +err_free_vm_area:
> +	free_vm_area(area);
> +err_free_pages_array:
> +	kvfree(pages);
> +	return NULL;
> +}

commit 2191b3471df53d87a46179b0c302c77c9ccfec11
Author: Stefan Metzmacher <metze@samba.org>

smb: smbdirect: introduce smbdirect_socket.send_io.credits.{count,wait_queue}

This patch adds new credit management fields to the smbdirect_socket
structure that will be shared between client and server soon. The
patch initializes these fields in smbdirect_socket_init().

> diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
> index 1df43b7a87f4..7c2147a0244a 100644
> --- a/fs/smb/common/smbdirect/smbdirect_socket.h
> +++ b/fs/smb/common/smbdirect/smbdirect_socket.h
> @@ -78,6 +78,14 @@ struct smbdirect_socket {
>  			mempool_t		*pool;
>  		} mem;
>
> +		/*
> +		 * The credit state for the send side
> +		 */
> +		struct {
> +			atomic_t count;
> +			wait_queue_head_t wait_queue;
> +		} credits;
> +

Are these new fields properly initialized when smbdirect_socket
structures are created? Looking at _smbd_get_connection() in
fs/smb/client/smbdirect.c, the smbd_connection structure (which
contains smbdirect_socket as its first member) is allocated with
kzalloc() but smbdirect_socket_init() is never called.

>  		/*
>  		 * The state about posted/pending sends
>  		 */
> @@ -163,6 +171,9 @@ static __always_inline void smbdirect_socket_init(struct smbdirect_socket *sc)
>
>  	init_waitqueue_head(&sc->status_wait);
>
> +	atomic_set(&sc->send_io.credits.count, 0);
> +	init_waitqueue_head(&sc->send_io.credits.wait_queue);
> +

The initialization code looks correct, but can this function be reached?
A grep through the fs/smb/ directory shows smbdirect_socket_init() is
defined but never called. The actual initialization in _smbd_get_connection()
only calls init_waitqueue_head() for some specific wait queues but not
for the new credits.wait_queue.

Without init_waitqueue_head() being called, any subsequent wait_event()
or wake_up() operations on sc->send_io.credits.wait_queue will access
uninitialized spinlocks and list heads, leading to kernel crashes.

>  	atomic_set(&sc->send_io.pending.count, 0);
>  	init_waitqueue_head(&sc->send_io.pending.dec_wait_queue);
>  	init_waitqueue_head(&sc->send_io.pending.zero_wait_queue);

commit 371d3ab51ce44c8a7c0f9b835b74b7340a922059
Author: Stefan Metzmacher <metze@samba.org>

smb: smbdirect: introduce smbdirect_socket.status_wait

This patch adds a wait_queue_head_t status_wait field to the
smbdirect_socket struct for future use by server and client components.

> diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
> index 8484c6f09317..a69b9649b88e 100644
> --- a/fs/smb/common/smbdirect/smbdirect_socket.h
> +++ b/fs/smb/common/smbdirect/smbdirect_socket.h
> @@ -41,6 +41,7 @@ const char *smbdirect_socket_status_string(enum smbdirect_socket_status status)
>
>  struct smbdirect_socket {
>  	enum smbdirect_socket_status status;
> +	wait_queue_head_t status_wait;
>
>  	/* RDMA related */
>  	struct {

Is the new status_wait field being properly initialized? Looking at
_smbd_get_connection() in fs/smb/client/smbdirect.c, the smbdirect_socket
struct is allocated via kzalloc() which zeros the memory, but wait queue
heads require explicit initialization via init_waitqueue_head() before
use. The existing info->status_wait is correctly initialized, but I don't
see where sc->status_wait gets initialized.

Can uninitialized wait queue operations cause memory corruption when this
field is eventually used?

commit 603b4416232524dafde8e2cf859788dae786dea1
Author: KP Singh <kpsingh@kernel.org>

bpf: Update the bpf_prog_calc_tag to use SHA256

This patch changes bpf_prog_calc_tag() to use SHA256 instead of SHA1
and modifies the bpf_prog structure to store the full SHA256 digest.
The digest and tag fields are placed in a union to maintain compatibility.

Link: https://lore.kernel.org/r/20250914215141.15144-2-kpsingh@kernel.org

> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 41f776071ff5..d75902074bd1 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -31,6 +31,7 @@
>  #include <linux/memcontrol.h>
>  #include <linux/cfi.h>
>  #include <asm/rqspinlock.h>
> +#include <crypto/sha2.h>
>
>  struct bpf_verifier_env;
>  struct bpf_verifier_log;
> @@ -1717,7 +1718,10 @@ struct bpf_prog {
>  	enum bpf_attach_type	expected_attach_type; /* For some prog types */
>  	u32			len;		/* Number of filter blocks */
>  	u32			jited_len;	/* Size of jited insns in bytes */
> -	u8			tag[BPF_TAG_SIZE];
> +	union {
> +		u8 digest[SHA256_DIGEST_SIZE];
> +		u8 tag[BPF_TAG_SIZE];
> +	};

Can this union create memory access issues? The digest array is 32 bytes
(SHA256_DIGEST_SIZE) while tag is 8 bytes (BPF_TAG_SIZE). Code accessing
prog->tag after bpf_prog_calc_tag() writes to prog->digest might read
incorrect data from the union.


-clm false positive, but not sure I want to teach claude to like unions

>  	struct bpf_prog_stats __percpu *stats;
>  	int __percpu		*active;
>  	unsigned int		(*bpf_func)(const void *ctx,

[ ... ]

> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 1cda2589d4b3..9b64674df16b 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -39,6 +39,7 @@
>  #include <linux/bpf_mem_alloc.h>
>  #include <linux/memcontrol.h>
>  #include <linux/execmem.h>
> +#include <crypto/sha2.h>
>
>  #include <asm/barrier.h>
>  #include <linux/unaligned.h>
> @@ -296,7 +297,6 @@ void __bpf_prog_free(struct bpf_prog *fp)
>  int bpf_prog_calc_tag(struct bpf_prog *fp)
>  {
>  	size_t size = bpf_prog_insn_size(fp);
> -	u8 digest[SHA1_DIGEST_SIZE];
>  	struct bpf_insn *dst;
>  	bool was_ld_map;
>  	u32 i;
> @@ -327,8 +327,7 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
>  			was_ld_map = false;
>  		}
>  	}
> -	sha1((const u8 *)dst, size, digest);
> -	memcpy(fp->tag, digest, sizeof(fp->tag));
> +	sha256((u8 *)dst, size, fp->digest);

Are prog->tag users broken by this change? The function now writes to
fp->digest but existing code accesses fp->tag. For example:

- perf_event_bpf_event() calls memcpy(bpf_event.event_id.tag, prog->tag, BPF_TAG_SIZE)
- bpf_prog_get_info_by_fd() calls memcpy(info.tag, prog->tag, sizeof(prog->tag))
- bpf_prog_get_info_by_fd() also calls copy_to_user(user_prog_tags[0], prog->tag, BPF_TAG_SIZE)

Since fp->digest gets written but fp->tag gets read, will these functions
access uninitialized memory in the tag field?

>  	vfree(dst);
>  	return 0;
>  }


Thread overview: 68+ messages
2025-10-08 17:04 [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Chris Mason
2025-10-08 17:20 ` Konstantin Ryabitsev
2025-10-08 18:11   ` Sasha Levin
2025-10-08 18:35   ` Chris Mason
2025-10-08 17:57 ` Bart Van Assche
2025-10-08 18:04   ` Chris Mason
2025-10-08 18:14     ` Bart Van Assche
2025-10-08 18:42       ` Chris Mason
2025-10-08 21:08     ` Kees Cook
2025-10-09  1:37       ` Chris Mason
2025-10-08 18:33 ` Sasha Levin
2025-10-09  1:43   ` Chris Mason
2025-10-09 14:49     ` Paul E. McKenney
2025-10-08 19:08 ` Andrew Lunn
2025-10-08 19:28   ` Arnaldo Carvalho de Melo
2025-10-08 19:33     ` Laurent Pinchart
2025-10-08 19:39       ` Arnaldo Carvalho de Melo
2025-10-08 20:29         ` Andrew Lunn
2025-10-08 20:53           ` Mark Brown
2025-10-09  9:37         ` Laurent Pinchart
2025-10-09 12:48           ` Arnaldo Carvalho de Melo
2025-10-08 19:29   ` Laurent Pinchart
2025-10-08 19:50     ` Bird, Tim
2025-10-08 20:30       ` Sasha Levin
2025-10-09 12:32         ` Arnaldo Carvalho de Melo
2025-10-08 20:30       ` James Bottomley
2025-10-08 20:38         ` Bird, Tim
2025-10-08 22:21           ` Jiri Kosina
2025-10-09  9:14           ` Laurent Pinchart
2025-10-09 10:03             ` Chris Mason
2025-10-10  7:54               ` Laurent Pinchart
2025-10-10 11:40                 ` James Bottomley
2025-10-10 11:53                   ` Laurent Pinchart
2025-10-10 14:21                     ` Steven Rostedt
2025-10-10 14:35                   ` Bird, Tim
2025-10-09 14:30             ` Steven Rostedt
2025-10-09 14:51               ` Andrew Lunn
2025-10-09 15:05                 ` Steven Rostedt
2025-10-10  7:59                 ` Laurent Pinchart
2025-10-10 14:15                   ` Bird, Tim
2025-10-10 15:07                     ` Joe Perches
2025-10-10 16:01                       ` checkpatch encouragement improvements (was RE: [MAINTAINERS / KERNEL SUMMIT] AI patch review tools) Bird, Tim
2025-10-10 17:11                         ` Rob Herring
2025-10-10 17:33                           ` Arnaldo Carvalho de Melo
2025-10-10 19:21                           ` Joe Perches
2025-10-10 16:11                       ` [MAINTAINERS / KERNEL SUMMIT] AI patch review tools Steven Rostedt
2025-10-10 16:47                         ` Joe Perches
2025-10-10 17:42                           ` Steven Rostedt
2025-10-11 10:28                         ` Mark Brown
2025-10-09 16:31               ` Chris Mason
2025-10-09 17:19                 ` Arnaldo Carvalho de Melo
2025-10-09 17:24                   ` Arnaldo Carvalho de Melo
2025-10-09 17:31                     ` Alexei Starovoitov
2025-10-09 17:47                       ` Arnaldo Carvalho de Melo
2025-10-09 18:42                     ` Chris Mason
2025-10-09 18:56                       ` Linus Torvalds
2025-10-10 15:52                         ` Mauro Carvalho Chehab
2025-10-09 14:47             ` Bird, Tim
2025-10-09 15:11               ` Andrew Lunn
2025-10-09 17:58               ` Mark Brown
2025-10-09  1:15         ` Chris Mason
2025-10-08 20:37     ` Andrew Lunn
2025-10-09 12:40       ` Arnaldo Carvalho de Melo
2025-10-09 14:52 ` Paul E. McKenney
2025-10-10  3:08 ` Krzysztof Kozlowski
2025-10-10 14:12   ` Chris Mason
2025-10-31 16:51   ` Stephen Hemminger
2025-10-14  7:16 ` Dan Carpenter
