> A lot of the problems accepting patches into the kernel stem from trying
> to get them reviewed

So true.

> I'm really looking for evidence of actually having read (and
> understood) the patch, so the best review usually comes with a
> sequence of comments, questions and minor nits and a reviewed-by at
> the end.

Yes, this is good (and as much needed as the submitter describing the
tests done for a patch). Still, trust is an issue here. Even with the
comments you described above, if the review comes from a person you
don't know, the review itself needs to be reviewed. Depending on the
reviewer, that might take more time than doing the review on your own.
It pays off if the person stays. If not, well, I still see this as
something a maintainer should do, simply to educate people. It just
might not help getting patches reviewed faster.

> review pool, so we need more than the Reviewed-by: tag. It has been
> suggested that people who submit patches should also be required to
> review patches from other people

Eeks. Enforced reviews are likely to be sloppy, IMO. And sloppy reviews
can easily cause more work, just like sloppy documentation.

> conferences and development sprints which do hack time. Another
> possibility is to publish a subsystem to review list (similar to the
> to do list idea). This at least shows submitters why their patch
> series is languishing and could serve as a call to arms if it gets too
> long.

For those using patchwork, such lists are already on the web and
referenced in MAINTAINERS. Submitters don't use them much (yet), in my
experience.

The thing I'd like to see much more of in the Linux ecosystem: paid
reviewers/maintainers (selected people, not hiring offers). The number
of developers increases faster than the number of quality keepers, so
the latter should be given the chance to focus on that work, if they
want to.

Wolfram