From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 26 Aug 2016 13:55:27 +0200
From: Julia Lawall
To: Greg KH
Cc: ksummit-discuss@lists.linuxfoundation.org, "Levin, Alexander"
In-Reply-To: <20160826112635.GA27627@kroah.com>
References: <20160826044651.GA25341@sasha-lappy> <20160826112635.GA27627@kroah.com>
Subject: Re: [Ksummit-discuss] Self nomination - Sasha Levin

Not sure if I was clear about what I was asking you to agree to :)

Basically, we can take the patches sent to stable and the patches not
sent to stable as a training set, but then the machine learning comes up
with some algorithm that produces some results. An expert is needed to
evaluate those results. That is, for a thousand (number chosen at random)
patches, if the algorithm says a patch is a bug-fixing patch, is it or
isn't it, and vice versa.

Of course, we could also evaluate on patches that have and have not
previously been sent to stable, but there is a problem there: our goal is
to have more patches sent to stable than are already being sent, so we
need to show that the algorithm can capture what humans are missing.

julia
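
For concreteness, here is a minimal sketch of what such a train-then-expert-review loop could look like. It is illustrative only: the bag-of-words classifier, the toy commit messages, and the sample size are assumptions for the example, not anything decided in this thread.

    # Illustrative sketch: train a text classifier on commit messages
    # labeled by whether the patch was sent to stable, then sample the
    # patches it flags as bug fixes so an expert can judge each one.
    import random
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: (commit message, 1 = sent to stable, 0 = not).
    train_msgs = ["fix NULL deref in foo driver", "add new sysfs knob for bar"]
    train_labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_msgs, train_labels)

    # Patches not yet sent to stable; keep the ones the model flags as fixes.
    candidate_msgs = ["fix use-after-free in baz", "clean up whitespace"]
    flagged = [m for m, p in zip(candidate_msgs, model.predict(candidate_msgs))
               if p == 1]

    # Draw a random sample (up to 1000) for expert evaluation: is each one
    # really a bug-fixing patch or not?
    for msg in random.sample(flagged, min(1000, len(flagged))):
        print("expert review needed:", msg)

The point of the sample is the manual step at the end: the model's held-out accuracy against existing stable tags is not enough, because the interesting cases are exactly the ones the current human process missed.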