From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 19 Feb 2025 11:33:31 -0500
From: Steven Rostedt
To: Willy Tarreau
Cc: Laurent Pinchart, James Bottomley, "Martin K. Petersen",
 Dan Carpenter, Christoph Hellwig, Miguel Ojeda, rust-for-linux,
 Linus Torvalds, Greg KH, David Airlie, linux-kernel@vger.kernel.org,
 ksummit@lists.linux.dev
Subject: Re: Rust kernel policy
Message-ID: <20250219113331.17f014f4@gandalf.local.home>
In-Reply-To: <20250219161543.GI19203@1wt.eu>
References: <2bcf7cb500403cb26ad04934e664f34b0beafd18.camel@HansenPartnership.com>
 <20250219153350.GG19203@1wt.eu>
 <20250219155617.GH19203@1wt.eu>
 <20250219160723.GB11480@pendragon.ideasonboard.com>
 <20250219161543.GI19203@1wt.eu>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 19 Feb 2025 17:15:43 +0100
Willy Tarreau wrote:

> Yeah absolutely. However, I remember having faced code in the past where
> developers had abused this "unlock on return" concept, resulting in locks
> lazily being kept way too long after an operation. I don't think this
> will happen in the kernel thanks to reviews, but typically all the stuff
> that's done after a locked retrieval would normally be done outside
> of the lock, while here, for the sake of not dealing with unlocks, quite
> a few lines were still covered by the lock for no purpose. Anyway,
> there's no perfect solution.

This was one of my concerns, and it does crop up occasionally (even in my
own use cases where I implemented them!). But we should be encouraging the
use of:

	scoped_guard(mutex, &my_mutex) {
		/* Do the work needed for my_mutex */
	}

which works out very well. And since the code guarded by the mutex is now
also indented, it is easier to review.

-- Steve