From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 12 Oct 2022 19:12:59 +0530
Subject: Re: [PATCH] locking/rwsem: Prevent non-first waiter from spinning in down_write() slowpath
From: Mukesh Ojha
To: Hillf Danton
Cc: Waiman Long, Peter Zijlstra, Will Deacon, Boqun Feng, Ingo Molnar
References: <20221012040410.403-1-hdanton@sina.com>
In-Reply-To: <20221012040410.403-1-hdanton@sina.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"; format=flowed
Hi,

On 10/12/2022 9:34 AM, Hillf Danton wrote:
> On 11 Oct 2022 18:46:20 +0530 Mukesh Ojha
>> On 10/11/2022 4:16 PM, Hillf Danton wrote:
>>> On 10/10/22 06:24 Mukesh Ojha
>>>> Hi Waiman,
>>>>
>>>> On 9/29/2022 11:36 PM, Waiman Long wrote:
>>>>> On 9/29/22 14:04, Waiman Long wrote:
>>>>>> A non-first waiter can potentially spin in the for loop of
>>>>>> rwsem_down_write_slowpath() without sleeping, but fail to acquire
>>>>>> the lock even if the rwsem is free, if the following sequence
>>>>>> happens:
>>>>>>
>>>>>>   Non-first waiter             First waiter            Lock holder
>>>>>>   ----------------             ------------            -----------
>>>>>>                                Acquire wait_lock
>>>>>>                                rwsem_try_write_lock():
>>>>>>                                  Set handoff bit if RT or
>>>>>>                                    wait too long
>>>>>>                                  Set waiter->handoff_set
>>>>>>                                Release wait_lock
>>>>>>
>>>>>>   Acquire wait_lock
>>>>>>   Inherit waiter->handoff_set
>>>>>>   Release wait_lock
>>>>>>                                                        Clear owner
>>>>>>                                                        Release lock
>>>>>>   if (waiter.handoff_set) {
>>>>>>     rwsem_spin_on_owner();
>>>>>>     if (OWNER_NULL)
>>>>>>       goto trylock_again;
>>>>>>   }
>>>>>>
>>>>>> trylock_again:
>>>>>>   Acquire wait_lock
>>>>>>   rwsem_try_write_lock():
>>>>>>     if (first->handoff_set && (waiter != first))
>>>>>>       return false;
>>>>>>   Release wait_lock
>>>>>>
>>>>>> It is especially problematic if the non-first waiter is an RT task
>>>>>> and it is running on the same CPU as the first waiter, as this can
>>>>>> lead to a live lock.
>>>>>>
>>>>>> Fixes: d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling more consistent")
>>>>>> Signed-off-by: Waiman Long
>>>>>> ---
>>>>>>  kernel/locking/rwsem.c | 13 ++++++++++---
>>>>>>  1 file changed, 10 insertions(+), 3 deletions(-)
>>>>>
>>>>> Mukesh, can you test if this patch can fix the RT task lockup problem?
>>>>>
>>>>
>>>> Looks like there is still a window for a race.
>>>>
>>>> There is a chance that a reader who came first adds its BIAS and
>>>> goes to the slowpath, and before it gets added to the wait list it
>>>> is preempted by an RT task which goes to the slowpath as well and,
>>>> being the first waiter, gets its hand-off bit set but is not able to
>>>> get the lock due to the following condition in
>>>> rwsem_try_write_lock():
>>
>> []
>>
>>>> 630         if (count & RWSEM_LOCK_MASK) {   ==> reader has set its bias
>>>> ...
>>>> 634
>>>> 635                 new |= RWSEM_FLAG_HANDOFF;
>>>> 636         } else {
>>>> 637                 new |= RWSEM_WRITER_LOCKED;
>>>>
>>>> ----------------------->----------------------->-------------------------
>>>> First reader (1)         Writer (2), RT task      Lock holder (3)
>>>>
>>>> It sets
>>>> RWSEM_READER_BIAS.
>>>> While it is going to
>>>> the slowpath (as the
>>>> lock was held by (3)),
>>>> and before it got added
>>>> to the waiters list,
>>>> it got preempted
>>>> by (2).
>>>>                          RT task also takes       Release the
>>>>                          the slowpath and adds    rwsem lock,
>>>>                          itself to the waiting    clear the
>>>>                          list, and since it is    owner.
>>>>                          the first waiter it is
>>>>                          the next one to get
>>>>                          the lock, but it cannot
>>>>                          get the lock as (count &
>>>>                          RWSEM_LOCK_MASK) is set,
>>>>                          because (1) has added its
>>>>                          bias but was not able to
>>>>                          remove the adjustment.
>>
>> []
>>
>>> Hey Mukesh,
>>>
>>> Can you test the diff if it makes sense to you?
>>>
>>> It simply prevents the first waiter from spinning any longer after
>>> detecting that it barely makes any progress spinning without a lock
>>> owner.
>>>
>>> Hillf
>>>
>>> --- mainline/kernel/locking/rwsem.c
>>> +++ b/kernel/locking/rwsem.c
>>> @@ -611,26 +611,15 @@ static inline bool rwsem_try_write_lock(
>>>  	long count, new;
>>>
>>>  	lockdep_assert_held(&sem->wait_lock);
>>> +	waiter->handoff_set = false;
>>>
>>>  	count = atomic_long_read(&sem->count);
>>>  	do {
>>>  		bool has_handoff = !!(count & RWSEM_FLAG_HANDOFF);
>>>
>>>  		if (has_handoff) {
>>> -			/*
>>> -			 * Honor handoff bit and yield only when the first
>>> -			 * waiter is the one that set it. Otherwisee, we
>>> -			 * still try to acquire the rwsem.
>>> -			 */
>>> -			if (first->handoff_set && (waiter != first))
>>> +			if (waiter != first)
>>>  				return false;
>>
>> You mean, you want to check and change waiter->handoff_set on every run
>> of rwsem_try_write_lock()?
>>
> Yes. With RWSEM_FLAG_HANDOFF set it is too late for non-first waiters to
> spin, and with both RWSEM_LOCK_MASK and RWSEM_FLAG_HANDOFF set, the
> rivals counted in RWSEM_LOCK_MASK have the upper hand over the first
> waiter wrt acquiring the lock, so it is not a bad option for the first
> waiter to take a step back off.
>
> 	if (count & RWSEM_LOCK_MASK) {
> 		if (has_handoff || (!rt_task(waiter->task) &&
> 				    !time_after(jiffies, waiter->timeout)))
> 			return false;
>
> 		new |= RWSEM_FLAG_HANDOFF;
> 	} else {
>
>> But does it break optimistic spinning? @waiman ?
>
> Waiters spin to acquire the lock instead of locking up, and your report
> shows that spinning too much makes trouble. The key is to stop spinning
> neither too late nor too early. My proposal is a simple one, with as few
> heuristics added as possible.

From the high level, it looks like it will work. Let me check and get back
on this.

-Mukesh

>
> Hillf