Message-ID: <99c44790-5f1b-4535-9858-c5e9c752159c@quicinc.com>
Date: Wed, 3 Jan 2024 10:58:33 +0800
Subject: Re: [PATCH] kernel: Introduce a write lock/unlock wrapper for tasklist_lock
From: "Aiqun Yu (Maria)" <quic_aiquny@quicinc.com>
To: Matthew Wilcox
CC: "Eric W. Biederman", Hillf Danton, linux-mm@kvack.org
References: <20231213101745.4526-1-quic_aiquny@quicinc.com> <87o7eu7ybq.fsf@email.froward.int.ebiederm.org>
Content-Type: text/plain; charset="UTF-8"; format=flowed
On 1/2/2024 5:14 PM, Matthew Wilcox wrote:
> On Tue, Jan 02, 2024 at 10:19:47AM +0800, Aiqun Yu (Maria) wrote:
>> On 12/29/2023 6:20 AM, Matthew Wilcox wrote:
>>> On Wed, Dec 13, 2023 at 12:27:05PM -0600, Eric W. Biederman wrote:
>>>> Matthew Wilcox writes:
>>>>> I think the right way to fix this is to pass a boolean flag to
>>>>> queued_write_lock_slowpath() to let it know whether it can re-enable
>>>>> interrupts while checking whether _QW_WAITING is set.
>>>>
>>>> Yes.  It seems to make sense to distinguish between write_lock_irq and
>>>> write_lock_irqsave and fix this for all of write_lock_irq.
>>>
>>> I wasn't planning on doing anything here, but Hillf kind of pushed me into
>>> it.  I think it needs to be something like this.  Compile tested only.
>>> If it ends up getting used,
>> Happy new year!
>
> Thank you!  I know your new year is a few weeks away still ;-)

Yeah, Chinese New Year comes about 5 weeks later. :)

>>> -void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
>>> +void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
>>>  {
>>>  	int cnts;
>>> @@ -82,7 +83,11 @@ void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
>> Also, a new state shows up with the proposed design:
>> 1. The locked flag is set with _QW_WAITING, while irqs are enabled.
>> 2. And this state can only be observed from interrupt context.
>> 3. lock->wait_lock is held by the write waiter.
>> So per my understanding, queued_write_lock_slowpath() also needs
>> different behavior: when (unlikely(in_interrupt())), take the lock
>> directly.
>
> I don't think so.  Remember that write_lock_irq() can only be called in
> process context, and when interrupts are enabled.

In the current kernel I can see the same lock taken with write_lock_irq()
in some drivers and with write_lock_irqsave() in others. And this is the
scenario I am talking about:
1. cpu0 runs a task which calls write_lock_irq() (not in interrupt context).
2. cpu0 holds lock->wait_lock and re-enables interrupts.
   (This is the new state: _QW_WAITING set, lock->wait_lock locked,
   interrupts enabled.)
3. cpu0 takes an interrupt, and the handler calls write_lock_irqsave().
4. cpu0 tries to acquire lock->wait_lock again.
I was thinking of supporting both write_lock_irq() and write_lock_irqsave()
with interrupts enabled in queued_write_lock_slowpath(). That's why I am
suggesting that write_lock_irqsave(), when in_interrupt(), should spin on
lock->cnts directly instead of spinning on lock->wait_lock.

>
>> So this also needs handling in the release path. This is to address
>> Hillf's concern about the possibility of deadlock.
>
> Hillf's concern is invalid.
>
>>>  	/* When no more readers or writers, set the locked flag */
>>>  	do {
>>> +		if (irq)
>>> +			local_irq_enable();
>> I think write_lock_irqsave() also needs to be taken into account, so
>> local_irq_save(flags) should be handled here.
>
> If we did want to support the same kind of spinning with interrupts
> enabled for write_lock_irqsave(), we'd want to pass the flags in
> and do local_irq_restore(), but I don't know how we'd support
> write_lock_irq() if we did that -- can we rely on passing in 0 for flags
> meaning "reenable" on all architectures?  And ~0 meaning "don't
> reenable" on all architectures?

What about having write_lock_irq() also pass real flags, taken from
local_irq_save(flags), into queued_write_lock_slowpath()? Then we would
not depend on arch-specific flag values never being 0 or ~0.

>
> That all seems complicated, so I didn't do that.

This is complicated, and it would also need testing to verify. The more
careful the design, the better.

Fixed previous wrong email address. ^-^!

-- 
Thx and BRs,
Aiqun(Maria) Yu