From mboxrd@z Thu Jan  1 00:00:00 1970
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
    damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
    adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org,
    mingo@redhat.com, peterz@infradead.org, will@kernel.org,
    tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org,
    sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com,
    johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu,
    willy@infradead.org, david@fromorbit.com, amir73il@gmail.com,
    gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org,
    akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org,
    hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org,
    jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
    penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
    ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org,
    dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org,
    dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com,
    melissa.srw@gmail.com, hamohammed.sa@gmail.com, harry.yoo@oracle.com,
    chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com,
    max.byungchul.park@gmail.com, boqun.feng@gmail.com,
    longman@redhat.com, yskelg@gmail.com, yunseong.kim@ericsson.com,
    yeoreum.yun@arm.com, netdev@vger.kernel.org, matthew.brost@intel.com,
    her0gyugyu@gmail.com
Subject: [PATCH v16 28/42] dept: add documentation for dept
Date: Mon, 19 May 2025 18:18:12 +0900
Message-Id: <20250519091826.19752-29-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250519091826.19752-1-byungchul@sk.com>
References: <20250519091826.19752-1-byungchul@sk.com>

This document describes the concept and APIs of dept.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 Documentation/dependency/dept.txt     | 735 ++++++++++++++++++++++++++
 Documentation/dependency/dept_api.txt | 117 ++++
 2 files changed, 852 insertions(+)
 create mode 100644 Documentation/dependency/dept.txt
 create mode 100644 Documentation/dependency/dept_api.txt

diff --git a/Documentation/dependency/dept.txt b/Documentation/dependency/dept.txt
new file mode 100644
index 000000000000..5dd358b96734
--- /dev/null
+++ b/Documentation/dependency/dept.txt
@@ -0,0 +1,735 @@
+DEPT(DEPendency Tracker)
+========================
+
+Started by Byungchul Park <byungchul@sk.com>
+
+How lockdep works
+-----------------
+
+Lockdep detects a deadlock by checking lock acquisition order. For
+example, the graph lockdep builds to track acquisition order might
+look like:
+
+   A -> B -
+           \
+            -> E
+           /
+   C -> D -
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+Lockdep keeps adding each new acquisition order into the graph at
+runtime. For example, 'E -> C' will be added when the two locks have
+been acquired in the order, E and then C. The graph will then look
+like:
+
+   A -> B -
+           \
+            -> E -
+           /      \
+    -> C -> D -    \
+   /               /
+   \              /
+    --------------
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+This graph contains a subgraph that forms a loop:
+
+            -> E -
+           /      \
+    -> C -> D -    \
+   /               /
+   \              /
+    --------------
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+On detection of such a loop, lockdep reports a deadlock and stops
+working.
+
+CONCLUSION
+
+Lockdep detects a deadlock by checking whether a loop has been created
+after adding a new acquisition order into the graph.
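+
+For illustration, consider a minimal, hypothetical sketch - the locks
+and functions below are made up, not taken from real code - in which
+two paths acquire two mutexes in opposite orders. Once both paths have
+run, the graph contains both 'A -> B' and 'B -> A', i.e. a loop:
+
+   static DEFINE_MUTEX(lock_a);    /* hypothetical lock A */
+   static DEFINE_MUTEX(lock_b);    /* hypothetical lock B */
+
+   static void path_one(void)
+   {
+           mutex_lock(&lock_a);
+           mutex_lock(&lock_b);    /* records 'A -> B' */
+           mutex_unlock(&lock_b);
+           mutex_unlock(&lock_a);
+   }
+
+   static void path_two(void)
+   {
+           mutex_lock(&lock_b);
+           mutex_lock(&lock_a);    /* records 'B -> A': loop, reported */
+           mutex_unlock(&lock_a);
+           mutex_unlock(&lock_b);
+   }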
+
+
+Limitation of lockdep
+---------------------
+
+Lockdep deals with deadlocks caused by typical locks, e.g. spinlock
+and mutex, that are supposed to be released within the acquiring
+context. However, when it comes to a deadlock caused by a folio lock,
+which is not necessarily released within the acquiring context, or by
+other general synchronization mechanisms, lockdep doesn't work.
+
+Can lockdep detect the following deadlock?
+
+   context X              context Y              context Z
+
+                          mutex_lock A
+   folio_lock B
+                          folio_lock B <- DEADLOCK
+                                                 mutex_lock A <- DEADLOCK
+                                                 folio_unlock B
+                          folio_unlock B
+                          mutex_unlock A
+                                                 mutex_unlock A
+
+No. What about the following?
+
+   context X              context Y
+
+   mutex_lock A
+                          mutex_lock A <- DEADLOCK
+   wait_for_complete B <- DEADLOCK
+                          complete B
+                          mutex_unlock A
+   mutex_unlock A
+
+No.
+
+CONCLUSION
+
+Lockdep cannot detect a deadlock caused by folio locks or by other
+general synchronization mechanisms.
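+
+The second scenario can be expressed as code roughly like the sketch
+below (hypothetical code - the mutex and completion are made up just
+for illustration). Lockdep tracks only mutex A here; it has no idea
+that reaching complete() depends on mutex A being released first:
+
+   static DEFINE_MUTEX(lock_a);           /* hypothetical */
+   static DECLARE_COMPLETION(event_b);    /* hypothetical */
+
+   static void context_x(void)
+   {
+           mutex_lock(&lock_a);
+           wait_for_completion(&event_b); /* <- DEADLOCK */
+           mutex_unlock(&lock_a);
+   }
+
+   static void context_y(void)
+   {
+           mutex_lock(&lock_a);           /* <- DEADLOCK */
+           complete(&event_b);
+           mutex_unlock(&lock_a);
+   }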
+
+
+What leads to a deadlock
+------------------------
+
+A deadlock occurs when one or more contexts are waiting for events
+that will never happen. For example:
+
+   context X              context Y              context Z
+
+   |                      |                      |
+   v                      |                      |
+   1 wait for A           v                      |
+   .                      2 wait for C           v
+   event C                .                      3 wait for B
+                          event B                .
+                                                 event A
+
+Event C cannot be triggered because context X is stuck at 1, event B
+cannot be triggered because context Y is stuck at 2, and event A
+cannot be triggered because context Z is stuck at 3. All the contexts
+are stuck. We call this situation a *deadlock*.
+
+If the occurrence of one event is a prerequisite to reaching another
+event, we call that a *dependency*. In this example:
+
+   Event A's occurrence is a prerequisite to reaching event C.
+   Event C's occurrence is a prerequisite to reaching event B.
+   Event B's occurrence is a prerequisite to reaching event A.
+
+In terms of dependency:
+
+   Event C depends on event A.
+   Event B depends on event C.
+   Event A depends on event B.
+
+The dependency graph reflecting this example will look like:
+
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+A circular dependency exists. Such a circular dependency leads to a
+deadlock since no waiter can have its desired event triggered.
+
+CONCLUSION
+
+A circular dependency of events leads to a deadlock.
+
+
+Introduce DEPT
+--------------
+
+DEPT(DEPendency Tracker) tracks waits and events instead of lock
+acquisition order so as to recognize the following situation:
+
+   context X              context Y              context Z
+
+   |                      |                      |
+   v                      |                      |
+   wait for A             v                      |
+   .                      wait for C             v
+   event C                .                      wait for B
+                          event B                .
+                                                 event A
+
+and builds up a dependency graph at runtime, similar to lockdep's. The
+graph might look like:
+
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+DEPT keeps adding each new dependency into the graph at runtime. For
+example, 'B -> D' will be added when event D's occurrence is a
+prerequisite to reaching event B, like:
+
+   |
+   v
+   wait for D
+   .
+   event B
+
+After the addition, the graph will look like:
+
+                  -> D
+                 /
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+DEPT reports a deadlock on detection of a new loop.
+
+CONCLUSION
+
+DEPT works on waits and events so as to theoretically detect all
+potential deadlocks.
+
+
+How DEPT works
+--------------
+
+Let's take a look at how DEPT works with the first example in the
+section 'Limitation of lockdep'.
+
+   context X              context Y              context Z
+
+                          mutex_lock A
+   folio_lock B
+                          folio_lock B <- DEADLOCK
+                                                 mutex_lock A <- DEADLOCK
+                                                 folio_unlock B
+                          folio_unlock B
+                          mutex_unlock A
+                                                 mutex_unlock A
+
+Adding comments to describe DEPT's view in terms of waits and events:
+
+   context X              context Y              context Z
+
+                          mutex_lock A
+                          /* wait for A */
+                          /* start event A context */
+   folio_lock B
+   /* wait for B */
+   /* start event B context */
+                          folio_lock B
+                          /* wait for B */ <- DEADLOCK
+                          /* start event B context */
+                                                 mutex_lock A
+                                                 /* wait for A */ <- DEADLOCK
+                                                 /* start event A context */
+                                                 folio_unlock B
+                                                 /* event B */
+                          folio_unlock B
+                          /* event B */
+                          mutex_unlock A
+                          /* event A */
+                                                 mutex_unlock A
+                                                 /* event A */
+
+Adding more supplementary comments to describe DEPT's view in detail:
+
+   context X              context Y              context Z
+
+                          mutex_lock A
+                          /* might wait for A */
+                          /* start to take into account
+                             event A's context */
+                          /* 1 */
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+                          folio_lock B
+                          /* might wait for B */ <- DEADLOCK
+                          /* start to take into account
+                             event B's context */
+                          /* 3 */
+                                                 mutex_lock A
+                                                 /* might wait for A */ <- DEADLOCK
+                                                 /* start to take into account
+                                                    event A's context */
+                                                 /* 4 */
+                                                 folio_unlock B
+                                                 /* event B that's been valid since 2 */
+                          folio_unlock B
+                          /* event B that's been valid since 3 */
+                          mutex_unlock A
+                          /* event A that's been valid since 1 */
+                                                 mutex_unlock A
+                                                 /* event A that's been valid since 4 */
+
+Let's build up the dependency graph with this example. First, context
+X:
+
+   context X
+
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+
+There are no events in context X to create a dependency. Next, context
+Y:
+
+   context Y
+
+   mutex_lock A
+   /* might wait for A */
+   /* start to take into account event A's context */
+   /* 1 */
+
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 3 */
+
+   folio_unlock B
+   /* event B that's been valid since 3 */
+
+   mutex_unlock A
+   /* event A that's been valid since 1 */
+
+There are two events. For event B, folio_unlock B, there are no waits
+between 3 and the event, so event B does not create a dependency. For
+event A, there is a wait, folio_lock B, between 1 and the event. This
+means event A cannot be triggered if event B does not wake up the
+wait. Therefore, we can say event A depends on event B, written
+'A -> B'. After adding the dependency, the graph will look like:
+
+   A -> B
+
+   where 'A -> B' means that event A depends on event B.
+
+Lastly, context Z:
+
+   context Z
+
+   mutex_lock A
+   /* might wait for A */
+   /* start to take into account event A's context */
+   /* 4 */
+
+   folio_unlock B
+   /* event B that's been valid since 2 */
+
+   mutex_unlock A
+   /* event A that's been valid since 4 */
+
+There are also two events. For event B, folio_unlock B, there is a
+wait, mutex_lock A, between 2 and the event - recall that 2 comes at
+the very start, before the wait, in the timeline. This means event B
+cannot be triggered if event A does not wake up the wait. Therefore,
+we can say event B depends on event A, written 'B -> A'. After adding
+the dependency, the graph will look like:
+
+    -> A -> B -
+   /           \
+   \           /
+    -----------
+
+   where 'A -> B' means that event A depends on event B.
+
+A new loop has been created, so DEPT can report it as a deadlock. For
+event A, mutex_unlock A, there are no waits between 4 and the event,
+so event A does not create a dependency. That's it.
+
+CONCLUSION
+
+DEPT works well with any general synchronization mechanism by focusing
+on waits, events and their contexts.
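+
+The loop check itself is a classic graph problem. The following plain
+C sketch is purely illustrative - it is not DEPT's actual data
+structure or algorithm - but it shows the idea: a new dependency
+'from -> to' closes a loop exactly when 'from' is already reachable
+from 'to':
+
+   #include <stdbool.h>
+
+   #define MAX_EVENTS 4
+
+   /* edge[a][b] is true when the graph contains 'a -> b'. */
+   static bool edge[MAX_EVENTS][MAX_EVENTS];
+
+   static bool reachable(int from, int to, bool *visited)
+   {
+           if (from == to)
+                   return true;
+
+           visited[from] = true;
+           for (int next = 0; next < MAX_EVENTS; next++)
+                   if (edge[from][next] && !visited[next] &&
+                       reachable(next, to, visited))
+                           return true;
+           return false;
+   }
+
+   /* Would adding 'from -> to' create a loop, i.e. a deadlock? */
+   static bool new_dep_closes_loop(int from, int to)
+   {
+           bool visited[MAX_EVENTS] = { false };
+
+           /* A path 'to -> ... -> from' plus the new edge is a loop. */
+           return reachable(to, from, visited);
+   }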
+
+
+Interpret DEPT report
+---------------------
+
+The following is the same example as in the section 'How DEPT works'.
+
+   context X              context Y              context Z
+
+                          mutex_lock A
+                          /* might wait for A */
+                          /* start to take into account
+                             event A's context */
+                          /* 1 */
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+                          folio_lock B
+                          /* might wait for B */ <- DEADLOCK
+                          /* start to take into account
+                             event B's context */
+                          /* 3 */
+                                                 mutex_lock A
+                                                 /* might wait for A */ <- DEADLOCK
+                                                 /* start to take into account
+                                                    event A's context */
+                                                 /* 4 */
+                                                 folio_unlock B
+                                                 /* event B that's been valid since 2 */
+                          folio_unlock B
+                          /* event B that's been valid since 3 */
+                          mutex_unlock A
+                          /* event A that's been valid since 1 */
+                                                 mutex_unlock A
+                                                 /* event A that's been valid since 4 */
+
+We can simplify this by marking each wait point with [W], each point
+where an event's context starts with [S] and each event with [E].
+After the replacement, the example will look like:
+
+   context X              context Y              context Z
+
+                          [W][S] mutex_lock A
+   [W][S] folio_lock B
+                          [W][S] folio_lock B <- DEADLOCK
+
+                                                 [W][S] mutex_lock A <- DEADLOCK
+                                                 [E] folio_unlock B
+                          [E] folio_unlock B
+                          [E] mutex_unlock A
+                                                 [E] mutex_unlock A
+
+DEPT uses the symbols [W], [S] and [E] in its report as described
+above. The following is an example reported by DEPT for a real
+problem.
+
+   Link: https://lore.kernel.org/lkml/6383cde5-cf4b-facf-6e07-1378a485657d@I-love.SAKURA.ne.jp/#t
+   Link: https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/
+
+   ===================================================
+   DEPT: Circular dependency has been detected.
+   6.2.0-rc1-00025-gb0c20ebf51ac-dirty #28 Not tainted
+   ---------------------------------------------------
+   summary
+   ---------------------------------------------------
+   *** DEADLOCK ***
+
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S]: start of the event context
+   [W]: the wait blocked
+   [E]: the event not reachable
+   ---------------------------------------------------
+   context A's detail
+   ---------------------------------------------------
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   [S] lock(&ni->ni_lock:0):
+   [] ntfs3_setattr+0x54b/0xd40
+   stacktrace:
+       ntfs3_setattr+0x54b/0xd40
+       notify_change+0xcb3/0x1430
+       do_truncate+0x149/0x210
+       path_openat+0x21a3/0x2a90
+       do_filp_open+0x1ba/0x410
+       do_sys_openat2+0x16d/0x4e0
+       __x64_sys_creat+0xcd/0x120
+       do_syscall_64+0x41/0xc0
+       entry_SYSCALL_64_after_hwframe+0x63/0xcd
+
+   [W] folio_wait_bit_common(PG_locked_map:0):
+   [] truncate_inode_pages_range+0x9b0/0xf20
+   stacktrace:
+       folio_wait_bit_common+0x5e0/0xaf0
+       truncate_inode_pages_range+0x9b0/0xf20
+       truncate_pagecache+0x67/0x90
+       ntfs3_setattr+0x55a/0xd40
+       notify_change+0xcb3/0x1430
+       do_truncate+0x149/0x210
+       path_openat+0x21a3/0x2a90
+       do_filp_open+0x1ba/0x410
+       do_sys_openat2+0x16d/0x4e0
+       __x64_sys_creat+0xcd/0x120
+       do_syscall_64+0x41/0xc0
+       entry_SYSCALL_64_after_hwframe+0x63/0xcd
+
+   [E] unlock(&ni->ni_lock:0):
+   (N/A)
+   ---------------------------------------------------
+   context B's detail
+   ---------------------------------------------------
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S] (unknown)(PG_locked_map:0):
+   (N/A)
+
+   [W] lock(&ni->ni_lock:0):
+   [] attr_data_get_block+0x32c/0x19f0
+   stacktrace:
+       attr_data_get_block+0x32c/0x19f0
+       ntfs_get_block_vbo+0x264/0x1330
+       __block_write_begin_int+0x3bd/0x14b0
+       block_write_begin+0xb9/0x4d0
+       ntfs_write_begin+0x27e/0x480
+       generic_perform_write+0x256/0x570
+       __generic_file_write_iter+0x2ae/0x500
+       ntfs_file_write_iter+0x66d/0x1d70
+       do_iter_readv_writev+0x20b/0x3c0
+       do_iter_write+0x188/0x710
+       vfs_iter_write+0x74/0xa0
+       iter_file_splice_write+0x745/0xc90
+       direct_splice_actor+0x114/0x180
+       splice_direct_to_actor+0x33b/0x8b0
+       do_splice_direct+0x1b7/0x280
+       do_sendfile+0xb49/0x1310
+
+   [E] folio_unlock(PG_locked_map:0):
+   [] generic_write_end+0xf2/0x440
+   stacktrace:
+       generic_write_end+0xf2/0x440
+       ntfs_write_end+0x42e/0x980
+       generic_perform_write+0x316/0x570
+       __generic_file_write_iter+0x2ae/0x500
+       ntfs_file_write_iter+0x66d/0x1d70
+       do_iter_readv_writev+0x20b/0x3c0
+       do_iter_write+0x188/0x710
+       vfs_iter_write+0x74/0xa0
+       iter_file_splice_write+0x745/0xc90
+       direct_splice_actor+0x114/0x180
+       splice_direct_to_actor+0x33b/0x8b0
+       do_splice_direct+0x1b7/0x280
+       do_sendfile+0xb49/0x1310
+       __x64_sys_sendfile64+0x1d0/0x210
+       do_syscall_64+0x41/0xc0
+       entry_SYSCALL_64_after_hwframe+0x63/0xcd
+   ---------------------------------------------------
+   information that might be helpful
+   ---------------------------------------------------
+   CPU: 1 PID: 8060 Comm: a.out Not tainted
+   6.2.0-rc1-00025-gb0c20ebf51ac-dirty #28
+   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
+   BIOS Bochs 01/01/2011
+   Call Trace:
+
+    dump_stack_lvl+0xf2/0x169
+    print_circle.cold+0xca4/0xd28
+    ? lookup_dep+0x240/0x240
+    ? extend_queue+0x223/0x300
+    cb_check_dl+0x1e7/0x260
+    bfs+0x27b/0x610
+    ? print_circle+0x240/0x240
+    ? llist_add_batch+0x180/0x180
+    ? extend_queue_rev+0x300/0x300
+    ? __add_dep+0x60f/0x810
+    add_dep+0x221/0x5b0
+    ? __add_idep+0x310/0x310
+    ? add_iecxt+0x1bc/0xa60
+    ? add_iecxt+0x1bc/0xa60
+    ? add_iecxt+0x1bc/0xa60
+    ? add_iecxt+0x1bc/0xa60
+    __dept_wait+0x600/0x1490
+    ? add_iecxt+0x1bc/0xa60
+    ? truncate_inode_pages_range+0x9b0/0xf20
+    ? check_new_class+0x790/0x790
+    ? dept_enirq_transition+0x519/0x9c0
+    dept_wait+0x159/0x3b0
+    ? truncate_inode_pages_range+0x9b0/0xf20
+    folio_wait_bit_common+0x5e0/0xaf0
+    ? filemap_get_folios_contig+0xa30/0xa30
+    ? dept_enirq_transition+0x519/0x9c0
+    ? lock_is_held_type+0x10e/0x160
+    ? lock_is_held_type+0x11e/0x160
+    truncate_inode_pages_range+0x9b0/0xf20
+    ? truncate_inode_partial_folio+0xba0/0xba0
+    ? setattr_prepare+0x142/0xc40
+    truncate_pagecache+0x67/0x90
+    ntfs3_setattr+0x55a/0xd40
+    ? ktime_get_coarse_real_ts64+0x1e5/0x2f0
+    ? ntfs_extend+0x5c0/0x5c0
+    ? mode_strip_sgid+0x210/0x210
+    ? ntfs_extend+0x5c0/0x5c0
+    notify_change+0xcb3/0x1430
+    ? do_truncate+0x149/0x210
+    do_truncate+0x149/0x210
+    ? file_open_root+0x430/0x430
+    ? process_measurement+0x18c0/0x18c0
+    ? ntfs_file_release+0x230/0x230
+    path_openat+0x21a3/0x2a90
+    ? path_lookupat+0x840/0x840
+    ? dept_enirq_transition+0x519/0x9c0
+    ? lock_is_held_type+0x10e/0x160
+    do_filp_open+0x1ba/0x410
+    ? may_open_dev+0xf0/0xf0
+    ? find_held_lock+0x2d/0x110
+    ? lock_release+0x43c/0x830
+    ? dept_ecxt_exit+0x31a/0x590
+    ? _raw_spin_unlock+0x3b/0x50
+    ? alloc_fd+0x2de/0x6e0
+    do_sys_openat2+0x16d/0x4e0
+    ? __ia32_sys_get_robust_list+0x3b0/0x3b0
+    ? build_open_flags+0x6f0/0x6f0
+    ? dept_enirq_transition+0x519/0x9c0
+    ? dept_enirq_transition+0x519/0x9c0
+    ? lock_is_held_type+0x4e/0x160
+    ? lock_is_held_type+0x4e/0x160
+    __x64_sys_creat+0xcd/0x120
+    ? __x64_compat_sys_openat+0x1f0/0x1f0
+    do_syscall_64+0x41/0xc0
+    entry_SYSCALL_64_after_hwframe+0x63/0xcd
+   RIP: 0033:0x7f8b9e4e4469
+   Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48
+   89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48>
+   3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
+   RSP: 002b:00007f8b9eea4ef8 EFLAGS: 00000202 ORIG_RAX: 0000000000000055
+   RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9e4e4469
+   RDX: 0000000000737562 RSI: 0000000000000000 RDI: 0000000020000000
+   RBP: 00007f8b9eea4f20 R08: 0000000000000000 R09: 0000000000000000
+   R10: 0000000000000000 R11: 0000000000000202 R12: 00007fffa75511ee
+   R13: 00007fffa75511ef R14: 00007f8b9ee85000 R15: 0000000000000003
+
+Let's take a look at the summary, which is the most important part.
+
+   ---------------------------------------------------
+   summary
+   ---------------------------------------------------
+   *** DEADLOCK ***
+
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S]: start of the event context
+   [W]: the wait blocked
+   [E]: the event not reachable
+
+The summary shows the following scenario:
+
+   context A              context B              context ?(unknown)
+
+                                                 [S] folio_lock(&f1)
+   [S] lock(&ni->ni_lock:0)
+   [W] folio_wait_bit_common(PG_locked_map:0)
+                          [W] lock(&ni->ni_lock:0)
+                          [E] folio_unlock(&f1)
+
+   [E] unlock(&ni->ni_lock:0)
+
+Adding supplementary comments to describe DEPT's view in detail:
+
+   context A              context B              context ?(unknown)
+
+                                                 [S] folio_lock(&f1)
+                                                 /* start to take into account context
+                                                    B heading for folio_unlock(&f1) */
+                                                 /* 1 */
+   [S] lock(&ni->ni_lock:0)
+   /* start to take into account this context heading for
+      unlock(&ni->ni_lock:0) */
+   /* 2 */
+
+   [W] folio_wait_bit_common(PG_locked_map:0) (= folio_lock(&f1))
+   /* might wait for folio_unlock(&f1) */
+
+                          [W] lock(&ni->ni_lock:0)
+                          /* might wait for unlock(&ni->ni_lock:0) */
+
+                          [E] folio_unlock(&f1)
+                          /* event that's been valid since 1 */
+
+   [E] unlock(&ni->ni_lock:0)
+   /* event that's been valid since 2 */
+
+Let's build up the dependency graph with this report. First, context
+A:
+
+   context A
+
+   [S] lock(&ni->ni_lock:0)
+   /* start to take into account this context heading for
+      unlock(&ni->ni_lock:0) */
+   /* 2 */
+
+   [W] folio_wait_bit_common(PG_locked_map:0) (= folio_lock(&f1))
+   /* might wait for folio_unlock(&f1) */
+
+   [E] unlock(&ni->ni_lock:0)
+   /* event that's been valid since 2 */
+
+There is one interesting event, unlock(&ni->ni_lock:0). There is a
+wait, folio_lock(&f1), between 2 and the event. This means
+unlock(&ni->ni_lock:0) is not reachable if folio_unlock(&f1) does not
+wake up the wait. Therefore, we can say unlock(&ni->ni_lock:0) depends
+on folio_unlock(&f1), written
+'unlock(&ni->ni_lock:0) -> folio_unlock(&f1)'. After adding the
+dependency, the graph will look like:
+
+   unlock(&ni->ni_lock:0) -> folio_unlock(&f1)
+
+   where 'A -> B' means that event A depends on event B.
+
+Second, context B:
+
+   context B
+
+   [W] lock(&ni->ni_lock:0)
+   /* might wait for unlock(&ni->ni_lock:0) */
+
+   [E] folio_unlock(&f1)
+   /* event that's been valid since 1 */
+
+There is also one interesting event, folio_unlock(&f1). There is a
+wait, lock(&ni->ni_lock:0), between 1 and the event - recall that 1
+comes at the very start, before the wait, in the timeline. This means
+folio_unlock(&f1) is not reachable if unlock(&ni->ni_lock:0) does not
+wake up the wait. Therefore, we can say folio_unlock(&f1) depends on
+unlock(&ni->ni_lock:0), written
+'folio_unlock(&f1) -> unlock(&ni->ni_lock:0)'. After adding the
+dependency, the graph will look like:
+
+    -> unlock(&ni->ni_lock:0) -> folio_unlock(&f1) -
+   /                                                \
+   \                                                /
+    ------------------------------------------------
+
+   where 'A -> B' means that event A depends on event B.
+
+A new loop has been created, so DEPT can report it as a deadlock! Cool!
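+
+To recap the reported scenario, hypothetical code shaped like the two
+contexts might look as follows. The function bodies are illustrative
+only - they are not the real ntfs3 call chains - and only serve to
+place the [S], [W] and [E] points:
+
+   static void context_a(struct ntfs_inode *ni, struct folio *folio)
+   {
+           ni_lock(ni);         /* [S] event context of ni_unlock() */
+           folio_lock(folio);   /* [W] waits for folio_unlock() */
+           folio_unlock(folio);
+           ni_unlock(ni);       /* [E] not reachable while [W] blocks */
+   }
+
+   static void context_b(struct ntfs_inode *ni, struct folio *folio)
+   {
+           /* the folio was locked earlier on this path: [S] */
+           ni_lock(ni);         /* [W] waits for ni_unlock() */
+           folio_unlock(folio); /* [E] not reachable while [W] blocks */
+           ni_unlock(ni);
+   }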
+
+CONCLUSION
+
+DEPT works awesome!
diff --git a/Documentation/dependency/dept_api.txt b/Documentation/dependency/dept_api.txt
new file mode 100644
index 000000000000..8e0d5a118a46
--- /dev/null
+++ b/Documentation/dependency/dept_api.txt
@@ -0,0 +1,117 @@
+DEPT(DEPendency Tracker) APIs
+=============================
+
+Started by Byungchul Park <byungchul@sk.com>
+
+SDT(Single-event Dependency Tracker) APIs
+-----------------------------------------
+Use these APIs to annotate either a wait or an event. They have
+already been applied to existing synchronization primitives, e.g.
+waitqueue, swait, wait_for_completion(), dma fence and so on. The
+basic APIs of SDT are:
+
+   /*
+    * After defining 'struct dept_map map', initialize the instance.
+    */
+   sdt_map_init(map);
+
+   /*
+    * Place just before the interesting wait.
+    */
+   sdt_wait(map);
+
+   /*
+    * Place just before the interesting event.
+    */
+   sdt_event(map);
+
+The advanced APIs of SDT are:
+
+   /*
+    * After defining 'struct dept_map map', initialize the instance
+    * using an external key.
+    */
+   sdt_map_init_key(map, key);
+
+   /*
+    * Place just before the interesting timeout wait.
+    */
+   sdt_wait_timeout(map, time);
+
+   /*
+    * Use sdt_might_sleep_start() and sdt_might_sleep_end() in a pair.
+    * Place at the start of the interesting section that might enter
+    * schedule() or its family that needs to be woken up by
+    * try_to_wake_up().
+    */
+   sdt_might_sleep_start(map);
+
+   /*
+    * Use sdt_might_sleep_start_timeout() and sdt_might_sleep_end() in
+    * a pair. Place at the start of the interesting section that might
+    * enter schedule_timeout() or its family that needs to be woken up
+    * by try_to_wake_up().
+    */
+   sdt_might_sleep_start_timeout(map, time);
+
+   /*
+    * Use sdt_might_sleep_end() paired with either
+    * sdt_might_sleep_start() or sdt_might_sleep_start_timeout().
+    * Place at the end of the interesting section that might enter
+    * schedule(), schedule_timeout() or their family that needs to be
+    * woken up by try_to_wake_up().
+    */
+   sdt_might_sleep_end();
+
+   /*
+    * Use sdt_ecxt_enter() and sdt_ecxt_exit() in a pair. Place at the
+    * start of the interesting section where the interesting event
+    * might be triggered.
+    */
+   sdt_ecxt_enter(map);
+
+   /*
+    * Use sdt_ecxt_enter() and sdt_ecxt_exit() in a pair. Place at the
+    * end of the interesting section where the interesting event might
+    * be triggered.
+    */
+   sdt_ecxt_exit(map);
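+
+As an illustration of the basic APIs, a custom flag-based gate could
+be annotated roughly as follows. This is a hypothetical sketch - the
+'my_gate' type and its helpers are made up, and the exact calling
+convention of the SDT macros should be checked against the headers:
+
+   struct my_gate {
+           bool                    open;
+           wait_queue_head_t       wq;
+           struct dept_map         dmap;   /* for DEPT annotation */
+   };
+
+   static void my_gate_init(struct my_gate *g)
+   {
+           g->open = false;
+           init_waitqueue_head(&g->wq);
+           sdt_map_init(&g->dmap);
+   }
+
+   static void my_gate_wait(struct my_gate *g)
+   {
+           sdt_wait(&g->dmap);             /* just before the wait */
+           wait_event(g->wq, g->open);
+   }
+
+   static void my_gate_open(struct my_gate *g)
+   {
+           sdt_event(&g->dmap);            /* just before the event */
+           g->open = true;
+           wake_up(&g->wq);
+   }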
+
+
+LDT(Lock Dependency Tracker) APIs
+---------------------------------
+Do not use these APIs directly. They are wrappers for typical locks
+and have already been applied internally to the major locks, e.g.
+spinlock, mutex, rwlock and so on. The APIs of LDT are:
+
+   ldt_init(map, key, sub, name);
+   ldt_lock(map, sub_local, try, nest, ip);
+   ldt_rlock(map, sub_local, try, nest, ip, queued);
+   ldt_wlock(map, sub_local, try, nest, ip);
+   ldt_unlock(map, ip);
+   ldt_downgrade(map, ip);
+   ldt_set_class(map, name, key, sub_local, ip);
+
+
+Raw APIs
+--------
+Do not use these APIs directly. The raw APIs of dept are:
+
+   dept_free_range(start, size);
+   dept_map_init(map, key, sub, name);
+   dept_map_reinit(map, key, sub, name);
+   dept_ext_wgen_init(ext_wgen);
+   dept_map_copy(map_to, map_from);
+   dept_wait(map, wait_flags, ip, wait_func, sub_local, time);
+   dept_stage_wait(map, key, ip, wait_func, time);
+   dept_request_event_wait_commit();
+   dept_clean_stage();
+   dept_stage_event(task, ip);
+   dept_ecxt_enter(map, evt_flags, ip, ecxt_func, evt_func, sub_local);
+   dept_ecxt_holding(map, evt_flags);
+   dept_request_event(map, ext_wgen);
+   dept_event(map, evt_flags, ip, evt_func, ext_wgen);
+   dept_ecxt_exit(map, evt_flags, ip);
+   dept_ecxt_enter_nokeep(map);
+   dept_key_init(key);
+   dept_key_destroy(key);
+   dept_map_ecxt_modify(map, cur_evt_flags, key, evt_flags, ip,
+                        ecxt_func, evt_func, sub_local);
-- 
2.17.1