From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org,
mingo@redhat.com, peterz@infradead.org, will@kernel.org,
tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org,
sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com,
johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu,
willy@infradead.org, david@fromorbit.com, amir73il@gmail.com,
gregkh@linuxfoundation.org, kernel-team@lge.com,
linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
sj@kernel.org, jglisse@redhat.com, dennis@kernel.org,
cl@linux.com, penberg@kernel.org, rientjes@google.com,
vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org,
josef@toxicpanda.com, linux-fsdevel@vger.kernel.org,
jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com,
hch@infradead.org, djwong@kernel.org,
dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com,
melissa.srw@gmail.com, hamohammed.sa@gmail.com,
harry.yoo@oracle.com, chris.p.wilson@intel.com,
gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
boqun.feng@gmail.com, longman@redhat.com,
yunseong.kim@ericsson.com, ysk@kzalloc.com, yeoreum.yun@arm.com,
netdev@vger.kernel.org, matthew.brost@intel.com,
her0gyugyu@gmail.com, corbet@lwn.net, catalin.marinas@arm.com,
bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
hpa@zytor.com, luto@kernel.org, sumit.semwal@linaro.org,
gustavo@padovan.org, christian.koenig@amd.com,
andi.shyti@kernel.org, arnd@arndb.de, lorenzo.stoakes@oracle.com,
Liam.Howlett@oracle.com, rppt@kernel.org, surenb@google.com,
mcgrof@kernel.org, petr.pavlu@suse.com, da.gomez@kernel.org,
samitolvanen@google.com, paulmck@kernel.org, frederic@kernel.org,
neeraj.upadhyay@kernel.org, joelagnelf@nvidia.com,
josh@joshtriplett.org, urezki@gmail.com,
mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
qiang.zhang@linux.dev, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
bsegall@google.com, mgorman@suse.de, vschneid@redhat.com,
chuck.lever@oracle.com, neil@brown.name, okorniev@redhat.com,
Dai.Ngo@oracle.com, tom@talpey.com, trondmy@kernel.org,
anna@kernel.org, kees@kernel.org, bigeasy@linutronix.de,
clrkwllms@kernel.org, mark.rutland@arm.com,
ada.coupriediaz@arm.com, kristina.martsenko@arm.com,
wangkefeng.wang@huawei.com, broonie@kernel.org,
kevin.brodsky@arm.com, dwmw@amazon.co.uk, shakeel.butt@linux.dev,
ast@kernel.org, ziy@nvidia.com, yuzhao@google.com,
baolin.wang@linux.alibaba.com, usamaarif642@gmail.com,
joel.granados@kernel.org, richard.weiyang@gmail.com,
geert+renesas@glider.be, tim.c.chen@linux.intel.com,
linux@treblig.org, alexander.shishkin@linux.intel.com,
lillian@star-ark.net, chenhuacai@kernel.org, francesco@valla.it,
guoweikang.kernel@gmail.com, link@vivo.com, jpoimboe@kernel.org,
masahiroy@kernel.org, brauner@kernel.org,
thomas.weissschuh@linutronix.de, oleg@redhat.com,
mjguzik@gmail.com, andrii@kernel.org, wangfushuai@baidu.com,
linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
linux-i2c@vger.kernel.org, linux-arch@vger.kernel.org,
linux-modules@vger.kernel.org, rcu@vger.kernel.org,
linux-nfs@vger.kernel.org, linux-rt-devel@lists.linux.dev
Subject: [PATCH v17 25/47] dept: track PG_locked with dept
Date: Thu, 2 Oct 2025 17:12:25 +0900
Message-ID: <20251002081247.51255-26-byungchul@sk.com>
In-Reply-To: <20251002081247.51255-1-byungchul@sk.com>
Make dept able to track PG_locked waits and events, which is useful in
practice. See the following link for a case where dept worked with
PG_locked and detected real issues:
https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/
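For reviewers' reference, here is a minimal sketch of how the
annotations introduced by this patch line up with one acquire/release
cycle of PG_locked. The bodies are simplified from the pagemap.h and
filemap.c hunks below, not the exact implementation:

	static inline void folio_lock(struct folio *folio)
	{
		might_sleep();
		/* assume the wait always happens, to catch potential deadlocks */
		dept_page_wait_on_bit(&folio->page, PG_locked);
		if (!folio_trylock(folio))	/* on success, folio_trylock() */
			__folio_lock(folio);	/* calls dept_page_set_bit() */
	}

	void folio_unlock(struct folio *folio)
	{
		/* the event that PG_locked waiters are waiting for */
		dept_page_clear_bit(&folio->page, PG_locked);
		if (folio_xor_flags_has_waiters(folio, 1 << PG_locked))
			folio_wake_bit(folio, PG_locked);
	}

With this, dept always observes the sequence:

	wait -> acquire(set bit) -> release(clear bit)

and can build dependencies between PG_locked waits and events.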
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
include/linux/mm_types.h | 2 +
include/linux/page-flags.h | 125 +++++++++++++++++++++++++++++++++----
include/linux/pagemap.h | 37 ++++++++++-
mm/filemap.c | 26 ++++++++
mm/mm_init.c | 2 +
5 files changed, 179 insertions(+), 13 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a643fae8a349..5ebc565309af 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -20,6 +20,7 @@
#include <linux/seqlock.h>
#include <linux/percpu_counter.h>
#include <linux/types.h>
+#include <linux/dept.h>
#include <asm/mmu.h>
@@ -223,6 +224,7 @@ struct page {
struct page *kmsan_shadow;
struct page *kmsan_origin;
#endif
+ struct dept_ext_wgen pg_locked_wgen;
} _struct_page_alignment;
/*
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8d3fa3a91ce4..d3c4954c4218 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -198,6 +198,61 @@ enum pageflags {
#ifndef __GENERATING_BOUNDS_H
+#ifdef CONFIG_DEPT
+#include <linux/kernel.h>
+#include <linux/dept.h>
+
+extern struct dept_map pg_locked_map;
+
+/*
+ * Place the following annotations at their suitable points in code:
+ *
+ * Annotate dept_page_set_bit() around the first set_bit*()
+ * Annotate dept_page_clear_bit() around clear_bit*()
+ * Annotate dept_page_wait_on_bit() around wait_on_bit*()
+ */
+
+static inline void dept_page_set_bit(struct page *p, int bit_nr)
+{
+ if (bit_nr == PG_locked)
+ dept_request_event(&pg_locked_map, &p->pg_locked_wgen);
+}
+
+static inline void dept_page_clear_bit(struct page *p, int bit_nr)
+{
+ if (bit_nr == PG_locked)
+ dept_event(&pg_locked_map, 1UL, _RET_IP_, __func__, &p->pg_locked_wgen);
+}
+
+static inline void dept_page_wait_on_bit(struct page *p, int bit_nr)
+{
+ if (bit_nr == PG_locked)
+ dept_wait(&pg_locked_map, 1UL, _RET_IP_, __func__, 0, -1L);
+}
+
+static inline void dept_folio_set_bit(struct folio *f, int bit_nr)
+{
+ dept_page_set_bit(&f->page, bit_nr);
+}
+
+static inline void dept_folio_clear_bit(struct folio *f, int bit_nr)
+{
+ dept_page_clear_bit(&f->page, bit_nr);
+}
+
+static inline void dept_folio_wait_on_bit(struct folio *f, int bit_nr)
+{
+ dept_page_wait_on_bit(&f->page, bit_nr);
+}
+#else
+#define dept_page_set_bit(p, bit_nr) do { } while (0)
+#define dept_page_clear_bit(p, bit_nr) do { } while (0)
+#define dept_page_wait_on_bit(p, bit_nr) do { } while (0)
+#define dept_folio_set_bit(f, bit_nr) do { } while (0)
+#define dept_folio_clear_bit(f, bit_nr) do { } while (0)
+#define dept_folio_wait_on_bit(f, bit_nr) do { } while (0)
+#endif
+
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
@@ -419,27 +474,51 @@ static __always_inline bool folio_test_##name(const struct folio *folio) \
#define FOLIO_SET_FLAG(name, page) \
static __always_inline void folio_set_##name(struct folio *folio) \
-{ set_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ set_bit(PG_##name, folio_flags(folio, page)); \
+ dept_folio_set_bit(folio, PG_##name); \
+}
#define FOLIO_CLEAR_FLAG(name, page) \
static __always_inline void folio_clear_##name(struct folio *folio) \
-{ clear_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ clear_bit(PG_##name, folio_flags(folio, page)); \
+ dept_folio_clear_bit(folio, PG_##name); \
+}
#define __FOLIO_SET_FLAG(name, page) \
static __always_inline void __folio_set_##name(struct folio *folio) \
-{ __set_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ __set_bit(PG_##name, folio_flags(folio, page)); \
+ dept_folio_set_bit(folio, PG_##name); \
+}
#define __FOLIO_CLEAR_FLAG(name, page) \
static __always_inline void __folio_clear_##name(struct folio *folio) \
-{ __clear_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ __clear_bit(PG_##name, folio_flags(folio, page)); \
+ dept_folio_clear_bit(folio, PG_##name); \
+}
#define FOLIO_TEST_SET_FLAG(name, page) \
static __always_inline bool folio_test_set_##name(struct folio *folio) \
-{ return test_and_set_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ bool __ret = test_and_set_bit(PG_##name, folio_flags(folio, page)); \
+ \
+ if (!__ret) \
+ dept_folio_set_bit(folio, PG_##name); \
+ return __ret; \
+}
#define FOLIO_TEST_CLEAR_FLAG(name, page) \
static __always_inline bool folio_test_clear_##name(struct folio *folio) \
-{ return test_and_clear_bit(PG_##name, folio_flags(folio, page)); }
+{ \
+ bool __ret = test_and_clear_bit(PG_##name, folio_flags(folio, page)); \
+ \
+ if (__ret) \
+ dept_folio_clear_bit(folio, PG_##name); \
+ return __ret; \
+}
#define FOLIO_FLAG(name, page) \
FOLIO_TEST_FLAG(name, page) \
@@ -454,32 +533,54 @@ static __always_inline int Page##uname(const struct page *page) \
#define SETPAGEFLAG(uname, lname, policy) \
FOLIO_SET_FLAG(lname, FOLIO_##policy) \
static __always_inline void SetPage##uname(struct page *page) \
-{ set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ set_bit(PG_##lname, &policy(page, 1)->flags); \
+ dept_page_set_bit(page, PG_##lname); \
+}
#define CLEARPAGEFLAG(uname, lname, policy) \
FOLIO_CLEAR_FLAG(lname, FOLIO_##policy) \
static __always_inline void ClearPage##uname(struct page *page) \
-{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ clear_bit(PG_##lname, &policy(page, 1)->flags); \
+ dept_page_clear_bit(page, PG_##lname); \
+}
#define __SETPAGEFLAG(uname, lname, policy) \
__FOLIO_SET_FLAG(lname, FOLIO_##policy) \
static __always_inline void __SetPage##uname(struct page *page) \
-{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ __set_bit(PG_##lname, &policy(page, 1)->flags); \
+ dept_page_set_bit(page, PG_##lname); \
+}
#define __CLEARPAGEFLAG(uname, lname, policy) \
__FOLIO_CLEAR_FLAG(lname, FOLIO_##policy) \
static __always_inline void __ClearPage##uname(struct page *page) \
-{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ __clear_bit(PG_##lname, &policy(page, 1)->flags); \
+ dept_page_clear_bit(page, PG_##lname); \
+}
#define TESTSETFLAG(uname, lname, policy) \
FOLIO_TEST_SET_FLAG(lname, FOLIO_##policy) \
static __always_inline int TestSetPage##uname(struct page *page) \
-{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ bool ret = test_and_set_bit(PG_##lname, &policy(page, 1)->flags);\
+ if (!ret) \
+ dept_page_set_bit(page, PG_##lname); \
+ return ret; \
+}
#define TESTCLEARFLAG(uname, lname, policy) \
FOLIO_TEST_CLEAR_FLAG(lname, FOLIO_##policy) \
static __always_inline int TestClearPage##uname(struct page *page) \
-{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ \
+ bool ret = test_and_clear_bit(PG_##lname, &policy(page, 1)->flags);\
+ if (ret) \
+ dept_page_clear_bit(page, PG_##lname); \
+ return ret; \
+}
#define PAGEFLAG(uname, lname, policy) \
TESTPAGEFLAG(uname, lname, policy) \
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 12a12dae727d..53b68b7a3f17 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1093,7 +1093,12 @@ void folio_unlock(struct folio *folio);
*/
static inline bool folio_trylock(struct folio *folio)
{
- return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
+ bool ret = !test_and_set_bit_lock(PG_locked, folio_flags(folio, 0));
+
+ if (ret)
+ dept_page_set_bit(&folio->page, PG_locked);
+
+ return likely(ret);
}
/*
@@ -1129,6 +1134,16 @@ static inline bool trylock_page(struct page *page)
static inline void folio_lock(struct folio *folio)
{
might_sleep();
+
+ /*
+ * dept_page_wait_on_bit() will be called if __folio_lock() goes
+ * through a real wait path. However, to do a better job of
+ * detecting *potential* deadlocks, assume that folio_lock()
+ * always goes through the wait path so that dept can take all
+ * the potential cases into account.
+ */
+ dept_page_wait_on_bit(&folio->page, PG_locked);
+
if (!folio_trylock(folio))
__folio_lock(folio);
}
@@ -1149,6 +1164,15 @@ static inline void lock_page(struct page *page)
struct folio *folio;
might_sleep();
+ /*
+ * dept_page_wait_on_bit() will be called if __folio_lock() goes
+ * through a real wait path. However, to do a better job of
+ * detecting *potential* deadlocks, assume that lock_page()
+ * always goes through the wait path so that dept can take all
+ * the potential cases into account.
+ */
+ dept_page_wait_on_bit(page, PG_locked);
+
folio = page_folio(page);
if (!folio_trylock(folio))
__folio_lock(folio);
@@ -1167,6 +1191,17 @@ static inline void lock_page(struct page *page)
static inline int folio_lock_killable(struct folio *folio)
{
might_sleep();
+
+ /*
+ * dept_page_wait_on_bit() will be called if
+ * __folio_lock_killable() goes through a real wait path.
+ * However, to do a better job of detecting *potential*
+ * deadlocks, assume that folio_lock_killable() always goes
+ * through the wait path so that dept can take all the
+ * potential cases into account.
+ */
+ dept_page_wait_on_bit(&folio->page, PG_locked);
+
if (!folio_trylock(folio))
return __folio_lock_killable(folio);
return 0;
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..edb0710ddb3f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -48,6 +48,7 @@
#include <linux/rcupdate_wait.h>
#include <linux/sched/mm.h>
#include <linux/sysctl.h>
+#include <linux/dept.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include "internal.h"
@@ -1145,6 +1146,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
if (flags & WQ_FLAG_CUSTOM) {
if (test_and_set_bit(key->bit_nr, &key->folio->flags))
return -1;
+ dept_page_set_bit(&key->folio->page, key->bit_nr);
flags |= WQ_FLAG_DONE;
}
}
@@ -1228,6 +1230,7 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
if (wait->flags & WQ_FLAG_EXCLUSIVE) {
if (test_and_set_bit(bit_nr, &folio->flags))
return false;
+ dept_page_set_bit(&folio->page, bit_nr);
} else if (test_bit(bit_nr, &folio->flags))
return false;
@@ -1235,6 +1238,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
return true;
}
+struct dept_map __maybe_unused pg_locked_map = DEPT_MAP_INITIALIZER(pg_locked_map, NULL);
+EXPORT_SYMBOL(pg_locked_map);
+
static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
int state, enum behavior behavior)
{
@@ -1246,6 +1252,8 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
unsigned long pflags;
bool in_thrashing;
+ dept_page_wait_on_bit(&folio->page, bit_nr);
+
if (bit_nr == PG_locked &&
!folio_test_uptodate(folio) && folio_test_workingset(folio)) {
delayacct_thrashing_start(&in_thrashing);
@@ -1339,6 +1347,23 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
break;
}
+ /*
+ * dept_page_set_bit() might already have been called in
+ * folio_trylock_flag(), wake_page_function() or elsewhere.
+ * However, call it again to reset dept's wgen and thereby
+ * ensure dept_page_wait_on_bit() is called prior to
+ * dept_page_set_bit().
+ *
+ * Remember that dept considers all the waits between
+ * dept_page_set_bit() and dept_page_clear_bit() as potential
+ * event disturbers. Ensure the correct sequence so that dept
+ * can make correct decisions:
+ *
+ * wait -> acquire(set bit) -> release(clear bit)
+ */
+ if (wait->flags & WQ_FLAG_DONE)
+ dept_page_set_bit(&folio->page, bit_nr);
+
/*
* If a signal happened, this 'finish_wait()' may remove the last
* waiter from the wait-queues, but the folio waiters bit will remain
@@ -1496,6 +1521,7 @@ void folio_unlock(struct folio *folio)
BUILD_BUG_ON(PG_waiters != 7);
BUILD_BUG_ON(PG_locked > 7);
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+ dept_page_clear_bit(&folio->page, PG_locked);
if (folio_xor_flags_has_waiters(folio, 1 << PG_locked))
folio_wake_bit(folio, PG_locked);
}
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 5c21b3af216b..09e4ac6a73c7 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -32,6 +32,7 @@
#include <linux/vmstat.h>
#include <linux/kexec_handover.h>
#include <linux/hugetlb.h>
+#include <linux/dept.h>
#include "internal.h"
#include "slab.h"
#include "shuffle.h"
@@ -587,6 +588,7 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
atomic_set(&page->_mapcount, -1);
page_cpupid_reset_last(page);
page_kasan_tag_reset(page);
+ dept_ext_wgen_init(&page->pg_locked_wgen);
INIT_LIST_HEAD(&page->lru);
#ifdef WANT_PAGE_VIRTUAL
--
2.17.1