Date: Wed, 12 Apr 2023 14:47:35 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Vlastimil Babka
Cc: "Zhang, Qiang1", Boqun Feng, Qi Zheng, "42.hyeyoo@gmail.com" <42.hyeyoo@gmail.com>,
	"akpm@linux-foundation.org", "roman.gushchin@linux.dev",
	"iamjoonsoo.kim@lge.com", "rientjes@google.com", "penberg@kernel.org",
	"cl@linux.com", "linux-mm@kvack.org", "linux-kernel@vger.kernel.org",
	Zhao Gongyi, Sebastian Andrzej Siewior, Thomas Gleixner, RCU,
	"Paul E. McKenney"
Subject: Re: [PATCH] mm: slub: annotate kmem_cache_node->list_lock as raw_spinlock
Message-ID: <20230412124735.GE628377@hirez.programming.kicks-ass.net>
References: <20230411130854.46795-1-zhengqi.arch@bytedance.com>
 <932bf921-a076-e166-4f95-1adb24d544cf@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Wed, Apr 12, 2023 at 08:50:29AM +0200, Vlastimil Babka wrote:
> > --- a/lib/debugobjects.c
> > +++ b/lib/debugobjects.c
> > @@ -562,10 +562,10 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
> >  	unsigned long flags;
> >  
> >  	/*
> > -	 * On RT enabled kernels the pool refill must happen in preemptible
> > +	 * The pool refill must happen in preemptible
> >  	 * context:
> >  	 */
> > -	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
> > +	if (preemptible())
> >  		fill_pool();
> 
> +CC Peterz
> 
> Aha so this is in fact another case where the code is written with
> actual differences between PREEMPT_RT and !PREEMPT_RT in mind, but
> CONFIG_PROVE_RAW_LOCK_NESTING always assumes PREEMPT_RT?

Ooh, tricky, yes. PROVE_RAW_LOCK_NESTING always follows the PREEMPT_RT
rules and does not expect trickery like the above.

Something like the completely untested below might be of help..

---
diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h
index d22430840b53..f3120d6a7d9e 100644
--- a/include/linux/lockdep_types.h
+++ b/include/linux/lockdep_types.h
@@ -33,6 +33,7 @@ enum lockdep_wait_type {
 enum lockdep_lock_type {
 	LD_LOCK_NORMAL = 0,	/* normal, catch all */
 	LD_LOCK_PERCPU,		/* percpu */
+	LD_LOCK_WAIT,		/* annotation */
 	LD_LOCK_MAX,
 };
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 50d4863974e7..a4077f5bb75b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2279,8 +2279,9 @@ static inline bool usage_skip(struct lock_list *entry, void *mask)
 	 * As a result, we will skip local_lock(), when we search for irq
 	 * inversion bugs.
 	 */
-	if (entry->class->lock_type == LD_LOCK_PERCPU) {
-		if (DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG))
+	if (entry->class->lock_type != LD_LOCK_NORMAL) {
+		if (entry->class->lock_type == LD_LOCK_PERCPU &&
+		    DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG))
 			return false;
 
 		return true;
@@ -4752,7 +4753,8 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
 
 	for (; depth < curr->lockdep_depth; depth++) {
 		struct held_lock *prev = curr->held_locks + depth;
-		u8 prev_inner = hlock_class(prev)->wait_type_inner;
+		struct lock_class *class = hlock_class(prev);
+		u8 prev_inner = class->wait_type_inner;
 
 		if (prev_inner) {
 			/*
@@ -4762,6 +4764,12 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
 			 * Also due to trylocks.
 			 */
 			curr_inner = min(curr_inner, prev_inner);
+
+			/*
+			 * Allow override for annotations.
+			 */
+			if (unlikely(class->lock_type == LD_LOCK_WAIT))
+				curr_inner = prev_inner;
 		}
 	}
 
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index df86e649d8be..fae71ef72a16 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -565,8 +565,16 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
 	 * On RT enabled kernels the pool refill must happen in preemptible
 	 * context:
 	 */
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) {
+		static struct lockdep_map dep_map = {
+			.name = "wait-type-override",
+			.wait_type_inner = LD_WAIT_SLEEP,
+			.lock_type = LD_LOCK_WAIT,
+		};
+		lock_map_acquire(&dep_map);
 		fill_pool();
+		lock_map_release(&dep_map);
+	}
 
 	db = get_bucket((unsigned long) addr);
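
For completeness, any other site would use the same pattern as the
debugobjects hunk above. A minimal sketch (just as untested;
do_sleepable_work() is a made-up caller, LD_LOCK_WAIT is the new type
from the diff):

	#include <linux/lockdep.h>
	#include <linux/preempt.h>

	/*
	 * Static map, mirroring the dep_map in the debugobjects hunk above.
	 * While it is held, check_wait_context() resets the wait context to
	 * LD_WAIT_SLEEP (the LD_LOCK_WAIT override), so locks taken inside
	 * are not checked against whatever outer raw locks would demand.
	 */
	static struct lockdep_map sleep_override_map = {
		.name			= "wait-type-override",
		.wait_type_inner	= LD_WAIT_SLEEP,
		.lock_type		= LD_LOCK_WAIT,
	};

	static void do_sleepable_work(void)
	{
		/* the section below only runs when we really can sleep */
		if (!preemptible())
			return;

		lock_map_acquire(&sleep_override_map);
		/* ... code that may allocate or otherwise sleep ... */
		lock_map_release(&sleep_override_map);
	}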