Date: Thu, 30 Dec 2021 10:00:44 +0800
From: Oliver Sang
To: "Paul E. McKenney"
Cc: Neeraj Upadhyay, LKML, Linux Memory Management List,
 lkp@lists.01.org, lkp@intel.com
Subject: Re: [rcutorture] 82e310033d: WARNING:possible_recursive_locking_detected
Message-ID: <20211230020044.GA17043@xsang-OptiPlex-9020>
References: <20211228151135.GB31268@xsang-OptiPlex-9020>
 <20211229000609.GY4109570@paulmck-ThinkPad-P17-Gen-1>
 <20211229140121.GA10390@xsang-OptiPlex-9020>
 <20211229172441.GA4109570@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <20211229172441.GA4109570@paulmck-ThinkPad-P17-Gen-1>

hi Paul,

On Wed, Dec 29, 2021 at 09:24:41AM -0800, Paul E. McKenney wrote:
> On Wed, Dec 29, 2021 at 10:01:21PM +0800, Oliver Sang wrote:
> > hi Paul,
> >
> > we applied the patch below on top of next-20211224, and confirmed no
> > "WARNING:possible_recursive_locking_detected" after the patch.
>
> Good to hear!  May I add your Tested-by?

sure (:

Tested-by: Oliver Sang

> Many of the remainder appear to be due to memory exhaustion, FWIW.

thanks for the information

> 							Thanx, Paul
>
> > > ------------------------------------------------------------------------
> > >
> > > commit dd47cbdcc2f72ba3df1248fb7fe210acca18d09c
> > > Author: Paul E. McKenney
> > > Date:   Tue Dec 28 15:59:38 2021 -0800
> > >
> > >     rcutorture: Fix rcu_fwd_mutex deadlock
> > >
> > >     The rcu_torture_fwd_cb_hist() function acquires rcu_fwd_mutex, but
> > >     is invoked from the rcutorture_oom_notify() function, which holds
> > >     this same mutex across this call.  This commit fixes the resulting
> > >     deadlock.
> > >
> > >     Reported-by: kernel test robot
> > >     Signed-off-by: Paul E. McKenney
> > >
> > > diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> > > index 918a2ea34ba13..9190dce686208 100644
> > > --- a/kernel/rcu/rcutorture.c
> > > +++ b/kernel/rcu/rcutorture.c
> > > @@ -2184,7 +2184,6 @@ static void rcu_torture_fwd_cb_hist(struct rcu_fwd *rfp)
> > >  	for (i = ARRAY_SIZE(rfp->n_launders_hist) - 1; i > 0; i--)
> > >  		if (rfp->n_launders_hist[i].n_launders > 0)
> > >  			break;
> > > -	mutex_lock(&rcu_fwd_mutex); // Serialize histograms.
> > >  	pr_alert("%s: Callback-invocation histogram %d (duration %lu jiffies):",
> > >  		 __func__, rfp->rcu_fwd_id, jiffies - rfp->rcu_fwd_startat);
> > >  	gps_old = rfp->rcu_launder_gp_seq_start;
> > > @@ -2197,7 +2196,6 @@ static void rcu_torture_fwd_cb_hist(struct rcu_fwd *rfp)
> > >  		gps_old = gps;
> > >  	}
> > >  	pr_cont("\n");
> > > -	mutex_unlock(&rcu_fwd_mutex);
> > >  }
> > >
> > >  /* Callback function for continuous-flood RCU callbacks. */
> > > @@ -2435,7 +2433,9 @@ static void rcu_torture_fwd_prog_cr(struct rcu_fwd *rfp)
> > >  			 n_launders, n_launders_sa,
> > >  			 n_max_gps, n_max_cbs, cver, gps);
> > >  		atomic_long_add(n_max_cbs, &rcu_fwd_max_cbs);
> > > +		mutex_lock(&rcu_fwd_mutex); // Serialize histograms.
> > >  		rcu_torture_fwd_cb_hist(rfp);
> > > +		mutex_unlock(&rcu_fwd_mutex);
> > >  	}
> > >  	schedule_timeout_uninterruptible(HZ); /* Let CBs drain. */
> > >  	tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
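
For anyone skimming the thread, the bug here is the classic self-deadlock
on a non-recursive mutex: a helper takes a lock that one of its callers
(rcutorture_oom_notify()) already holds, and the fix hoists the lock/unlock
out of the helper into the caller that did not already hold it.  Below is a
minimal userspace sketch of the same pattern, not rcutorture code: the
names (print_hist(), fwd_prog_cr(), oom_notify()) are illustrative
stand-ins for the rcutorture symbols, and pthreads stands in for the
kernel's mutex API.

/*
 * Minimal userspace sketch of the deadlock pattern fixed above, NOT
 * rcutorture code: a helper that locks a non-recursive mutex will
 * self-deadlock when called from a path that already holds that mutex.
 * The fix mirrors the patch: the helper no longer locks, and each
 * caller that needs serialization takes the mutex around the call.
 *
 * Build: cc -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t hist_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Helper, analogous to rcu_torture_fwd_cb_hist(): after the fix it
 * assumes its caller already holds hist_mutex. */
static void print_hist(void)
{
        /* pthread_mutex_lock(&hist_mutex);    old code: deadlocked  */
        printf("callback-invocation histogram\n");
        /* pthread_mutex_unlock(&hist_mutex);  when caller held lock */
}

/* Caller analogous to rcu_torture_fwd_prog_cr(): did not hold the
 * mutex before, so after the fix it must take it itself. */
static void fwd_prog_cr(void)
{
        pthread_mutex_lock(&hist_mutex);   /* serialize histograms */
        print_hist();
        pthread_mutex_unlock(&hist_mutex);
}

/* Caller analogous to rcutorture_oom_notify(): holds the mutex
 * across the call -- this is the path that used to deadlock. */
static void oom_notify(void)
{
        pthread_mutex_lock(&hist_mutex);
        print_hist();   /* safe now: helper no longer re-locks */
        pthread_mutex_unlock(&hist_mutex);
}

int main(void)
{
        fwd_prog_cr();
        oom_notify();
        return 0;
}

With the old helper, the oom_notify() path would block forever on its
second acquisition of hist_mutex; after hoisting, every path takes the
mutex exactly once around the call.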