Date: Thu, 19 May 2022 14:29:39 -0700
From: "Paul E. McKenney"
To: Qian Cai
Cc: Mel Gorman, Andrew Morton, Nicolas Saenz Julienne, Marcelo Tosatti,
	Vlastimil Babka, Michal Hocko, LKML, Linux-MM,
	kafai@fb.com, kpsingh@kernel.org
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Message-ID: <20220519212939.GE1790663@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20220512085043.5234-1-mgorman@techsingularity.net>
 <20220517233507.GA423@qian>
 <20220518125152.GQ3441@techsingularity.net>
 <20220518171503.GQ1790663@paulmck-ThinkPad-P17-Gen-1>
 <20220519191524.GC1790663@paulmck-ThinkPad-P17-Gen-1>

On Thu, May 19, 2022 at 05:05:04PM -0400, Qian Cai wrote:
> On Thu, May 19, 2022 at 12:15:24PM -0700, Paul E. McKenney wrote:
> > Is the task doing offline_pages()->synchronize_rcu() doing this
> > repeatedly?  Or is there a stalled RCU grace period?  (From what
> > I can see, offline_pages() is not doing huge numbers of calls to
> > synchronize_rcu() in any of its loops, but I freely admit that I
> > do not know this code.)
>
> Yes, we are running into an endless loop in isolate_single_pageblock().
> A similar issue happened not long ago, so I am wondering if we did not
> solve it entirely then.  Anyway, I will continue the thread over there.
>
> https://lore.kernel.org/all/YoavU%2F+NfQIzQiDF@qian/

I do know that feeling.

> > Or is it possible that reverting those three patches simply decreases
> > the probability of failure, rather than eliminating the failure?
> > Such a decrease could be due to many things, for example, changes to
> > offsets and sizes of data structures.
>
> Entirely possible.  Sorry for the false alarm.

Not a problem!

> > Do you ever see RCU CPU stall warnings?
>
> No.

OK, then perhaps a sequence of offline_pages() calls.  Hmmm...

The percpu_up_write() function sets ->block to zero before awakening
waiters.  Given wakeup latencies, might this allow an only somewhat
unfortunate sequence of events to allow offline_pages() to starve
readers?  Or is there something I am missing that prevents this from
happening?

							Thanx, Paul