Subject: Re: [mm/page_alloc] f26b3fa046: netperf.Throughput_Mbps -18.0% regression
From: "ying.huang@intel.com"
To: Linus Torvalds, Aaron Lu, Feng Tang
Cc: Waiman Long, Peter Zijlstra, Will Deacon, Mel Gorman, kernel test robot, Vlastimil Babka, Dave Hansen, Jesper Dangaard Brouer, Michal Hocko, Andrew Morton, LKML, lkp@lists.01.org, Zhengjun Xing, fengwei.yin@intel.com, linux-mm@kvack.org
Date: Fri, 13 May 2022 14:19:32 +0800
References: <37dac785a08e3a341bf05d9ee35f19718ce83d26.camel@intel.com> <41c08a5371957acac5310a2e608b2e42bd231558.camel@intel.com>
On Thu, 2022-05-12 at 10:42 -0700, Linus Torvalds wrote:
> On Thu, May 12, 2022 at 5:46 AM Aaron Lu wrote:
> >
> > When nr_process=16, zone lock contention increased about 21
> > percentage points, from 6% to 27%; performance dropped 17.8%, and
> > overall lock contention increased 14.3%:
>
> So the contention issue seems real and nasty, and while the queued
> locks may have helped a bit, I don't think they ended up making a
> *huge* change: the queued locks help make sure the lock itself doesn't
> bounce all over the place, but clearly if the lock holder writes close
> to the lock, it will still bounce with at least *one* lock waiter.
>
> And having looked at the qspinlock code, I have to agree with Waiman
> and PeterZ that I don't think the locking code can reasonably be
> changed - I'm sure this particular case could be improved, but the
> downsides for other cases would be large enough to make that a bad
> idea.
>
> So I think the issue is that
>
>  (a) that zone lock is too hot, and
>
>  (b) given lock contention, the fields that get written to under the
> lock are too close to the lock.
>
> Now, the optimal fix would of course be to just fix the lock so that
> it isn't so hot. But assuming that's not possible, just looking at the
> definition of 'struct zone', I do have to say that the ZONE_PADDING
> fields seem to have bit-rotted over the years.
>
> The whole and only reason for them is to avoid cache bouncing, but
> commit 6168d0da2b47 ("mm/lru: replace pgdat lru_lock with lruvec
> lock") actively undid that for the 'lru_lock' case. And way back,
> commit a368ab67aa55 ("mm: move zone lock to a different cache line
> than order-0 free page lists") tried to do it for the 'lock' vs
> free_area[] case, but without actually using ZONE_PADDING: it just
> moved things around. That doesn't *guarantee* that 'lock' is in a
> different cacheline; it only makes 'free_area[]' aligned, leaving
> 'lock' potentially in the same cache-line as the array. So now the
> lower-order 'free_area[]' entries don't share a cache-line with the
> lock, but the higher-order ones probably do.
>
> So I get the feeling that those 'ZONE_PADDING' things are a bit random
> and not really effective.
>
> In a perfect world, somebody would fix the locking to just not have as
> much contention. But assuming that isn't an option, maybe somebody
> should just look at that 'struct zone' layout a bit more.

Sure. We will work on this.

Best Regards,
Huang, Ying
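
P.S. To make the cache-line argument concrete, here is a minimal
userspace sketch. The PAD() macro mirrors how ZONE_PADDING works, but
the field names, sizes, and layouts are made up for illustration; this
is not the kernel's actual struct zone. It compiles with gcc/clang
(zero-length arrays are a GNU extension).

/*
 * Minimal sketch of the ZONE_PADDING idea.  Only the PAD() trick is
 * taken from the kernel; everything else is illustrative.
 */
#include <stdio.h>
#include <stddef.h>

#define CACHELINE 64

/* A zero-size but cacheline-aligned member: it occupies no space
 * itself, yet forces the field that follows it onto a fresh line. */
struct pad { char x[0]; } __attribute__((aligned(CACHELINE)));
#define PAD(name) struct pad name;

/* Roughly the situation described above: free_area[] starts on a
 * fresh cache line, but 'lock' trails the array with no padding, so
 * it can share a line with the highest-order entries and 'flags'. */
struct zone_unpadded {
	PAD(_pad1)
	unsigned long free_area[11];	/* stand-in for the free lists */
	unsigned long flags;
	int lock;			/* stand-in for spinlock_t */
};

/* With explicit padding, the contended lock gets a cache line of its
 * own, so waiters spinning on it no longer steal the line that the
 * lock holder is writing to. */
struct zone_padded {
	PAD(_pad1)
	unsigned long free_area[11];
	unsigned long flags;
	PAD(_pad2)
	int lock;
	PAD(_pad3)
};

int main(void)
{
	printf("unpadded: lock at byte %zu of its cache line\n",
	       offsetof(struct zone_unpadded, lock) % CACHELINE);
	printf("padded:   lock at byte %zu of its cache line\n",
	       offsetof(struct zone_padded, lock) % CACHELINE);
	return 0;
}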
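On a built kernel, the real layout can be inspected with something like
"pahole -C zone vmlinux", which annotates cache-line boundaries and
makes it easy to see which fields actually end up sharing a line with
the lock.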