Date: Wed, 30 Dec 2020 22:19:23 +0800
From: Feng Tang
To: Roman Gushchin
Cc: Andrew Morton, Michal Hocko, Johannes Weiner, Vladimir Davydov,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	andi.kleen@intel.com, tim.c.chen@intel.com,
	dave.hansen@intel.com, ying.huang@intel.com
Subject: Re: [PATCH 1/2] mm: page_counter: relayout structure to reduce false sharing
Message-ID: <20201230141923.GA43248@shbuild999.sh.intel.com>
References: <1609252514-27795-1-git-send-email-feng.tang@intel.com>
	<20201229165642.GA371241@carbon.dhcp.thefacebook.com>
In-Reply-To: <20201229165642.GA371241@carbon.dhcp.thefacebook.com>

On Tue, Dec 29, 2020 at 08:56:42AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:13PM +0800, Feng Tang wrote:
> > When checking a memory cgroup
related performance regression [1], from the perf c2c profiling
> > data, we found high false sharing when accessing 'usage' and
> > 'parent'.
> >
> > On a 64-bit system, 'usage' and 'parent' are close to each other,
> > and easily end up in the same cacheline (for cacheline size == 64+ B).
> > 'usage' is mostly written, while 'parent' is mostly read, due to
> > the cgroup's hierarchical counting nature.
> >
> > So move 'parent' to the end of the structure to make sure they
> > are in different cache lines.
> >
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Most of the
> > results are improvements, with only one malloc case on one
> > platform showing a -4.0% regression. Each category below has
> > several subcases run on different platforms, and only the worst
> > and best scores are listed:
> >
> > fio:                       +1.8% ~ +8.3%
> > will-it-scale/malloc1:     -4.0% ~ +8.9%
> > will-it-scale/page_fault1: no change
> > will-it-scale/page_fault2: +2.4% ~ +20.2%
> >
> > [1]. https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/
> >
> > Signed-off-by: Feng Tang
> > Cc: Roman Gushchin
> > Cc: Johannes Weiner
> > ---
> >  include/linux/page_counter.h | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > index 85bd413..6795913 100644
> > --- a/include/linux/page_counter.h
> > +++ b/include/linux/page_counter.h
> > @@ -12,7 +12,6 @@ struct page_counter {
> >  	unsigned long low;
> >  	unsigned long high;
> >  	unsigned long max;
> > -	struct page_counter *parent;
> >
> >  	/* effective memory.min and memory.min usage tracking */
> >  	unsigned long emin;
> > @@ -27,6 +26,14 @@ struct page_counter {
> >  	/* legacy */
> >  	unsigned long watermark;
> >  	unsigned long failcnt;
> > +
> > +	/*
> > +	 * 'parent' is placed here to be far from 'usage' to reduce
> > +	 * cache false sharing, as 'usage' is mostly written while
> > +	 * 'parent' is frequently read, due to the cgroup's
> > +	 * hierarchical counting nature.
> > +	 */
> > +	struct page_counter *parent;
> >  };
>
> LGTM!
>
> Reviewed-by: Roman Gushchin

Thanks for the review!

> I wonder if we have the same problem with min/low/high/max?
> Maybe try to group all mostly-read-only fields (min, low, high,
> max and parent) and separate them with some padding?

Yep, we thought about it too. From the current perf c2c profiling
data, I haven't noticed obvious false-sharing hot spots for
min/low/high/max (which are read-mostly).

As for padding, we had a proposal before: the current page_counter
on 64-bit platforms is 112 bytes, so padding it to 2 cachelines
costs only 16 bytes more. If that is fine, I can send another patch
or fold it into this one.

Thanks,
Feng

> Thank you!