Date: Mon, 4 Jan 2021 22:44:02 +0800
From: Feng Tang
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, andi.kleen@intel.com, tim.c.chen@intel.com,
	dave.hansen@intel.com, ying.huang@intel.com, Roman Gushchin
Subject: Re: [PATCH 1/2] mm: page_counter: relayout structure to reduce false sharing
Message-ID: <20210104144402.GB101866@shbuild999.sh.intel.com>
References: <1609252514-27795-1-git-send-email-feng.tang@intel.com>
	<20210104130357.GF13207@dhcp22.suse.cz>
	<20210104133445.GA101866@shbuild999.sh.intel.com>
	<20210104141140.GH13207@dhcp22.suse.cz>
In-Reply-To: <20210104141140.GH13207@dhcp22.suse.cz>

On Mon, Jan 04, 2021 at 03:11:40PM +0100, Michal Hocko wrote:
> On Mon 04-01-21 21:34:45, Feng Tang wrote:
> > Hi Michal,
> >
> > On Mon, Jan 04, 2021 at 02:03:57PM +0100, Michal Hocko wrote:
> > > On Tue 29-12-20 22:35:13, Feng Tang wrote:
> > > > When checking a memory cgroup related performance regression [1],
> > > > from the perf c2c profiling data, we found high false sharing when
> > > > accessing 'usage' and 'parent'.
> > > >
> > > > On a 64-bit system, 'usage' and 'parent' are close to each other,
> > > > and likely to end up in one cacheline (for a cacheline size of
> > > > 64+ bytes). 'usage' is usually written, while 'parent' is usually
> > > > read, due to the cgroup's hierarchical counting nature.
> > > >
> > > > So move 'parent' to the end of the structure to make sure the two
> > > > are in different cache lines.
> > >
> > > Yes, parent is a write-once field, so having it away from other
> > > heavily read/written fields makes sense to me.
> > >
> > > > Following are some performance data with the patch, against
> > > > v5.11-rc1, on several generations of Xeon platforms. Most of the
> > > > results are improvements, with only one malloc case on one
> > > > platform showing a -4.0% regression. Each category below has
> > > > several subcases run on different platforms, and only the worst
> > > > and best scores are listed:
> > > >
> > > >   fio:                        +1.8% ~  +8.3%
> > > >   will-it-scale/malloc1:      -4.0% ~  +8.9%
> > > >   will-it-scale/page_fault1:  no change
> > > >   will-it-scale/page_fault2:  +2.4% ~ +20.2%
> > >
> > > What is the second number? Std?
> >
> > For each case like 'page_fault2', I ran several subcases on different
> > generations of Xeon, and only listed the lowest (first number) and
> > highest (second number) scores.
> >
> > There are 5 runs, and the results are: +3.6%, +2.4%, +10.4%, +20.2%
> > and +4.7%; the +2.4% and +20.2% are what is listed.
>
> This should really be explained in the changelog, and ideally mention
> the model as well. Seeing a std would be appreciated as well.

I guess I haven't made it clear (due to my poor English :)).

The five scores are for different parameters on different HW:

  Cascadelake (100%)   77844   +3.6%    80667  will-it-scale.per_process_ops
  Cascadelake  (50%)  182475   +2.4%   186866  will-it-scale.per_process_ops
  Haswell     (100%)   84870  +10.4%    93671  will-it-scale.per_process_ops
  Haswell      (50%)  197684  +20.2%   237585  will-it-scale.per_process_ops
  Newer Xeon   (50%)  268569   +4.7%   281320  will-it-scale.per_process_ops

+2.4% is the lowest improvement, and +20.2% is the highest. 100% means
the number of forked test processes equals the number of CPUs, while
50% means half that.

Each line has been run several times, and the scores are consistent
without big deviations.

Thanks,
Feng
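
P.S. To make the layout change concrete, below is a sketch of the idea
in kernel-style C. It is an illustration only (the exact field set is
an assumption based on the description above, not the authoritative
upstream struct): the write-hot 'usage' counter stays at offset 0, and
the read-mostly 'parent' pointer is pushed far enough back that, with
8-byte longs/pointers and 64-byte cache lines, the two no longer share
a cache line.

	struct page_counter {
		atomic_long_t usage;	/* offset 0: written on every charge/uncharge */
		unsigned long min;
		unsigned long low;
		unsigned long high;
		unsigned long max;

		/* effective memory.min / memory.low usage tracking */
		unsigned long emin;
		atomic_long_t min_usage;
		atomic_long_t children_min_usage;
		unsigned long elow;
		atomic_long_t low_usage;
		atomic_long_t children_low_usage;

		/* legacy */
		unsigned long watermark;
		unsigned long failcnt;

		/*
		 * Read-mostly: written once when the cgroup is created,
		 * read on every hierarchical charge. Placed last (offset
		 * 104 in this sketch) so it sits in a different 64-byte
		 * cache line than 'usage' at offset 0.
		 */
		struct page_counter *parent;
	};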
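P.P.S. For anyone who wants to reproduce this kind of analysis: the
false sharing above was found with perf c2c. A typical session looks
roughly like the following (the workload binary name is a placeholder,
and exact flags vary by perf version):

	# sample cache-line contention (HITM) events while the workload runs
	perf c2c record -- ./page_fault2_processes
	# group samples by contended cache line and show the hot offsets
	perf c2c report --stdio

The report attributes loads and stores to offsets within each contended
cache line, which is how writes to 'usage' and reads of 'parent' show
up on the same line.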