From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 6 Jan 2021 10:12:13 +0800
From: Feng Tang
To: Shakeel Butt
Cc: Andrew Morton, Michal Hocko, Johannes Weiner, Vladimir Davydov,
	Linux MM, LKML, andi.kleen@intel.com, "Chen, Tim C",
	Dave Hansen, Huang Ying, Roman Gushchin
Subject: Re: [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH
Message-ID: <20210106021213.GD101866@shbuild999.sh.intel.com>
References: <1609252514-27795-1-git-send-email-feng.tang@intel.com>
	<1609252514-27795-2-git-send-email-feng.tang@intel.com>

Hi Shakeel,

On Tue, Jan 05, 2021 at 04:47:33PM -0800, Shakeel Butt wrote:
> On Tue, Dec 29, 2020 at 6:35 AM Feng Tang wrote:
> >
> > When profiling memory-cgroup-heavy benchmarks, the stats update
> > path sometimes takes quite a few CPU cycles.
> > Currently MEMCG_CHARGE_BATCH is used for both charging and
> > statistics/events updating, and is set to 32, which may be good
> > for the accuracy of memcg charging, but is too small for stats
> > updates, causing concurrent access to the global stats data
> > instead of the per-cpu copies.
> >
> > So handle them differently, by adding a new, bigger batch number
> > for stats updating, while keeping the current value for charging
> > (though a comment in memcontrol.h suggests considering a bigger
> > value too).
> >
> > The new batch is set to 512, with 2MB huge pages (512 base pages)
> > in mind, as the check logic is mostly:
> >
> >	if (x > BATCH), then skip updating global data
> >
> > so it will save 50% of the global data updates for 2MB pages.
> >
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Each category
> > below has several subcases run on different platforms, and only
> > the worst and best scores are listed:
> >
> >	fio:                       +2.0% ~ +6.8%
> >	will-it-scale/malloc:      -0.9% ~ +6.2%
> >	will-it-scale/page_fault1: no change
> >	will-it-scale/page_fault2: +13.7% ~ +26.2%
> >
> > One thought is that the batch could be dynamically calculated from
> > the memcg limit and the number of CPUs; another is to add periodic
> > syncing of the data for accuracy, similar to vmstat, as suggested
> > by Ying.
> >
>
> I am going to push back on this change. On a large system where jobs
> can run on any available cpu, this will totally mess up the stats
> (which is actually what happens on our production servers). These
> stats are used for multiple purposes like debugging or understanding
> the memory usage of the job or doing data analysis.

Thanks for sharing the use case, and I agree it would make debugging
and analysis harder.

Though we lack real-world workload data, the micro-benchmarks do show
obvious benefits: the 0day robot reported a 43.4% improvement for the
vm-scalability lru-shm case, and it is up to +60% against 5.11-rc1.
The memory cgroup stats-updating hotspots have been on our radar for a
long time; they show up clearly in perf profiles. So I am wondering if
we could make the batch size a configurable knob, so that it can
benefit workloads that do not need accurate stats.

One further thought: there are quite a few "BATCH" numbers in the
kernel for per-cpu/global data updating; maybe we could add a global
flag 'sysctl_need_accurate_stats', so that:

	if (sysctl_need_accurate_stats)
		batch = SMALLER_BATCH
	else
		batch = BIGGER_BATCH

Thanks,
Feng