Date: Mon, 4 Jan 2021 10:53:14 +0800
From: Feng Tang
To: Roman Gushchin
Cc: Andrew Morton, Michal Hocko, Johannes Weiner, Vladimir Davydov,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	andi.kleen@intel.com, tim.c.chen@intel.com, dave.hansen@intel.com,
	ying.huang@intel.com, Shakeel Butt
Subject: Re: [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH
Message-ID: <20210104025314.GA32269@shbuild999.sh.intel.com>
References: <1609252514-27795-1-git-send-email-feng.tang@intel.com>
	<1609252514-27795-2-git-send-email-feng.tang@intel.com>
	<20201229171327.GB371241@carbon.dhcp.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201229171327.GB371241@carbon.dhcp.thefacebook.com>
User-Agent: Mutt/1.5.24 (2015-08-30)

Hi Roman,

On Tue, Dec 29, 2020 at 09:13:27AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:14PM +0800, Feng Tang wrote:
> > When profiling memory-cgroup-heavy benchmarks, the status updates
> > sometimes take quite some CPU cycles. The current MEMCG_CHARGE_BATCH
> > is used for both charging and statistics/events updating, and is
> > set to 32, which may be good for the accuracy of memcg charging,
> > but is too small for stats updates, causing concurrent access to
> > the global stats data instead of the per-cpu copies.
> >
> > So handle them differently, by adding a new, bigger batch number
> > for stats updating, while keeping the old value for charging
> > (though the comment in memcontrol.h suggests considering a bigger
> > value for charging too).
> >
> > The new batch is set to 512, which matches a 2MB huge page (512
> > base pages), as the check logic is mostly:
> >
> >   if (x <= BATCH), then skip updating the global data
> >
> > so it will save 50% of the global data updates for 2MB pages.
> >
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Each category
> > below has several subcases run on different platforms, and only
> > the worst and best scores are listed:
> >
> > fio:                       +2.0% ~  +6.8%
> > will-it-scale/malloc:      -0.9% ~  +6.2%
> > will-it-scale/page_fault1: no change
> > will-it-scale/page_fault2: +13.7% ~ +26.2%
>
> I wonder if there are any wins noticeable in the real world?
> Lowering the accuracy of statistics makes it harder to interpret,
> so it should be very well justified.

This is a valid concern. I only have test results for fio,
will-it-scale and vm-scalability (mostly improvements) so far, and I
will try to run some Redis/RocksDB-like workloads as well.

I have seen hotspots related to memcg statistics counting in some
customers' reports, which is part of the motivation for this patch.

> 512 * nr_cpus is a large number.

I also tested 128, 256, 2048 and 4096, which all show similar gains
with the benchmarks above; 512 was chosen to match 2MB huge pages
(512 base pages). 128 could be less harmful to accuracy.

> > One thought is that it could be dynamically calculated according
> > to the memcg limit and the number of CPUs, and another is to add
> > periodic syncing of the data for accuracy, similar to vmstat, as
> > suggested by Ying.
>
> It sounds good to me, but it's quite tricky to implement properly,
> given that the number of cgroups can be really big. It makes
> traversing the whole cgroup tree and syncing the stats quite
> expensive, so it will not be easy to find a good balance.

Agreed. Also, could you shed some light on how these statistics are
used, so that we can better understand the use cases?

Thanks again for the valuable feedback!

- Feng

> Thanks!
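P.S. For readers who want to see what is being batched: below is a
rough sketch of the per-cpu stat update path this thread is about, in
the spirit of __mod_memcg_state(). It is illustrative only; the
function name mod_memcg_state_sketch is made up, the walk up the
cgroup hierarchy is omitted, and the field names only approximate the
v5.11-era structures, so treat it as kernel-context pseudocode rather
than the actual memcontrol.c code.

/*
 * Sketch of batched per-cpu stat accounting (illustrative only; not
 * the real memcontrol.c code, hierarchy propagation omitted).
 */
#define MEMCG_UPDATE_BATCH	512

static inline void mod_memcg_state_sketch(struct mem_cgroup *memcg,
					  int idx, int val)
{
	/* Accumulate into this CPU's private counter first. */
	long x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);

	if (unlikely(abs(x) > MEMCG_UPDATE_BATCH)) {
		/*
		 * Only when the accumulated per-cpu delta exceeds the
		 * batch is the shared atomic touched; this is where
		 * the cross-CPU cacheline contention shows up.
		 */
		atomic_long_add(x, &memcg->vmstats[idx]);
		x = 0;
	}
	__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
}

The trade-off is visible directly in the sketch: raising the batch
from 32 to 512 makes the atomic_long_add() flush (and the contention
on the shared counter) rarer, at the cost of up to batch * nr_cpus
pages of drift in the value a reader sees, which is exactly the
"512 * nr_cpus" concern raised above.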