From: Feng Tang <feng.tang@intel.com>
To: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	andi.kleen@intel.com, tim.c.chen@intel.com,
	dave.hansen@intel.com, ying.huang@intel.com,
	Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH
Date: Mon, 4 Jan 2021 10:53:14 +0800
Message-ID: <20210104025314.GA32269@shbuild999.sh.intel.com>
In-Reply-To: <20201229171327.GB371241@carbon.dhcp.thefacebook.com>

Hi Roman,

On Tue, Dec 29, 2020 at 09:13:27AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:14PM +0800, Feng Tang wrote:
> > When profiling benchmarks that involve memory cgroups, the status
> > updates sometimes take quite a few CPU cycles. Currently,
> > MEMCG_CHARGE_BATCH is used for both charging and statistics/events
> > updating, and is set to 32, which may be good for the accuracy of
> > memcg charging, but is too small for stats updates, as it causes
> > concurrent access to the global stats data instead of the per-cpu
> > copies.
> > 
> > So handle them differently by adding a new, bigger batch number
> > for stats updating, while keeping the current value for charging
> > (though a comment in memcontrol.h suggests considering a bigger
> > value there too).
> > 
> > The new batch is set to 512, chosen with 2MB huge pages (512 base
> > pages) in mind, as the update logic is essentially:
> > 
> >     if (abs(x) > BATCH), fold x into the global data,
> >     otherwise keep accumulating in the per-cpu counter
> > 
> > so it will save 50% of the global data updates for 2MB pages.
> > 
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Each category
> > below has several subcases run on different platforms, and only the
> > worst and best scores are listed:
> > 
> > fio:				 +2.0% ~  +6.8%
> > will-it-scale/malloc:		 -0.9% ~  +6.2%
> > will-it-scale/page_fault1:	 no change
> > will-it-scale/page_fault2:	+13.7% ~ +26.2%
> 
> I wonder if there are any wins noticeable in the real world?
> Lowering the accuracy of statistics makes them harder to interpret,
> so it should be very well justified.

This is a valid concern. I only have test results for fio,
will-it-scale and vm-scalability (mostly improvements) so far,
and I will try to run some Redis/RocksDB-like workloads. I have
seen hotspots related to memcg statistics counting in some
customers' reports, which is part of the motivation for this patch.

> 512 * nr_cpus is a large number.

I also tested 128, 256, 2048 and 4096, which all show similar gains
on the benchmarks above; 512 was chosen to match 2MB huge pages.
128 would be less harmful to accuracy.
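
For context, the update path this patch changes follows roughly this
pattern (a simplified sketch modeled on __mod_memcg_state() in
mm/memcontrol.c; parent propagation and other details are left out):

    /* proposed bigger batch for stats updates; charging keeps 32 */
    #define MEMCG_UPDATE_BATCH 512

    void sketch_mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
    {
            long x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);

            if (unlikely(abs(x) > MEMCG_UPDATE_BATCH)) {
                    /* rare with a big batch: fold the accumulated
                     * delta into the contended global counter */
                    atomic_long_add(x, &memcg->vmstats[idx]);
                    x = 0;
            }
            /* common case: stay in the cheap per-cpu counter */
            __this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
    }

With a batch of 512, a 2MB huge page that bumps a counter by 512
reaches exactly the threshold without exceeding it, so only every
other such update touches the global counter; that is where the 50%
figure above comes from.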

> > 
> > One thought is that the batch could be calculated dynamically
> > from the memcg limit and the number of CPUs; another is to add
> > periodic syncing of the data for accuracy, similar to vmstat,
> > as suggested by Ying.
> 
> It sounds good to me, but it's quite tricky to implement properly,
> given that the number of cgroups can be really big. That makes
> traversing the whole cgroup tree and syncing the stats quite
> expensive, so it will not be easy to find a good balance.

Agreed. Also, could you shed some light on how this statistics data
is used, so that we can better understand the usage?
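
To make the vmstat-like idea above concrete, what I had in mind is
roughly the following (a very rough sketch: memcg_flush_stats() is a
hypothetical helper that would fold one memcg's per-cpu deltas into
its global counters, and for_each_mem_cgroup() is the iterator
internal to mm/memcontrol.c):

    static void memcg_stat_flush_fn(struct work_struct *work);
    static DECLARE_DELAYED_WORK(memcg_stat_flush, memcg_stat_flush_fn);

    static void memcg_stat_flush_fn(struct work_struct *work)
    {
            struct mem_cgroup *memcg;

            /* walking every memcg is the expensive part you point
             * out, when the number of cgroups is large */
            for_each_mem_cgroup(memcg)
                    memcg_flush_stats(memcg);  /* hypothetical */

            schedule_delayed_work(&memcg_stat_flush, HZ);
    }

As with vmstat, readers would then see data that is at most one
period stale, which bounds the inaccuracy regardless of batch size.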

Thanks again for the valuable feedback! 

- Feng

> Thanks!


Thread overview: 19+ messages
2020-12-29 14:35 [PATCH 1/2] mm: page_counter: relayout structure to reduce false sharing Feng Tang
2020-12-29 14:35 ` [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH Feng Tang
2020-12-29 17:13   ` Roman Gushchin
2021-01-04  2:53     ` Feng Tang [this message]
2021-01-04  7:46   ` [mm] 4d8191276e: vm-scalability.throughput 43.4% improvement kernel test robot
2021-01-04 13:15   ` [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH Michal Hocko
2021-01-05  1:57     ` Feng Tang
2021-01-06  0:47   ` Shakeel Butt
2021-01-06  2:12     ` Feng Tang
2021-01-06  3:43       ` Chris Down
2021-01-06  3:45         ` Chris Down
2021-01-06  4:45         ` Feng Tang
2020-12-29 16:56 ` [PATCH 1/2] mm: page_counter: relayout structure to reduce false sharing Roman Gushchin
2020-12-30 14:19   ` Feng Tang
2021-01-04 13:03 ` Michal Hocko
2021-01-04 13:34   ` Feng Tang
2021-01-04 14:11     ` Michal Hocko
2021-01-04 14:44       ` Feng Tang
2021-01-04 15:34         ` Michal Hocko
