From: Hillf Danton <hdanton@sina.com>
To: Shakeel Butt
Cc: Michal Koutny, Ivan Babrou, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Daniel Dao
Subject: Re: [PATCH] memcg: sync flush only if periodic flush is delayed
Date: Sun, 13 Mar 2022 10:50:50 +0800
Message-Id: <20220313025050.1463-1-hdanton@sina.com>
In-Reply-To: <20220304184040.1304781-1-shakeelb@google.com>
References: <20220304184040.1304781-1-shakeelb@google.com>

On Fri, 4 Mar 2022 18:40:40 +0000 Shakeel Butt wrote:
> Daniel Dao has reported [1] a regression on workloads that may trigger
> a lot of refaults (anon and file). The underlying issue is that flushing
> rstat is expensive. Although rstat flushes are batched with (nr_cpus *
> MEMCG_BATCH) stat updates, it seems there are workloads which genuinely
> do more stat updates than the batch value within a short amount of time.
> Since the rstat flush can happen in performance-critical codepaths like
> page faults, such workloads can suffer greatly.
>
> This patch fixes the regression by making the rstat flushing
> conditional in the performance-critical codepaths. More specifically,
> the kernel relies on the async periodic rstat flusher to flush the
> stats, and only if the periodic flusher is delayed by more than twice
> its normal time window does the kernel allow rstat flushing from the
> performance-critical codepaths.
>
> Now the question: what are the side-effects of this change? The worst
> that can happen is that the refault codepath will see 4sec-old lruvec
> stats and may cause false (or missed) activations of the refaulted
> page, which may under- or overestimate the workingset size. That is not
> very concerning, as the kernel can already miss or do false activations.
>
> There are two more codepaths whose flushing behavior is not changed by
> this patch and which we may need to revisit in the future. One is the
> writeback stats used by dirty throttling, and the second is the
> deactivation heuristic in reclaim. For now we are keeping an eye on
> them, and if there is a report of a regression due to these codepaths,
> we will reevaluate then.
>
> Link: https://lore.kernel.org/all/CA+wXwBSyO87ZX5PVwdHm-=dBjZYECGmfnydUicUyrQqndgX2MQ@mail.gmail.com [1]
> Fixes: 1f828223b799 ("memcg: flush lruvec stats in the refault")
> Signed-off-by: Shakeel Butt
> Reported-by: Daniel Dao
> Cc:
> ---
>  include/linux/memcontrol.h |  5 +++++
>  mm/memcontrol.c            | 12 +++++++++++-
>  mm/workingset.c            |  2 +-
>  3 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index a68dce3873fc..89b14729d59f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1012,6 +1012,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
>  }
> 
>  void mem_cgroup_flush_stats(void);
> +void mem_cgroup_flush_stats_delayed(void);
> 
>  void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>  			      int val);
> @@ -1455,6 +1456,10 @@ static inline void mem_cgroup_flush_stats(void)
>  {
>  }
> 
> +static inline void mem_cgroup_flush_stats_delayed(void)
> +{
> +}
> +
>  static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
>  					    enum node_stat_item idx, int val)
>  {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index f79bb3f25ce4..edfb337e6948 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -587,6 +587,9 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
>  static DEFINE_SPINLOCK(stats_flush_lock);
>  static DEFINE_PER_CPU(unsigned int, stats_updates);
>  static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> +static u64 flush_next_time;
> +
> +#define FLUSH_TIME (2UL*HZ)
> 
>  /*
>   * Accessors to ensure that preemption is disabled on PREEMPT_RT because it can
> @@ -637,6 +640,7 @@ static void __mem_cgroup_flush_stats(void)
>  	if (!spin_trylock_irqsave(&stats_flush_lock, flag))
>  		return;
> 
> +	flush_next_time = jiffies_64 + 2*FLUSH_TIME;
>  	cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
>  	atomic_set(&stats_flush_threshold, 0);
>  	spin_unlock_irqrestore(&stats_flush_lock, flag);
> @@ -648,10 +652,16 @@ void mem_cgroup_flush_stats(void)
>  		__mem_cgroup_flush_stats();
>  }
> 
> +void mem_cgroup_flush_stats_delayed(void)
> +{
> +	if (time_after64(jiffies_64, flush_next_time))
> +		mem_cgroup_flush_stats();
> +}
> +
>  static void flush_memcg_stats_dwork(struct work_struct *w)
>  {
>  	__mem_cgroup_flush_stats();
> -	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
> +	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
>  }
> 

Given that flush_next_time is in practice updated only by the periodic
flusher, flushing stats is effectively disabled for page faults, as the
sync flush can hardly run before the periodic one; the sketch below puts
rough numbers on this.
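To be concrete, here is a minimal userspace model (not kernel code; HZ,
the 2s FLUSH_TIME period, and the jiffies_64 + 2*FLUSH_TIME horizon
mirror the patch above, everything else is made up for illustration):

/*
 * Toy model: the worker runs every FLUSH_TIME and pushes the deadline
 * 2*FLUSH_TIME ahead, so the sync path sees jiffies > flush_next_time
 * only once the worker is late by more than a whole FLUSH_TIME.
 */
#include <stdbool.h>
#include <stdio.h>

#define HZ		1000ULL		/* assumed tick rate */
#define FLUSH_TIME	(2 * HZ)	/* 2s, as in the patch */

static unsigned long long jiffies;	/* fake clock, in ticks */
static unsigned long long flush_next_time;

static void periodic_flush(void)
{
	flush_next_time = jiffies + 2 * FLUSH_TIME;	/* 4s horizon */
}

/* the condition mem_cgroup_flush_stats_delayed() tests */
static bool sync_flush_allowed(void)
{
	return jiffies > flush_next_time;
}

int main(void)
{
	/* Worker on time: runs at t = 0s, 2s, ..., 10s. */
	for (jiffies = 0; jiffies <= 10 * HZ; jiffies += HZ) {
		if (jiffies % FLUSH_TIME == 0)
			periodic_flush();
		printf("t=%2llus sync flush %s\n", jiffies / HZ,
		       sync_flush_allowed() ? "allowed" : "skipped");
	}
	/* Worker stalls after t = 10s: deadline stays at 14s. */
	for (; jiffies <= 16 * HZ; jiffies += HZ)
		printf("t=%2llus sync flush %s\n", jiffies / HZ,
		       sync_flush_allowed() ? "allowed" : "skipped");
	return 0;
}

Every tick up to t=14s prints "skipped"; only from t=15s on, i.e. once
the worker has missed its t=12s slot by more than FLUSH_TIME, does the
sync path get to flush at all.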
If that is the case, turn the sync flush into a signal that requests
flushing stats sooner. Just a thought for now.

Hillf
--- x/mm/memcontrol.c
+++ y/mm/memcontrol.c
@@ -587,9 +587,9 @@ static DECLARE_DEFERRABLE_WORK(stats_flu
 static DEFINE_SPINLOCK(stats_flush_lock);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
-static u64 flush_next_time;
 
-#define FLUSH_TIME (2UL*HZ)
+#define FLUSH_STATS_TICK (2UL*HZ)
+static unsigned long flush_stats_tick = FLUSH_STATS_TICK;
 
 /*
  * Accessors to ensure that preemption is disabled on PREEMPT_RT because it can
@@ -640,7 +640,6 @@ static void __mem_cgroup_flush_stats(voi
 	if (!spin_trylock_irqsave(&stats_flush_lock, flag))
 		return;
 
-	flush_next_time = jiffies_64 + 2*FLUSH_TIME;
 	cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
 	atomic_set(&stats_flush_threshold, 0);
 	spin_unlock_irqrestore(&stats_flush_lock, flag);
@@ -654,14 +653,15 @@ void mem_cgroup_flush_stats(void)
 
 void mem_cgroup_flush_stats_delayed(void)
 {
-	if (time_after64(jiffies_64, flush_next_time))
-		mem_cgroup_flush_stats();
+	/* s/mem_cgroup_flush_stats_delayed/mem_cgroup_signal_flush_stats/ */
+	flush_stats_tick /= 2;
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
 	__mem_cgroup_flush_stats();
-	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
+	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, flush_stats_tick);
+	flush_stats_tick = FLUSH_STATS_TICK;
 }
 
 /**
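Under the same toy scaffolding as above, the signal variant would behave
roughly as follows (only the halving and the restore-after-requeue come
from the diff; the rest is made up for illustration):

/*
 * Toy model of the signal idea: each signal halves the delay used the
 * next time the worker requeues itself; the worker then restores the
 * full tick.
 */
#include <stdio.h>

#define HZ			1000UL
#define FLUSH_STATS_TICK	(2UL * HZ)	/* 2s, as in the diff */

static unsigned long flush_stats_tick = FLUSH_STATS_TICK;

/* what mem_cgroup_signal_flush_stats() would do on, e.g., a refault */
static void signal_flush_stats(void)
{
	flush_stats_tick /= 2;
}

/* what flush_memcg_stats_dwork() would do: pick a delay, then restore */
static unsigned long flush_worker(void)
{
	unsigned long next = flush_stats_tick;

	flush_stats_tick = FLUSH_STATS_TICK;
	return next;
}

int main(void)
{
	int i;

	/* three refault-path signals between two worker runs */
	for (i = 0; i < 3; i++)
		signal_flush_stats();
	printf("next flush requested in %lu jiffies (%.2fs)\n",
	       flush_stats_tick, (double)flush_stats_tick / HZ);
	printf("worker requeues after %lu jiffies\n", flush_worker());
	printf("tick restored to %lu jiffies\n", flush_stats_tick);
	return 0;
}

Two things stand out in this model: a signal does not touch the already
queued work, so it only shortens the period after the next flush, and
enough signals in one window drive the tick to zero, i.e. an immediate
requeue. Both may be acceptable for a "flush sooner" hint, but they are
worth keeping in mind.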