Date: Sun, 12 Sep 2021 21:29:14 +0800
From: Feng Tang <feng.tang@intel.com>
To: Hillf Danton
Cc: Shakeel Butt, LKML, Xing Zhengjun, Linux MM
Subject: Re: [memcg] 45208c9105: aim7.jobs-per-min -14.0% regression
Message-ID: <20210912132914.GA56674@shbuild999.sh.intel.com>
References: <20210902215504.dSSfDKJZu%akpm@linux-foundation.org>
 <20210905124439.GA15026@xsang-OptiPlex-9020>
 <20210907033000.GA88160@shbuild999.sh.intel.com>
 <20210912111756.4158-1-hdanton@sina.com>
In-Reply-To: <20210912111756.4158-1-hdanton@sina.com>
On Sun, Sep 12, 2021 at 07:17:56PM +0800, Hillf Danton wrote:
[...]
> > +//	if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
> > +	if (!(__this_cpu_inc_return(stats_flush_threshold) % 128))
> >  		queue_work(system_unbound_wq, &stats_flush_work);
> >  }
> 
> Hi Feng,
> 
> Would you please check whether it helps fix the regression to avoid
> re-queuing an already-queued work item, by adding and checking an
> atomic counter.

Hi Hillf,

I just tested your patch; it didn't recover the regression, only
reducing it from -14% to around -13%, similar to the patch that
increases the charge batch number.

Thanks,
Feng

> Hillf
> 
> --- x/mm/memcontrol.c
> +++ y/mm/memcontrol.c
> @@ -108,6 +108,7 @@ static void flush_memcg_stats_dwork(stru
>  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
>  static void flush_memcg_stats_work(struct work_struct *w);
>  static DECLARE_WORK(stats_flush_work, flush_memcg_stats_work);
> +static atomic_t sfwork_queued;
>  static DEFINE_PER_CPU(unsigned int, stats_flush_threshold);
>  static DEFINE_SPINLOCK(stats_flush_lock);
> 
> @@ -660,8 +661,13 @@ void __mod_memcg_lruvec_state(struct lru
> 
>  	/* Update lruvec */
>  	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
> -	if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
> -		queue_work(system_unbound_wq, &stats_flush_work);
> +	if (!(__this_cpu_inc_return(stats_flush_threshold) %
> +	      MEMCG_CHARGE_BATCH)) {
> +		int queued = atomic_read(&sfwork_queued);
> +
> +		if (!queued && atomic_try_cmpxchg(&sfwork_queued, &queued, 1))
> +			queue_work(system_unbound_wq, &stats_flush_work);
> +	}
>  }
> 
>  /**
> @@ -5376,6 +5382,7 @@ static void flush_memcg_stats_dwork(stru
>  static void flush_memcg_stats_work(struct work_struct *w)
>  {
>  	mem_cgroup_flush_stats();
> +	atomic_dec(&sfwork_queued);
>  }
> 
>  static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
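
For reference, below is a minimal userspace C sketch of the pattern the
quoted patch uses: an atomic flag is tested and set before queuing the
flush work, and cleared by the worker once the flush has run.  This is
not the kernel patch itself; it assumes C11 atomics, and CHARGE_BATCH,
fake_queue_work(), do_flush() and flush_work_fn() are illustrative
stand-ins for MEMCG_CHARGE_BATCH, queue_work(), mem_cgroup_flush_stats()
and the work callback.

/*
 * Userspace sketch of "don't re-queue an already-queued work item".
 * The names below are placeholders, not kernel APIs.
 */
#include <stdatomic.h>
#include <stdio.h>

#define CHARGE_BATCH 64			/* stand-in for MEMCG_CHARGE_BATCH */

static atomic_int flush_queued;		/* 0 = idle, 1 = work already queued */
static _Thread_local unsigned int flush_threshold; /* stand-in for the per-CPU counter */

static void do_flush(void)		/* stand-in for mem_cgroup_flush_stats() */
{
	printf("flushing stats\n");
}

static void flush_work_fn(void)		/* what the queued work would run */
{
	do_flush();
	atomic_fetch_sub(&flush_queued, 1);	/* mirrors atomic_dec(&sfwork_queued) */
}

static void fake_queue_work(void (*fn)(void))
{
	/* A real kernel would hand this off to system_unbound_wq; here we just run it. */
	fn();
}

static void update_stat(void)
{
	if (!(++flush_threshold % CHARGE_BATCH)) {
		int queued = atomic_load(&flush_queued);

		/* Only the caller that flips the flag 0 -> 1 queues the work. */
		if (!queued &&
		    atomic_compare_exchange_strong(&flush_queued, &queued, 1))
			fake_queue_work(flush_work_fn);
	}
}

int main(void)
{
	for (int i = 0; i < 1000; i++)
		update_stat();
	return 0;
}

In the sketch, as in the patch, the compare-and-exchange makes the
"queue the flush" step idempotent per flush cycle, which is why it can
reduce redundant work-queueing but cannot by itself remove the per-update
counter cost that the regression report points at.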