Date: Wed, 14 Apr 2021 23:27:46 -0400
From: Masayoshi Mizuma
To: Waiman Long
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
	Tejun Heo, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Shakeel Butt, Muchun Song, Alex Shi,
	Chris Down, Yafang Shao, Wei Yang, Xing Zhengjun
Subject: Re: [PATCH v3 2/5] mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
Message-ID: <20210415032746.hahhc5l5lbjhdvnr@gabell>
References: <20210414012027.5352-1-longman@redhat.com>
 <20210414012027.5352-3-longman@redhat.com>
In-Reply-To: <20210414012027.5352-3-longman@redhat.com>

On Tue, Apr 13, 2021 at 09:20:24PM -0400, Waiman Long wrote:
> In memcg_slab_free_hook()/pcpu_memcg_free_hook(), obj_cgroup_uncharge()
> is followed by mod_objcg_state()/mod_memcg_state(). Each of these
> function calls goes through a separate irq_save/irq_restore cycle. That
> is inefficient. Introduce a new function obj_cgroup_uncharge_mod_state()
> that combines them with a single irq_save/irq_restore cycle.
> 
> Signed-off-by: Waiman Long
> Reviewed-by: Shakeel Butt
> Acked-by: Roman Gushchin
> ---
>  include/linux/memcontrol.h |  2 ++
>  mm/memcontrol.c            | 31 +++++++++++++++++++++++++++----
>  mm/percpu.c                |  9 ++-------
>  mm/slab.h                  |  6 +++---
>  4 files changed, 34 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 95f12996e66c..6890f999c1a3 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1592,6 +1592,8 @@ struct obj_cgroup *get_obj_cgroup_from_current(void);
>  
>  int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
>  void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);
> +void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
> +				   struct pglist_data *pgdat, int idx);
>  
>  extern struct static_key_false memcg_kmem_enabled_key;
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d66e1e38f8ac..b19100c68aa0 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3225,12 +3225,9 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
>  	return false;
>  }
>  
> -static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
> +static void __refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
>  {
>  	struct memcg_stock_pcp *stock;
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
>  
>  	stock = this_cpu_ptr(&memcg_stock);
>  	if (stock->cached_objcg != objcg) { /* reset if necessary */
> @@ -3243,7 +3240,14 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
>  
>  	if (stock->nr_bytes > PAGE_SIZE)
>  		drain_obj_stock(stock);
> +}
> +
> +static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
> +{
> +	unsigned long flags;
>  
> +	local_irq_save(flags);
> +	__refill_obj_stock(objcg, nr_bytes);
>  	local_irq_restore(flags);
>  }
>  
> @@ -3292,6 +3296,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
>  	refill_obj_stock(objcg, size);
>  }
>  
> +void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
> +				   struct pglist_data *pgdat, int idx)
> +{
> +	unsigned long flags;
> +	struct mem_cgroup *memcg;
> +	struct lruvec *lruvec = NULL;
> +
> +	local_irq_save(flags);
> +	__refill_obj_stock(objcg, size);
> +
> +	rcu_read_lock();
> +	memcg = obj_cgroup_memcg(objcg);
> +	if (pgdat)
> +		lruvec = mem_cgroup_lruvec(memcg, pgdat);
> +	__mod_memcg_lruvec_state(memcg, lruvec, idx, -(int)size);
> +	rcu_read_unlock();
> +	local_irq_restore(flags);
> +}
> +
>  #endif /* CONFIG_MEMCG_KMEM */
>  
>  /*
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 23308113a5ff..fd7aad6d7f90 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1631,13 +1631,8 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
>  	objcg = chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT];
>  	chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL;
>  
> -	obj_cgroup_uncharge(objcg, size * num_possible_cpus());
> -
> -	rcu_read_lock();
> -	mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
> -			-(size * num_possible_cpus()));
> -	rcu_read_unlock();
> -
> +	obj_cgroup_uncharge_mod_state(objcg, size * num_possible_cpus(),
> +				      NULL, MEMCG_PERCPU_B);
>  	obj_cgroup_put(objcg);
>  }
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index bc6c7545e487..677cdc52e641 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -366,9 +366,9 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
>  			continue;
>  
>  		objcgs[off] = NULL;
> -		obj_cgroup_uncharge(objcg, obj_full_size(s));
> -		mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
> -				-obj_full_size(s));
> +		obj_cgroup_uncharge_mod_state(objcg, obj_full_size(s),
> +					      page_pgdat(page),
> +					      cache_vmstat_idx(s));
>  		obj_cgroup_put(objcg);
>  	}
>  }
> -- 
> 2.18.1
> 

Please feel free to add:

Tested-by: Masayoshi Mizuma

Thanks!
Masa