From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 9 Jul 2021 08:18:36 -0700
Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to
 improve disabled memcg config
To: Johannes Weiner
Cc: Tejun Heo, Michal Hocko, vdavydov.dev@gmail.com, Andrew Morton,
 Shakeel Butt, Roman Gushchin, songmuchun@bytedance.com, Yang Shi,
 alexs@kernel.org, alexander.h.duyck@linux.intel.com,
 richard.weiyang@gmail.com, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
 David Hildenbrand, Matthew Wilcox, apopple@nvidia.com, Minchan Kim,
 linmiaohe@huawei.com, LKML, cgroups mailing list, linux-mm, kernel-team
References: <20210709000509.2618345-1-surenb@google.com>
 <20210709000509.2618345-3-surenb@google.com>

On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner wrote:
>
> On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > Inline the mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > functions to perform the mem_cgroup_disabled() static key check inline,
> > before calling the main body of the function. This minimizes the memcg
> > overhead in the pagefault and exit_mmap paths when memcgs are disabled
> > using the cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running the PFT
> > test, comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
> > cgroup_disable=memory} configurations on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan
>
> Sounds reasonable to me as well. One comment:
>
> > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >  		page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > +
> > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > +			gfp_t gfp);
> > +/**
> > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * @page: page to charge
> > + * @mm: mm context of the victim
> > + * @gfp_mask: reclaim mode
> > + *
> > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > + * pages according to @gfp_mask if necessary. If @mm is NULL, try to
> > + * charge to the active memcg.
> > + *
> > + * Do not use this for pages allocated for swapin.
> > + *
> > + * Returns 0 on success. Otherwise, an error code is returned.
> > + */
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +				    gfp_t gfp_mask)
> > +{
> > +	struct mem_cgroup *memcg;
> > +	int ret;
> > +
> > +	if (mem_cgroup_disabled())
> > +		return 0;
> > +
> > +	memcg = get_mem_cgroup_from_mm(mm);
> > +	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +	css_put(&memcg->css);
> > +
> > +	return ret;
>
> Why not do
>
>	int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
>				gfp_t gfp_mask);
>
>	static inline int mem_cgroup_charge(struct page *page,
>					    struct mm_struct *mm,
>					    gfp_t gfp_mask)
>	{
>		if (mem_cgroup_disabled())
>			return 0;
>
>		return __mem_cgroup_charge(page, mm, gfp_mask);
>	}
>
> like in the other cases as well?
>
> That would avoid inlining two separate function calls into all the
> callsites...
>
> There is an (internal) __mem_cgroup_charge() already, but you can
> rename it to charge_memcg().

Sounds good. I'll post an updated version with your suggestion.
Thanks for the review, Johannes!
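
P.S. For anyone following along who wants to play with the pattern
outside the kernel, here is a minimal userspace sketch of the
inline-fast-path idea. All names are hypothetical stand-ins: the real
kernel uses a static key via mem_cgroup_disabled(), real struct
page/mm_struct types, and css refcounting, so treat this as an outline
of the shape, not the actual implementation.

/* memcg_inline_sketch.c -- illustrative only; every identifier here is
 * a made-up stand-in, not the kernel's API. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the mem_cgroup_disabled() static-key check. */
static bool memcg_disabled = true;

static inline bool sketch_memcg_disabled(void)
{
	return memcg_disabled;
}

/* Out-of-line slow path: the expensive work (memcg lookup, charge
 * attempt, refcount put) all lives here, behind a single call. */
static int __sketch_charge(void *page, void *mm, unsigned int gfp_mask)
{
	printf("charging page %p for mm %p (gfp=%#x)\n", page, mm, gfp_mask);
	return 0;
}

/* Inline fast path: when memcg is disabled, call sites pay only for
 * this predictable branch: no function call, no refcounting. */
static inline int sketch_charge(void *page, void *mm, unsigned int gfp_mask)
{
	if (sketch_memcg_disabled())
		return 0;

	return __sketch_charge(page, mm, gfp_mask);
}

int main(void)
{
	int page, mm;	/* dummies standing in for real objects */

	sketch_charge(&page, &mm, 0);	/* disabled: returns immediately */

	memcg_disabled = false;
	return sketch_charge(&page, &mm, 0);	/* enabled: slow path runs */
}

The win over the version in my patch is the one pointed out above: each
call site inlines a single cheap branch plus at most one call
instruction, rather than the full lookup/charge/put sequence.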