From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 9 Jul 2021 10:17:48 -0700
Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
To: Johannes Weiner
Cc: Tejun Heo, Michal Hocko, vdavydov.dev@gmail.com, Andrew Morton,
 Shakeel Butt, Roman Gushchin, songmuchun@bytedance.com, Yang Shi,
 alexs@kernel.org, alexander.h.duyck@linux.intel.com,
 richard.weiyang@gmail.com, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
 David Hildenbrand, Matthew Wilcox, apopple@nvidia.com, Minchan Kim,
 linmiaohe@huawei.com, LKML, cgroups mailinglist, linux-mm, kernel-team
References: <20210709000509.2618345-1-surenb@google.com> <20210709000509.2618345-3-surenb@google.com>

On Fri, Jul 9, 2021 at 8:18 AM Suren Baghdasaryan wrote:
>
> On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner wrote:
> >
> > On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > > functions to perform the mem_cgroup_disabled() static key check
> > > inline before calling the main body of the function. This minimizes
> > > the memcg overhead in the pagefault and exit_mmap paths when memcgs
> > > are disabled using the cgroup_disable=memory command-line option.
> > > This change results in ~0.4% overhead reduction when running the PFT
> > > test comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
> > > cgroup_disable=memory} configuration on an 8-core ARM64 Android
> > > device.
> > >
> > > Signed-off-by: Suren Baghdasaryan
> >
> > Sounds reasonable to me as well. One comment:
> >
> > > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> > >  		page_counter_read(&memcg->memory);
> > >  }
> > >
> > > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > > +
> > > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > > +			gfp_t gfp);
> > > +/**
> > > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > + * @page: page to charge
> > > + * @mm: mm context of the victim
> > > + * @gfp_mask: reclaim mode
> > > + *
> > > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > > + * pages according to @gfp_mask if necessary. If @mm is NULL, try to
> > > + * charge to the active memcg.
> > > + *
> > > + * Do not use this for pages allocated for swapin.
> > > + *
> > > + * Returns 0 on success. Otherwise, an error code is returned.
> > > + */
> > > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > > +				    gfp_t gfp_mask)
> > > +{
> > > +	struct mem_cgroup *memcg;
> > > +	int ret;
> > > +
> > > +	if (mem_cgroup_disabled())
> > > +		return 0;
> > > +
> > > +	memcg = get_mem_cgroup_from_mm(mm);
> > > +	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > > +	css_put(&memcg->css);
> > > +
> > > +	return ret;
> >
> > Why not do
> >
> > int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > 			gfp_t gfp_mask);
> >
> > static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > 				    gfp_t gfp_mask)
> > {
> > 	if (mem_cgroup_disabled())
> > 		return 0;
> >
> > 	return __mem_cgroup_charge(page, mm, gfp_mask);
> > }
> >
> > like in the other cases as well?
> >
> > That would avoid inlining two separate function calls into all the
> > callsites...
> >
> > There is an (internal) __mem_cgroup_charge() already, but you can
> > rename it charge_memcg().
>
> Sounds good. I'll post an updated version with your suggestion.
> Thanks for the review, Johannes!

Posted v2 just for this patch at
https://lore.kernel.org/patchwork/patch/1455550 .
Please let me know if you want me to resend the whole patchset instead
of just this patch.
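Putting the pieces of the suggestion together, a minimal sketch of the
resulting arrangement -- hypothetical, with the actual charging logic
elided; the real version is in the v2 patch linked above:

/* mm/memcontrol.c */

/* The old internal __mem_cgroup_charge(), renamed as suggested. */
static int charge_memcg(struct page *page, struct mem_cgroup *memcg,
			gfp_t gfp)
{
	/* ... existing charging logic, elided ... */
	return 0;
}

/* Out-of-line slow path: look up and pin the memcg, charge, unpin. */
int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
	int ret = charge_memcg(page, memcg, gfp_mask);

	css_put(&memcg->css);
	return ret;
}

/* include/linux/memcontrol.h */

/*
 * Inline fast path: with cgroup_disable=memory, callers only pay for
 * the static-key branch in mem_cgroup_disabled() and skip the function
 * call entirely.
 */
static inline int mem_cgroup_charge(struct page *page,
				    struct mm_struct *mm,
				    gfp_t gfp_mask)
{
	if (mem_cgroup_disabled())
		return 0;

	return __mem_cgroup_charge(page, mm, gfp_mask);
}

This keeps a single out-of-line call in the charge path while still
letting the disabled check be inlined into every callsite, which is the
point of Johannes's "avoid inlining two separate function calls" remark.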