From: Muchun Song
Date: Tue, 30 Mar 2021 00:13:01 +0800
Subject: Re: [External] [PATCH 2/3] mm: Charge active memcg when no mm is set
To: Dan Schatzberg
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
    Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
    Roman Gushchin, Yang Shi, Alex Shi, Alexander Duyck, Yafang Shao,
    Wei Yang, "open list:BLOCK LAYER", open list,
    "open list:CONTROL GROUP (CGROUP)", "open list:MEMORY MANAGEMENT",
    Chris Down
In-Reply-To: <20210329144829.1834347-3-schatzberg.dan@gmail.com>
References: <20210329144829.1834347-1-schatzberg.dan@gmail.com>
 <20210329144829.1834347-3-schatzberg.dan@gmail.com>

On Mon, Mar 29, 2021 at 10:49 PM Dan Schatzberg wrote:
>
> set_active_memcg() worked for kernel allocations but was silently
> ignored for user pages.
>
> This patch establishes a precedence order for who gets charged:
>
> 1. If there is a memcg associated with the page already, that memcg is
> charged. This happens during swapin.
>
> 2. If an explicit mm is passed, mm->memcg is charged. This happens
> during page faults, which can be triggered in remote VMs (eg gup).
>
> 3. Otherwise consult the current process context. If there is an
> active_memcg, use that. Otherwise, current->mm->memcg.
>
> Previously, if a NULL mm was passed to mem_cgroup_charge (case 3) it
> would always charge the root cgroup. Now it looks up the active_memcg
> first (falling back to charging the root cgroup if not set).
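
If I am reading the changelog right, the charge target selection now
boils down to roughly the pseudo-code below. This is only me restating
the precedence to confirm my understanding; the helper names are
illustrative, not the actual functions this patch touches.

	/* Illustrative only: which memcg a newly charged page ends up in. */
	if (page_memcg(page))			/* 1) e.g. swapin: page already has a memcg */
		memcg = page_memcg(page);
	else if (mm)				/* 2) explicit mm, e.g. a remote fault via gup */
		memcg = mem_cgroup_from_mm(mm);
	else if (active_memcg())		/* 3a) inside a set_active_memcg() scope */
		memcg = active_memcg();
	else if (current->mm)			/* 3b) fall back to the current task's mm */
		memcg = mem_cgroup_from_mm(current->mm);
	else					/* 3c) last resort: the root cgroup */
		memcg = root_mem_cgroup;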
>
> Signed-off-by: Dan Schatzberg
> Acked-by: Johannes Weiner
> Acked-by: Tejun Heo
> Acked-by: Chris Down
> Reviewed-by: Shakeel Butt
> ---
>  mm/filemap.c    |  2 +-
>  mm/memcontrol.c | 72 ++++++++++++++++++++++++++++---------------------
>  mm/shmem.c      |  4 +--
>  3 files changed, 44 insertions(+), 34 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index eeeb8e2cc36a..63fd980e863a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
>         page->index = offset;
>
>         if (!huge) {
> -               error = mem_cgroup_charge(page, current->mm, gfp);
> +               error = mem_cgroup_charge(page, NULL, gfp);
>                 if (error)
>                         goto error;
>                 charged = true;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 668d1d7c2645..adc618814fd2 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -884,13 +884,38 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
>  }
>  EXPORT_SYMBOL(mem_cgroup_from_task);
>
> +static __always_inline struct mem_cgroup *active_memcg(void)
> +{
> +       if (in_interrupt())
> +               return this_cpu_read(int_active_memcg);
> +       else
> +               return current->active_memcg;
> +}
> +
> +static __always_inline struct mem_cgroup *get_active_memcg(void)
> +{
> +       struct mem_cgroup *memcg;
> +
> +       rcu_read_lock();
> +       memcg = active_memcg();
> +       /* remote memcg must hold a ref. */
> +       if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css)))
> +               memcg = root_mem_cgroup;
> +       rcu_read_unlock();
> +
> +       return memcg;
> +}

This function has already been removed by the patchset below:

  Use obj_cgroup APIs to charge kmem pages
  https://lore.kernel.org/patchwork/cover/1399132/

so I suggest not reintroducing get_active_memcg(). It has only one user,
so just inline it into get_mem_cgroup_from_mm(); we do not need
get_active_memcg() at all.

> +
>  /**
>   * get_mem_cgroup_from_mm: Obtain a reference on given mm_struct's memcg.
>   * @mm: mm from which memcg should be extracted. It can be NULL.
>   *
> - * Obtain a reference on mm->memcg and returns it if successful. Otherwise
> - * root_mem_cgroup is returned. However if mem_cgroup is disabled, NULL is
> - * returned.
> + * Obtain a reference on mm->memcg and returns it if successful. If mm
> + * is NULL, then the memcg is chosen as follows:
> + * 1) The active memcg, if set.
> + * 2) current->mm->memcg, if available
> + * 3) root memcg
> + * If mem_cgroup is disabled, NULL is returned.
>   */
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
> @@ -899,13 +924,19 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>         if (mem_cgroup_disabled())
>                 return NULL;
>
> +       /*
> +        * Page cache insertions can happen without an
> +        * actual mm context, e.g. during disk probing
> +        * on boot, loopback IO, acct() writes etc.
> +        */
> +       if (unlikely(!mm)) {
> +               if (unlikely(active_memcg()))
> +                       return get_active_memcg();

Since the remote memcg must already hold a reference, we do not need the
css_tryget() dance that get_active_memcg() does. Just use css_get() to
take the ref; it is simpler. Like below:

+       if (unlikely(!mm)) {
+               memcg = active_memcg();
+               if (unlikely(memcg)) {
+                       /* remote memcg must hold a ref. */
+                       css_get(&memcg->css);
+                       return memcg;
+               }

Thanks.
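
For completeness, with the above folded in, the function could end up
looking roughly like this (an untested sketch, written assuming the
obj_cgroup series mentioned above is applied first; the lookup loop is
meant to stay exactly as it is today):

	struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
	{
		struct mem_cgroup *memcg;

		if (mem_cgroup_disabled())
			return NULL;

		/*
		 * Page cache insertions can happen without an
		 * actual mm context, e.g. during disk probing
		 * on boot, loopback IO, acct() writes etc.
		 */
		if (unlikely(!mm)) {
			memcg = active_memcg();
			if (unlikely(memcg)) {
				/* remote memcg must hold a ref. */
				css_get(&memcg->css);
				return memcg;
			}
			mm = current->mm;
		}

		rcu_read_lock();
		do {
			/* unchanged: fall back to root when there is still no mm */
			if (unlikely(!mm))
				memcg = root_mem_cgroup;
			else {
				memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
				if (unlikely(!memcg))
					memcg = root_mem_cgroup;
			}
		} while (!css_tryget(&memcg->css));
		rcu_read_unlock();

		return memcg;
	}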
> +               mm = current->mm;
> +       }
> +
>         rcu_read_lock();
>         do {
> -               /*
> -                * Page cache insertions can happen withou an
> -                * actual mm context, e.g. during disk probing
> -                * on boot, loopback IO, acct() writes etc.
> -                */
>                 if (unlikely(!mm))
>                         memcg = root_mem_cgroup;
>                 else {
> @@ -919,28 +950,6 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  }
>  EXPORT_SYMBOL(get_mem_cgroup_from_mm);
>
> -static __always_inline struct mem_cgroup *active_memcg(void)
> -{
> -       if (in_interrupt())
> -               return this_cpu_read(int_active_memcg);
> -       else
> -               return current->active_memcg;
> -}
> -
> -static __always_inline struct mem_cgroup *get_active_memcg(void)
> -{
> -       struct mem_cgroup *memcg;
> -
> -       rcu_read_lock();
> -       memcg = active_memcg();
> -       /* remote memcg must hold a ref. */
> -       if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css)))
> -               memcg = root_mem_cgroup;
> -       rcu_read_unlock();
> -
> -       return memcg;
> -}
> -
>  static __always_inline bool memcg_kmem_bypass(void)
>  {
>         /* Allow remote memcg charging from any context. */
> @@ -6549,7 +6558,8 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>   * @gfp_mask: reclaim mode
>   *
>   * Try to charge @page to the memcg that @mm belongs to, reclaiming
> - * pages according to @gfp_mask if necessary.
> + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> + * charge to the active memcg.
>   *
>   * Do not use this for pages allocated for swapin.
>   *
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 78ab81a62b29..7c09276125d5 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1694,7 +1694,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
>  {
>         struct address_space *mapping = inode->i_mapping;
>         struct shmem_inode_info *info = SHMEM_I(inode);
> -       struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
> +       struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
>         struct page *page;
>         swp_entry_t swap;
>         int error;
> @@ -1815,7 +1815,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  }
>
>         sbinfo = SHMEM_SB(inode->i_sb);
> -       charge_mm = vma ? vma->vm_mm : current->mm;
> +       charge_mm = vma ? vma->vm_mm : NULL;
>
>         page = pagecache_get_page(mapping, index,
>                                   FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
> --
> 2.30.2
>
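
One more note, mostly to confirm the bigger picture: with this in place,
a caller that wants its page cache insertions charged to another cgroup
(e.g. whatever caller the rest of this series introduces) only has to
bracket the insertion with set_active_memcg(). Roughly like below; this
is illustrative, not code from this series, and target_memcg, page,
mapping, index and gfp are placeholders:

	struct mem_cgroup *old_memcg;
	int err;

	old_memcg = set_active_memcg(target_memcg);
	/*
	 * __add_to_page_cache_locked() now passes a NULL mm to
	 * mem_cgroup_charge(), so the new page is charged to
	 * target_memcg instead of current->mm's (or the root) memcg.
	 */
	err = add_to_page_cache_lru(page, mapping, index, gfp);
	set_active_memcg(old_memcg);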