From: Yafang Shao
Date: Fri, 17 Jan 2020 09:14:35 +0800
Subject: Re: [PATCH] mm: verify page type before getting memcg from it
To: Roman Gushchin
Cc: Michal Hocko, "dchinner@redhat.com", "akpm@linux-foundation.org",
 "linux-mm@kvack.org"
In-Reply-To: <20200116161904.GA14228@tower.DHCP.thefacebook.com>
References: <1579183811-1898-1-git-send-email-laoar.shao@gmail.com>
 <20200116155056.GA19428@dhcp22.suse.cz>
 <20200116161904.GA14228@tower.DHCP.thefacebook.com>

On Fri, Jan 17, 2020 at 12:19 AM Roman Gushchin wrote:
>
> On Thu, Jan 16, 2020 at 04:50:56PM +0100, Michal Hocko wrote:
> > [Cc Roman]
>
> Thanks!
>
> >
> > On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> > > Per discussion with Dave[1], we always assume we only ever put objects
> > > from memcg-associated slab pages on the list_lru. list_lru_from_kmem()
> > > calls memcg_from_slab_page(), which makes no attempt to verify that the
> > > page is actually a slab page. But currently the binder code (in
> > > drivers/android/binder_alloc.c) stores normal pages on the list_lru,
> > > rather than slab objects. The only reason the binder doesn't hit this
> > > issue is that its list_lru is not configured to be memcg aware.
> > > To make this more robust, we should verify the page type before getting
> > > the memcg from it. This patch introduces a new helper and modifies the
> > > old one, so we end up with the two helpers below:
> > >
> > > struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> > > struct mem_cgroup *memcg_from_slab_page(struct page *page);
> > >
> > > The first helper is used when we are sure the page is a slab page and
> > > also a head page, while the second is used when we are not sure of the
> > > page type.
> > >
> > > [1]. https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
> > >
> > > Suggested-by: Dave Chinner
> > > Signed-off-by: Yafang Shao
>
> Hello Yafang!
>
> I actually have something similar in my patch queue, but I'm adding
> a helper which takes a kernel pointer rather than a page:
> struct mem_cgroup *mem_cgroup_from_obj(void *p);
>
> Will it work for you? If so, I can send it separately.
>

Yes, it fixes the issue as well. Please send it separately.

> (I'm working on switching to per-object accounting of slab objects,
> so that slab pages will be shared between multiple cgroups, and that
> will require a change like this.)
>
> Thanks!
>
> --
>
> From fc2b1ec53285edcb0017275019d60bd577bf64a9 Mon Sep 17 00:00:00 2001
> From: Roman Gushchin
> Date: Thu, 2 Jan 2020 15:22:19 -0800
> Subject: [PATCH] mm: memcg/slab: introduce mem_cgroup_from_obj()
>
> Sometimes we need to get a memcg pointer from a charged kernel object.
> The right way to do it depends on whether it's a proper slab object
> or it's backed by raw pages (e.g. it's a vmalloc allocation). In the
> first case the kmem_cache->memcg_params.memcg indirection should be
> used; in the second case it's just page->mem_cgroup.
>
> To simplify this task and hide these implementation details, let's
> introduce the mem_cgroup_from_obj() helper, which takes a pointer
> to any kernel object and returns a valid memcg pointer or NULL.
>
> The caller is still responsible for ensuring that the returned memcg
> isn't going away underneath: take the RCU read lock, the cgroup mutex, etc.
>
> mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete
> and can be removed.
>
> Signed-off-by: Roman Gushchin

Acked-by: Yafang Shao

> ---
>  include/linux/memcontrol.h |  7 +++++++
>  mm/list_lru.c              | 12 +-----------
>  mm/memcontrol.c            | 32 +++++++++++++++++++++++++++++---
>  3 files changed, 37 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index c372bed6be80..0f6f8e18029e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -420,6 +420,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
> +struct mem_cgroup *mem_cgroup_from_obj(void *p);
> +
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
>
>  struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
> @@ -912,6 +914,11 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>  	return true;
>  }
>
> +static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +	return NULL;
> +}
> +
>  static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
>  	return NULL;
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0f1f6b06b7f3..8de5e3784ee4 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
>  	return &nlru->lru;
>  }
>
> -static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> -{
> -	struct page *page;
> -
> -	if (!memcg_kmem_enabled())
> -		return NULL;
> -	page = virt_to_head_page(ptr);
> -	return memcg_from_slab_page(page);
> -}
> -
>  static inline struct list_lru_one *
>  list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>  		   struct mem_cgroup **memcg_ptr)
> @@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>  	if (!nlru->memcg_lrus)
>  		goto out;
>
> -	memcg = mem_cgroup_from_kmem(ptr);
> +	memcg = mem_cgroup_from_obj(ptr);
>  	if (!memcg)
>  		goto out;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6e1ee8577ecf..99d6fe9d7026 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -757,13 +757,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>
>  void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  {
> -	struct page *page = virt_to_head_page(p);
> -	pg_data_t *pgdat = page_pgdat(page);
> +	pg_data_t *pgdat = page_pgdat(virt_to_page(p));
>  	struct mem_cgroup *memcg;
>  	struct lruvec *lruvec;
>
>  	rcu_read_lock();
> -	memcg = memcg_from_slab_page(page);
> +	memcg = mem_cgroup_from_obj(p);
>
>  	/* Untracked pages have no memcg, no lruvec. Update only the node */
>  	if (!memcg || memcg == root_mem_cgroup) {
> @@ -2636,6 +2635,33 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
>  	unlock_page_lru(page, isolated);
>  }
>
> +/*
> + * Returns a pointer to the memory cgroup to which the kernel object is charged.
> + *
> + * The caller must ensure the memcg lifetime, e.g. by owning a charged object,
> + * taking rcu_read_lock() or cgroup_mutex.
> + */
> +struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +	struct page *page;
> +
> +	if (mem_cgroup_disabled())
> +		return NULL;
> +
> +	page = virt_to_head_page(p);
> +
> +	/*
> +	 * Slab pages don't have page->mem_cgroup set because corresponding
> +	 * kmem caches can be reparented during the lifetime. That's why
> +	 * cache->memcg_params.memcg pointer should be used instead.
> +	 */
> +	if (PageSlab(page))
> +		return memcg_from_slab_page(page);
> +
> +	/* All other pages use page->mem_cgroup */
> +	return page->mem_cgroup;
> +}
> +
>  #ifdef CONFIG_MEMCG_KMEM
>  static int memcg_alloc_cache_id(void)
>  {
> --
> 2.21.1
>
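For reference, a minimal sketch of how a caller might use the new helper,
in the spirit of the list_lru_from_kmem() hunk above. The function
obj_memcg_matches() and its arguments are hypothetical, made up purely for
illustration; the only real interfaces used are mem_cgroup_from_obj() from
this patch and the RCU read lock that its comment asks callers to hold:

static bool obj_memcg_matches(void *obj, struct mem_cgroup *target)
{
	struct mem_cgroup *memcg;
	bool match;

	/*
	 * Hold the RCU read lock so the memcg returned by
	 * mem_cgroup_from_obj() cannot be freed underneath us.
	 */
	rcu_read_lock();
	memcg = mem_cgroup_from_obj(obj);
	match = (memcg == target);
	rcu_read_unlock();

	return match;
}

Such a caller works for both slab objects and raw-page-backed allocations,
because mem_cgroup_from_obj() itself dispatches on PageSlab() and picks the
right indirection (cache->memcg_params.memcg vs. page->mem_cgroup).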