From: Shakeel Butt
Date: Tue, 4 May 2021 12:37:18 -0700
Subject: Re: [PATCH v2 1/2] mm: memcg/slab: Properly set up gfp flags for objcg pointer array
To: Waiman Long
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Roman Gushchin, LKML, Cgroups, Linux MM
In-Reply-To: <20210504132350.4693-2-longman@redhat.com>
References: <20210504132350.4693-1-longman@redhat.com> <20210504132350.4693-2-longman@redhat.com>

On Tue, May 4, 2021 at 6:24 AM Waiman Long wrote:
>
> Since the merging of the new slab memory controller in v5.9, the page
> structure may store a pointer to the obj_cgroup pointer array for slab
> pages. Currently, only the __GFP_ACCOUNT bit is masked off. However,
> the array is not readily reclaimable and doesn't need to come from the
> DMA zone, so those GFP bits should be masked off as well.
>
> Do the flag bit clearing in memcg_alloc_page_obj_cgroups() to make sure
> that it is consistently applied no matter where it is called.
>
> Fixes: 286e04b8ed7a ("mm: memcg/slab: allocate obj_cgroups for non-root slab pages")
> Signed-off-by: Waiman Long
> ---
>  mm/memcontrol.c | 8 ++++++++
>  mm/slab.h       | 1 -
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index c100265dc393..5e3b4f23b830 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2863,6 +2863,13 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
>  }
>
>  #ifdef CONFIG_MEMCG_KMEM
> +/*
> + * The allocated objcg pointers array is not accounted directly.
> + * Moreover, it should not come from DMA buffer and is not readily
> + * reclaimable. So those GFP bits should be masked off.
> + */
> +#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)

What about __GFP_DMA32? Does it matter? It seems like DMA32 requests go
to normal caches.
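(Purely illustrative, not part of the posted patch: if __GFP_DMA32 did turn
out to matter here, the mask could presumably be extended along these lines.
__GFP_DMA32 is the only flag added relative to the definition quoted above.)

/*
 * Hypothetical variant, only if DMA32 requests were served from
 * dedicated caches -- this is not what the patch above does.
 */
#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_DMA32 | \
				 __GFP_RECLAIMABLE | __GFP_ACCOUNT)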
> +
>  int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
>  				 gfp_t gfp, bool new_page)
>  {
> @@ -2870,6 +2877,7 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
>  	unsigned long memcg_data;
>  	void *vec;
>
> +	gfp &= ~OBJCGS_CLEAR_MASK;
>  	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
>  			   page_to_nid(page));
>  	if (!vec)
> diff --git a/mm/slab.h b/mm/slab.h
> index 18c1927cd196..b3294712a686 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -309,7 +309,6 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
>  	if (!memcg_kmem_enabled() || !objcg)
>  		return;
>
> -	flags &= ~__GFP_ACCOUNT;
>  	for (i = 0; i < size; i++) {
>  		if (likely(p[i])) {
>  			page = virt_to_head_page(p[i]);
> --
> 2.18.1
>