From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 10 Mar 2021 17:05:56 -0500
From: Johannes Weiner
To: Muchun Song
Cc: guro@fb.com, mhocko@kernel.org, akpm@linux-foundation.org,
	shakeelb@google.com, vdavydov.dev@gmail.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	duanxiongchun@bytedance.com
Subject: Re: [PATCH v3 3/4] mm: memcontrol: use obj_cgroup APIs to charge kmem pages
References: <20210309100717.253-1-songmuchun@bytedance.com>
	<20210309100717.253-4-songmuchun@bytedance.com>
In-Reply-To: <20210309100717.253-4-songmuchun@bytedance.com>

Hello Muchun,

On Tue, Mar 09, 2021 at 06:07:16PM +0800, Muchun Song wrote:
> @@ -6806,11 +6823,23 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
>  static void uncharge_batch(const struct uncharge_gather *ug)
>  {
>  	unsigned long flags;
> +	unsigned long nr_pages;
>
> -	if (!mem_cgroup_is_root(ug->memcg)) {
> -		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
> +	/*
> +	 * The kmem pages can be reparented to the root memcg, in
> +	 * order to prevent the memory counter of the root memcg
> +	 * from increasing indefinitely. We should decrease the
> +	 * memory counter when uncharging.
> +	 */
> +	if (mem_cgroup_is_root(ug->memcg))
> +		nr_pages = ug->nr_kmem;
> +	else
> +		nr_pages = ug->nr_pages;

Correct or not, I find this unreadable. We're uncharging nr_kmem on
the root, and nr_pages against leaf groups?

It implies several things that might not be immediately obvious to
the reader of this function. Namely, that nr_kmem is a subset of
nr_pages. Or that we don't *want* to account LRU pages for the root
cgroup.

The old code followed a very simple pattern: the root memcg's page
counters aren't touched.

This is no longer true: we modify them depending on very specific
circumstances. But that's too clever for the stupid uncharge_batch()
which is only supposed to flush a number of accumulators into their
corresponding page counters.

This distinction really needs to be moved down to uncharge_page() now.
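With the distinction made there, uncharge_batch() can go back to
being a dumb flusher in which every accumulator maps 1:1 to a page
counter. An untested sketch of what I mean -- ug->nr_memory is the
rename I suggest at the end of this mail, everything else is carried
over from the existing function:

	static void uncharge_batch(const struct uncharge_gather *ug)
	{
		unsigned long flags;

		/* No policy decisions here: just flush the accumulators */
		if (ug->nr_memory) {
			page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
			if (do_memsw_account())
				page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
			if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
				page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
			memcg_oom_recover(ug->memcg);
		}

		local_irq_save(flags);
		__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
		__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
		memcg_check_events(ug->memcg, ug->dummy_page);
		local_irq_restore(flags);

		/* drop reference from uncharge_page */
		css_put(&ug->memcg->css);
	}

Note that the mem_cgroup_is_root() test is gone from this level
entirely: nr_memory only ever accumulates what may actually be
uncharged from the counters.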
> @@ -6828,7 +6857,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
>
>  static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  {
> -	unsigned long nr_pages;
> +	unsigned long nr_pages, nr_kmem;
>  	struct mem_cgroup *memcg;
>
>  	VM_BUG_ON_PAGE(PageLRU(page), page);
> @@ -6836,34 +6865,44 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  	if (!page_memcg_charged(page))
>  		return;
>
> +	nr_pages = compound_nr(page);
>  	/*
>  	 * Nobody should be changing or seriously looking at
> -	 * page memcg at this point, we have fully exclusive
> -	 * access to the page.
> +	 * page memcg or objcg at this point, we have fully
> +	 * exclusive access to the page.
>  	 */
> -	memcg = page_memcg_check(page);
> +	if (PageMemcgKmem(page)) {
> +		struct obj_cgroup *objcg;
> +
> +		objcg = page_objcg(page);
> +		memcg = obj_cgroup_memcg_get(objcg);
> +
> +		page->memcg_data = 0;
> +		obj_cgroup_put(objcg);
> +		nr_kmem = nr_pages;
> +	} else {
> +		memcg = page_memcg(page);
> +		page->memcg_data = 0;
> +		nr_kmem = 0;
> +	}

Why is all this moved above the uncharge_batch() call? It separates
the pointer manipulations from the refcounting, which makes the code
very difficult to follow.
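For context, this is roughly how the existing code in
mm/memcontrol.c drives the two functions from the list uncharge
path, which is why uncharge_page() has to detect the memcg change
and trigger the flush itself:

	void mem_cgroup_uncharge_list(struct list_head *page_list)
	{
		struct uncharge_gather ug;
		struct page *page;

		if (mem_cgroup_disabled())
			return;

		uncharge_gather_clear(&ug);
		list_for_each_entry(page, page_list, lru)
			uncharge_page(page, &ug);	/* may flush on memcg change */
		if (ug.memcg)
			uncharge_batch(&ug);		/* flush the final batch */
	}

Every page runs through the full lookup/batch/clear sequence, so a
reader needs to be able to follow that sequence from top to bottom.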
> +
>  	if (ug->memcg != memcg) {
>  		if (ug->memcg) {
>  			uncharge_batch(ug);
>  			uncharge_gather_clear(ug);
>  		}
>  		ug->memcg = memcg;
> +		ug->dummy_page = page;

Why this change?

>  		/* pairs with css_put in uncharge_batch */
>  		css_get(&ug->memcg->css);
>  	}
>
> -	nr_pages = compound_nr(page);
>  	ug->nr_pages += nr_pages;
> +	ug->nr_kmem += nr_kmem;
> +	ug->pgpgout += !nr_kmem;

Oof.

Yes, this pgpgout line is an equivalent transformation for counting
LRU compound pages. But unless you already know that, it's completely
impossible to understand what the intent here is.

Please avoid clever tricks like this. If you need to check whether
the page is kmem, test PageMemcgKmem() instead of abusing the
counters as boolean flags. This is supposed to be read by human
beings, too.

> -	if (PageMemcgKmem(page))
> -		ug->nr_kmem += nr_pages;
> -	else
> -		ug->pgpgout++;
> -
> -	ug->dummy_page = page;
> -	page->memcg_data = 0;
> -	css_put(&ug->memcg->css);
> +	css_put(&memcg->css);

Sorry, these two functions are no longer readable after your
changes. Please retain the following sequence as discrete steps:

1. look up memcg from the page
2. flush existing batch if memcg changed
3. add page's various counts to the current batch
4. clear page->memcg and decrease the reference count to whatever it
   was pointing to

And as per above, step 3 is where we should check whether to uncharge
the memcg's page counter at all:

	if (PageMemcgKmem(page)) {
		ug->nr_pages += nr_pages;
		ug->nr_kmem += nr_pages;
	} else {
		/* LRU pages aren't accounted at the root level */
		if (!mem_cgroup_is_root(memcg))
			ug->nr_pages += nr_pages;
		ug->pgpgout++;
	}

In fact, it might be a good idea to rename ug->nr_pages to
ug->nr_memory to highlight how it maps to the page_counter.
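Putting the four steps together, uncharge_page() could then end up
looking roughly like this. Untested sketch: page_objcg() and
obj_cgroup_memcg_get() are the helpers from earlier in your series,
and nr_memory is the rename suggested above:

	static void uncharge_page(struct page *page, struct uncharge_gather *ug)
	{
		unsigned long nr_pages = compound_nr(page);
		struct obj_cgroup *objcg = NULL;
		struct mem_cgroup *memcg;

		VM_BUG_ON_PAGE(PageLRU(page), page);

		if (!page_memcg_charged(page))
			return;

		/* 1. look up the memcg; takes refs, no other side effects */
		if (PageMemcgKmem(page)) {
			objcg = page_objcg(page);
			memcg = obj_cgroup_memcg_get(objcg);
		} else {
			memcg = page_memcg(page);
		}

		/* 2. flush the existing batch if the memcg changed */
		if (ug->memcg != memcg) {
			if (ug->memcg) {
				uncharge_batch(ug);
				uncharge_gather_clear(ug);
			}
			ug->memcg = memcg;
			ug->dummy_page = page;
			/* pairs with css_put in uncharge_batch */
			css_get(&ug->memcg->css);
		}

		/* 3. add the page's counts to the current batch */
		if (PageMemcgKmem(page)) {
			ug->nr_memory += nr_pages;
			ug->nr_kmem += nr_pages;
		} else {
			/* LRU pages aren't accounted at the root level */
			if (!mem_cgroup_is_root(memcg))
				ug->nr_memory += nr_pages;
			ug->pgpgout++;
		}

		/* 4. clear page->memcg and drop the references from step 1 */
		page->memcg_data = 0;
		if (objcg)
			obj_cgroup_put(objcg);
		css_put(&memcg->css);
	}

That keeps the root filtering in exactly one place, and the pointer
manipulation right next to the refcounting, where a reader expects
to find it.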