Date: Mon, 14 Feb 2022 11:23:11 -0500
From: Johannes Weiner
To: Sebastian Andrzej Siewior
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	Michal Hocko, Michal Koutný, Peter Zijlstra, Thomas Gleixner,
	Vladimir Davydov, Waiman Long, kernel test robot
Subject: Re: [PATCH v2 4/4] mm/memcg: Protect memcg_stock with a local_lock_t
References: <20220211223537.2175879-1-bigeasy@linutronix.de>
	<20220211223537.2175879-5-bigeasy@linutronix.de>
In-Reply-To: <20220211223537.2175879-5-bigeasy@linutronix.de>

Hi Sebastian,

On Fri, Feb 11, 2022 at 11:35:37PM +0100, Sebastian Andrzej Siewior wrote:
> +static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
>  {
>  	struct obj_cgroup *old = stock->cached_objcg;
> 
>  	if (!old)
> -		return;
> +		return NULL;
> 
>  	if (stock->nr_bytes) {
>  		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
>  		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
> 
>  		if (nr_pages)
> -			obj_cgroup_uncharge_pages(old, nr_pages);
> +			obj_cgroup_uncharge_pages_locked(old, nr_pages);

This is a bit dubious in terms of layering. It's an objcg operation, but
what's "locked" isn't the objcg, it's the underlying stock. That function
then looks it up again, even though we have it right there.

You can open-code it and factor out the stock operation instead, and it
makes things much simpler and clearer.

I.e. something like this (untested!):

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1b3550f09335..eed6e0ff84d7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2215,6 +2215,20 @@ static void drain_local_stock(struct work_struct *dummy)
 	local_irq_restore(flags);
 }
 
+static void __refill_stock(struct memcg_stock_pcp *stock,
+			   struct mem_cgroup *memcg,
+			   unsigned int nr_pages)
+{
+	if (stock->cached != memcg) {
+		drain_stock(stock);
+		css_get(&memcg->css);
+		stock->cached = memcg;
+	}
+	stock->nr_pages += nr_pages;
+	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
+		drain_stock(stock);
+}
+
 /*
  * Cache charges(val) to local per_cpu area.
  * This will be consumed by consume_stock() function, later.
@@ -2225,18 +2239,8 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	unsigned long flags;
 
 	local_irq_save(flags);
-
 	stock = this_cpu_ptr(&memcg_stock);
-	if (stock->cached != memcg) { /* reset if necessary */
-		drain_stock(stock);
-		css_get(&memcg->css);
-		stock->cached = memcg;
-	}
-	stock->nr_pages += nr_pages;
-
-	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
-		drain_stock(stock);
-
+	__refill_stock(stock, memcg, nr_pages);
 	local_irq_restore(flags);
 }
 
@@ -3213,8 +3217,15 @@ static void drain_obj_stock(struct obj_stock *stock)
 		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
 
-		if (nr_pages)
-			obj_cgroup_uncharge_pages(old, nr_pages);
+		/* Flush complete pages back to the page stock */
+		if (nr_pages) {
+			struct mem_cgroup *memcg;
+
+			memcg = get_mem_cgroup_from_objcg(objcg);
+			mem_cgroup_kmem_record(memcg, -nr_pages);
+			__refill_stock(stock, memcg, nr_pages);
+			css_put(&memcg->css);
+		}
 
 		/*
 		 * The leftover is flushed to the centralized per-memcg value.
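
As an aside, here is a rough sketch (also untested; the stock_lock field
name and struct layout are just assumed for illustration, not taken from
your patch) of how that factoring would then compose with the local_lock_t
this series adds: refill_stock() stays the locking entry point, and
contexts that already hold the per-CPU lock call __refill_stock() directly:

/* Sketch only: the stock_lock field name and struct layout are assumed. */
struct memcg_stock_pcp {
	local_lock_t stock_lock;	/* the local_lock_t added by this series */
	struct mem_cgroup *cached;
	unsigned int nr_pages;
	/* ... objcg members ... */
};

static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock) = {
	.stock_lock = INIT_LOCAL_LOCK(stock_lock),
};

/*
 * Stock-internal helper (__refill_stock() from the diff above):
 * the caller already holds this cpu's stock_lock.
 */
static void __refill_stock(struct memcg_stock_pcp *stock,
			   struct mem_cgroup *memcg,
			   unsigned int nr_pages);

/* Unlocked entry point: take the lock, then operate on this cpu's stock. */
static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	unsigned long flags;

	local_lock_irqsave(&memcg_stock.stock_lock, flags);
	__refill_stock(this_cpu_ptr(&memcg_stock), memcg, nr_pages);
	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
}

drain_obj_stock() already runs with that lock held (the premise of the
_locked variant above), so it would call __refill_stock() on the stock it
was handed instead of bouncing back through obj_cgroup_uncharge_pages()
and re-acquiring the lock.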