From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 21 Apr 2022 09:33:36 -0700
From: Roman Gushchin
To: Waiman Long
Cc: Johannes Weiner, Michal Hocko, Shakeel Butt, Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org, Muchun Song, "Matthew Wilcox (Oracle)", Yang Shi, Vlastimil Babka
Subject: Re: [PATCH] mm/memcg: Free percpu stats memory of dying memcg's
Message-ID: References: <20220421145845.1044652-1-longman@redhat.com>
In-Reply-To: <20220421145845.1044652-1-longman@redhat.com>

On Thu, Apr 21, 2022 at 10:58:45AM -0400, Waiman Long wrote:
> For systems with a large number of CPUs, the majority of the memory
> consumed by the mem_cgroup structure is actually the percpu stats
> memory. When a large number of memory cgroups are continuously created
> and destroyed (as on a container host), more and more mem_cgroup
> structures can remain in the dying state, holding up an increasing
> amount of percpu memory.
>
> We can't free the memory of a dying mem_cgroup structure because of
> active references from other places. However, the percpu stats memory
> allocated to that mem_cgroup is a different story.
>
> This patch adds a new percpu_stats_disabled variable to keep track of
> the state of the percpu stats memory.
> If the variable is set, percpu stats updates are disabled for that
> particular memcg; all stats updates are forwarded to its parent
> instead, and reads of its percpu stats return 0.
>
> The flushing and freeing of the percpu stats memory is a multi-step
> process. The percpu_stats_disabled variable is set when the memcg is
> being set to the offline state. After an RCU grace period, the percpu
> stats data are flushed and then freed.
>
> This will greatly reduce the amount of memory held up by dying memory
> cgroups.
>
> By running a simple container management tool 2000 times per test
> run, below are the results of the increases in percpu memory (as
> reported in /proc/meminfo) and in nr_dying_descendants in the root's
> cgroup.stat.

Hi Waiman!

I proposed the same idea some time ago:
https://lore.kernel.org/all/20190312223404.28665-7-guro@fb.com/T/ .
However, I dropped it, thinking that with the many other fixes that
prevent the accumulation of dying cgroups, it wasn't worth the added
complexity and potential cpu overhead.

I think it ultimately comes down to the number of dying cgroups. If
it's low, the memory savings are not worth the cpu overhead; if it's
high, they are. Long-term, I hope to drive that number down
significantly (with LRU-page reparenting being the first major
milestone), but it might take a while.

I don't have a strong opinion either way, just wanted to dump my
thoughts on this.

Thanks!