Date: Fri, 27 Jan 2023 10:18:03 -0800
From: Roman Gushchin <roman.gushchin@linux.dev>
To: Michal Hocko
Cc: Marcelo Tosatti, Leonardo Brás, Johannes Weiner, Shakeel Butt,
	Muchun Song, Andrew Morton, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Frederic Weisbecker
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
References: <20230125073502.743446-1-leobras@redhat.com>
	<9e61ab53e1419a144f774b95230b789244895424.camel@redhat.com>

On Fri, Jan 27, 2023 at 02:58:19PM +0100, Michal Hocko wrote:
> On Fri 27-01-23 08:11:04, Michal Hocko wrote:
> > [Cc Frederic]
> >
> > On Thu 26-01-23 15:12:35, Roman Gushchin wrote:
> > > On Thu, Jan 26, 2023 at 08:41:34AM +0100, Michal Hocko wrote:
> > [...]
> > > > > Essentially each cpu will try to grab the remains of the memory quota
> > > > > and move it locally. I wonder in such circumstances if we need to
> > > > > disable the pcp-caching on a per-cgroup basis.
> > > >
> > > > I think it would be more than sufficient to disable pcp charging on an
> > > > isolated cpu.
> > >
> > > It might have significant performance consequences.
> >
> > Is it really significant?
> >
> > > I'd rather opt out of stock draining for isolated cpus: it might slightly
> > > reduce the accuracy of memory limits and slightly increase the memory
> > > footprint (all those dying memcgs...), but the impact will be limited.
> > > Actually it is limited by the number of cpus.
> >
> > Hmm, OK, I have misunderstood your proposal. Yes, the overall pcp charges
> > potentially left behind should be small and that shouldn't really be a
> > concern for memcg oom situations (unless the limit is very small, and
> > workloads on isolated cpus using small hard limits is way beyond my
> > imagination).
> >
> > My first thought was that those charges could be left behind without any
> > upper bound, but in reality sooner or later something should be running
> > on those cpus, and if the memcg is gone the pcp cache would get refilled
> > and the old charges gone.
> >
> > So yes, this is actually a better and even simpler solution. All we need
> > is something like this:
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ab457f0394ab..13b84bbd70ba 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2344,6 +2344,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  		struct mem_cgroup *memcg;
> >  		bool flush = false;
> >
> > +		if (cpu_is_isolated(cpu))
> > +			continue;
> > +
> >  		rcu_read_lock();
> >  		memcg = stock->cached;
> >  		if (memcg && stock->nr_pages &&
>
> Btw. this would be overly pessimistic. The following should make more
> sense:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ab457f0394ab..55e440e54504 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2357,7 +2357,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
>  			if (cpu == curcpu)
>  				drain_local_stock(&stock->work);
> -			else
> +			else if (!cpu_is_isolated(cpu))
>  				schedule_work_on(cpu, &stock->work);
>  		}
>  	}

Yes, this is exactly what I was thinking of. It should solve the problem
for isolated cpus well enough without introducing any overhead for
everybody else.

If you make a proper patch, please add my
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

I understand the concerns about spurious OOMs on a 256-core machine, but
I guess they are somewhat theoretical and also possible with the current
code (e.g. one ooming cgroup can effectively block draining for everybody
else).

Thanks!
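
P.S. For readers following the thread: with the second version applied,
the per-cpu flushing loop in drain_all_stock() would end up looking
roughly like the sketch below. This is an untested illustration trimmed
for brevity (the objcg stock check is omitted); cpu_is_isolated() is the
helper the diffs above assume exists.

	curcpu = smp_processor_id();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		/* only flush stocks that belong to the memcg hierarchy under pressure */
		rcu_read_lock();
		memcg = stock->cached;
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				/* local cpu: drain synchronously */
				drain_local_stock(&stock->work);
			else if (!cpu_is_isolated(cpu))
				/* never schedule draining work on isolated cpus */
				schedule_work_on(cpu, &stock->work);
		}
	}

The net effect is that an isolated cpu's stock is simply left cached: it
is only flushed when that cpu itself charges or drains, so the leftover
charges stay bounded by the number of isolated cpus, as discussed above.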