Date: Mon, 22 Aug 2022 11:23:56 -0700
From: Roman Gushchin
To: Shakeel Butt
Cc: Johannes Weiner, Michal Hocko, Muchun Song, Michal Koutný, Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang, Oliver Sang, Andrew Morton, lkp@lists.01.org, cgroups@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm: page_counter: remove unneeded atomic ops for low/min
References: <20220822001737.4120417-1-shakeelb@google.com> <20220822001737.4120417-2-shakeelb@google.com>
In-Reply-To: <20220822001737.4120417-2-shakeelb@google.com>
On Mon, Aug 22, 2022 at 12:17:35AM +0000, Shakeel Butt wrote:
> For cgroups using low or min protections, the function
> propagate_protected_usage() was doing an atomic xchg() operation
> unconditionally. It only needs to do that operation if the new value
> of protection is different from the old one. This patch does that.
>
> To evaluate the impact of this optimization, on a 72 CPU machine, we
> ran the following workload in a three-level cgroup hierarchy, with the
> top level having min and low set up appropriately: memory.min equal to
> the size of the netperf binary and memory.low double that.
>
> $ netserver -6
> # 36 instances of netperf with the following params
> $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
>
> Results (average throughput of netperf):
> Without (6.0-rc1)    10482.7 Mbps
> With patch           14542.5 Mbps (38.7% improvement)

Nice savings!
>
> Signed-off-by: Shakeel Butt
> Reported-by: kernel test robot
> ---
>  mm/page_counter.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_counter.c b/mm/page_counter.c
> index eb156ff5d603..47711aa28161 100644
> --- a/mm/page_counter.c
> +++ b/mm/page_counter.c
> @@ -17,24 +17,23 @@ static void propagate_protected_usage(struct page_counter *c,
>  				      unsigned long usage)
>  {
>  	unsigned long protected, old_protected;
> -	unsigned long low, min;
>  	long delta;
>
>  	if (!c->parent)
>  		return;
>
> -	min = READ_ONCE(c->min);
> -	if (min || atomic_long_read(&c->min_usage)) {
> -		protected = min(usage, min);
> +	protected = min(usage, READ_ONCE(c->min));
> +	old_protected = atomic_long_read(&c->min_usage);
> +	if (protected != old_protected) {
>  		old_protected = atomic_long_xchg(&c->min_usage, protected);
>  		delta = protected - old_protected;
>  		if (delta)
>  			atomic_long_add(delta, &c->parent->children_min_usage);

What if there is a concurrent update of c->min_usage? Then the patched
version can miss an update. I can't imagine a case where it would lead
to bad consequences, so it's probably OK, but it isn't super obvious.
I think the way to think of it is that a missed update will be fixed
by the next one, so it's fine to run for some time with stale numbers.

Acked-by: Roman Gushchin

Thanks!