Date: Fri, 23 Apr 2021 09:44:31 +0200
From: Alexey Gladkov
To: Oliver Sang
Cc: "Eric W. Biederman", Linus Torvalds, Alexey Gladkov, 0day robot,
	LKML, lkp@lists.01.org, "Huang, Ying", Feng Tang,
	zhengjun.xing@intel.com, Kernel Hardening, Linux Containers,
	Linux-MM, Andrew Morton, Christian Brauner, Jann Horn,
	Jens Axboe, Kees Cook, Oleg Nesterov
Subject: Re: 08ed4efad6: stress-ng.sigsegv.ops_per_sec -41.9% regression
Message-ID: <20210423074431.7ob6aqasome2zjbk@example.org>
References: <7abe5ab608c61fc2363ba458bea21cf9a4a64588.1617814298.git.gladkov.alexey@gmail.com>
 <20210408083026.GE1696@xsang-OptiPlex-9020>
 <20210423024722.GA13968@xsang-OptiPlex-9020>
In-Reply-To: <20210423024722.GA13968@xsang-OptiPlex-9020>
Biederman" , Linus Torvalds , Alexey Gladkov , 0day robot , LKML , lkp@lists.01.org, "Huang, Ying" , Feng Tang , zhengjun.xing@intel.com, Kernel Hardening , Linux Containers , Linux-MM , Andrew Morton , Christian Brauner , Jann Horn , Jens Axboe , Kees Cook , Oleg Nesterov Subject: Re: 08ed4efad6: stress-ng.sigsegv.ops_per_sec -41.9% regression Message-ID: <20210423074431.7ob6aqasome2zjbk@example.org> References: <7abe5ab608c61fc2363ba458bea21cf9a4a64588.1617814298.git.gladkov.alexey@gmail.com> <20210408083026.GE1696@xsang-OptiPlex-9020> <20210423024722.GA13968@xsang-OptiPlex-9020> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <20210423024722.GA13968@xsang-OptiPlex-9020> X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: BB02DF4 X-Stat-Signature: ak471entxke5ehcs8j7dwja9h97zuf7r Received-SPF: none (kernel.org>: No applicable sender policy available) receiver=imf12; identity=mailfrom; envelope-from=""; helo=mail.kernel.org; client-ip=198.145.29.99 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1619163869-447724 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Fri, Apr 23, 2021 at 10:47:22AM +0800, Oliver Sang wrote: > hi, Eric, >=20 > On Thu, Apr 08, 2021 at 01:44:43PM -0500, Eric W. Biederman wrote: > > Linus Torvalds writes: > >=20 > > > On Thu, Apr 8, 2021 at 1:32 AM kernel test robot wrote: > > >> > > >> FYI, we noticed a -41.9% regression of stress-ng.sigsegv.ops_per_s= ec due to commit > > >> 08ed4efad684 ("[PATCH v10 6/9] Reimplement RLIMIT_SIGPENDING on to= p of ucounts") > > > > > > Ouch. > >=20 > > We were cautiously optimistic when no test problems showed up from > > the last posting that there was nothing to look at here. > >=20 > > Unfortunately it looks like the bots just missed the last posting.=20 >=20 > this report is upon v10. do you have newer version which hope bot test? Yes. I posted a new version of this patch set. I would be very grateful i= f you could test it. https://lore.kernel.org/lkml/cover.1619094428.git.legion@kernel.org/ > please be noted, sorry to say, due to various reasons, it will be a > big challenge for us to capture each version of a patch set. >=20 > e.g. we didn't make out a similar performance regression for > v8/v9 version of this one.. >=20 > >=20 > > So it seems we are finally pretty much at correct code in need > > of performance tuning. > >=20 > > > I *think* this test may be testing "send so many signals that it > > > triggers the signal queue overflow case". > > > > > > And I *think* that the performance degradation may be due to lots o= f > > > unnecessary allocations, because ity looks like that commit changes > > > __sigqueue_alloc() to do > > > > > > struct sigqueue *q =3D kmem_cache_alloc(sigqueue_cachep, fl= ags); > > > > > > *before* checking the signal limit, and then if the signal limit wa= s > > > exceeded, it will just be free'd instead. > > > > > > The old code would check the signal count against RLIMIT_SIGPENDING > > > *first*, and if there were m ore pending signals then it wouldn't d= o > > > anything at all (including not incrementing that expensive atomic > > > count). > >=20 > > This is an interesting test in a lot of ways as it is testing the > > synchronous signal delivery path caused by an exception. 

> > This is an interesting test in a lot of ways as it is testing the
> > synchronous signal delivery path caused by an exception. The test
> > is either executing *ptr = 0 (where ptr points to a read-only page)
> > or it executes an x86 instruction that is excessively long.
> > 
> > I have found the code but I haven't figured out how it is being
> > called yet. The core loop is just:
> > 
> > 	for (;;) {
> > 		sigaction(SIGSEGV, &action, NULL);
> > 		sigaction(SIGILL, &action, NULL);
> > 		sigaction(SIGBUS, &action, NULL);
> > 
> > 		ret = sigsetjmp(jmp_env, 1);
> > 		if (done())
> > 			break;
> > 		if (ret) {
> > 			/* verify signal */
> > 		} else {
> > 			*ptr = 0;
> > 		}
> > 	}
> > 
> > Code like that fundamentally can not be multi-threaded. So the only way
> > the sigpending limit is being hit is if there are more processes running
> > that code simultaneously than the size of the limit.
> > 
> > Further it looks like stress-ng pushes RLIMIT_SIGPENDING as high as it
> > will go before the test starts.
> > 
> > > Also, the old code was very careful to only do the "get_user()" for
> > > the *first* signal it added to the queue, and do the "put_user()"
> > > when removing the last signal. Exactly because those atomics are very
> > > expensive.
> > >
> > > The new code just does a lot of these atomics unconditionally.
> > 
> > Yes. That seems a likely culprit.
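
[ The amortization Linus describes is a pin-on-first/unpin-on-last
  pattern: only the 0 -> 1 and 1 -> 0 transitions of the pending count
  touch the second shared counter. A rough sketch of the idea, with
  made-up names (user_ref standing in for the pinned refcount,
  sigpending for the per-user count of queued signals); the real code
  does this under the siglock, and the sketch ignores the lifetime
  races the kernel has to handle. ]

	#include <stdatomic.h>

	static atomic_long user_ref;	/* stands in for the pinned refcount */
	static atomic_long sigpending;	/* per-user count of queued signals */

	static void queue_one_signal(void)
	{
		/* Only the 0 -> 1 transition pays for the second atomic. */
		if (atomic_fetch_add(&sigpending, 1) == 0)
			atomic_fetch_add(&user_ref, 1);	/* "get_user()" once */
	}

	static void dequeue_one_signal(void)
	{
		/* Only the 1 -> 0 transition drops the reference again. */
		if (atomic_fetch_sub(&sigpending, 1) == 1)
			atomic_fetch_sub(&user_ref, 1);	/* "put_user()" once */
	}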

> > > I dunno. The profile data in there is a bit hard to read, but there's
> > > a lot more cache misses, and a *lot* of node crossers:
> > >
> > >>    5961544          +190.4%   17314361        perf-stat.i.cache-misses
> > >>   22107466          +119.2%   48457656        perf-stat.i.cache-references
> > >>     163292 ±  3%   +4582.0%    7645410        perf-stat.i.node-load-misses
> > >>     227388 ±  2%   +3708.8%    8660824        perf-stat.i.node-loads
> > >
> > > and (probably as a result) average instruction costs have gone up enormously:
> > >
> > >>       3.47           +66.8%       5.79        perf-stat.overall.cpi
> > >>      22849           -65.6%       7866        perf-stat.overall.cycles-between-cache-misses
> > >
> > > and it does seem to be at least partly about "put_ucounts()":
> > >
> > >>       0.00            +4.5        4.46        perf-profile.calltrace.cycles-pp.put_ucounts.__sigqueue_free.get_signal.arch_do_signal_or_restart.exit_to_user_mode_prepare
> > >
> > > and a lot of "get_ucounts()".
> > >
> > > But it may also be that the new "get sigpending" is just *so* much
> > > more expensive than it used to be.
> > 
> > That too is possible.
> > 
> > That node-load-misses number does look like something is bouncing back
> > and forth between the nodes a lot more. So I suspect stress-ng is
> > running multiple copies of the sigsegv test in different processes at
> > once.
> > 
> > That really suggests cache line ping pong from get_ucounts and
> > incrementing sigpending.
> > 
> > It surprises me that obtaining the cache lines exclusively is
> > the dominant cost on this code path, but obtaining two cache lines
> > exclusively instead of one cache line exclusively is consistent
> > with causing the exception delivery to take nearly twice as long.
> > 
> > For the optimization we only care about the leaf count, so with a
> > little care we can restore the optimization. So that is probably the
> > thing to do here. The fewer changes to worry about, the less likely
> > we are to find surprises.
> > 
> > That said, for this specific case there is a lot of potential room for
> > improvement. As this is a per-thread signal, the code could update
> > sigpending in commit_creds and never worry about needing to pin the
> > struct user_struct or struct ucounts. As this is a synchronous signal,
> > we could skip the sigpending increment, skip the signal queue entirely,
> > and deliver the signal to user-space immediately. The removal of all
> > cache ping pongs might make it worth it.
> > 
> > There is also Thomas Gleixner's recent optimization to cache one
> > sigqueue entry per task to give more predictable behavior. That
> > would remove the cost of the allocation.
> > 
> > Eric
> 

-- 
Rgrds,
legion