From: Nathan Chancellor <nathan@kernel.org>
Date: Sun, 27 Nov 2022 23:40:56 -0700
To: Rik van Riel
Cc: "Huang, Ying", kernel test robot, lkp@lists.01.org, lkp@intel.com,
 Andrew Morton, Yang Shi, Matthew Wilcox, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, feng.tang@intel.com, zhengjun.xing@linux.intel.com,
 fengwei.yin@intel.com
Subject: Re: [mm] f35b5d7d67: will-it-scale.per_process_ops -95.5% regression
References: <202210181535.7144dd15-yujie.liu@intel.com>
 <87edv4r2ip.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <871qr3nkw2.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <366045a27a96e01d0526d63fd78d4f3c5d1f530b.camel@surriel.com>
Hi Rik,

On Thu, Oct 20, 2022 at 10:16:20AM -0700, Nathan Chancellor wrote:
> On Thu, Oct 20, 2022 at 11:28:16AM -0400, Rik van Riel wrote:
> > On Thu, 2022-10-20 at 13:07 +0800, Huang, Ying wrote:
> > >
> > > Nathan Chancellor writes:
> > >
> > > > For what it's worth, I just bisected a massive and visible
> > > > performance regression on my Threadripper 3990X workstation to
> > > > commit f35b5d7d676e ("mm: align larger anonymous mappings on THP
> > > > boundaries"), which seems directly related to this report/analysis.
> > > > I initially noticed this because my full set of kernel builds
> > > > against mainline went from 2 hours and 20 minutes or so to over
> > > > 3 hours.
> > > > Zeroing in on x86_64 allmodconfig, which I used for the bisect:
> > > >
> > > > @ 7b5a0b664ebe ("mm/page_ext: remove unused variable in
> > > > offline_page_ext"):
> > > >
> > > > Benchmark 1: make -skj128 LLVM=1 allmodconfig all
> > > >   Time (mean ± σ):     318.172 s ±  0.730 s    [User: 31750.902 s, System: 4564.246 s]
> > > >   Range (min … max):   317.332 s … 318.662 s    3 runs
> > > >
> > > > @ f35b5d7d676e ("mm: align larger anonymous mappings on THP
> > > > boundaries"):
> > > >
> > > > Benchmark 1: make -skj128 LLVM=1 allmodconfig all
> > > >   Time (mean ± σ):     406.688 s ±  0.676 s    [User: 31819.526 s, System: 16327.022 s]
> > > >   Range (min … max):   405.954 s … 407.284 s    3 runs
> > >
> > > Have you tried to build with gcc? I want to check whether this is a
> > > clang-specific issue or not.
> >
> > This may indeed be something LLVM specific. In previous tests,
> > GCC has generally seen a benefit from increased THP usage.
> > Many other applications also benefit from getting more THPs.
>
> Indeed, GCC builds actually appear to be slightly faster on my system
> now; apologies for not trying that before reporting :/
>
> 7b5a0b664ebe:
>
> Benchmark 1: make -skj128 allmodconfig all
>   Time (mean ± σ):     355.294 s ±  0.931 s    [User: 33620.469 s, System: 6390.064 s]
>   Range (min … max):   354.571 s … 356.344 s    3 runs
>
> f35b5d7d676e:
>
> Benchmark 1: make -skj128 allmodconfig all
>   Time (mean ± σ):     347.400 s ±  2.029 s    [User: 34389.724 s, System: 4603.175 s]
>   Range (min … max):   345.815 s … 349.686 s    3 runs
>
> > LLVM showing 10% system time before this change, and a whopping
> > 30% system time after that change, suggests that LLVM is behaving
> > quite differently from GCC in some ways.
>
> The above tests were done with GCC 12.2.0 from Arch Linux. The previous
> LLVM tests were done with a self-compiled version of LLVM from the main
> branch (16.0.0), optimized with BOLT [1].
> To eliminate that as a source of issues, I used my distribution's
> version of clang (14.0.6) and saw similar results as before:
>
> 7b5a0b664ebe:
>
> Benchmark 1: make -skj128 LLVM=/usr/bin/ allmodconfig all
>   Time (mean ± σ):     462.517 s ±  1.214 s    [User: 48544.240 s, System: 5586.212 s]
>   Range (min … max):   461.115 s … 463.245 s    3 runs
>
> f35b5d7d676e:
>
> Benchmark 1: make -skj128 LLVM=/usr/bin/ allmodconfig all
>   Time (mean ± σ):     547.927 s ±  0.862 s    [User: 47913.709 s, System: 17682.514 s]
>   Range (min … max):   547.429 s … 548.922 s    3 runs
>
> > If we can figure out what these differences are, maybe we can
> > just fine-tune the code to avoid this issue.
> >
> > I'll try to play around with LLVM compilation a little bit next
> > week, to see if I can figure out what might be going on. I wonder
> > if LLVM is doing lots of mremap calls or something...
>
> If there is any further information I can provide or patches I can test,
> I am more than happy to do so.
>
> [1]: https://github.com/llvm/llvm-project/tree/96552e73900176d65ee6650facae8d669d6f9498/bolt

Was there ever a follow-up to this report that I missed? I just noticed
that I am still reverting f35b5d7d676e in my mainline kernel.

Cheers,
Nathan