Date: Mon, 4 Mar 2024 11:51:14 +0900
From: Byungchul Park <byungchul@sk.com>
To: "Huang, Ying"
Cc: David Hildenbrand, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kernel_team@skhynix.com, akpm@linux-foundation.org, vernhao@tencent.com,
	mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
	peterz@infradead.org, luto@kernel.org, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	rjgolo@gmail.com
Subject: Re: [RESEND PATCH v8 0/8] Reduce TLB flushes by 94% by improving folio migration
Message-ID: <20240304025114.GB13332@system.software.com>
References: <20240226030613.22366-1-byungchul@sk.com>
	<20240229092810.GC64252@system.software.com>
	<54053f0d-024b-4064-8d82-235cc71b61f8@redhat.com>
	<87wmqmbxko.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87wmqmbxko.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Fri, Mar 01, 2024 at 08:33:11AM +0800, Huang, Ying wrote:
> David Hildenbrand writes:
>
> > On 29.02.24 10:28, Byungchul Park wrote:
> >> On Mon, Feb 26, 2024 at 12:06:05PM +0900, Byungchul Park wrote:
> >>> Hi everyone,
> >>>
> >>> While I'm working with a tiered memory system e.g. CXL memory, I have
> >>> been facing migration overhead esp. TLB shootdown on promotion or
> >>> demotion between different tiers.  Yeah.. most TLB shootdowns on
> >>> migration through hinting fault can be avoided thanks to Huang Ying's
> >>> work, commit 4d4b6d66db ("mm,unmap: avoid flushing TLB in batch if PTE
> >>> is inaccessible").  See the following link:
> >>>
> >>> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> >>>
> >>> However, that only covers migrations through hinting fault.  I thought
> >>> it'd be much better if we had a general mechanism to reduce the number
> >>> of TLB flushes and TLB misses that we could ultimately apply to any
> >>> type of migration.  I tried it only for tiering for now, though.
> >>>
> >>> I'm suggesting a mechanism called MIGRC, which stands for 'Migration
> >>> Read Copy', to reduce TLB flushes by keeping both source and
> >>> destination of folios that participated in the migrations until all
> >>> required TLB flushes are done, but only if those folios are not mapped
> >>> with write-permission PTE entries.
> >>>
> >>> To achieve that:
> >>>
> >>>    1. For the folios that map only to non-writable TLB entries, prevent
> >>>       TLB flush at migration by keeping both source and destination
> >>>       folios, which will be handled later at a better time.
> >>>
> >>>    2. When any non-writable TLB entry changes to writable e.g. through
> >>>       the fault handler, give up the migrc mechanism so as to perform
> >>>       the required TLB flush right away.
> >>>
> >>> I observed a big improvement in the number of TLB flushes and TLB
> >>> misses in the following evaluation using XSBench:
> >>>
> >>>    1. itlb flush was reduced by 93.9%.
> >>>    2. dtlb flush was reduced by 43.5%.
> >>>    3. stlb flush was reduced by 24.9%.
> >>
> >> Hi guys,
> >
> > Hi,
> >
> >> The TLB flush reduction is 25% ~ 94%, IMO, it's unbelievable.
> >
> > Can't we find at least one benchmark that shows an actual improvement
> > on some system?
> >
> > Staring at the number of TLB flushes is nice, but if it does not affect
> > actual performance of at least one benchmark, why do we even care?
> >
> > "12 files changed, 597 insertions(+), 59 deletions(-)"
> >
> > is not negligible and needs proper review.
>
> And, the TLB flush is reduced at the cost of memory wastage.  The old
> pages could have been freed.  That may cause regression for some
> workloads.

You seem to understand the key of migrc (migration read copy) :)  Yeah,
the most important thing to deal with is removing the 'memory wastage'.
The pages whose freeing is deferred for the optimization can be freed at
any time memory is needed, by then performing the TLB flush that would
already have been done without the migrc mechanism.
So the memory wastage can be removed entirely by resolving some
technical issues, which might need your help :)

	Byungchul

> > That review needs motivation.  The current numbers do not seem to be
> > motivating enough :)
>
> --
> Best Regards,
> Huang, Ying