From: "Huang, Ying" <ying.huang@intel.com>
To: Byungchul Park <byungchul@sk.com>
Subject: Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
In-Reply-To: <20240418061536.11645-1-byungchul@sk.com> (Byungchul Park's message of "Thu, 18 Apr 2024 15:15:28 +0900")
References: <20240418061536.11645-1-byungchul@sk.com>
Date: Fri, 19 Apr 2024 14:06:30 +0800
Message-ID: <87cyqlyjh5.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Byungchul Park <byungchul@sk.com> writes:

> Hi everyone,
>
> While I'm working with a tiered memory system, e.g. CXL memory, I have
> been facing migration overhead, especially tlb shootdown on promotion
> or demotion between different tiers. Yeah, most tlb shootdowns on
> migration through hinting faults can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible"). See the following link for more information:
>
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
>
> However, that only covers migrations triggered by hinting faults. It
> would be much better if we had a general mechanism to reduce the number
> of tlb flushes that we can ultimately apply to any type of migration.
>
> I'm suggesting a mechanism called MIGRC, which stands for 'Migration
> Read Copy', to reduce the number of tlb flushes by deferring the tlb
> flush until the source folios of a migration actually get reused, and
> of course only if the target PTEs don't have write permission.
>
> To achieve that:
>
> 1. For folios that are mapped only by non-writable tlb entries, skip
>    the tlb flush during migration and perform it just before the
>    source folios actually get reused out of buddy or pcp.
>
> 2. When any non-writable tlb entry changes to writable, e.g. through
>    the fault handler, give up the migrc mechanism and perform the
>    required tlb flush right away.
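If I understand the mechanism correctly, the life cycle of a deferred
flush amounts to something like the toy model below (user-space C just
to illustrate the bookkeeping; all names here are made up and this is
not the actual patch):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of one migration source folio (illustrative only). */
struct folio {
	bool mapped_writable;	/* is any PTE mapping it writable? */
	bool flush_pending;	/* was the tlb flush skipped at migration? */
};

/* Step 1: at migration time, defer the flush for read-only mappings. */
static bool migrc_try_defer_flush(struct folio *src)
{
	if (src->mapped_writable)
		return false;		/* writable mapping: must flush now */
	src->flush_pending = true;	/* stale read-only entries tolerated */
	return true;
}

/* Step 1 (cont.): just before buddy/pcp hands the folio out again. */
static void migrc_flush_before_reuse(struct folio *src)
{
	if (src->flush_pending) {
		printf("performing the deferred tlb flush\n");
		src->flush_pending = false;
	}
}

/* Step 2: a non-writable entry becomes writable; give up and flush. */
static void migrc_on_make_writable(struct folio *src)
{
	if (src->flush_pending) {
		printf("giving up: flushing right away\n");
		src->flush_pending = false;
	}
}

int main(void)
{
	struct folio src = { .mapped_writable = false, .flush_pending = false };

	/* Read-only mapping: migration may skip the flush... */
	if (migrc_try_defer_flush(&src)) {
		/* ...but a write fault forces it immediately, */
		migrc_on_make_writable(&src);
		/* and reuse out of buddy/pcp flushes whatever is pending. */
		migrc_flush_before_reuse(&src);
	}
	return 0;
}

Presumably most of the win comes from many such deferred flushes being
batched together before the source folios get reused.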
> No matter what type of workload is used for performance evaluation,
> the result should be positive thanks to the unconditional reduction of
> tlb flushes, tlb misses and interrupts. For the test, I picked
> XSBench, which is widely used for performance analysis on high
> performance computing architectures - https://github.com/ANL-CESAR/XSBench.
>
> The result depends on memory latency and on how often reclaim runs,
> which determine the tlb miss overhead and how many migrations happen.
> The slower the memory and the more often reclaim runs, the better
> migrc works and the better the result. On my system, the results show:
>
> 1. itlb flushes are reduced by over 90%.
> 2. itlb misses are reduced by over 30%.
> 3. All the other tlb numbers also improve.
> 4. tlb shootdown interrupts are reduced by over 90%.
> 5. The test program runtime is reduced by over 5%.
>
> The test environment:
>
>    Architecture - x86_64
>    QEMU - kvm enabled, host cpu

Is the test run in a VM? Do you have test results in a bare metal
environment?

>    Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)

The configuration looks quite abnormal. Have you tested with other
configurations, such as 1:4 or 1:8?

>    Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled
>
> < measurement: raw data - tlb and interrupt numbers >
>
> $ perf stat -a \
>     -e itlb.itlb_flush \
>     -e tlb_flush.dtlb_thread \
>     -e tlb_flush.stlb_any \
>     -e dtlb-load-misses \
>     -e dtlb-store-misses \
>     -e itlb-load-misses \
>     XSBench -t 16 -p 50000000
>
> $ grep "TLB shootdowns" /proc/interrupts
>
> BEFORE
> ------
>     40417078      itlb.itlb_flush
>    234852566      tlb_flush.dtlb_thread
>    153192357      tlb_flush.stlb_any
> 119001107892      dTLB-load-misses
>    307921167      dTLB-store-misses
>   1355272118      iTLB-load-misses
>
> TLB:  1364803  1303670  1333921  1349607
>       1356934  1354216  1332972  1342842
>       1350265  1316443  1355928  1360793
>       1298239  1326358  1343006  1340971  TLB shootdowns
>
> AFTER
> -----
>      3316495      itlb.itlb_flush
>    138912511      tlb_flush.dtlb_thread
>    115199341      tlb_flush.stlb_any
> 117610390021      dTLB-load-misses
>    198042233      dTLB-store-misses
>    840066984      iTLB-load-misses
>
> TLB:   117257   119219   117178   115737
>        117967   118948   117508   116079
>        116962   117266   117320   117215
>        105808   103934   115672   117610  TLB shootdowns
>
> < measurement: user experience - runtime >
>
> $ time XSBench -t 16 -p 50000000
>
> BEFORE
> ------
> Threads:     16
> Runtime:     968.783 seconds
> Lookups:     1,700,000,000
> Lookups/s:   1,754,778
>
> 15208.91s user 141.44s system 1564% cpu 16:20.98 total
>
> AFTER
> -----
> Threads:     16
> Runtime:     913.210 seconds
> Lookups:     1,700,000,000
> Lookups/s:   1,861,565
>
> 14351.69s user 138.23s system 1565% cpu 15:25.47 total

IIUC, the memory footprint will be larger with the patchset. Do you
have data?

--
Best Regards,
Huang, Ying