From: "Huang, Ying" <ying.huang@intel.com>
To: Byungchul Park
Subject: Re: [RFC 2/2] mm: Defer TLB flush by keeping both src and dst folios at migration
Date: Wed, 16 Aug 2023 09:01:12 +0800
Message-ID: <87r0o37qcn.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <20230816001307.GA44941@system.software.com> (Byungchul Park's message of "Wed, 16 Aug 2023 09:13:07 +0900")
References: <20230804061850.21498-1-byungchul@sk.com>
	<20230804061850.21498-3-byungchul@sk.com>
	<877cpx9jsx.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20230816001307.GA44941@system.software.com>
Byungchul Park writes:

> On Tue, Aug 15, 2023 at 09:27:26AM +0800, Huang, Ying wrote:
>> Byungchul Park writes:
>>
>> > Implementation of CONFIG_MIGRC, which stands for 'Migration Read Copy'.
>> >
>> > We always face migration overhead at either promotion or demotion
>> > while working with tiered memory, e.g. CXL memory, and found that
>> > TLB shootdown is quite a big part of it, worth getting rid of if
>> > possible.
>> >
>> > Fortunately, the TLB flush can be deferred or even skipped if both
>> > the source and destination folios are kept during migration until
>> > all the required TLB flushes have been done, but only if the target
>> > PTE entries have read-only permission, more precisely speaking,
>> > don't have write permission. Otherwise, no doubt the folio might
>> > get messed up.
>> >
>> > To achieve that:
>> >
>> > 1. For the folios that have only non-writable TLB entries, prevent
>> >    the TLB flush by keeping both the source and destination folios
>> >    during migration; the flush will be handled later at a better
>> >    time.
>> >
>> > 2. When any non-writable TLB entry changes to writable, e.g. through
>> >    the fault handler, give up the CONFIG_MIGRC mechanism and perform
>> >    the required TLB flush right away.
>> >
>> > 3. TLB flushes can be skipped if all the TLB flushes required to
>> >    free the duplicated folios have already been done for any reason;
>> >    they don't have to come from migrations.
>> >
>> > 4. Adjust the watermark check routine, __zone_watermark_ok(), with
>> >    the number of duplicated folios, because those folios can be
>> >    freed and obtained right away through the appropriate TLB
>> >    flushes.
>> >
>> > 5. Perform the TLB flushes and free the duplicated folios pending
>> >    the flushes if the page allocation routine is in trouble due to
>> >    memory pressure, even more aggressively for high-order
>> >    allocation.
>>
>> Is the optimization restricted to page migration only? Can it be used
>> in other places, like page reclaiming?
>
> Just to make sure, are you talking about the (5) description? For now,
> it's performed at the beginning of __alloc_pages_slowpath(), that is,
> before page reclaiming. Do you think it'd be meaningful to perform it
> during page reclaiming? Or do you mean something else?

Not for (5). The TLB needs to be flushed during page reclaiming too.
Can a similar method be used to reduce TLB flushing there as well?

>> > The measurement result:
>> >
>> > Architecture - x86_64
>> > QEMU - kvm enabled, host cpu, 2 nodes ((4 cpus, 2GB) + (cpuless, 6GB))
>> > Linux Kernel - v6.4, numa balancing tiering on, demotion enabled
>> > Benchmark - XSBench with no parameter changed
>> >
>> > run 'perf stat' using events:
>> > (FYI, process-wide result ~= system-wide result (-a option))
>> > 1) itlb.itlb_flush
>> > 2) tlb_flush.dtlb_thread
>> > 3) tlb_flush.stlb_any
>> >
>> > run 'cat /proc/vmstat' and pick up:
>> > 1) pgdemote_kswapd
>> > 2) numa_pages_migrated
>> > 3) pgmigrate_success
>> > 4) nr_tlb_remote_flush
>> > 5) nr_tlb_remote_flush_received
>> > 6) nr_tlb_local_flush_all
>> > 7) nr_tlb_local_flush_one
>> >
>> > BEFORE - mainline v6.4
>> > ==========================================
>> >
>> > $ perf stat -e itlb.itlb_flush,tlb_flush.dtlb_thread,tlb_flush.stlb_any ./XSBench
>> >
>> > Performance counter stats for './XSBench':
>> >
>> >      426856      itlb.itlb_flush
>> >     6900414      tlb_flush.dtlb_thread
>> >     7303137      tlb_flush.stlb_any
>> >
>> > 33.500486566 seconds time elapsed
>> > 92.852128000 seconds user
>> > 10.526718000 seconds sys
>> >
>> > $ cat /proc/vmstat
>> >
>> > ...
>> > pgdemote_kswapd 1052596
>> > numa_pages_migrated 1052359
>> > pgmigrate_success 2161846
>> > nr_tlb_remote_flush 72370
>> > nr_tlb_remote_flush_received 213711
>> > nr_tlb_local_flush_all 3385
>> > nr_tlb_local_flush_one 198679
>> > ...
>> >
>> > AFTER - mainline v6.4 + CONFIG_MIGRC
>> > ==========================================
>> >
>> > $ perf stat -e itlb.itlb_flush,tlb_flush.dtlb_thread,tlb_flush.stlb_any ./XSBench
>> >
>> > Performance counter stats for './XSBench':
>> >
>> >      179537      itlb.itlb_flush
>> >     6131135      tlb_flush.dtlb_thread
>> >     6920979      tlb_flush.stlb_any
>>
>> It appears that the number of "itlb.itlb_flush" changes a lot, but
>> not the other two events. Is that because the text segment of the
>> executable file is mapped read-only, while most other pages are
>> mapped read-write?
>
> Yes, for this benchmark, XSBench. I didn't notice that until checking
> it with perf events either.
>
>> > 30.396700625 seconds time elapsed
>> > 80.331252000 seconds user
>> > 10.303761000 seconds sys
>> >
>> > $ cat /proc/vmstat
>> >
>> > ...
>> > pgdemote_kswapd 1044602
>> > numa_pages_migrated 1044202
>> > pgmigrate_success 2157808
>> > nr_tlb_remote_flush 30453
>> > nr_tlb_remote_flush_received 88840
>> > nr_tlb_local_flush_all 3039
>> > nr_tlb_local_flush_one 198875
>> > ...
>> >
>> > Signed-off-by: Byungchul Park
>
> [...]
>
>> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> > index 306a3d1a0fa6..3be66d3eabd2 100644
>> > --- a/include/linux/mm_types.h
>> > +++ b/include/linux/mm_types.h
>> > @@ -228,6 +228,10 @@ struct page {
>> >  #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
>> >  	int _last_cpupid;
>> >  #endif
>> > +#ifdef CONFIG_MIGRC
>> > +	struct llist_node migrc_node;
>> > +	unsigned int migrc_state;
>> > +#endif
>>
>> We cannot enlarge "struct page".
>
> This is what I worried about. Do you have a better idea? I don't think
> they fit into page_ext or something similar.

No.

--
Best Regards,
Huang, Ying