From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
To: hannes@cmpxchg.org
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, axboe@kernel.dk,
	bala.seshasayee@linux.intel.com, baolin.wang@linux.alibaba.com,
	chrisl@kernel.org, david@redhat.com, hch@infradead.org,
	kanchana.p.sridhar@intel.com, kasong@tencent.com, linux-mm@kvack.org,
	nphamcs@gmail.com, ryan.roberts@arm.com, senozhatsky@chromium.org,
	terrelln@fb.com, usamaarif642@gmail.com, v-songbaohua@oppo.com,
	wajdi.k.feghali@intel.com, willy@infradead.org,
	ying.huang@linux.alibaba.com, yosryahmed@google.com
Subject: Re: [PATCH RFC] mm: map zero-filled pages to zero_pfn while doing swap-in
Date: Fri, 13 Dec 2024 14:47:44 +1300
Message-Id: <20241213014744.45296-1-21cnbao@gmail.com>
In-Reply-To: <20241212162508.GA4712@cmpxchg.org>
References: <20241212162508.GA4712@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
On Fri, Dec 13, 2024 at 5:25 AM Johannes Weiner wrote:
>
> On Thu, Dec 12, 2024 at 10:16:22PM +1300, Barry Song wrote:
> > On Thu, Dec 12, 2024 at 9:51 PM David Hildenbrand wrote:
> > >
> > > On 12.12.24 09:46, Barry Song wrote:
> > > > On Thu, Dec 12, 2024 at 9:29 PM Christoph Hellwig wrote:
> > > >>
> > > >> On Thu, Dec 12, 2024 at 08:37:11PM +1300, Barry Song wrote:
> > > >>> From: Barry Song
> > > >>>
> > > >>> While developing the zeromap series, Usama observed that certain
> > > >>> workloads may contain over 10% zero-filled pages. This may present
> > > >>> an opportunity to save memory by mapping zero-filled pages to zero_pfn
> > > >>> in do_swap_page(). If a write occurs later, do_wp_page() can
> > > >>> allocate a new page using the Copy-on-Write mechanism.
> > > >>
> > > >> Shouldn't this be done during, or rather instead of swap out instead?
> > > >> Swapping all zero pages out just to optimize the in-memory
> > > >> representation on seems rather backwards.
> > > >
> > > > I’m having trouble understanding your point—it seems like you might
> > > > not have fully read the code.
> > > > :-)
> > > >
> > > > The situation is as follows: for a zero-filled page, we are currently
> > > > allocating a new
> > > > page unconditionally. By mapping this zero-filled page to zero_pfn, we could
> > > > save the memory used by this page.
> > > >
> > > > We don't need to allocate the memory until the page is written(which may never
> > > > happen).
> > > I think what Christoph means is that you would determine that at PTE
> > > unmap time, and directly place the zero page in there. So there would be
> > > no need to have the page fault at all.
> > >
> > > I suspect at PTE unmap time might be problematic, because we might still
> > > have other (i.e., GUP) references modifying that page, and we can only
> > > rely on the page content being stable after we flushed the TLB as well.
> > > (I recall some deferred flushing optimizations)
> >
> > Yes, we need to follow a strict sequence:
> >
> > 1. try_to_unmap - unmap PTEs in all processes;
> > 2. try_to_unmap_flush_dirty - flush deferred TLB shootdown;
> > 3. pageout - zeromap will set 1 in bitmap if page is zero-filled
> >
> > At the moment of pageout(), we can be confident that the page is zero-filled.
> >
> > mapping to zeropage during unmap seems quite risky.
> You have to unmap and flush to stop modifications, but I think not in
> all processes before it's safe to decide. Shared anon pages have COW
> semantics; when you enter try_to_unmap() with a page and rmap gives
> you a pte, it's one of these:
>
>   a) never forked, no sibling ptes
>   b) cow broken into private copy, no sibling ptes
>   c) cow/WP; any writes to this or another pte will go to a new page.
>
> In cases a and b you need to unmap and flush the current pte, but then
> it's safe to check contents and set the zero pte right away, even
> before finishing the rmap walk.
>
> In case c, modifications to the page are impossible due to WP, so you
> don't even need to unmap and flush before checking the contents. The
> pte lock holds up COW breaking to a new page until you're done.
>
> It's definitely more complicated than the current implementation, but
> if it can be made to work, we could get rid of the bitmap.
>
> You might also reduce faults, but I'm a bit skeptical. Presumably
> zerofilled regions are mostly considered invalid by the application,
> not useful data, so a populating write that will cowbreak seems more
> likely to happen next than a faultless read from the zeropage.

Yes, that is right. I created the following debug patch to count what
share of swpin_zero events are reads, and to compare swpin_zero with
swpout_zero:

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index f70d0958095c..ed9d1a6cc565 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -136,6 +136,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		SWAP_RA_HIT,
 		SWPIN_ZERO,
 		SWPOUT_ZERO,
+		SWPIN_ZERO_READ,
 #ifdef CONFIG_KSM
 		KSM_SWPIN_COPY,
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index f3040c69f648..3aacfbe7bd77 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4400,6 +4400,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			/* Count SWPIN_ZERO since page_io was skipped */
 			objcg = get_obj_cgroup_from_swap(entry);
 			count_vm_events(SWPIN_ZERO, 1);
+			count_vm_events(SWPIN_ZERO_READ, 1);
 			if (objcg) {
 				count_objcg_events(objcg, SWPIN_ZERO, 1);
 				obj_cgroup_put(objcg);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4d016314a56c..9465fe9bda9e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1420,6 +1420,7 @@ const char * const vmstat_text[] = {
 	"swap_ra_hit",
 	"swpin_zero",
 	"swpout_zero",
+	"swpin_zero_read",
 #ifdef CONFIG_KSM
 	"ksm_swpin_copy",
 #endif

For a kernel-build workload in a single memcg with only 1GB of memory,
I used the script below:

#!/bin/bash

echo never > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
echo never > /sys/kernel/mm/transparent_hugepage/hugepages-32kB/enabled
echo never > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
echo never > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

vmstat_path="/proc/vmstat"
thp_base_path="/sys/kernel/mm/transparent_hugepage"

read_values() {
    # grep -w so that "swpin_zero" does not also match "swpin_zero_read"
    pswpin=$(grep -w "pswpin" $vmstat_path | awk '{print $2}')
    pswpout=$(grep -w "pswpout" $vmstat_path | awk '{print $2}')
    pgpgin=$(grep -w "pgpgin" $vmstat_path | awk '{print $2}')
    pgpgout=$(grep -w "pgpgout" $vmstat_path | awk '{print $2}')
    swpout_zero=$(grep -w "swpout_zero" $vmstat_path | awk '{print $2}')
    swpin_zero=$(grep -w "swpin_zero" $vmstat_path | awk '{print $2}')
    swpin_zero_read=$(grep -w "swpin_zero_read" $vmstat_path | awk '{print $2}')
    echo "$pswpin $pswpout $pgpgin $pgpgout $swpout_zero $swpin_zero $swpin_zero_read"
}

for ((i=1; i<=5; i++))
do
    echo
    echo "*** Executing round $i ***"
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- clean 1>/dev/null 2>/dev/null
    sync; echo 3 > /proc/sys/vm/drop_caches; sleep 1

    # kernel build
    initial_values=($(read_values))
    time systemd-run --scope -p MemoryMax=1G make ARCH=arm64 \
        CROSS_COMPILE=aarch64-linux-gnu- vmlinux -j10 1>/dev/null 2>/dev/null
    final_values=($(read_values))

    echo "pswpin: $((final_values[0] - initial_values[0]))"
    echo "pswpout: $((final_values[1] - initial_values[1]))"
    echo "pgpgin: $((final_values[2] - initial_values[2]))"
    echo "pgpgout: $((final_values[3] - initial_values[3]))"
    echo "swpout_zero: $((final_values[4] - initial_values[4]))"
    echo "swpin_zero: $((final_values[5] - initial_values[5]))"
    echo "swpin_zero_read: $((final_values[6] - initial_values[6]))"
done

The results I am seeing are as follows:

real	6m43.998s
user	47m3.800s
sys	5m7.169s

pswpin: 342041
pswpout: 1470846
pgpgin: 11744932
pgpgout: 14466564
swpout_zero: 318030
swpin_zero: 93621
swpin_zero_read: 13118

The proportion of zero-filled swap-outs (swpout_zero) is quite large
(> 10%): 318,030 vs. 1,470,846. The percentage is 17.8% =
318,030 / (318,030 + 1,470,846).
About 29.4% (93,621 / 318,030) of the swapped-out zero-filled pages are
later swapped back in, and only 14.0% of those swap-ins are reads
(13,118 / 93,621). Therefore, a total of 17.8% * 29.4% * 14% ≈ 0.73% of
all swapped-out pages would be re-mapped to zero_pfn, potentially saving
up to 0.73% of RSS in this kernel-build workload.

Accordingly, the total build time in my final results falls within the
testing jitter range and shows no noticeable difference, while the
conceptual model code with lots of zero-filled pages and read swap-ins
shows significant differences. I'm not sure whether we can identify
another real workload with more read swap-ins where a noticeable
improvement could be observed. Perhaps Usama has some?

Thanks
Barry