From: Kairui Song <ryncsn@gmail.com>
Date: Wed, 27 Mar 2024 11:01:05 +0800
Subject: Re: [RFC PATCH 00/10] mm/swap: always use swap cache for synchronization
To: "Huang, Ying"
Cc: linux-mm@kvack.org, Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
 Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed, Johannes Weiner,
 Matthew Wilcox, Nhat Pham, Chengming Zhou, Andrew Morton,
 linux-kernel@vger.kernel.org
In-Reply-To: <878r24o07p.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240326185032.72159-1-ryncsn@gmail.com> <878r24o07p.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Wed, Mar 27, 2024 at 10:54 AM Huang, Ying wrote:
>
> Hi, Kairui,
>
> Kairui Song writes:
>
> > From: Kairui Song
> >
> > A month ago a bug was fixed for SWP_SYNCHRONOUS_IO swapin (swap cache
> > bypass swapin):
> > https://lore.kernel.org/linux-mm/20240219082040.7495-1-ryncsn@gmail.com/
> >
> > Because we have to spin on the swap map on race, and the swap map is
> > too small to contain more usable info, an ugly
> > schedule_timeout_uninterruptible(1) was added. It's not the first
> > time a hackish workaround has been added for cache bypass swapin, and
> > it won't be the last. I did many experiments locally to see if the
> > swap cache bypass path can be dropped while keeping the performance
> > comparable, and it seems doable.
>
> In general, I think that it's a good idea to unify cache bypass swapin
> and normal swapin. But I haven't dived into the implementation yet.

Thanks! This series might be a bit too large; I can try to split it up
for easier reviewing later, if we are OK with this idea.

> > This series does the following things:
> > 1. Remove swap cache bypass completely.
> > 2. Apply multiple optimizations after that; these optimizations are
> >    either impossible or very difficult to do without dropping the
> >    cache bypass swapin path.
> > 3. Use the swap cache as a synchronization layer, and unify some code
> >    with the page cache (filemap).
> >
> > As a result, we have:
> > 1. Comparable performance; some tests are even faster.
> > 2. Multi-index support for the swap cache.
> > 3. Many hackish workarounds removed, including the long tail latency
> >    issue above.
> >
> > Sending this as RFC to collect discussion, suggestions, or rejection
> > early. This probably needs to be split into multiple series, but the
> > performance is not good until the last patch, so I think separating
> > them from the start might make this approach less convincing. And
> > there is still some (maybe further) TODO and optimization space if we
> > are OK with this approach.
> >
> > This is based on another series of mine, which reuses filemap code
> > for the swap cache:
> > [PATCH v2 0/4] mm/filemap: optimize folio adding and splitting
> > https://lore.kernel.org/linux-mm/20240325171405.99971-1-ryncsn@gmail.com/
> >
> > Patch 1/10 introduces a helper on the filemap side to be used later.
> > Patches 2/10 and 3/10 are cleanups that prepare for removing the swap
> >   cache bypass swapin path.
> > Patch 4/10 removes the swap cache bypass swapin path; performance
> >   drops heavily (-28%).
> > Patch 5/10 applies the first optimization after the removal: since
> >   all folios go through the swap cache now, explicit shadow clearing
> >   is no longer needed.
> > Patch 6/10 applies another optimization after cleaning up the shadow
> >   clearing routines. The swap cache is now very much like the page
> >   cache, so we can just reuse page cache code and gain multi-index
> >   support. Shadow memory usage drops a lot.
> > Patch 7/10 just renames __read_swap_cache_async; it will be
> >   refactored into a key part of this series, and the old naming is
> >   very confusing to me.
> > Patch 8/10 makes the swap cache a synchronization layer, introducing
> >   two helpers for adding folios to the swap cache: the caller will
> >   either succeed or get a folio to wait on (see the sketch below).
> > Patch 9/10 applies another optimization. With the above two helpers,
> >   swap cache lookup can be optimized to avoid false lookups, which
> >   helps improve performance.
> > Patch 10/10 applies a major optimization for SWP_SYNCHRONOUS_IO
> >   devices. After this commit, performance for simple swapin/swapout
> >   is basically the same as before.
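To make the synchronization idea in patch 8/10 easier to picture before
reading the code, here is a rough sketch of the "insert wins, loser
waits" pattern. This is only an illustration: swap_cache_try_add() and
swapin_do_io() below are made-up names, not the real interface
introduced by the patch.

static struct folio *swapin_entry_sketch(swp_entry_t entry, gfp_t gfp)
{
	struct folio *folio, *existing;

	folio = folio_alloc(gfp, 0);
	if (!folio)
		return NULL;
	__folio_set_locked(folio);

	/*
	 * Made-up helper: atomically insert @folio into the swap cache
	 * for @entry. Returns NULL if we won the race, or the folio
	 * that is already there if someone beat us to it.
	 */
	existing = swap_cache_try_add(folio, entry);
	if (!existing) {
		swapin_do_io(folio);	/* made-up: unlocks when uptodate */
		return folio;
	}

	/* Lost the race: drop ours, wait on the winner's folio instead. */
	folio_unlock(folio);
	folio_put(folio);
	folio_wait_locked(existing);
	return existing;
}

Since the swap cache insertion is the only synchronization point here,
the loser never has to spin on the swap map, which is what makes the
old schedule_timeout_uninterruptible(1) workaround unnecessary.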
> >
> > Test 1, sequential swapin/out of 30G zero pages on ZRAM:
> >
> >                 Before (us)   After (us)
> > Swapout:        33619409      33886008
> > Swapin:         32393771      32465441   (- 0.2%)
> > Swapout (THP):   7817909       6899938   (+11.8%)
> > Swapin (THP):   32452387      33193479   (- 2.2%)
>
> If my understanding were correct, we don't have swapin (THP) support,
> yet. Right?

Yes, this series doesn't change how swapin/swapout works with THP in
general. But THP swapout will now leave shadows with large order, so
they need to be split upon swapin; that slows down later swapin by a
little bit, but I think it's worth it. If we can do THP swapin in the
future, this split on swapin can be avoided, making the performance
even better.

> > And after swapping out 30G with THP, the radix node usage dropped by
> > a lot:
> >
> > Before: radix_tree_node 73728K
> > After:  radix_tree_node  7056K (-94%)
>
> Good!
>
> > Test 2:
> > MySQL (16G buffer pool, 32G ZRAM SWAP, 4G memcg, Zswap disabled, THP never)
> >   sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
> >     --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
> >     --threads=48 --time=300 --report-interval=10 run
> >
> > Before: transactions: 4849.25 per sec
> > After:  transactions: 4849.40 per sec
> >
> > Test 3:
> > MySQL (16G buffer pool, NVMe SWAP, 4G memcg, Zswap enabled, THP never)
> >   echo never > /sys/kernel/mm/transparent_hugepage/enabled
> >   echo 100 > /sys/module/zswap/parameters/max_pool_percent
> >   echo 1 > /sys/module/zswap/parameters/enabled
> >   echo y > /sys/module/zswap/parameters/shrinker_enabled
> >
> >   sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
> >     --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
> >     --threads=48 --time=600 --report-interval=10 run
> >
> > Before: transactions: 1662.90 per sec
> > After:  transactions: 1726.52 per sec
>
> 3.8% improvement. Good!
>
> > Test 4:
> > MySQL (16G buffer pool, NVMe SWAP, 4G memcg, Zswap enabled, THP always)
> >   echo always > /sys/kernel/mm/transparent_hugepage/enabled
> >   echo 100 > /sys/module/zswap/parameters/max_pool_percent
> >   echo 1 > /sys/module/zswap/parameters/enabled
> >   echo y > /sys/module/zswap/parameters/shrinker_enabled
> >
> >   sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
> >     --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
> >     --threads=48 --time=600 --report-interval=10 run
> >
> > Before: transactions: 2860.90 per sec
> > After:  transactions: 2802.55 per sec
> >
> > Test 5:
> > Memtier / memcached (16G brd SWAP, 8G memcg, THP never):
> >
> >   memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
> >     -t 16 -B binary &
> >
> >   memtier_benchmark -S /tmp/memcached.socket \
> >     -P memcache_binary -n allkeys --key-minimum=1 \
> >     --key-maximum=24000000 --key-pattern=P:P -c 1 -t 16 \
> >     --ratio 1:0 --pipeline 8 -d 1000
> >
> > Before: 106730.31 Ops/sec
> > After:  106360.11 Ops/sec
> >
> > Test 6:
> > Memtier / memcached (16G brd SWAP, 8G memcg, THP always):
> >
> >   memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
> >     -t 16 -B binary &
> >
> >   memtier_benchmark -S /tmp/memcached.socket \
> >     -P memcache_binary -n allkeys --key-minimum=1 \
> >     --key-maximum=24000000 --key-pattern=P:P -c 1 -t 16 \
> >     --ratio 1:0 --pipeline 8 -d 1000
> >
> > Before: 83193.11 Ops/sec
> > After:  82504.89 Ops/sec
> >
> > These tests were run under heavy memory stress, and the performance
> > is basically the same as before, very slightly better or worse in
> > certain cases; the benefits of multi-index are basically erased by
> > fragmentation, and workingset node usage is slightly lower.
> >
> > Some (maybe further) TODO items if we are OK with this approach:
> >
> > - I see a slight performance regression for the THP tests, and could
> >   identify a clear hotspot with perf; my guess is that contention on
> >   the xa_lock is the issue (we have one xa_lock for every 64M of swap
> >   cache space), since THP handling needs to hold the lock longer than
> >   usual. Splitting the xa_lock to be more fine-grained seems a good
> >   solution. We have SWAP_ADDRESS_SPACE_SHIFT = 14, which is not an
> >   optimal value: considering XA_CHUNK_SHIFT is 6, we will have three
> >   layers of XArray just for 2 extra bits. 12 should be better, always
> >   making full use of each XA chunk and having two layers at most (see
> >   the sketch after this list). But duplicated address_space structs
> >   also waste more memory and cachelines. I see an observable
> >   performance drop (~3%) after changing SWAP_ADDRESS_SPACE_SHIFT to
> >   12. It might be a good idea to decouple the swap cache xarray from
> >   address_space (there are too many users of the swap cache, so the
> >   change shouldn't be too dirty).
> >
> > - Actually, after patch 4/10 the performance is much better for tests
> >   limited by a memory cgroup, until patch 10/10 applies the direct
> >   swap cache freeing logic for SWP_SYNCHRONOUS_IO swapin. That's
> >   because if the swap device is not nearly full, swapin doesn't clear
> >   up the swap cache, so repeated swapout doesn't need to re-allocate
> >   a swap entry, making things faster. This may indicate that lazy
> >   freeing of the swap cache could benefit certain workloads and may
> >   be worth looking into later.
> >
> > - Now SWP_SYNCHRONOUS_IO swapin will bypass readahead and force-drop
> >   the swap cache after swapin is done, which can be cleaned up and
> >   optimized further after this patch. Device type will only determine
> >   the readahead logic, and the swap cache drop check can be based
> >   purely on swap count.
> >
> > - The recent mTHP swapin/swapout series should have no fundamental
> >   conflict with this.
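As a side note on the layer math above, here is a tiny userspace sketch
(my own illustration, nothing from the patches) showing why shift 14
costs three XArray levels while shift 12 needs only two, given that
each XArray node resolves XA_CHUNK_SHIFT = 6 bits of index:

#include <stdio.h>

int main(void)
{
	int xa_chunk_shift = 6;	/* index bits resolved per XArray level */

	for (int shift = 12; shift <= 14; shift++)
		printf("SWAP_ADDRESS_SPACE_SHIFT=%d -> %d levels\n", shift,
		       (shift + xa_chunk_shift - 1) / xa_chunk_shift);
	/* Prints 2 levels for shift 12, and 3 for both 13 and 14. */
	return 0;
}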
> >
> > Kairui Song (10):
> >   mm/filemap: split filemap storing logic into a standalone helper
> >   mm/swap: move no readahead swapin code to a stand-alone helper
> >   mm/swap: convert swapin_readahead to return a folio
> >   mm/swap: remove cache bypass swapin
> >   mm/swap: clean shadow only in unmap path
> >   mm/swap: switch to use multi index entries
> >   mm/swap: rename __read_swap_cache_async to swap_cache_alloc_or_get
> >   mm/swap: use swap cache as a synchronization layer
> >   mm/swap: delay the swap cache look up for swapin
> >   mm/swap: optimize synchronous swapin
> >
> >  include/linux/swapops.h |   5 +-
> >  mm/filemap.c            | 161 +++++++++-----
> >  mm/huge_memory.c        |  78 +++----
> >  mm/internal.h           |   2 +
> >  mm/memory.c             | 133 ++++-------
> >  mm/shmem.c              |  44 ++--
> >  mm/swap.h               |  71 ++++--
> >  mm/swap_state.c         | 478 +++++++++++++++++++++-------------------
> >  mm/swapfile.c           |  64 +++---
> >  mm/vmscan.c             |   8 +-
> >  mm/workingset.c         |   2 +-
> >  mm/zswap.c              |   4 +-
> >  12 files changed, 540 insertions(+), 510 deletions(-)
>
> --
> Best Regards,
> Huang, Ying