From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	viro@zeniv.linux.org.uk, baohua@kernel.org, osalvador@suse.de,
	lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu,
	pavel@kernel.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-pm@vger.kernel.org, peterx@redhat.com
Subject: [RFC PATCH v2 16/18] swap: simplify swapoff using virtual swap
Date: Tue, 29 Apr 2025 16:38:44 -0700
Message-ID: <20250429233848.3093350-17-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250429233848.3093350-1-nphamcs@gmail.com>
References: <20250429233848.3093350-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch presents the second application of the virtual swap design:
simplifying and optimizing swapoff.

With virtual swap slots stored in page table entries and used as indices
into the various swap-related data structures, we no longer have to perform
a page table walk in swapoff. We simply iterate through all the allocated
swap slots on the swapfile, invoke the backward map, and fault them in.

This is significantly cleaner, as well as slightly more performant,
especially when there are a lot of unrelated VMAs (since the old swapoff
code would have to traverse through all of them).

In a simple benchmark, in which we swap off a 32 GB swapfile that is 50%
full while a process maps a 128 GB file into memory:

Baseline:
real: 25.54s
user: 0.00s
sys: 11.48s

New Design:
real: 11.69s
user: 0.00s
sys: 9.96s

Disregarding the real time reduction (which is mostly due to more IO
asynchrony), the new design reduces the kernel CPU time by about 13%.
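
Schematically, the new try_to_unuse() boils down to two passes over the
swapfile. The following is a condensed sketch, not part of the patch:
the identifiers are the ones added in the diff below, and locking,
retries, and error handling are elided.

	/* Pass 1: submit reads for every allocated physical slot. */
	offset = 0;
	for_each_allocated_offset(si, offset) {
		/* backward map: physical slot -> virtual slot */
		entry = swp_slot_to_swp_entry(swp_slot(type, offset));
		if (entry.val) {
			folio = pagein(entry, &splug, mpol);
			if (folio)
				folio_put(folio);
		}
	}

	/* Pass 2: re-point each virtual slot at its now-resident folio
	 * and release the physical slot via vswap_swapoff().
	 */
	offset = 0;
	for_each_allocated_offset(si, offset) {
		slot = swp_slot(type, offset);
		entry = swp_slot_to_swp_entry(slot);
		folio = pagein(entry, &splug, mpol);
		folio_lock(folio);
		vswap_swapoff(entry, folio, slot);
		folio_unlock(folio);
		folio_put(folio);
	}
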
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/shmem_fs.h |   3 +
 include/linux/swap.h     |   1 +
 mm/shmem.c               |   2 +
 mm/swapfile.c            | 127 +++++++++++++++++++++++++++++++++++++++
 mm/vswap.c               |  61 +++++++++++++++++++
 5 files changed, 194 insertions(+)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..668b6add3b8f 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -108,7 +108,10 @@ extern void shmem_unlock_mapping(struct address_space *mapping);
 extern struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 					pgoff_t index, gfp_t gfp_mask);
 extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
+
+#ifndef CONFIG_VIRTUAL_SWAP
 int shmem_unuse(unsigned int type);
+#endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index c5a16f1ca376..0c585103d228 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -774,6 +774,7 @@ void vswap_store_folio(swp_entry_t entry, struct folio *folio);
 void swap_zeromap_folio_set(struct folio *folio);
 void vswap_assoc_zswap(swp_entry_t entry, struct zswap_entry *zswap_entry);
 bool vswap_can_swapin_thp(swp_entry_t entry, int nr);
+void vswap_swapoff(swp_entry_t entry, struct folio *folio, swp_slot_t slot);
 #else /* CONFIG_VIRTUAL_SWAP */
 static inline int vswap_init(void)
 {
diff --git a/mm/shmem.c b/mm/shmem.c
index 609971a2b365..fa792769e422 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1380,6 +1380,7 @@ static void shmem_evict_inode(struct inode *inode)
 #endif
 }
 
+#ifndef CONFIG_VIRTUAL_SWAP
 static int shmem_find_swap_entries(struct address_space *mapping,
 				   pgoff_t start, struct folio_batch *fbatch,
 				   pgoff_t *indices, unsigned int type)
@@ -1525,6 +1526,7 @@ int shmem_unuse(unsigned int type)
 
 	return error;
 }
+#endif /* CONFIG_VIRTUAL_SWAP */
 
 /*
  * Move the page from the page cache to the swap cache.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 83016d86eb1c..3aa3df10c3be 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2089,6 +2089,132 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 	return i;
 }
 
+#ifdef CONFIG_VIRTUAL_SWAP
+#define for_each_allocated_offset(si, offset) \
+	while (swap_usage_in_pages(si) && \
+	       !signal_pending(current) && \
+	       (offset = find_next_to_unuse(si, offset)) != 0)
+
+static struct folio *pagein(swp_entry_t entry, struct swap_iocb **splug,
+			    struct mempolicy *mpol)
+{
+	bool folio_was_allocated;
+	struct folio *folio = __read_swap_cache_async(entry, GFP_KERNEL, mpol,
+			NO_INTERLEAVE_INDEX, &folio_was_allocated, false);
+
+	if (folio_was_allocated)
+		swap_read_folio(folio, splug);
+	return folio;
+}
+
+static int try_to_unuse(unsigned int type)
+{
+	struct swap_info_struct *si = swap_info[type];
+	struct swap_iocb *splug = NULL;
+	struct mempolicy *mpol;
+	struct blk_plug plug;
+	unsigned long offset;
+	struct folio *folio;
+	swp_entry_t entry;
+	swp_slot_t slot;
+	int ret = 0;
+
+	if (!atomic_long_read(&si->inuse_pages))
+		goto success;
+
+	mpol = get_task_policy(current);
+	blk_start_plug(&plug);
+
+	/* first round - submit the reads */
+	offset = 0;
+	for_each_allocated_offset(si, offset) {
+		slot = swp_slot(type, offset);
+		entry = swp_slot_to_swp_entry(slot);
+		if (!entry.val)
+			continue;
+
+		folio = pagein(entry, &splug, mpol);
+		if (folio)
+			folio_put(folio);
+	}
+	blk_finish_plug(&plug);
+	swap_read_unplug(splug);
+	lru_add_drain();
+
+	/* second round - updating the virtual swap slots' backing state */
+	offset = 0;
+	for_each_allocated_offset(si, offset) {
+		slot = swp_slot(type, offset);
+retry:
+		entry = swp_slot_to_swp_entry(slot);
+		if (!entry.val)
+			continue;
+
+		/* try to allocate swap cache folio */
+		folio = pagein(entry, &splug, mpol);
+		if (!folio) {
+			if (!swp_slot_to_swp_entry(swp_slot(type, offset)).val)
+				continue;
+
+			ret = -ENOMEM;
+			pr_err("swapoff: unable to allocate swap cache folio for %lu\n",
+				entry.val);
+			goto finish;
+		}
+
+		folio_lock(folio);
+		/*
+		 * We need to check if the folio is still in swap cache. We can, for
+		 * instance, race with zswap writeback, obtaining the temporary folio
+		 * it allocated for decompression and writeback, which would be
+		 * promptly deleted from swap cache. By the time we lock that folio,
+		 * it might have already contained stale data.
+		 *
+		 * Concurrent swap operations might have also come in before we
+		 * reobtain the lock, deleting the folio from swap cache, invalidating
+		 * the virtual swap slot, then swapping out the folio again.
+		 *
+		 * In all of these cases, we must retry the physical -> virtual lookup.
+		 *
+		 * Note that if everything is still valid, then the virtual swap slot
+		 * must correspond to the head page (since all previous swap slots are
+		 * freed).
+		 */
+		if (!folio_test_swapcache(folio) || folio->swap.val != entry.val) {
+			folio_unlock(folio);
+			folio_put(folio);
+			if (signal_pending(current))
+				break;
+			schedule_timeout_uninterruptible(1);
+			goto retry;
+		}
+
+		folio_wait_writeback(folio);
+		vswap_swapoff(entry, folio, slot);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+finish:
+	if (ret == -ENOMEM)
+		return ret;
+
+	/* concurrent swappers might still be releasing physical swap slots... */
+	while (swap_usage_in_pages(si)) {
+		if (signal_pending(current))
+			return -EINTR;
+		schedule_timeout_uninterruptible(1);
+	}
+
+success:
+	/*
+	 * Make sure that further cleanups after try_to_unuse() returns happen
+	 * after swap_range_free() reduces si->inuse_pages to 0.
+	 */
+	smp_mb();
+	return 0;
+}
+#else
 static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
 	return pte_same(pte_swp_clear_flags(pte), swp_pte);
@@ -2479,6 +2605,7 @@ static int try_to_unuse(unsigned int type)
 	smp_mb();
 	return 0;
 }
+#endif /* CONFIG_VIRTUAL_SWAP */
 
 /*
  * After a successful try_to_unuse, if no swap is now in use, we know
diff --git a/mm/vswap.c b/mm/vswap.c
index 4aeb144921b8..35261b5664ee 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -1252,6 +1252,67 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 	swapcache_clear(NULL, entry, nr);
 }
 
+/**
+ * vswap_swapoff - unlink a range of virtual swap slots from their backing
+ *		   physical swap slots on a swapfile that is being swapped off,
+ *		   and associate them with the swapped-in folio.
+ * @entry: the first virtual swap slot in the range.
+ * @folio: the folio swapped in and loaded into swap cache.
+ * @slot: the first physical swap slot in the range.
+ */
+void vswap_swapoff(swp_entry_t entry, struct folio *folio, swp_slot_t slot)
+{
+	int i = 0, nr = folio_nr_pages(folio);
+	struct swp_desc *desc;
+	unsigned int type = swp_slot_type(slot);
+	unsigned int offset = swp_slot_offset(slot);
+
+	XA_STATE(xas, &vswap_map, entry.val);
+
+	rcu_read_lock();
+	xas_for_each(&xas, desc, entry.val + nr - 1) {
+		if (xas_retry(&xas, desc))
+			continue;
+
+		write_lock(&desc->lock);
+		/*
+		 * There might be concurrent swap operations that invalidate the
+		 * originally obtained virtual swap slot, allowing it to be
+		 * re-allocated, or change its backing state.
+		 *
+		 * We must re-check here to make sure we are not performing bogus
+		 * backing store changes.
+		 */
+		if (desc->type != VSWAP_SWAPFILE ||
+				swp_slot_type(desc->slot) != type) {
+			/* there should not be mixed backing states among the subpages */
+			VM_WARN_ON(i);
+			write_unlock(&desc->lock);
+			break;
+		}
+
+		VM_WARN_ON(swp_slot_offset(desc->slot) != offset + i);
+
+		xa_erase(&vswap_rmap, desc->slot.val);
+		desc->type = VSWAP_FOLIO;
+		desc->folio = folio;
+		write_unlock(&desc->lock);
+		i++;
+	}
+	rcu_read_unlock();
+
+	if (i) {
+		/*
+		 * If we updated the virtual swap slots' backing, mark the folio as
+		 * dirty so that reclaimers will try to page it out again.
+		 */
+		folio_mark_dirty(folio);
+		swap_slot_free_nr(slot, nr);
+		/* folio is in swap cache, so entries are guaranteed to be valid */
+		mem_cgroup_uncharge_swap(entry, nr);
+	}
+}
+
 #ifdef CONFIG_MEMCG
 static unsigned short vswap_cgroup_record(swp_entry_t entry,
 					  unsigned short memcgid, unsigned int nr_ents)
-- 
2.47.1