From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, Chris Li, David Hildenbrand, Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Baolin Wang, Baoquan He, Barry Song, Kalesh Singh, Kemeng Shi, Tim Chen, Ryan Roberts, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 09/28] mm/swap: rename __read_swap_cache_async to __swapin_cache_alloc
Date: Thu, 15 May 2025 04:17:09 +0800
Message-ID: <20250514201729.48420-10-ryncsn@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250514201729.48420-1-ryncsn@gmail.com>
References: <20250514201729.48420-1-ryncsn@gmail.com>
From: Kairui Song <ryncsn@gmail.com>

__read_swap_cache_async() is widely used to allocate a folio and ensure it is in the swap cache, or to return the folio if one is already there. It is not asynchronous, and it does not perform any read. Rename it to better reflect its usage.
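As a quick illustration of the allocate-or-get behavior described above, here is a minimal sketch of the typical call pattern, modeled on the zswap_writeback_entry() call site touched by this patch; the example_writeback() wrapper and its return values are hypothetical, not kernel code:

/*
 * Illustrative sketch (hypothetical wrapper, not kernel code): the
 * allocate-or-get pattern served by __swapin_cache_alloc(), modeled
 * on the zswap_writeback_entry() call site changed below.
 */
static int example_writeback(swp_entry_t swpentry)
{
	struct mempolicy *mpol = get_task_policy(current);
	bool folio_was_allocated;
	struct folio *folio;

	/*
	 * Allocate a folio and install it in the swap cache, or get
	 * the folio that is already cached for this entry.
	 */
	folio = __swapin_cache_alloc(swpentry, GFP_KERNEL, mpol,
			NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
	if (!folio)
		return -ENOMEM;

	/* False here means another caller installed the folio first. */
	if (!folio_was_allocated) {
		folio_put(folio);
		return -EEXIST;
	}
	return 0;
}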
Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 mm/swap.h       |  2 +-
 mm/swap_state.c | 20 ++++++++++----------
 mm/swapfile.c   |  2 +-
 mm/zswap.c      |  4 ++--
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 30cd257aecbb..fec7d6e751ae 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -210,7 +210,7 @@ void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
 struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		struct vm_area_struct *vma, unsigned long addr,
 		struct swap_iocb **plug);
-struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
+struct folio *__swapin_cache_alloc(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
 		bool skip_if_exists);
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index bef9633533ec..fe71706e29d9 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -353,7 +353,7 @@ void swap_update_readahead(struct folio *folio,
 	}
 }
 
-struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+struct folio *__swapin_cache_alloc(swp_entry_t entry, gfp_t gfp_mask,
 		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
 		bool skip_if_exists)
 {
@@ -403,12 +403,12 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			goto put_and_return;
 
 		/*
-		 * Protect against a recursive call to __read_swap_cache_async()
+		 * Protect against a recursive call to __swapin_cache_alloc()
 		 * on the same entry waiting forever here because SWAP_HAS_CACHE
 		 * is set but the folio is not the swap cache yet. This can
 		 * happen today if mem_cgroup_swapin_charge_folio() below
 		 * triggers reclaim through zswap, which may call
-		 * __read_swap_cache_async() in the writeback path.
+		 * __swapin_cache_alloc() in the writeback path.
 		 */
 		if (skip_if_exists)
 			goto put_and_return;
@@ -417,7 +417,7 @@
 		 * We might race against __swap_cache_del_folio(), and
 		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
 		 * has not yet been cleared. Or race against another
-		 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
+		 * __swapin_cache_alloc(), which has set SWAP_HAS_CACHE
 		 * in swap_map, but not yet added its folio to swap cache.
 		 */
 		schedule_timeout_uninterruptible(1);
@@ -464,7 +464,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  * the swap entry is no longer in use.
  *
  * get/put_swap_device() aren't needed to call this function, because
- * __read_swap_cache_async() call them and swap_read_folio() holds the
+ * __swapin_cache_alloc() call them and swap_read_folio() holds the
  * swap cache folio lock.
 */
 struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
@@ -482,7 +482,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		return NULL;
 
 	mpol = get_vma_policy(vma, addr, 0, &ilx);
-	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+	folio = __swapin_cache_alloc(entry, gfp_mask, mpol, ilx,
 			&page_allocated, false);
 	mpol_cond_put(mpol);
 
@@ -600,7 +600,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		folio = __read_swap_cache_async(
+		folio = __swapin_cache_alloc(
 				swp_entry(swp_type(entry), offset),
 				gfp_mask, mpol, ilx, &page_allocated, false);
 		if (!folio)
@@ -619,7 +619,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+	folio = __swapin_cache_alloc(entry, gfp_mask, mpol, ilx,
 			&page_allocated, false);
 	if (unlikely(page_allocated))
 		swap_read_folio(folio, NULL);
@@ -714,7 +714,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 			continue;
 		pte_unmap(pte);
 		pte = NULL;
-		folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+		folio = __swapin_cache_alloc(entry, gfp_mask, mpol, ilx,
 				&page_allocated, false);
 		if (!folio)
 			continue;
@@ -734,7 +734,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	lru_add_drain();
 skip:
 	/* The folio was likely read above, so no need for plugging here */
-	folio = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
+	folio = __swapin_cache_alloc(targ_entry, gfp_mask, mpol, targ_ilx,
 			&page_allocated, false);
 	if (unlikely(page_allocated))
 		swap_read_folio(folio, NULL);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index aaf7d21eaecb..62af67b6f7c2 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1390,7 +1390,7 @@ static unsigned char swap_entry_put_locked(struct swap_info_struct *si,
 	 * CPU1				CPU2
 	 * do_swap_page()
 	 *   ...			swapoff+swapon
-	 *   __read_swap_cache_async()
+	 *   __swapin_cache_alloc()
 	 *     swapcache_prepare()
 	 *       __swap_duplicate()
 	 *         // check swap_map
diff --git a/mm/zswap.c b/mm/zswap.c
index af954bda0b02..87aebeee11ef 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1084,8 +1084,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 		return -EEXIST;
 
 	mpol = get_task_policy(current);
-	folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
-			NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
+	folio = __swapin_cache_alloc(swpentry, GFP_KERNEL, mpol,
+			NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
 	put_swap_device(si);
 	if (!folio)
 		return -ENOMEM;
-- 
2.49.0