From: Kairui Song via B4 Relay
Date: Tue, 21 Apr 2026 14:16:45 +0800
Subject: [PATCH v3 01/12] mm, swap: simplify swap cache allocation helper
Message-Id: <20260421-swap-table-p4-v3-1-2f23759a76bc@tencent.com>
References: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
In-Reply-To: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, David Hildenbrand, Zi Yan, Baolin Wang, Barry Song,
    Hugh Dickins, Chris Li, Kemeng Shi, Nhat Pham, Baoquan He,
    Johannes Weiner, Youngjun Park, Chengming Zhou, Roman Gushchin,
    Shakeel Butt, Muchun Song, Qi Zheng, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, Kairui Song, Yosry Ahmed, Lorenzo Stoakes,
    Dev Jain, Lance Yang, Michal Hocko, Suren Baghdasaryan, Axel Rasmussen
Reply-To: kasong@tencent.com

From: Kairui Song

Instead of trying to return the existing folio when the entry is already
cached, simply return an error code when the allocation fails, and drop
the output argument. Introduce proper wrappers that handle the allocation
failure in different ways: for async swapin and readahead, the caller
only wants to ensure a swap-in read is issued once the allocation
succeeds; for zswap writeback, the caller aborts if the allocation fails
because the entry is gone or already cached.

Signed-off-by: Kairui Song
---
 mm/swap.h       |   3 +-
 mm/swap_state.c | 180 +++++++++++++++++++++++++++++---------------------------
 mm/zswap.c      |  23 +++-----
 3 files changed, 103 insertions(+), 103 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b..ad8b17a93758 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -281,8 +281,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry);
 void *swap_cache_get_shadow(swp_entry_t entry);
 void swap_cache_del_folio(struct folio *folio);
 struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_flags,
-				     struct mempolicy *mpol, pgoff_t ilx,
-				     bool *alloced);
+				     struct mempolicy *mpol, pgoff_t ilx);
 
 /* Below helpers require the caller to lock and pass in the swap cluster. */
 void __swap_cache_add_folio(struct swap_cluster_info *ci, struct folio *folio, swp_entry_t entry);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1415a5c54a43..204a9499d50c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -459,54 +459,38 @@ void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
  * All swap slots covered by the folio must have a non-zero swap count.
  *
  * Context: Caller must protect the swap device with reference count or locks.
- * Return: Returns the folio being added on success. Returns the existing folio
- * if @entry is already cached. Returns NULL if raced with swapin or swapoff.
+ * Return: 0 if success, error code if failed.
  */
-static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
-						  struct folio *folio,
-						  gfp_t gfp, bool charged)
+static int __swap_cache_prepare_and_add(swp_entry_t entry,
+					struct folio *folio,
+					gfp_t gfp, bool charged)
 {
-	struct folio *swapcache = NULL;
 	void *shadow;
 	int ret;
 
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);
 
-	if (!charged && mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry))
+	if (!charged && mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry)) {
+		ret = -ENOMEM;
 		goto failed;
-
-	for (;;) {
-		ret = swap_cache_add_folio(folio, entry, &shadow);
-		if (!ret)
-			break;
-
-		/*
-		 * Large order allocation needs special handling on
-		 * race: if a smaller folio exists in cache, swapin needs
-		 * to fallback to order 0, and doing a swap cache lookup
-		 * might return a folio that is irrelevant to the faulting
-		 * entry because @entry is aligned down. Just return NULL.
-		 */
-		if (ret != -EEXIST || folio_test_large(folio))
-			goto failed;
-
-		swapcache = swap_cache_get_folio(entry);
-		if (swapcache)
-			goto failed;
 	}
 
+	ret = swap_cache_add_folio(folio, entry, &shadow);
+	if (ret)
+		goto failed;
+
 	memcg1_swapin(entry, folio_nr_pages(folio));
 
 	if (shadow)
 		workingset_refault(folio, shadow);
 
 	/* Caller will initiate read into locked folio */
 	folio_add_lru(folio);
-	return folio;
+	return 0;
 failed:
 	folio_unlock(folio);
-	return swapcache;
+	return ret;
 }
 
 /**
@@ -515,7 +499,6 @@ static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
  * @gfp_mask: memory allocation flags
  * @mpol: NUMA memory allocation policy to be applied
  * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
- * @new_page_allocated: sets true if allocation happened, false otherwise
  *
  * Allocate a folio in the swap cache for one swap slot, typically before
  * doing IO (e.g. swap in or zswap writeback). The swap slot indicated by
@@ -523,18 +506,40 @@ static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
  * Currently only supports order 0.
  *
  * Context: Caller must protect the swap device with reference count or locks.
- * Return: Returns the existing folio if @entry is cached already. Returns
- * NULL if failed due to -ENOMEM or @entry have a swap count < 1.
+ * Return: Returns the folio if allocation succeeded and folio is added to
+ * swap cache. Returns error code if allocation failed due to race.
  */
 struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
-				     struct mempolicy *mpol, pgoff_t ilx,
-				     bool *new_page_allocated)
+				     struct mempolicy *mpol, pgoff_t ilx)
+{
+	int ret;
+	struct folio *folio;
+
+	/* Allocate a new folio to be added into the swap cache. */
+	folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
+	if (!folio)
+		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Try to add the new folio to the swap cache. It returns
+	 * -EEXIST if the entry is already cached.
+	 */
+	ret = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
+	if (ret) {
+		folio_put(folio);
+		return ERR_PTR(ret);
+	}
+
+	return folio;
+}
+
+static struct folio *swap_cache_read_folio(swp_entry_t entry, gfp_t gfp,
+					   struct mempolicy *mpol, pgoff_t ilx,
+					   struct swap_iocb **plug, bool readahead)
 {
 	struct swap_info_struct *si = __swap_entry_to_info(entry);
 	struct folio *folio;
-	struct folio *result = NULL;
 
-	*new_page_allocated = false;
 	/* Check the swap cache again for readahead path. */
 	folio = swap_cache_get_folio(entry);
 	if (folio)
@@ -544,17 +549,24 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
 	if (!swap_entry_swapped(si, entry))
 		return NULL;
 
-	/* Allocate a new folio to be added into the swap cache. */
-	folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
-	if (!folio)
+	do {
+		folio = swap_cache_get_folio(entry);
+		if (folio)
+			return folio;
+
+		folio = swap_cache_alloc_folio(entry, gfp, mpol, ilx);
+	} while (IS_ERR(folio) && PTR_ERR(folio) == -EEXIST);
+
+	if (IS_ERR_OR_NULL(folio))
 		return NULL;
-	/* Try add the new folio, returns existing folio or NULL on failure. */
-	result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
-	if (result == folio)
-		*new_page_allocated = true;
-	else
-		folio_put(folio);
-	return result;
+
+	swap_read_folio(folio, plug);
+	if (readahead) {
+		folio_set_readahead(folio);
+		count_vm_event(SWAP_RA);
+	}
+
+	return folio;
 }
 
 /**
@@ -573,15 +585,35 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
 {
+	int ret;
 	struct folio *swapcache;
 	pgoff_t offset = swp_offset(entry);
 	unsigned long nr_pages = folio_nr_pages(folio);
 
 	entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
-	swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
-	if (swapcache == folio)
-		swap_read_folio(folio, NULL);
-	return swapcache;
+	for (;;) {
+		ret = __swap_cache_prepare_and_add(entry, folio, 0, true);
+		if (!ret) {
+			swap_read_folio(folio, NULL);
+			break;
+		}
+
+		/*
+		 * Large order allocation needs special handling on
+		 * race: if a smaller folio exists in cache, swapin needs
+		 * to fall back to order 0, and doing a swap cache lookup
+		 * might return a folio that is irrelevant to the faulting
+		 * entry because @entry is aligned down. Just return NULL.
+		 */
+		if (ret != -EEXIST || nr_pages > 1)
+			return NULL;
+
+		swapcache = swap_cache_get_folio(entry);
+		if (swapcache)
+			return swapcache;
+	}
+
+	return folio;
 }
 
 /*
@@ -595,7 +627,6 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		struct swap_iocb **plug)
 {
 	struct swap_info_struct *si;
-	bool page_allocated;
 	struct mempolicy *mpol;
 	pgoff_t ilx;
 	struct folio *folio;
@@ -605,13 +636,9 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		return NULL;
 
 	mpol = get_vma_policy(vma, addr, 0, &ilx);
-	folio = swap_cache_alloc_folio(entry, gfp_mask, mpol, ilx,
-				       &page_allocated);
+	folio = swap_cache_read_folio(entry, gfp_mask, mpol, ilx, plug, false);
 	mpol_cond_put(mpol);
 
-	if (page_allocated)
-		swap_read_folio(folio, plug);
-
 	put_swap_device(si);
 	return folio;
 }
@@ -696,7 +723,7 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * are fairly likely to have been swapped out from the same node.
  */
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
-				    struct mempolicy *mpol, pgoff_t ilx)
+				     struct mempolicy *mpol, pgoff_t ilx)
 {
 	struct folio *folio;
 	unsigned long entry_offset = swp_offset(entry);
@@ -706,7 +733,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = __swap_entry_to_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool page_allocated;
+	swp_entry_t ra_entry;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
@@ -723,18 +750,11 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		folio = swap_cache_alloc_folio(
-				swp_entry(swp_type(entry), offset), gfp_mask, mpol, ilx,
-				&page_allocated);
+		ra_entry = swp_entry(swp_type(entry), offset);
+		folio = swap_cache_read_folio(ra_entry, gfp_mask, mpol, ilx,
+					      &splug, offset != entry_offset);
 		if (!folio)
 			continue;
-		if (page_allocated) {
-			swap_read_folio(folio, &splug);
-			if (offset != entry_offset) {
-				folio_set_readahead(folio);
-				count_vm_event(SWAP_RA);
-			}
-		}
 		folio_put(folio);
 	}
 	blk_finish_plug(&plug);
@@ -742,11 +762,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	folio = swap_cache_alloc_folio(entry, gfp_mask, mpol, ilx,
-				       &page_allocated);
-	if (unlikely(page_allocated))
-		swap_read_folio(folio, NULL);
-	return folio;
+	return swap_cache_read_folio(entry, gfp_mask, mpol, ilx, NULL, false);
 }
 
 static int swap_vma_ra_win(struct vm_fault *vmf, unsigned long *start,
@@ -812,8 +828,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	pte_t *pte = NULL, pentry;
 	int win;
 	unsigned long start, end, addr;
-	pgoff_t ilx;
-	bool page_allocated;
+	pgoff_t ilx = targ_ilx;
 
 	win = swap_vma_ra_win(vmf, &start, &end);
 	if (win == 1)
@@ -847,19 +862,12 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 			if (!si)
 				continue;
 		}
-		folio = swap_cache_alloc_folio(entry, gfp_mask, mpol, ilx,
-					       &page_allocated);
+		folio = swap_cache_read_folio(entry, gfp_mask, mpol, ilx,
+					      &splug, addr != vmf->address);
 		if (si)
 			put_swap_device(si);
 		if (!folio)
 			continue;
-		if (page_allocated) {
-			swap_read_folio(folio, &splug);
-			if (addr != vmf->address) {
-				folio_set_readahead(folio);
-				count_vm_event(SWAP_RA);
-			}
-		}
 		folio_put(folio);
 	}
 	if (pte)
@@ -869,10 +877,8 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	lru_add_drain();
 skip:
 	/* The folio was likely read above, so no need for plugging here */
-	folio = swap_cache_alloc_folio(targ_entry, gfp_mask, mpol, targ_ilx,
-				       &page_allocated);
-	if (unlikely(page_allocated))
-		swap_read_folio(folio, NULL);
+	folio = swap_cache_read_folio(targ_entry, gfp_mask, mpol, targ_ilx,
+				      NULL, false);
 	return folio;
 }
 
diff --git a/mm/zswap.c b/mm/zswap.c
index 4b5149173b0e..e27f6e96f003 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -991,7 +991,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	pgoff_t offset = swp_offset(swpentry);
 	struct folio *folio;
 	struct mempolicy *mpol;
-	bool folio_was_allocated;
 	struct swap_info_struct *si;
 	int ret = 0;
@@ -1002,22 +1001,18 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 
 	mpol = get_task_policy(current);
 	folio = swap_cache_alloc_folio(swpentry, GFP_KERNEL, mpol,
-				       NO_INTERLEAVE_INDEX, &folio_was_allocated);
+				       NO_INTERLEAVE_INDEX);
 	put_swap_device(si);
-	if (!folio)
-		return -ENOMEM;
 
 	/*
-	 * Found an existing folio, we raced with swapin or concurrent
-	 * shrinker. We generally writeback cold folios from zswap, and
-	 * swapin means the folio just became hot, so skip this folio.
-	 * For unlikely concurrent shrinker case, it will be unlinked
-	 * and freed when invalidated by the concurrent shrinker anyway.
+	 * Swap cache allocation might fail due to OOM, or the entry
+	 * may already be cached due to concurrent swapin or have been
+	 * freed. If already cached, a concurrent swapin made the folio
+	 * hot, so skip it. For the unlikely concurrent shrinker case,
+	 * it will be unlinked and freed when invalidated anyway.
 	 */
-	if (!folio_was_allocated) {
-		ret = -EEXIST;
-		goto out;
-	}
+	if (IS_ERR(folio))
+		return PTR_ERR(folio);
 
 	/*
 	 * folio is locked, and the swapcache is now secured against
@@ -1057,7 +1052,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	__swap_writepage(folio, NULL);
 
 out:
-	if (ret && ret != -EEXIST) {
+	if (ret) {
 		swap_cache_del_folio(folio);
 		folio_unlock(folio);
 	}

-- 
2.53.0
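
For readers less familiar with the convention relied on above: swap_cache_alloc_folio()
now reports failure through an error pointer (the ERR_PTR()/IS_ERR()/PTR_ERR() helpers
from <linux/err.h>) rather than through a bool output argument. The stand-alone sketch
below is illustrative only and not part of the patch; it uses simplified user-space
stand-ins for the kernel helpers and a hypothetical alloc_object() callee to show the
caller pattern the new wrappers build on.

/*
 * Illustrative, user-space sketch of the ERR_PTR-style return convention.
 * The helpers below are simplified stand-ins for the kernel's <linux/err.h>;
 * alloc_object() is a hypothetical callee standing in for the allocation helper.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)	{ return (void *)error; }
static inline long PTR_ERR(const void *ptr)	{ return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct object { int id; };

/* Returns a valid pointer on success, ERR_PTR(-errno) on failure. */
static struct object *alloc_object(int fail_with)
{
	struct object *obj;

	if (fail_with)
		return ERR_PTR(-fail_with);	/* e.g. ENOMEM or EEXIST */

	obj = malloc(sizeof(*obj));
	if (!obj)
		return ERR_PTR(-ENOMEM);
	obj->id = 42;
	return obj;
}

int main(void)
{
	/* Caller pattern: no bool out-parameter, just check the returned pointer. */
	struct object *obj = alloc_object(0);

	if (IS_ERR(obj)) {
		fprintf(stderr, "allocation failed: %ld\n", PTR_ERR(obj));
		return 1;
	}
	printf("allocated object %d\n", obj->id);
	free(obj);
	return 0;
}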