From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Nhat Pham <nphamcs@gmail.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, chengming.zhou@linux.dev,
        linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] zswap: do not crash the kernel on decompression failure
Date: Thu, 27 Feb 2025 01:19:31 +0000
References: <20250227001445.1099203-1-nphamcs@gmail.com>
In-Reply-To: <20250227001445.1099203-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Feb 26, 2025 at 04:14:45PM -0800, Nhat Pham wrote:
> Currently, we crash the kernel when a decompression failure occurs in
> zswap (either because of memory corruption, or a bug in the compression
> algorithm). This is overkill. We should only SIGBUS the unfortunate
> process asking for the zswap entry on zswap load, and skip the corrupted
> entry in zswap writeback. The former is accomplished by returning true
> from zswap_load(), indicating that zswap owns the swapped out content,
> but without flagging the folio as up-to-date. The process trying to swap
> in the page will check for the uptodate folio flag and SIGBUS (see
> do_swap_page() in mm/memory.c for more details).

We should call out the extra xarray walks and their perf impact (if
any).

> 
> See [1] for a recent upstream discussion about this.
> 
> [1]: https://lore.kernel.org/all/ZsiLElTykamcYZ6J@casper.infradead.org/
> 
> Suggested-by: Matthew Wilcox
> Suggested-by: Yosry Ahmed
> Signed-off-by: Nhat Pham
> ---
>  mm/zswap.c | 94 ++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 67 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 6dbf31bd2218..e4a2157bbc64 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -62,6 +62,8 @@ static u64 zswap_reject_reclaim_fail;
>  static u64 zswap_reject_compress_fail;
>  /* Compressed page was too big for the allocator to (optimally) store */
>  static u64 zswap_reject_compress_poor;
> +/* Load or writeback failed due to decompression failure */
> +static u64 zswap_decompress_fail;
>  /* Store failed because underlying allocator could not get memory */
>  static u64 zswap_reject_alloc_fail;
>  /* Store failed because the entry metadata could not be allocated (rare) */
> @@ -996,11 +998,13 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
>          return comp_ret == 0 && alloc_ret == 0;
>  }
> 
> -static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
> +static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
>  {
>          struct zpool *zpool = entry->pool->zpool;
>          struct scatterlist input, output;
>          struct crypto_acomp_ctx *acomp_ctx;
> +        int decomp_ret;
> +        bool ret = true;
>          u8 *src;
> 
>          acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
> @@ -1025,12 +1029,25 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
>          sg_init_table(&output, 1);
>          sg_set_folio(&output, folio, PAGE_SIZE, 0);
>          acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
> -        BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
> -        BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
> +        decomp_ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
> +        if (decomp_ret || acomp_ctx->req->dlen != PAGE_SIZE) {
> +                ret = false;
> +                zswap_decompress_fail++;
> +                pr_alert_ratelimited(
> +                        "decompression failed with returned value %d on zswap entry with swap entry value %08lx, swap type %d, and swap offset %lu. compression algorithm is %s. compressed size is %u bytes, and decompressed size is %u bytes.\n",

This is a very long line. I think we should break it into multiple
lines. I know multiline strings are frowned upon by checkpatch, but
they do exist (see the warning in mem_cgroup_oom_control_write() for
example), and they are definitely better than a very long line imo.
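Something like this, for example (untested, just to illustrate; it
relies on adjacent string literal concatenation and keeps the message
and arguments exactly as in the patch):

                pr_alert_ratelimited(
                        "decompression failed with returned value %d on zswap entry "
                        "with swap entry value %08lx, swap type %d, and swap offset "
                        "%lu. compression algorithm is %s. compressed size is %u "
                        "bytes, and decompressed size is %u bytes.\n",
                        decomp_ret, entry->swpentry.val,
                        swp_type(entry->swpentry), swp_offset(entry->swpentry),
                        entry->pool->tfm_name, entry->length,
                        acomp_ctx->req->dlen);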
> +                        decomp_ret,
> +                        entry->swpentry.val,
> +                        swp_type(entry->swpentry),
> +                        swp_offset(entry->swpentry),
> +                        entry->pool->tfm_name,
> +                        entry->length,
> +                        acomp_ctx->req->dlen);
> +        }
> 
>          if (src != acomp_ctx->buffer)
>                  zpool_unmap_handle(zpool, entry->handle);
>          acomp_ctx_put_unlock(acomp_ctx);
> +        return ret;

Not a big deal but we could probably store the length in a local
variable and move the check here, and avoid needing 'ret'.
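A rough and completely untested sketch of what I mean, assuming 'dlen'
is a new unsigned int local and that nothing in the failure path needs
to run while still holding the acomp_ctx lock:

        decomp_ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req),
                                     &acomp_ctx->wait);
        /* snapshot the length before dropping the per-CPU acomp_ctx */
        dlen = acomp_ctx->req->dlen;

        if (src != acomp_ctx->buffer)
                zpool_unmap_handle(zpool, entry->handle);
        acomp_ctx_put_unlock(acomp_ctx);

        if (!decomp_ret && dlen == PAGE_SIZE)
                return true;

        zswap_decompress_fail++;
        pr_alert_ratelimited(
                "decompression failed with returned value %d on zswap entry "
                "with swap entry value %08lx, swap type %d, and swap offset "
                "%lu. compression algorithm is %s. compressed size is %u "
                "bytes, and decompressed size is %u bytes.\n",
                decomp_ret, entry->swpentry.val,
                swp_type(entry->swpentry), swp_offset(entry->swpentry),
                entry->pool->tfm_name, entry->length, dlen);
        return false;

That keeps all the failure handling in one place at the end of the
function.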
>  }
> 
>  /*********************************
> @@ -1060,6 +1077,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>          struct writeback_control wbc = {
>                  .sync_mode = WB_SYNC_NONE,
>          };
> +        int ret = 0;
> 
>          /* try to allocate swap cache folio */
>          si = get_swap_device(swpentry);
> @@ -1081,8 +1099,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>           * and freed when invalidated by the concurrent shrinker anyway.
>           */
>          if (!folio_was_allocated) {
> -                folio_put(folio);
> -                return -EEXIST;
> +                ret = -EEXIST;
> +                goto put_folio;
>          }
> 
>          /*
> @@ -1095,14 +1113,17 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>           * be dereferenced.
>           */
>          tree = swap_zswap_tree(swpentry);
> -        if (entry != xa_cmpxchg(tree, offset, entry, NULL, GFP_KERNEL)) {
> -                delete_from_swap_cache(folio);
> -                folio_unlock(folio);
> -                folio_put(folio);
> -                return -ENOMEM;
> +        if (entry != xa_load(tree, offset)) {
> +                ret = -ENOMEM;
> +                goto delete_unlock;
> +        }
> +
> +        if (!zswap_decompress(entry, folio)) {
> +                ret = -EIO;
> +                goto delete_unlock;
>          }
> 
> -        zswap_decompress(entry, folio);
> +        xa_erase(tree, offset);
> 
>          count_vm_event(ZSWPWB);
>          if (entry->objcg)
> @@ -1118,9 +1139,14 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> 
>          /* start writeback */
>          __swap_writepage(folio, &wbc);
> -        folio_put(folio);
> 
> -        return 0;
> +put_folio:
> +        folio_put(folio);
> +        return ret;
> +delete_unlock:
> +        delete_from_swap_cache(folio);
> +        folio_unlock(folio);
> +        goto put_folio;

I think I suggested a way to avoid this goto in v1:
https://lore.kernel.org/lkml/Z782SPcJI8DFISRa@google.com/. Did this not
work out?

>  }
> 
>  /*********************************
> @@ -1620,6 +1646,20 @@ bool zswap_store(struct folio *folio)
>          return ret;
>  }
> 
> +/**
> + * zswap_load() - load a page from zswap
> + * @folio: folio to load
> + *
> + * Returns: true if zswap owns the swapped out contents, false otherwise.
> + *
> + * Note that the zswap_load() return value doesn't indicate success or failure,
> + * but whether zswap owns the swapped out contents. This MUST return true if
> + * zswap does own the swapped out contents, even if it fails to write the
> + * contents to the folio. Otherwise, the caller will try to read garbage from
> + * the backend.
> + *
> + * Success is signaled by marking the folio uptodate.
> + */
>  bool zswap_load(struct folio *folio)
>  {
>          swp_entry_t swp = folio->swap;
> @@ -1644,6 +1684,17 @@ bool zswap_load(struct folio *folio)

The comment that exists here (not visible in the diff) should be
abbreviated now that we already explained the whole uptodate thing
above, right?

>          if (WARN_ON_ONCE(folio_test_large(folio)))
>                  return true;
> 
> +        entry = xa_load(tree, offset);
> +        if (!entry)
> +                return false;
> +

A small comment here pointing out that we are deliberately not setting
uptodate because of the failure may make things more obvious, or do you
think that's not needed?
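For example, something like this on top of the check below (the wording
is just a suggestion, based on the explanation in the commit message):

        /*
         * Decompression failed: return true because zswap still owns the
         * swapped out content, but deliberately leave the folio !uptodate
         * so that the faulting process gets a SIGBUS from do_swap_page()
         * instead of reading garbage.
         */
        if (!zswap_decompress(entry, folio))
                return true;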
> +        if (!zswap_decompress(entry, folio))
> +                return true;
> +
> +        count_vm_event(ZSWPIN);
> +        if (entry->objcg)
> +                count_objcg_events(entry->objcg, ZSWPIN, 1);
> +
>          /*
>           * When reading into the swapcache, invalidate our entry. The
>           * swapcache can be the authoritative owner of the page and
> @@ -1656,21 +1707,8 @@ bool zswap_load(struct folio *folio)
>           * files, which reads into a private page and may free it if
>           * the fault fails. We remain the primary owner of the entry.)
>           */
> -        if (swapcache)
> -                entry = xa_erase(tree, offset);
> -        else
> -                entry = xa_load(tree, offset);
> -
> -        if (!entry)
> -                return false;
> -
> -        zswap_decompress(entry, folio);
> -
> -        count_vm_event(ZSWPIN);
> -        if (entry->objcg)
> -                count_objcg_events(entry->objcg, ZSWPIN, 1);
> -
>          if (swapcache) {
> +                xa_erase(tree, offset);
>                  zswap_entry_free(entry);
>                  folio_mark_dirty(folio);
>          }
> @@ -1771,6 +1809,8 @@ static int zswap_debugfs_init(void)
>                             zswap_debugfs_root, &zswap_reject_compress_fail);
>          debugfs_create_u64("reject_compress_poor", 0444,
>                             zswap_debugfs_root, &zswap_reject_compress_poor);
> +        debugfs_create_u64("decompress_fail", 0444,
> +                           zswap_debugfs_root, &zswap_decompress_fail);
>          debugfs_create_u64("written_back_pages", 0444,
>                             zswap_debugfs_root, &zswap_written_back_pages);
>          debugfs_create_file("pool_total_size", 0444,
> 
> base-commit: 598d34afeca6bb10554846cf157a3ded8729516c
> -- 
> 2.43.5