From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 8 Mar 2025 11:03:59 +0800
Subject: Re: [PATCH v3] mm/migrate: fix shmem xarray update during migration
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Zi Yan, Liu Shixin, linux-mm@kvack.org
Cc: Andrew Morton, Barry Song, David Hildenbrand, Kefeng Wang, Lance Yang, Ryan Roberts, Matthew Wilcox, Hugh Dickins, Charan Teja Kalla, linux-kernel@vger.kernel.org, stable@vger.kernel.org
In-Reply-To: <20250305200403.2822855-1-ziy@nvidia.com>
References: <20250305200403.2822855-1-ziy@nvidia.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 2025/3/6 04:04, Zi Yan wrote:
> A shmem folio can be either in page cache or in swap
> cache, but not at the same time. Namely, once it is in swap cache,
> folio->mapping should be NULL, and the folio is no longer in a shmem
> mapping.
>
> In __folio_migrate_mapping(), to determine the number of xarray entries
> to update, folio_test_swapbacked() is used, but that conflates the shmem
> in page cache case and the shmem in swap cache case. It leads to xarray
> multi-index entry corruption, since it turns a sibling entry into a
> normal entry during xas_store() (see [1] for a userspace reproduction).
> Fix it by only using folio_test_swapcache() to determine whether the
> xarray is storing swap cache entries or not, to choose the right number
> of xarray entries to update.
>
> [1] https://lore.kernel.org/linux-mm/Z8idPCkaJW1IChjT@casper.infradead.org/
>
> Note:
> In __split_huge_page(), folio_test_anon() && folio_test_swapcache() is
> used to get the swap_cache address space, but that ignores the shmem
> folio in swap cache case. It could lead to a NULL pointer dereference
> when an in-swap-cache shmem folio is split at __xa_store(), since
> !folio_test_anon() is true and folio->mapping is NULL. But fortunately,
> its caller split_huge_page_to_list_to_order() bails out early with EBUSY
> when folio->mapping is NULL. So no need to take care of it here.
>
> Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
> Reported-by: Liu Shixin
> Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
> Suggested-by: Hugh Dickins
> Signed-off-by: Zi Yan
> Cc: stable@vger.kernel.org

Thanks for fixing the issue.
Reviewed-by: Baolin Wang

> ---
>  mm/migrate.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index fb4afd31baf0..c0adea67cd62 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -518,15 +518,13 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>  	if (folio_test_anon(folio) && folio_test_large(folio))
>  		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
>  	folio_ref_add(newfolio, nr); /* add cache reference */
> -	if (folio_test_swapbacked(folio)) {
> +	if (folio_test_swapbacked(folio))
>  		__folio_set_swapbacked(newfolio);
> -		if (folio_test_swapcache(folio)) {
> -			folio_set_swapcache(newfolio);
> -			newfolio->private = folio_get_private(folio);
> -		}
> +	if (folio_test_swapcache(folio)) {
> +		folio_set_swapcache(newfolio);
> +		newfolio->private = folio_get_private(folio);
>  		entries = nr;
>  	} else {
> -		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>  		entries = 1;
>  	}
>