From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 14 Jul 2025 15:53:35 +0200
Subject: Re: [PATCH v2 2/2] mm/memory_hotplug: fix hwpoisoned large folio handling in do_migrate_range
From: Pankaj Raghav <kernel@pankajraghav.com>
To: Zi Yan, David Hildenbrand, Matthew Wilcox, Luis Chamberlain
Cc: Jinjiang Tu, Oscar Salvador, akpm@linux-foundation.org, linmiaohe@huawei.com, mhocko@kernel.org, linux-mm@kvack.org, wangkefeng.wang@huawei.com
In-Reply-To: <1D589FE5-3515-4ED5-B12E-D5CE23BA5D13@nvidia.com>
References: <20250627125747.3094074-1-tujinjiang@huawei.com> <20250627125747.3094074-3-tujinjiang@huawei.com>
Hi Zi Yan,

> > Probably the WARN_ON can indeed trigger now.
> >
> > @Zi Yan, on a related note ...
> >
> > in memory_failure(), we call try_to_split_thp_page(). If it works,
> > we assume that we have a small folio.
> >
> > But that is not the case if split_huge_page() cannot split it to
> > order-0 ... min_order_for_split().
>
> Right. memory failure needs to learn about this. Either poison every
> subpage or write back if necessary and drop the page cache folio.
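To make the pitfall concrete, here is a toy model in plain C (not kernel code; `folio_model`, `split_folio_model()` and friends are made-up names for illustration) of why a "successful" split no longer implies an order-0 folio once a minimum mapping order is involved:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of folio splitting with a minimum mapping order.
 * With large-block filesystems, a folio cannot be split below
 * min_order, so a split that "succeeds" may still leave a large
 * folio behind -- breaking callers that assume split == small page.
 */
struct folio_model {
	unsigned int order;     /* current folio order (0 == single page) */
	unsigned int min_order; /* minimum mapping order; 0 for most FSs */
};

/* Split as far as allowed; we can never go below min_order. */
static bool split_folio_model(struct folio_model *f)
{
	if (f->order <= f->min_order)
		return false;           /* nothing we are allowed to split */
	f->order = f->min_order;        /* "success", but maybe not order-0 */
	return true;
}

static bool is_small(const struct folio_model *f)
{
	return f->order == 0;
}
```

The first case below is exactly the assumption being questioned: the split reports success, yet the folio is still large.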
> > I'm afraid we have more such code that does not expect that, if
> > split_huge_page() succeeds, we still have a large folio ...
>
> I did some searching; here are the users of split_huge_page*():
>
> 1. ksm: since it is anonymous only, the split always goes to order-0;
> 2. userfaultfd: it is also anonymous;
> 3. madvise cold or pageout: a large pagecache folio will be split if it
>    is partially mapped, and it will retry. It might cause a deadlock if
>    the folio has a min order.
> 4. shmem: the split always goes to order-0;
> 5. memory-failure: see above.
>
> So we will need to take care of the madvise cold or pageout case?
>
> Hi Matthew, Pankaj, and Luis,
>
> Is it possible to partially map a min-order folio in a fs with LBS?

Typically, filesystems match the min order with the block size of the
filesystem. As a filesystem block is the smallest unit of data that the
filesystem uses to store file data on disk, we cannot partially map them.
So if I understand your question correctly, the answer is no.

> Based on my understanding of madvise_cold_or_pageout_pte_range(), it
> seems that it will try to split the folio and expects an order-0 folio
> after a successful split. But splitting a min-order folio is a nop. It
> could lead to a deadlock in the code. Or did I just get it wrong?

Yeah, we have checks to make sure we never split a folio below the
min order.

I hope this answers your question :)

--
Pankaj