From mboxrd@z Thu Jan 1 00:00:00 1970
From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Fri, 27 Mar 2026 22:26:53 +0800
Subject: Re: [PATCH v1 02/10] mm/khugepaged: remove READ_ONLY_THP_FOR_FS check
To: "Lorenzo Stoakes (Oracle)"
Cc: Zi Yan, "Matthew Wilcox (Oracle)", Song Liu, Chris Mason, David Sterba,
 Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton,
 David Hildenbrand, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain,
 Barry Song, Lance Yang, Vlastimil Babka, Mike Rapoport,
 Suren Baghdasaryan, Michal Hocko, Shuah Khan,
 linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
In-Reply-To: <89c8b93c-f6dd-4d8e-bcee-3c1ff1c04295@lucifer.local>
References: <20260327014255.2058916-1-ziy@nvidia.com>
 <20260327014255.2058916-3-ziy@nvidia.com>
 <7fd90f5e-65b5-4734-abb2-77b22c733af5@linux.alibaba.com>
 <8f5119a1-9aa9-4a39-ac94-ca1631db26e1@lucifer.local>
 <89c8b93c-f6dd-4d8e-bcee-3c1ff1c04295@lucifer.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
On 3/27/26 10:12 PM, Lorenzo Stoakes (Oracle) wrote:
> On Fri, Mar 27, 2026 at 09:45:03PM +0800, Baolin Wang wrote:
>>
>> On 3/27/26 8:02 PM, Lorenzo Stoakes (Oracle) wrote:
>>> On Fri, Mar 27, 2026 at 05:44:49PM +0800, Baolin Wang wrote:
>>>>
>>>> On 3/27/26 9:42 AM, Zi Yan wrote:
>>>>> collapse_file() requires FSes supporting large folio with at least
>>>>> PMD_ORDER, so replace the READ_ONLY_THP_FOR_FS check with that.
>>>>> shmem with huge option turned on also sets large folio order on
>>>>> mapping, so the check also applies to shmem.
>>>>>
>>>>> While at it, replace VM_BUG_ON with returning failure values.
>>>>>
>>>>> Signed-off-by: Zi Yan
>>>>> ---
>>>>>  mm/khugepaged.c | 7 +++++--
>>>>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>>> index d06d84219e1b..45b12ffb1550 100644
>>>>> --- a/mm/khugepaged.c
>>>>> +++ b/mm/khugepaged.c
>>>>> @@ -1899,8 +1899,11 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>>>>>  	int nr_none = 0;
>>>>>  	bool is_shmem = shmem_file(file);
>>>>>
>>>>> -	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
>>>>> -	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
>>>>> +	/* "huge" shmem sets mapping folio order and passes the check below */
>>>>> +	if (mapping_max_folio_order(mapping) < PMD_ORDER)
>>>>> +		return SCAN_FAIL;
>>>>
>>>> This is not true for anonymous shmem, since its large order allocation
>>>> logic is similar to anonymous memory. That means it will not call
>>>> mapping_set_large_folios() for anonymous shmem.
>>>>
>>>> So I think the check should be:
>>>>
>>>> 	if (!is_shmem && mapping_max_folio_order(mapping) < PMD_ORDER)
>>>> 		return SCAN_FAIL;
>>>
>>> Hmm but in shmem_init() we have:
>>>
>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> 	if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
>>> 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
>>> 	else
>>> 		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
>>>
>>> 	/*
>>> 	 * Default to setting PMD-sized THP to inherit the global setting and
>>> 	 * disable all other multi-size THPs.
>>> 	 */
>>> 	if (!shmem_orders_configured)
>>> 		huge_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER);
>>> #endif
>>>
>>> And shm_mnt->mnt_sb is the superblock used for anon shmem. Also
>>> shmem_enabled_store() updates that if necessary.
>>>
>>> So we're still fine right?
>>>
>>> __shmem_file_setup() (used for anon shmem) calls shmem_get_inode() ->
>>> __shmem_get_inode() which has:
>>>
>>> 	if (sbinfo->huge)
>>> 		mapping_set_large_folios(inode->i_mapping);
>>>
>>> Shared for both anon shmem and tmpfs-style shmem.
>>>
>>> So I think it's fine as-is.
>>
>> I'm afraid not. Sorry, I should have been clearer.
>>
>> First, anonymous shmem large order allocation is dynamically controlled
>> via the global interface
>> (/sys/kernel/mm/transparent_hugepage/shmem_enabled) and the mTHP
>> interfaces
>> (/sys/kernel/mm/transparent_hugepage/hugepages-*kB/shmem_enabled).
>>
>> This means that during anonymous shmem initialization, these interfaces
>> might be set to 'never', so it will not call mapping_set_large_folios()
>> because sbinfo->huge is 'SHMEM_HUGE_NEVER'.
>>
>> Even if shmem large order allocation is subsequently enabled via the
>> interfaces, __shmem_file_setup() -> mapping_set_large_folios() is not
>> called again.
>
> I see your point, oh this is all a bit of a mess...
>
> It feels like entirely the wrong abstraction anyway, since at best you're
> getting a global 'is enabled'.
>
> I guess what happened before was we'd never call into this with
> ! r/o thp for fs && ! is_shmem.

Right.

> But now we are allowing it, but should STILL be gating on !is_shmem, so
> yeah your suggestion is correct I think actually.
>
> I do hate:
>
> 	if (!is_shmem && mapping_max_folio_order(mapping) < PMD_ORDER)
>
> As a bit of code though. It's horrible.

Indeed.

> Let's abstract that...
>
> It'd be nice if we could find a way to clean things up in the lead-up to
> changes in series like this instead of sticking with the mess, but I
> guess since it mostly removes stuff that's ok for now.

I think this check can be removed from this patch. During khugepaged's
scan, it will call thp_vma_allowable_order() to check if the VMA is
allowed to collapse into a PMD.
Specifically, within the call chain thp_vma_allowable_order() ->
__thp_vma_allowable_orders(), shmem is checked via
shmem_allowable_huge_orders(), while other FSes are checked via
file_thp_enabled(). For those other filesystems, patch 5 has already added
the following check, which I think is sufficient to filter out those FSes
that do not support large folios:

	if (mapping_max_folio_order(inode->i_mapping) < PMD_ORDER)
		return false;

>> Anonymous shmem behaves similarly to anonymous pages: it is controlled
>> by the 'shmem_enabled' interfaces and uses shmem_allowable_huge_orders()
>> to check for allowed large orders, rather than relying on
>> mapping_max_folio_order().
>>
>> The mapping_max_folio_order() is intended to control large page
>> allocation only for tmpfs mounts. Therefore, I find the current code
>> confusing and think it needs to be fixed:
>>
>> 	/* Don't consider 'deny' for emergencies and 'force' for testing */
>> 	if (sb != shm_mnt->mnt_sb && sbinfo->huge)
>> 		mapping_set_large_folios(inode->i_mapping);
>
> Cheers, Lorenzo