From: Wei Yang <richard.weiyang@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Wei Yang <richard.weiyang@gmail.com>,
	akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, baolin.wang@linux.alibaba.com,
	Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
	dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
	linux-mm@kvack.org, stable@vger.kernel.org
Subject: Re: [PATCH] mm/huge_memory: fix NULL pointer dereference when splitting shmem folio in swap cache
Date: Wed, 19 Nov 2025 02:56:20 +0000	[thread overview]
Message-ID: <20251119025620.mnumfajqrojfzv6l@master> (raw)
In-Reply-To: <A5303358-5FA3-4412-89B2-FF51DA759E28@nvidia.com>

On Tue, Nov 18, 2025 at 09:32:05PM -0500, Zi Yan wrote:
>On 18 Nov 2025, at 20:26, Wei Yang wrote:
>
>> Commit c010d47f107f ("mm: thp: split huge page to any lower order
>> pages") introduced an early check, via mapping->flags, on whether the
>> mapping supports large folios before proceeding with the split work.
>>
>> This check introduced a bug: for shmem folios in the swap cache, the
>> mapping pointer can be NULL. Accessing mapping->flags in this state
>> leads directly to a NULL pointer dereference.
>>
>> This commit fixes the issue by moving the check for mapping != NULL
>> before any attempt to access mapping->flags.
>>
>> This fix necessarily changes the return value from -EBUSY to -EINVAL
>> when mapping is NULL. A review of the current callers shows that none
>> of them differentiate between these two error codes, so the change is
>> safe.
>>
>> Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: <stable@vger.kernel.org>
>>
>> ---
>>
>> This patch is based on current mm-new, latest commit:
>>
>>     056b93566a35 mm/vmalloc: warn only once when vmalloc detect invalid gfp flags
>>
>> Backport note:
>>
>> The current code evolved from the original commit through the
>> following four changes; each needs a corresponding adjustment when
>> backporting.
>>
>> commit c010d47f107f609b9f4d6a103b6dfc53889049e9
>> Author: Zi Yan <ziy@nvidia.com>
>> Date:   Mon Feb 26 15:55:33 2024 -0500
>>
>>     mm: thp: split huge page to any lower order pages
>>
>> commit 6a50c9b512f7734bc356f4bd47885a6f7c98491a
>> Author: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>> Date:   Fri Jun 7 17:40:48 2024 +0800
>>
>>     mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
>
>This is a hotfix for commit c010d47f107f, so the backport should stop
>at this point.
>
>>
>> commit 9b2f764933eb5e3ac9ebba26e3341529219c4401
>> Author: Zi Yan <ziy@nvidia.com>
>> Date:   Wed Jan 22 11:19:27 2025 -0500
>>
>>     mm/huge_memory: allow split shmem large folio to any lower order
>>
>> commit 58729c04cf1092b87aeef0bf0998c9e2e4771133
>> Author: Zi Yan <ziy@nvidia.com>
>> Date:   Fri Mar 7 12:39:57 2025 -0500
>>
>>     mm/huge_memory: add buddy allocator like (non-uniform) folio_split()
>> ---
>>  mm/huge_memory.c | 68 +++++++++++++++++++++++++-----------------------
>>  1 file changed, 35 insertions(+), 33 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 7c69572b6c3f..8701c3eef05f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3696,29 +3696,42 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>  				"Cannot split to order-1 folio");
>>  		if (new_order == 1)
>>  			return false;
>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> -		    !mapping_large_folio_support(folio->mapping)) {
>> -			/*
>> -			 * We can always split a folio down to a single page
>> -			 * (new_order == 0) uniformly.
>> -			 *
>> -			 * For any other scenario
>> -			 *   a) uniform split targeting a large folio
>> -			 *      (new_order > 0)
>> -			 *   b) any non-uniform split
>> -			 * we must confirm that the file system supports large
>> -			 * folios.
>> -			 *
>> -			 * Note that we might still have THPs in such
>> -			 * mappings, which is created from khugepaged when
>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> -			 * case, the mapping does not actually support large
>> -			 * folios properly.
>> -			 */
>> -			VM_WARN_ONCE(warns,
>> -				"Cannot split file folio to non-0 order");
>> +	} else {
>> +		const struct address_space *mapping = folio->mapping;
>> +
>> +		/* Truncated ? */
>> +		/*
>> +		 * TODO: add support for large shmem folio in swap cache.
>> +		 * When shmem is in swap cache, mapping is NULL and
>> +		 * folio_test_swapcache() is true.
>> +		 */
>> +		if (!mapping)
>>  			return false;
>> +
>> +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> +			    !mapping_large_folio_support(folio->mapping)) {
>
>folio->mapping can just be mapping here. The commits listed above would
>mostly need separate backport patches anyway, so keeping folio->mapping
>as in the original code does not make backporting easier.
>

Thanks, I think you are right. I tried to keep the footprint small for
backporting, but that does not seem to buy us much. The sketch below
shows the resulting cleanup.
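
A minimal sketch of the suggested cleanup, reusing the local variable
this patch introduces (illustrative only, not a tested diff):

	} else {
		const struct address_space *mapping = folio->mapping;

		/*
		 * Shmem folios that live only in the swap cache have a
		 * NULL mapping; bail out before touching mapping->flags.
		 */
		if (!mapping)
			return false;

		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
			    !mapping_large_folio_support(mapping)) {
				...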

@Andrew

If an updated version is necessary, please let me know.
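
For reviewers' context, the failing sequence before this patch was
roughly the following (illustrative, with names taken from the quoted
diff):

	/* In folio_split_supported(), before this patch: */
	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
	    !mapping_large_folio_support(folio->mapping)) {
		/*
		 * A shmem folio that resides only in the swap cache has
		 * folio->mapping == NULL while folio_test_swapcache()
		 * is true; mapping_large_folio_support() reads
		 * mapping->flags and dereferences NULL here.
		 */
		VM_WARN_ONCE(warns, "Cannot split file folio to non-0 order");
		return false;
	}

	/*
	 * The !mapping check lived in __folio_split(), which runs only
	 * after the check above -- too late to help.
	 */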

>> +				/*
>> +				 * We can always split a folio down to a
>> +				 * single page (new_order == 0) uniformly.
>> +				 *
>> +				 * For any other scenario
>> +				 *   a) uniform split targeting a large folio
>> +				 *      (new_order > 0)
>> +				 *   b) any non-uniform split
>> +				 * we must confirm that the file system
>> +				 * supports large folios.
>> +				 *
>> +				 * Note that we might still have THPs in such
>> +				 * mappings, which is created from khugepaged
>> +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>> +				 * enabled. But in that case, the mapping does
>> +				 * not actually support large folios properly.
>> +				 */
>> +				VM_WARN_ONCE(warns,
>> +					"Cannot split file folio to non-0 order");
>> +				return false;
>> +			}
>>  		}
>>  	}
>>
>> @@ -3965,17 +3978,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>
>>  		mapping = folio->mapping;
>>
>> -		/* Truncated ? */
>> -		/*
>> -		 * TODO: add support for large shmem folio in swap cache.
>> -		 * When shmem is in swap cache, mapping is NULL and
>> -		 * folio_test_swapcache() is true.
>> -		 */
>> -		if (!mapping) {
>> -			ret = -EBUSY;
>> -			goto out;
>> -		}
>> -
>>  		min_order = mapping_min_folio_order(folio->mapping);
>>  		if (new_order < min_order) {
>>  			ret = -EINVAL;
>> -- 
>> 2.34.1
>
>Otherwise, LGTM. Thank you for fixing the issue.
>
>Reviewed-by: Zi Yan <ziy@nvidia.com>
>
>Best Regards,
>Yan, Zi

-- 
Wei Yang
Help you, Help me

