linux-mm.kvack.org archive mirror
From: Lance Yang <lance.yang@linux.dev>
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, david@redhat.com,
	baolin.wang@linux.alibaba.com, chrisl@kernel.org,
	kasong@tencent.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com,
	ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
	huang.ying.caritas@gmail.com, zhengtangquan@oppo.com,
	riel@surriel.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	harry.yoo@oracle.com, mingzhe.yang@ly.com,
	stable@vger.kernel.org, Lance Yang <ioworker0@gmail.com>
Subject: Re: [PATCH v2 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap
Date: Fri, 27 Jun 2025 15:15:41 +0800	[thread overview]
Message-ID: <1d39b66e-4009-4143-a8fa-5d876bc1f7e7@linux.dev> (raw)
In-Reply-To: <CAGsJ_4z+DU-FhNk9vkS-epdxgUMjrCvh31ZBwoAs98uWnbTK-A@mail.gmail.com>



On 2025/6/27 14:55, Barry Song wrote:
> On Fri, Jun 27, 2025 at 6:52 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Fri, Jun 27, 2025 at 6:23 PM Lance Yang <ioworker0@gmail.com> wrote:
>>>
>>> From: Lance Yang <lance.yang@linux.dev>
>>>
>>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>>> can read past the end of a PTE table if a large folio is mapped starting at
>>> the last entry of that table. It would be quite rare in practice, as
>>> MADV_FREE typically splits the large folio ;)
>>>
>>> So let's fix the potential out-of-bounds read by refactoring the logic into
>>> a new helper, folio_unmap_pte_batch().
>>>
>>> The new helper now correctly calculates the safe number of pages to scan by
>>> limiting the operation to the boundaries of the current VMA and the PTE
>>> table.
>>>
>>> In addition, the "all-or-nothing" batching restriction is removed to
>>> support partial batches. The reference counting is also cleaned up to use
>>> folio_put_refs().
>>>
>>> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>>>
>>
>> What about this?
>>
>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>> may read past the end of a PTE table when a large folio spans across two PMDs,
>> particularly after being remapped with mremap(). This patch fixes the
>> potential out-of-bounds access by capping the batch at vm_end and the PMD
>> boundary.
>>
>> It also refactors the logic into a new helper, folio_unmap_pte_batch(),
>> which supports batching between 1 and folio_nr_pages. This improves code
>> clarity. Note that such cases are rare in practice, as MADV_FREE typically
>> splits large folios.
> 
> Sorry, I meant that MADV_FREE typically splits large folios if the specified
> range doesn't cover the entire folio.

Hmm... I got it wrong as well :( It's the partial coverage that triggers
the split.

How about this revised version:

As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
may read past the end of a PTE table when a large folio spans across two
PMDs, particularly after being remapped with mremap(). This patch fixes
the potential out-of-bounds access by capping the batch at vm_end and the
PMD boundary.

It also refactors the logic into a new helper, folio_unmap_pte_batch(),
which supports batching between 1 and folio_nr_pages. This improves code
clarity. Note that such boundary-straddling cases are rare in practice, as
MADV_FREE will typically split a large folio if the advice range does not
cover the entire folio.




Thread overview: 10+ messages
2025-06-27  6:23 Lance Yang
2025-06-27  6:52 ` Barry Song
2025-06-27  6:55   ` Barry Song
2025-06-27  7:15     ` Lance Yang [this message]
2025-06-27  7:36       ` Barry Song
2025-06-27 10:13         ` David Hildenbrand
2025-06-27 15:29           ` Lance Yang
2025-06-27 15:49             ` David Hildenbrand
2025-06-27 22:42             ` Barry Song
2025-06-27 20:09 ` Andrew Morton
