From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: David Hildenbrand <david@redhat.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: ying.huang@intel.com, linux-mm@kvack.org,
John Hubbard <jhubbard@nvidia.com>,
Mel Gorman <mgorman@techsingularity.net>,
Ryan Roberts <ryan.roberts@arm.com>,
liushixin2@huawei.com
Subject: Re: [PATCH] mm: fix possible OOB in numa_rebuild_large_mapping()
Date: Wed, 12 Jun 2024 09:16:29 +0800
Message-ID: <dbeebf56-61c9-414d-84db-1db9420f2259@linux.alibaba.com>
In-Reply-To: <f2760bc3-5e5b-4e24-b444-fd3f8e5f6306@redhat.com>

On 2024/6/11 20:32, David Hildenbrand wrote:
> On 07.06.24 12:32, Kefeng Wang wrote:
>> During the pagefault, a large folio is mapped at a folio-size-aligned
>> virtual address, e.g. 'addr = ALIGN_DOWN(vmf->address, nr_pages *
>> PAGE_SIZE)' in do_anonymous_page(). After an mremap(), however, the
>> virtual address is only required to be PAGE_SIZE aligned, and the ptes
>> are moved to the new location in move_page_tables(), so traversing the
>> new ptes in numa_rebuild_large_mapping() hits the following issue:
>>
>> Unable to handle kernel paging request at virtual address 00000a80c021a788
>> Mem abort info:
>> ESR = 0x0000000096000004
>> EC = 0x25: DABT (current EL), IL = 32 bits
>> SET = 0, FnV = 0
>> EA = 0, S1PTW = 0
>> FSC = 0x04: level 0 translation fault
>> Data abort info:
>> ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
>> CM = 0, WnR = 0, TnD = 0, TagAccess = 0
>> GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
>> user pgtable: 4k pages, 48-bit VAs, pgdp=00002040341a6000
>> [00000a80c021a788] pgd=0000000000000000, p4d=0000000000000000
>> Internal error: Oops: 0000000096000004 [#1] SMP
>> ...
>> CPU: 76 PID: 15187 Comm: git Kdump: loaded Tainted: G W 6.10.0-rc2+ #209
>> Hardware name: Huawei TaiShan 2280 V2/BC82AMDD, BIOS 1.79 08/21/2021
>> pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>> pc : numa_rebuild_large_mapping+0x338/0x638
>> lr : numa_rebuild_large_mapping+0x320/0x638
>> sp : ffff8000b41c3b00
>> x29: ffff8000b41c3b30 x28: ffff8000812a0000 x27: 00000000000a8000
>> x26: 00000000000000a8 x25: 0010000000000001 x24: ffff20401c7170f0
>> x23: 0000ffff33a1e000 x22: 0000ffff33a76000 x21: ffff20400869eca0
>> x20: 0000ffff33976000 x19: 00000000000000a8 x18: ffffffffffffffff
>> x17: 0000000000000000 x16: 0000000000000020 x15: ffff8000b41c36a8
>> x14: 0000000000000000 x13: 205d373831353154 x12: 5b5d333331363732
>> x11: 000000000011ff78 x10: 000000000011ff10 x9 : ffff800080273f30
>> x8 : 000000320400869e x7 : c0000000ffffd87f x6 : 00000000001e6ba8
>> x5 : ffff206f3fb5af88 x4 : 0000000000000000 x3 : 0000000000000000
>> x2 : 0000000000000000 x1 : fffffdffc0000000 x0 : 00000a80c021a780
>> Call trace:
>> numa_rebuild_large_mapping+0x338/0x638
>> do_numa_page+0x3e4/0x4e0
>> handle_pte_fault+0x1bc/0x238
>> __handle_mm_fault+0x20c/0x400
>> handle_mm_fault+0xa8/0x288
>> do_page_fault+0x124/0x498
>> do_translation_fault+0x54/0x80
>> do_mem_abort+0x4c/0xa8
>> el0_da+0x40/0x110
>> el0t_64_sync_handler+0xe4/0x158
>> el0t_64_sync+0x188/0x190
>>
>> Fix it by correcting the start and end. This may mean that only part of
>> the large mapping is rebuilt in one numa page fault, which is fine: the
>> remaining part can be rebuilt by a later pagefault.
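
By the way, a minimal userspace sketch of the trigger, in case it helps
anyone reproduce — untested, error checking omitted, and it assumes 64K
mTHP anon folios and NUMA balancing are both enabled:

    /*
     * Hypothetical reproducer sketch: fault in large anon folios at
     * folio-aligned addresses, mremap() them to an address that is
     * page- but not folio-aligned, then keep touching the memory so a
     * later NUMA hinting fault has to walk the misaligned mapping.
     */
    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 2UL << 20;         /* 2M of anon memory */
            size_t fsz = 64UL << 10;        /* assumed mTHP folio size */

            /* Reserve a window so we control where the mapping lands. */
            char *win = mmap(NULL, len + 2 * fsz, PROT_NONE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            char *old = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            /* Page-aligned target, deliberately not folio-aligned. */
            char *tgt = (char *)(((unsigned long)win + fsz - 1)
                                 & ~(fsz - 1)) + 4096;

            memset(old, 1, len);    /* fault in folio-aligned folios */

            tgt = mremap(old, len, len,
                         MREMAP_MAYMOVE | MREMAP_FIXED, tgt);

            for (;;)                /* wait for NUMA hinting faults */
                    memset(tgt, 1, len);
    }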
>>
>> Fixes: d2136d749d76 ("mm: support multi-size THP numa balancing")
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>> mm/memory.c | 24 +++++++++++++++---------
>> 1 file changed, 15 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index db9130488231..0ad57b6485ca 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5223,15 +5223,21 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>> update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
>> }
>> -static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> - struct folio *folio, pte_t fault_pte,
>> - bool ignore_writable, bool pte_write_upgrade)
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf,
>> + struct vm_area_struct *vma, struct folio *folio, int nr_pages,
>> + pte_t fault_pte, bool ignore_writable, bool pte_write_upgrade)
>> {
>> int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> - unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> - unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
>> - pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> - unsigned long addr;
>> + unsigned long folio_size = nr_pages * PAGE_SIZE;
>
> Just re-read folio_nr_pages() here, it's cheap. Or even better, use
> folio_size();
>
>> + unsigned long addr = vmf->address;
>> + unsigned long start, end, align_addr;
>> + pte_t *start_ptep;
>> +
>> + align_addr = ALIGN_DOWN(addr, folio_size);
>> + start = max3(addr - nr * PAGE_SIZE, align_addr, vma->vm_start);
>> + end = min3(addr + (nr_pages - nr) * PAGE_SIZE, align_addr + folio_size,
>
> Please avoid mixing nr_pages and folio_size.
>
>> + vma->vm_end);
>> + start_ptep = vmf->pte - (addr - start) / PAGE_SIZE;
>
> I am not able to convince myself that the old code could not have
> resulted in a vmf->pte that underflows the page table. Am I correct?
Yes, I think you are right, and I realized the problem now.
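To convince myself, I worked through a concrete (made-up) case: suppose
a 64K folio was mremap()ed so that its first page sits in the last pte
slot of one page table, and the hinting fault then happens on the
folio's third page (nr = 2), i.e. the second pte of the next page
table. The old code computes start = vmf->address - 2 * PAGE_SIZE,
which is still inside the VMA, so start_ptep = vmf->pte - 2 points one
entry before the beginning of the current pte page. The same can happen
at the other end, where 'end' may run past the page table.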
>
> Now align_addr would make sure that we are always within one page table
> (as long as our folio size does not exceed PMD size :) ).
>
> Can we use PMD_SIZE instead for that, something like the following?
>
>
> /* Stay within the VMA and within the page table. */
> pt_start = ALIGN_DOWN(addr, PMD_SIZE);
> start = max3(addr - (nr << PAGE_SHIFT), pt_start, vma->vm_start);
> end = min3(addr + folio_size - (nr << PAGE_SHIFT),
>            pt_start + PMD_SIZE, vma->vm_end);
>
> start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);
The changes look good to me. Thanks.
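
For reference, roughly what the helper would look like with that folded
in — just my own sketch (with parentheses around the shifts), not
necessarily the final patch:

    static void numa_rebuild_large_mapping(struct vm_fault *vmf,
                    struct vm_area_struct *vma, struct folio *folio,
                    pte_t fault_pte, bool ignore_writable,
                    bool pte_write_upgrade)
    {
            int nr = pte_pfn(fault_pte) - folio_pfn(folio);
            unsigned long addr = vmf->address;
            unsigned long start, end, pt_start;
            pte_t *start_ptep;

            /* Stay within the VMA and within the page table. */
            pt_start = ALIGN_DOWN(addr, PMD_SIZE);
            start = max3(addr - (nr << PAGE_SHIFT), pt_start,
                         vma->vm_start);
            end = min3(addr + folio_size(folio) - (nr << PAGE_SHIFT),
                       pt_start + PMD_SIZE, vma->vm_end);
            start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);

            /* ... existing loop rebuilding each pte is unchanged ... */
    }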