Subject: Re: [PATCH] mm: fix possible OOB in numa_rebuild_large_mapping()
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Kefeng Wang, Andrew Morton
Cc: ying.huang@intel.com, linux-mm@kvack.org, David Hildenbrand, John Hubbard, Mel Gorman, Ryan Roberts, liushixin2@huawei.com
Date: Tue, 11 Jun 2024 15:48:36 +0800
In-Reply-To: <20240607103241.1298388-1-wangkefeng.wang@huawei.com>
References: <20240607103241.1298388-1-wangkefeng.wang@huawei.com>

Hi Kefeng,

On 2024/6/7 18:32, Kefeng Wang wrote:
> The large folio is mapped with a folio-size-aligned virtual address during
> the page fault, e.g. 'addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE)'
> in do_anonymous_page(). But after an mremap(), the virtual address is only
> required to be PAGE_SIZE aligned, and the ptes are moved to the new location
> in move_page_tables(), so traversing the new ptes in
> numa_rebuild_large_mapping() will hit the following issue:
>
> Unable to handle kernel paging request at virtual address 00000a80c021a788
> Mem abort info:
>   ESR = 0x0000000096000004
>   EC = 0x25: DABT (current EL), IL = 32 bits
>   SET = 0, FnV = 0
>   EA = 0, S1PTW = 0
>   FSC = 0x04: level 0 translation fault
> Data abort info:
>   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
>   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
>   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
> user pgtable: 4k pages, 48-bit VAs, pgdp=00002040341a6000
> [00000a80c021a788] pgd=0000000000000000, p4d=0000000000000000
> Internal error: Oops: 0000000096000004 [#1] SMP
> ...
> CPU: 76 PID: 15187 Comm: git Kdump: loaded Tainted: G W 6.10.0-rc2+ #209
> Hardware name: Huawei TaiShan 2280 V2/BC82AMDD, BIOS 1.79 08/21/2021
> pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> pc : numa_rebuild_large_mapping+0x338/0x638
> lr : numa_rebuild_large_mapping+0x320/0x638
> sp : ffff8000b41c3b00
> x29: ffff8000b41c3b30 x28: ffff8000812a0000 x27: 00000000000a8000
> x26: 00000000000000a8 x25: 0010000000000001 x24: ffff20401c7170f0
> x23: 0000ffff33a1e000 x22: 0000ffff33a76000 x21: ffff20400869eca0
> x20: 0000ffff33976000 x19: 00000000000000a8 x18: ffffffffffffffff
> x17: 0000000000000000 x16: 0000000000000020 x15: ffff8000b41c36a8
> x14: 0000000000000000 x13: 205d373831353154 x12: 5b5d333331363732
> x11: 000000000011ff78 x10: 000000000011ff10 x9 : ffff800080273f30
> x8 : 000000320400869e x7 : c0000000ffffd87f x6 : 00000000001e6ba8
> x5 : ffff206f3fb5af88 x4 : 0000000000000000 x3 : 0000000000000000
> x2 : 0000000000000000 x1 : fffffdffc0000000 x0 : 00000a80c021a780
> Call trace:
>  numa_rebuild_large_mapping+0x338/0x638
>  do_numa_page+0x3e4/0x4e0
>  handle_pte_fault+0x1bc/0x238
>  __handle_mm_fault+0x20c/0x400
>  handle_mm_fault+0xa8/0x288
>  do_page_fault+0x124/0x498
>  do_translation_fault+0x54/0x80
>  do_mem_abort+0x4c/0xa8
>  el0_da+0x40/0x110
>  el0t_64_sync_handler+0xe4/0x158
>  el0t_64_sync+0x188/0x190
>
> Fix it by correcting the start and end. This may mean that only part of the
> large mapping is rebuilt in one NUMA page fault, which is not a problem,
> since the remaining part can be rebuilt by another page fault.
>
> Fixes: d2136d749d76 ("mm: support multi-size THP numa balancing")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Thanks for fixing the issue. But could you help make the issue clearer,
e.g. how to reproduce it, as David suggested? Do you mean that
'vmf->address - nr * PAGE_SIZE' can overflow?
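To make the question concrete, below is a rough userspace sketch of the
scenario I read from the commit message. It is only a sketch, not a verified
reproducer: the 64K mTHP size (hugepages-64kB enabled), the fixed mremap()
destination one page below a 2M page-table boundary, and the reliance on a
later NUMA hinting fault (numa_balancing enabled) are all my assumptions.

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FOLIO_SIZE	(64UL << 10)	/* assumed mTHP size */

int main(void)
{
	/*
	 * Map twice the folio size and write a folio-size-aligned 64K
	 * chunk inside it, so that do_anonymous_page() can install one
	 * large folio at a folio-size-aligned virtual address.
	 */
	char *buf = mmap(NULL, 2 * FOLIO_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *src = (char *)(((unsigned long)buf + FOLIO_SIZE - 1) &
			     ~(FOLIO_SIZE - 1));
	memset(src, 1, FOLIO_SIZE);

	/*
	 * Move it to a destination that is only PAGE_SIZE aligned, one
	 * page below a 2M boundary: move_page_tables() moves the ptes
	 * as-is, so the folio's mapping now straddles a page table.
	 */
	char *dst = mremap(src, FOLIO_SIZE, FOLIO_SIZE,
			   MREMAP_MAYMOVE | MREMAP_FIXED,
			   (void *)(0x7f0000200000UL - 4096));

	/*
	 * Keep touching the range so that a NUMA hinting fault can later
	 * walk into numa_rebuild_large_mapping() on the moved mapping.
	 */
	for (;;) {
		memset(dst, 2, FOLIO_SIZE);
		sleep(1);
	}
}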
> ---
>  mm/memory.c | 24 +++++++++++++++---------
>  1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index db9130488231..0ad57b6485ca 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5223,15 +5223,21 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>  	update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
>  }
>  
> -static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> -				       struct folio *folio, pte_t fault_pte,
> -				       bool ignore_writable, bool pte_write_upgrade)
> +static void numa_rebuild_large_mapping(struct vm_fault *vmf,
> +		struct vm_area_struct *vma, struct folio *folio, int nr_pages,
> +		pte_t fault_pte, bool ignore_writable, bool pte_write_upgrade)
>  {
>  	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
> -	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
> -	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
> -	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
> -	unsigned long addr;
> +	unsigned long folio_size = nr_pages * PAGE_SIZE;
> +	unsigned long addr = vmf->address;
> +	unsigned long start, end, align_addr;
> +	pte_t *start_ptep;
> +
> +	align_addr = ALIGN_DOWN(addr, folio_size);
> +	start = max3(addr - nr * PAGE_SIZE, align_addr, vma->vm_start);
> +	end = min3(addr + (nr_pages - nr) * PAGE_SIZE, align_addr + folio_size,
> +		   vma->vm_end);
> +	start_ptep = vmf->pte - (addr - start) / PAGE_SIZE;
>  
>  	/* Restore all PTEs' mapping of the large folio */
>  	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
> @@ -5361,8 +5367,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	 * non-accessible ptes, some can allow access by kernel mode.
>  	 */
>  	if (folio && folio_test_large(folio))
> -		numa_rebuild_large_mapping(vmf, vma, folio, pte, ignore_writable,
> -					   pte_write_upgrade);
> +		numa_rebuild_large_mapping(vmf, vma, folio, nr_pages, pte,
> +					   ignore_writable, pte_write_upgrade);
>  	else
>  		numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte,
>  					    writable);
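If my understanding above is right, the extra clamp to the folio-size-aligned
window is what keeps start_ptep within the page table that vmf->pte points
into. A standalone toy calculation of the old vs. new bounds (all addresses
below are made up, and ALIGN_DOWN/max3/min3 are reimplemented locally just
for this sketch):

#include <stdio.h>

#define PAGE_SZ		0x1000UL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

static unsigned long max2(unsigned long a, unsigned long b) { return a > b ? a : b; }
static unsigned long min2(unsigned long a, unsigned long b) { return a < b ? a : b; }
static unsigned long max3(unsigned long a, unsigned long b, unsigned long c)
{
	return max2(max2(a, b), c);
}
static unsigned long min3(unsigned long a, unsigned long b, unsigned long c)
{
	return min2(min2(a, b), c);
}

int main(void)
{
	/* Made-up example: a 16-page (64K) folio, mremap()ed so that its
	 * mapping starts one page below a 2M page-table boundary. */
	unsigned long nr_pages   = 16;
	unsigned long folio_size = nr_pages * PAGE_SZ;
	unsigned long vm_start   = 0x7f00001ff000UL;	/* merely 4K aligned */
	unsigned long vm_end     = vm_start + folio_size;
	unsigned long addr       = vm_start + 8 * PAGE_SZ; /* faulting address */
	unsigned long nr         = 8;	/* pte_pfn(fault_pte) - folio_pfn(folio) */

	/* Old bounds: clamped only to the vma; start lands in the page
	 * table *before* the one vmf->pte points into. */
	unsigned long old_start = max2(addr - nr * PAGE_SZ, vm_start);
	unsigned long old_end   = min2(addr + (nr_pages - nr) * PAGE_SZ, vm_end);

	/* New bounds: additionally clamped to the folio-size-aligned
	 * window around addr, which cannot cross a page-table boundary
	 * as long as folio_size <= PMD_SIZE. */
	unsigned long align_addr = ALIGN_DOWN(addr, folio_size);
	unsigned long start = max3(addr - nr * PAGE_SZ, align_addr, vm_start);
	unsigned long end   = min3(addr + (nr_pages - nr) * PAGE_SZ,
				   align_addr + folio_size, vm_end);

	printf("old: [%#lx, %#lx)\n", old_start, old_end); /* 0x7f00001ff000... */
	printf("new: [%#lx, %#lx)\n", start, end);         /* 0x7f0000200000... */
	return 0;
}

With these numbers the old start falls one page below the 2M boundary, so
'vmf->pte - (addr - start) / PAGE_SIZE' would index in front of the pte page,
while the new start is pulled up to align_addr and the walk stays within one
page table. If that is indeed the failure mode, spelling it out in the commit
log would make the fix much easier to follow.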