From: Baolin Wang <baolin.wang@linux.alibaba.com>
Subject: Re: [PATCH] mm: fix possible OOB in numa_rebuild_large_mapping()
Date: Wed, 12 Jun 2024 09:16:29 +0800
To: David Hildenbrand, Kefeng Wang, Andrew Morton
Cc: ying.huang@intel.com, linux-mm@kvack.org, John Hubbard, Mel Gorman, Ryan Roberts, liushixin2@huawei.com
References: <20240607103241.1298388-1-wangkefeng.wang@huawei.com>

On 2024/6/11 20:32, David Hildenbrand wrote:
> On 07.06.24 12:32, Kefeng Wang wrote:
>> The large folio is mapped with a folio-size-aligned virtual address
>> during the page fault, e.g. 'addr = ALIGN_DOWN(vmf->address, nr_pages *
>> PAGE_SIZE)' in do_anonymous_page(). But after mremap(), the virtual
>> address is only required to be PAGE_SIZE aligned, and the ptes are
>> moved to the new location in move_page_tables(), so traversing the new
>> ptes in numa_rebuild_large_mapping() hits the following issue:
>>
>>     Unable to handle kernel paging request at virtual address 00000a80c021a788
>>     Mem abort info:
>>       ESR = 0x0000000096000004
>>       EC = 0x25: DABT (current EL), IL = 32 bits
>>       SET = 0, FnV = 0
>>       EA = 0, S1PTW = 0
>>       FSC = 0x04: level 0 translation fault
>>     Data abort info:
>>       ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
>>       CM = 0, WnR = 0, TnD = 0, TagAccess = 0
>>       GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
>>     user pgtable: 4k pages, 48-bit VAs, pgdp=00002040341a6000
>>     [00000a80c021a788] pgd=0000000000000000, p4d=0000000000000000
>>     Internal error: Oops: 0000000096000004 [#1] SMP
>>     ...
>>     CPU: 76 PID: 15187 Comm: git Kdump: loaded Tainted: G        W  6.10.0-rc2+ #209
>>     Hardware name: Huawei TaiShan 2280 V2/BC82AMDD, BIOS 1.79 08/21/2021
>>     pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>     pc : numa_rebuild_large_mapping+0x338/0x638
>>     lr : numa_rebuild_large_mapping+0x320/0x638
>>     sp : ffff8000b41c3b00
>>     x29: ffff8000b41c3b30 x28: ffff8000812a0000 x27: 00000000000a8000
>>     x26: 00000000000000a8 x25: 0010000000000001 x24: ffff20401c7170f0
>>     x23: 0000ffff33a1e000 x22: 0000ffff33a76000 x21: ffff20400869eca0
>>     x20: 0000ffff33976000 x19: 00000000000000a8 x18: ffffffffffffffff
>>     x17: 0000000000000000 x16: 0000000000000020 x15: ffff8000b41c36a8
>>     x14: 0000000000000000 x13: 205d373831353154 x12: 5b5d333331363732
>>     x11: 000000000011ff78 x10: 000000000011ff10 x9 : ffff800080273f30
>>     x8 : 000000320400869e x7 : c0000000ffffd87f x6 : 00000000001e6ba8
>>     x5 : ffff206f3fb5af88 x4 : 0000000000000000 x3 : 0000000000000000
>>     x2 : 0000000000000000 x1 : fffffdffc0000000 x0 : 00000a80c021a780
>>     Call trace:
>>      numa_rebuild_large_mapping+0x338/0x638
>>      do_numa_page+0x3e4/0x4e0
>>      handle_pte_fault+0x1bc/0x238
>>      __handle_mm_fault+0x20c/0x400
>>      handle_mm_fault+0xa8/0x288
>>      do_page_fault+0x124/0x498
>>      do_translation_fault+0x54/0x80
>>      do_mem_abort+0x4c/0xa8
>>      el0_da+0x40/0x110
>>      el0t_64_sync_handler+0xe4/0x158
>>      el0t_64_sync+0x188/0x190
>>
>> Fix it by correcting the start and end. This may lead to rebuilding
>> only part of the large mapping in one numa page fault, but that is not
>> a problem, since the remaining part can be rebuilt by another page
>> fault.
>>
>> Fixes: d2136d749d76 ("mm: support multi-size THP numa balancing")
>> Signed-off-by: Kefeng Wang
>> ---
>>   mm/memory.c | 24 +++++++++++++++---------
>>   1 file changed, 15 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index db9130488231..0ad57b6485ca 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5223,15 +5223,21 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>       update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
>>   }
>>
>> -static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> -                       struct folio *folio, pte_t fault_pte,
>> -                       bool ignore_writable, bool pte_write_upgrade)
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf,
>> +        struct vm_area_struct *vma, struct folio *folio, int nr_pages,
>> +        pte_t fault_pte, bool ignore_writable, bool pte_write_upgrade)
>>   {
>>       int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> -    unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> -    unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
>> -    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> -    unsigned long addr;
>> +    unsigned long folio_size = nr_pages * PAGE_SIZE;
>
> Just re-read folio_nr_pages() here, it's cheap. Or even better, use
> folio_size().
>
>> +    unsigned long addr = vmf->address;
>> +    unsigned long start, end, align_addr;
>> +    pte_t *start_ptep;
>> +
>> +    align_addr = ALIGN_DOWN(addr, folio_size);
>> +    start = max3(addr - nr * PAGE_SIZE, align_addr, vma->vm_start);
>> +    end = min3(addr + (nr_pages - nr) * PAGE_SIZE, align_addr + folio_size,
>
> Please avoid mixing nr_pages and folio_size.
>
>> +           vma->vm_end);
>> +    start_ptep = vmf->pte - (addr - start) / PAGE_SIZE;
>                 writable);
>
> I am not able to convince myself that the old code could not have
> resulted in a vmf->pte that underflows the page table. Am I correct?

Yes, I think you are right, and I realized the problem now.

> Now align_addr would make sure that we are always within one page table
> (as long as our folio size does not exceed PMD size :) ).
>
> Can we use PMD_SIZE instead for that, something like the following?
>
> /* Stay within the VMA and within the page table. */
> pt_start = ALIGN_DOWN(addr, PMD_SIZE);
> start = max3(addr - (nr << PAGE_SHIFT), pt_start, vma->vm_start);
> end = min3(addr + folio_size - (nr << PAGE_SHIFT),
>            pt_start + PMD_SIZE, vma->vm_end);
>
> start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);

The changes look good to me. Thanks.