Message-ID: <76ed2616-3493-4c88-8d1b-8aa4a2bc9688@huawei.com>
Date: Thu, 13 Jun 2024 09:02:21 +0800
Subject: Re: [PATCH v2] mm: fix possible OOB in numa_rebuild_large_mapping()
To: David Hildenbrand, Andrew Morton
Cc: Baolin Wang, John Hubbard, Mel Gorman, Ryan Roberts
From: Kefeng Wang <wangkefeng.wang@huawei.com>
In-Reply-To: <541d1a37-7d3f-4f9a-b4d8-572aacd96f1e@redhat.com>
References: <20240612122822.4033433-1-wangkefeng.wang@huawei.com>
 <541d1a37-7d3f-4f9a-b4d8-572aacd96f1e@redhat.com>

On 2024/6/12 23:27, David Hildenbrand wrote:
> On 12.06.24 14:28, Kefeng Wang wrote:
>> During the page fault, a large folio is mapped at a virtual address
>> aligned to the folio size (which is not greater than PMD_SIZE), i.e.
>> 'addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE)' in
>> do_anonymous_page(). But after mremap(), the virtual address only
>> requires PAGE_SIZE alignment, and the PTEs are moved to the new
>> location in move_page_tables(), so traversing the new PTEs in
>> numa_rebuild_large_mapping() can walk past the end of the page table
>> and hit the following issue:
>>
>>     Unable to handle kernel paging request at virtual address 00000a80c021a788
>>     Mem abort info:
>>       ESR = 0x0000000096000004
>>       EC = 0x25: DABT (current EL), IL = 32 bits
>>       SET = 0, FnV = 0
>>       EA = 0, S1PTW = 0
>>       FSC = 0x04: level 0 translation fault
>>     Data abort info:
>>       ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
>>       CM = 0, WnR = 0, TnD = 0, TagAccess = 0
>>       GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
>>     user pgtable: 4k pages, 48-bit VAs, pgdp=00002040341a6000
>>     [00000a80c021a788] pgd=0000000000000000, p4d=0000000000000000
>>     Internal error: Oops: 0000000096000004 [#1] SMP
>>     ...
>>     CPU: 76 PID: 15187 Comm: git Kdump: loaded Tainted: G        W          6.10.0-rc2+ #209
>>     Hardware name: Huawei TaiShan 2280 V2/BC82AMDD, BIOS 1.79 08/21/2021
>>     pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>     pc : numa_rebuild_large_mapping+0x338/0x638
>>     lr : numa_rebuild_large_mapping+0x320/0x638
>>     sp : ffff8000b41c3b00
>>     x29: ffff8000b41c3b30 x28: ffff8000812a0000 x27: 00000000000a8000
>>     x26: 00000000000000a8 x25: 0010000000000001 x24: ffff20401c7170f0
>>     x23: 0000ffff33a1e000 x22: 0000ffff33a76000 x21: ffff20400869eca0
>>     x20: 0000ffff33976000 x19: 00000000000000a8 x18: ffffffffffffffff
>>     x17: 0000000000000000 x16: 0000000000000020 x15: ffff8000b41c36a8
>>     x14: 0000000000000000 x13: 205d373831353154 x12: 5b5d333331363732
>>     x11: 000000000011ff78 x10: 000000000011ff10 x9 : ffff800080273f30
>>     x8 : 000000320400869e x7 : c0000000ffffd87f x6 : 00000000001e6ba8
>>     x5 : ffff206f3fb5af88 x4 : 0000000000000000 x3 : 0000000000000000
>>     x2 : 0000000000000000 x1 : fffffdffc0000000 x0 : 00000a80c021a780
>>     Call trace:
>>      numa_rebuild_large_mapping+0x338/0x638
>>      do_numa_page+0x3e4/0x4e0
>>      handle_pte_fault+0x1bc/0x238
>>      __handle_mm_fault+0x20c/0x400
>>      handle_mm_fault+0xa8/0x288
>>      do_page_fault+0x124/0x498
>>      do_translation_fault+0x54/0x80
>>      do_mem_abort+0x4c/0xa8
>>      el0_da+0x40/0x110
>>      el0t_64_sync_handler+0xe4/0x158
>>      el0t_64_sync+0x188/0x190
>>
>> Fix it by clamping the start and end not only to the VMA range, but
>> also to the range covered by the page table.
>>
>> Fixes: d2136d749d76 ("mm: support multi-size THP numa balancing")
>> Signed-off-by: Kefeng Wang
>> ---
>> v2:
>> - don't pass nr_pages into numa_rebuild_large_mapping()
>> - address comment and suggestion from David
>>
>>   mm/memory.c | 14 ++++++++++----
>>   1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 0d309cfb703c..60f7a05ad0cd 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5228,10 +5228,16 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
>>                          bool ignore_writable, bool pte_write_upgrade)
>>   {
>>       int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> -    unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> -    unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
>> -    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> -    unsigned long addr;
>> +    unsigned long start, end, addr = vmf->address;
>> +    unsigned long addr_start = addr - (nr << PAGE_SHIFT);
>> +    unsigned long pt_start = ALIGN_DOWN(addr, PMD_SIZE);
>> +    pte_t *start_ptep;
>> +
>> +    /* Stay within the VMA and within the page table. */
>> +    start = max3(addr_start, pt_start, vma->vm_start);
>> +    end = min3(addr_start + folio_size(folio), pt_start + PMD_SIZE,
>> +           vma->vm_end);
>> +    start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);
>>
>>       /* Restore all PTEs' mapping of the large folio */
>>       for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>
> Should do the trick, hopefully ;)

At least it passed our test; before the fix it occurred almost 100% of
the time :)

>
> Acked-by: David Hildenbrand
>

Thanks.
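
---

To make the boundary arithmetic in the patch easier to follow, here is a
self-contained userspace sketch of the same clamping. The ALIGN_DOWN(),
max3(), and min3() definitions below are simplified stand-ins for the
kernel helpers of the same names, and all addresses and sizes are
hypothetical examples, not values from the report above.

```c
/*
 * Standalone illustration (userspace C, not kernel code) of the
 * clamping done in the patched numa_rebuild_large_mapping(): the
 * restore range must stay inside both the VMA and the single page
 * table (one PMD_SIZE region) that vmf->pte belongs to.
 */
#include <stdio.h>

#define PAGE_SIZE (1UL << 12)  /* 4 KiB pages */
#define PMD_SIZE  (1UL << 21)  /* 2 MiB span covered by one page table */
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

static unsigned long max3(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a > b ? a : b;
	return m > c ? m : c;
}

static unsigned long min3(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a < b ? a : b;
	return m < c ? m : c;
}

int main(void)
{
	/* Hypothetical case: a 64 KiB (16-page) folio that mremap()
	 * left only PAGE_SIZE-aligned, straddling a 2 MiB boundary. */
	unsigned long folio_size = 16 * PAGE_SIZE;
	unsigned long addr       = 0x7f81001ff000UL; /* faulting address */
	unsigned long nr         = 3;                /* page index in folio */
	unsigned long vm_start   = 0x7f8100000000UL;
	unsigned long vm_end     = 0x7f8100400000UL;

	unsigned long addr_start = addr - nr * PAGE_SIZE; /* folio's first page */
	unsigned long pt_start   = ALIGN_DOWN(addr, PMD_SIZE);

	/* Without the pt_start/pt_start + PMD_SIZE clamp, end would cross
	 * into the next page table and the PTE walk would run OOB. */
	unsigned long start = max3(addr_start, pt_start, vm_start);
	unsigned long end   = min3(addr_start + folio_size,
				   pt_start + PMD_SIZE, vm_end);

	printf("restore range: %#lx - %#lx (%lu pages)\n",
	       start, end, (end - start) / PAGE_SIZE);
	return 0;
}
```

With these numbers the range is clamped to the four folio pages below
the 2 MiB boundary; the remaining twelve pages belong to the next page
table and are simply skipped instead of being walked out of bounds,
which is exactly the out-of-bounds access the patch prevents.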