From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <9f4c0972-a123-4cc3-89f2-ed3490371e65@neon.tech>
Date: Wed, 4 Jun 2025 19:59:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov,
Peter Anvin" , "Edgecombe, Rick P" , Oleg Vasilev , Arthur Petukhovsky , Stefan Radig , Misha Sakhnov From: Em Sharnoff Subject: [PATCH] x86/mm: Handle alloc failure in phys_*_init() Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Stat-Signature: daprowiakdkr63uk99c7rhj3xzow8z9w X-Rspamd-Queue-Id: 3DF2F140007 X-Rspam-User: X-Rspamd-Server: rspam02 X-HE-Tag: 1749063569-971454 X-HE-Meta: U2FsdGVkX19fDWPMel12xmPiczPuzbIFB7xW8MP9LX7bo+QiKokpF5dCJFnnTlTSZSV6UvcZX0PYVWTXPDcBp22X+MuPqdbRFJL6WWI8Y+uEANJx9ZVGyObAEGcM9YC/N4oUjrGG30Hs4mOIU7dCdzrwpRKqIzd/9frgRdvdCKl0VoLK9HFelvPGtUT7N6d1/2UbSP8elut+WbR0f6Lms3fGlAQkVbjcV6lBRNFnh3dpTkkrtErHSYi2A9GAJXJlhzg4RX5PrnUqGbcypjsPSYwO0j6VUcibqfj5qkIXjNPqucJ9JyDG0HdaWGS/GdTZT0p/MzeP6Tn8o6VLwgZCW/DjB3GDrdTz4dYc9Ch8ynEgUsgqvYnIHupfQ3ELiKcDNnX82GNRuRVkci1AX1znrgPGSZKg9viVIkpdPP7W2t6NaiY51DmIPvjH1oqO9mt12dwrQhNw52fsVIhilTVQ2LkCcmjF1lHibA/VERkM8wpCW4KcjoaNIu+QTJ2myQKMZ62joscpGfmIS3NJYg7/S3UQooKsbk/sNqnhOH6OXMMB80Q1OXixZfRw2VsrDCPc877UTzAczJHxx4ADciaD6VsP9D6OA7zBZVbF/MZ7mKTFpkFjJUt8Riu5WvgeNUCywGSyBLR4aPtyzWAVfV0nZa+oORfBc8/KwJKkBsh1ZT2ewAdjh/6HKss1FmNZuCvshNG/KvU+gTSSdnL/j0+cwRTsGPvN4c34rWf5FSGOzcIJGvLFYoWDwHrPXt/v+gMJ7ErUGCFD8MZrvREzcfNlMZ7InfFKH3ySXc6LqhQA1Kfc7UwBWF2NcNbpCi7dllCtl17dumWn/epuc677PELftCAF+j6eZwzGjh6/t4PVKWMWOe84NdxAnQvUfn7J2uMiBF6F+Oxefeh9vy2gpYCQoMNuELyOBtEvbYDpF6zBGk2GcdSN/7oZvNzDG3iuHpUF8v88q6U+mj0Ywti3QPI Dz71ICfd 6e8+IZGo5+ylRdD8KtPpDV24eilh5F3/jdC2pFpm/cxcQOQGMVAYD5VHf7KZiCYKjIu1E X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: tl;dr: * When setting up page table mappings for physical addresses after boot, alloc_low_page() uses GFP_ATOMIC, which is allowed to fail. * This isn't currently handled, and results in a null pointer dereference when it occurs. * This allocation failure can happen during memory hotplug. To handle failure, change phys_pud_init() and similar functions to return zero if allocation failed (either directly or transitively), and convert that to -ENOMEM in arch_add_memory(). === Background === We recently started observing these null pointer dereferences happening in practice (albeit quite rarely), triggered by allocation failures during virtio-mem hotplug. We use virtio-mem quite heavily - adding/removing memory based on resource usage of customer workloads across a fleet of VMs - so it's somewhat expected that we have occasional allocation failures here, if we run out of memory before hotplug takes place. We started seeing this bug after upgrading from 6.6.64 to 6.12.26, but there didn't appear to be relevant changes in the codepaths involved, so we figured the upgrade was triggering a latent issue. The possibility for this issue was also pointed out a while back: > For alloc_low_pages(), I noticed the callers don’t check for allocation > failure. I'm a little surprised that there haven't been reports of the > allocation failing, because these operations could result in a lot more > pages getting allocated way past boot, and failure causes a NULL > pointer dereference. https://lore.kernel.org/all/5aee7bcdf49b1c6b8ee902dd2abd9220169c694b.camel@intel.com/ For completeness, here's an example stack trace we saw (on 6.12.26): BUG: kernel NULL pointer dereference, address: 0000000000000000 .... 
=== Background ===

We recently started observing these null pointer dereferences happening in
practice (albeit quite rarely), triggered by allocation failures during
virtio-mem hotplug.

We use virtio-mem quite heavily - adding/removing memory based on resource
usage of customer workloads across a fleet of VMs - so it's somewhat
expected that we have occasional allocation failures here, if we run out of
memory before hotplug takes place.

We started seeing this bug after upgrading from 6.6.64 to 6.12.26, but there
didn't appear to be relevant changes in the codepaths involved, so we
figured the upgrade was triggering a latent issue.

The possibility for this issue was also pointed out a while back:

> For alloc_low_pages(), I noticed the callers don’t check for allocation
> failure. I'm a little surprised that there haven't been reports of the
> allocation failing, because these operations could result in a lot more
> pages getting allocated way past boot, and failure causes a NULL
> pointer dereference.

https://lore.kernel.org/all/5aee7bcdf49b1c6b8ee902dd2abd9220169c694b.camel@intel.com/

For completeness, here's an example stack trace we saw (on 6.12.26):

BUG: kernel NULL pointer dereference, address: 0000000000000000
....
Call Trace:
 phys_pud_init+0xa0/0x390
 phys_p4d_init+0x93/0x330
 __kernel_physical_mapping_init+0xa1/0x370
 kernel_physical_mapping_init+0xf/0x20
 init_memory_mapping+0x1fa/0x430
 arch_add_memory+0x2b/0x50
 add_memory_resource+0xe6/0x260
 add_memory_driver_managed+0x78/0xc0
 virtio_mem_add_memory+0x46/0xc0
 virtio_mem_sbm_plug_and_add_mb+0xa3/0x160
 virtio_mem_run_wq+0x1035/0x16c0
 process_one_work+0x17a/0x3c0
 worker_thread+0x2c5/0x3f0
 ? _raw_spin_unlock_irqrestore+0x9/0x30
 ? __pfx_worker_thread+0x10/0x10
 kthread+0xdc/0x110
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x35/0x60
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30

and the allocation failure preceding it:

kworker/0:2: page allocation failure: order:0, mode:0x920(GFP_ATOMIC|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
...
Call Trace:
 dump_stack_lvl+0x5b/0x70
 dump_stack+0x10/0x20
 warn_alloc+0x103/0x180
 __alloc_pages_slowpath.constprop.0+0x738/0xf30
 __alloc_pages_noprof+0x1e9/0x340
 alloc_pages_mpol_noprof+0x47/0x100
 alloc_pages_noprof+0x4b/0x80
 get_free_pages_noprof+0xc/0x40
 alloc_low_pages+0xc2/0x150
 phys_pud_init+0x82/0x390
 ...
(everything from phys_pud_init and below was the same)

There's some additional context in a github issue we opened on our side:
https://github.com/neondatabase/autoscaling/issues/1391

=== Reproducing / Testing ===

I was able to partially reproduce the original issue we saw by modifying
phys_pud_init() to simulate alloc_low_page() returning NULL after boot, and
then doing memory hotplug to trigger the "failure". Something roughly like:

-	pmd = alloc_low_page();
+	if (!after_bootmem)
+		pmd = alloc_low_page();
+	else
+		pmd = 0;

To test recovery, I also tried simulating just one alloc_low_page() failure
after boot. This change seemed to handle it at a basic level (virtio-mem
hotplug succeeded with the right amount, after retrying), but I didn't dig
further. (A slightly fuller sketch of this fault-injection hack is included
below, after the diffstat.)

We also plan to test this in our production environment (where we should see
the difference after a few days); as of 2025-06-04, we haven't yet rolled
that out.

=== Rationale ===

Note: This is the first time I'm looking at this code; please review extra
critically.

As far as I can tell:

1. phys_*_init() should not currently return zero.
2. If phys_*_init() gives up partway through, subsequent retries will be
   able to continue from the progress made so far.

So it seems ok to give a zero return special meaning, and this looks like
something that can be handled gracefully with the code as it is.

Signed-off-by: Em Sharnoff
---
 arch/x86/mm/init.c    |  6 +++++-
 arch/x86/mm/init_64.c | 50 +++++++++++++++++++++++++++++++++++++++----
 2 files changed, 51 insertions(+), 5 deletions(-)
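
(Not part of the patch: for reference, the fault injection used for the
testing described above was, in slightly fuller form, along the lines of the
sketch below. The helper name and the "fail once" flag are made up for
illustration; the actual hack was just a local modification of
phys_pud_init().)

        static bool fail_next_alloc;    /* set by hand to inject one failure */

        static void *alloc_low_page_maybe_failing(void)
        {
                if (after_bootmem && fail_next_alloc) {
                        fail_next_alloc = false;        /* fail exactly once */
                        return NULL;
                }
                return alloc_low_page();
        }
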
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bfa444a7dbb0..b90fe52a7d67 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -533,6 +533,7 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
+ * Returns zero if allocation fails at any point.
  */
 unsigned long __ref init_memory_mapping(unsigned long start,
                                         unsigned long end, pgprot_t prot)
@@ -547,10 +548,13 @@ unsigned long __ref init_memory_mapping(unsigned long start,
         memset(mr, 0, sizeof(mr));
         nr_range = split_mem_range(mr, 0, start, end);
 
-        for (i = 0; i < nr_range; i++)
+        for (i = 0; i < nr_range; i++) {
                 ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
                                                    mr[i].page_size_mask,
                                                    prot);
+                if (!ret)
+                        return 0;
+        }
 
         add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
 
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7c4f6f591f2b..1b0140b49371 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -502,7 +502,7 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 /*
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped, or zero if allocation failed.
  */
 static unsigned long __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
@@ -572,7 +572,14 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
                 }
 
                 pte = alloc_low_page();
+                if (!pte)
+                        return 0;
                 paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+                /*
+                 * phys_{pmd,pud,p4d}_init return zero if allocation failed.
+                 * phys_pte_init makes no allocations, so should not return zero.
+                 */
+                BUG_ON(!paddr_last);
 
                 spin_lock(&init_mm.page_table_lock);
                 pmd_populate_kernel_init(&init_mm, pmd, pte, init);
@@ -586,7 +593,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped, or zero if allocation failed.
  */
 static unsigned long __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
@@ -623,6 +630,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                                                            paddr_end,
                                                            page_size_mask,
                                                            prot, init);
+                        if (!paddr_last)
+                                return 0;
                         continue;
                 }
                 /*
@@ -658,12 +667,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                 }
 
                 pmd = alloc_low_page();
+                if (!pmd)
+                        return 0;
                 paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
                                            page_size_mask, prot, init);
 
+                /*
+                 * We might have !paddr_last if allocation failed, but we should still
+                 * update pud before bailing, so that subsequent retries can pick up on
+                 * progress (here and in phys_pmd_init) without leaking pmd.
+                 */
                 spin_lock(&init_mm.page_table_lock);
                 pud_populate_init(&init_mm, pud, pmd, init);
                 spin_unlock(&init_mm.page_table_lock);
+
+                if (!paddr_last)
+                        return 0;
         }
 
         update_page_count(PG_LEVEL_1G, pages);
@@ -707,16 +726,26 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
                         pud = pud_offset(p4d, 0);
                         paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
                                                    page_size_mask, prot, init);
+                        if (!paddr_last)
+                                return 0;
                         continue;
                 }
 
                 pud = alloc_low_page();
+                if (!pud)
+                        return 0;
                 paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
                                            page_size_mask, prot, init);
 
                 spin_lock(&init_mm.page_table_lock);
                 p4d_populate_init(&init_mm, p4d, pud, init);
                 spin_unlock(&init_mm.page_table_lock);
+
+                /*
+                 * Bail only after updating p4d to keep progress from pud across retries.
+                 */
+                if (!paddr_last)
+                        return 0;
         }
 
         return paddr_last;
@@ -748,10 +777,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
                                                    __pa(vaddr_end),
                                                    page_size_mask,
                                                    prot, init);
+                        if (!paddr_last)
+                                return 0;
                         continue;
                 }
 
                 p4d = alloc_low_page();
+                if (!p4d)
+                        return 0;
                 paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
                                            page_size_mask, prot, init);
 
@@ -763,6 +796,13 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
                                           (pud_t *) p4d, init);
                 spin_unlock(&init_mm.page_table_lock);
+
+                /*
+                 * Bail only after updating pgd/p4d to keep progress from p4d across retries.
+                 */
+                if (!paddr_last)
+                        return 0;
+
                 pgd_changed = true;
         }
 
@@ -777,7 +817,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down. It returns the last physical address mapped, or zero if allocation
+ * failed at any point.
  */
 unsigned long __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
@@ -981,7 +1022,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
         unsigned long start_pfn = start >> PAGE_SHIFT;
         unsigned long nr_pages = size >> PAGE_SHIFT;
 
-        init_memory_mapping(start, start + size, params->pgprot);
+        if (!init_memory_mapping(start, start + size, params->pgprot))
+                return -ENOMEM;
 
         return add_pages(nid, start_pfn, nr_pages, params);
 }
-- 
2.39.5