From: Em Sharnoff <sharnoff@neon.tech>
Date: Fri, 13 Jun 2025 21:09:11 +0100
Message-ID: <7d0d307d-71eb-4913-8023-bccc7a8a4a3d@neon.tech>
Subject: [PATCH v4 0/4] x86/mm: Improve alloc handling of phys_*_init()
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Ingo Molnar, "H. Peter Anvin", Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
    "Edgecombe, Rick P", Oleg Vasilev, Arthur Petukhovsky,
    Stefan Radig, Misha Sakhnov

Hi folks,

See changelog + more context below. tl;dr:

* Currently alloc_low_page() uses GFP_ATOMIC after boot, which may fail
* Those failures aren't currently handled by phys_pud_init() and
  similar functions.
* Those failures can happen during memory hotplug

So:

1. Add handling for those allocation failures (patches 1-3)
2. Use GFP_KERNEL instead of GFP_ATOMIC (patch 4)

Previous version here:
https://lore.kernel.org/all/a31e3b89-5040-4426-9ce8-d674b8554aa1@neon.tech/

=== Changelog ===

v2:
- Switch from special-casing zero values to ERR_PTR()
- Add patch to move from GFP_ATOMIC -> GFP_KERNEL
- Move commentary out of the patch message and into this cover letter

v3:
- Fix -Wint-conversion issues

v4:
- New patch: move 'paddr_last' usage into phys_{pud,pmd}_init() so the
  return value from those functions is no longer needed.
- New patch: make phys_*_init() and their callers return int

I'm not sure whether patch 2/4 ("Allow error returns ...") should be
separate from patch 3/4 ("Handle alloc failure ..."), but it's easy
enough to combine them if need be.

=== Background ===

We recently started observing these null pointer dereferences in
practice (albeit quite rarely), triggered by allocation failures during
virtio-mem hotplug.

We use virtio-mem quite heavily - adding/removing memory based on the
resource usage of customer workloads across a fleet of VMs - so it's
somewhat expected that we see occasional allocation failures here, if
we run out of memory before hotplug takes place.

We started seeing this bug after upgrading from 6.6.64 to 6.12.26, but
there didn't appear to be relevant changes in the codepaths involved,
so we figured the upgrade was triggering a latent issue.

The possibility of this issue was also pointed out a while back:

> For alloc_low_pages(), I noticed the callers don't check for allocation
> failure. I'm a little surprised that there haven't been reports of the
> allocation failing, because these operations could result in a lot more
> pages getting allocated way past boot, and failure causes a NULL
> pointer dereference.
https://lore.kernel.org/all/5aee7bcdf49b1c6b8ee902dd2abd9220169c694b.camel@intel.com/

For completeness, here's an example stack trace we saw (on 6.12.26):

  BUG: kernel NULL pointer dereference, address: 0000000000000000
  ...
  Call Trace:
   phys_pud_init+0xa0/0x390
   phys_p4d_init+0x93/0x330
   __kernel_physical_mapping_init+0xa1/0x370
   kernel_physical_mapping_init+0xf/0x20
   init_memory_mapping+0x1fa/0x430
   arch_add_memory+0x2b/0x50
   add_memory_resource+0xe6/0x260
   add_memory_driver_managed+0x78/0xc0
   virtio_mem_add_memory+0x46/0xc0
   virtio_mem_sbm_plug_and_add_mb+0xa3/0x160
   virtio_mem_run_wq+0x1035/0x16c0
   process_one_work+0x17a/0x3c0
   worker_thread+0x2c5/0x3f0
   ? _raw_spin_unlock_irqrestore+0x9/0x30
   ? __pfx_worker_thread+0x10/0x10
   kthread+0xdc/0x110
   ? __pfx_kthread+0x10/0x10
   ret_from_fork+0x35/0x60
   ? __pfx_kthread+0x10/0x10
   ret_from_fork_asm+0x1a/0x30

and the allocation failure preceding it:

  kworker/0:2: page allocation failure: order:0, mode:0x920(GFP_ATOMIC|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
  ...
  Call Trace:
   dump_stack_lvl+0x5b/0x70
   dump_stack+0x10/0x20
   warn_alloc+0x103/0x180
   __alloc_pages_slowpath.constprop.0+0x738/0xf30
   __alloc_pages_noprof+0x1e9/0x340
   alloc_pages_mpol_noprof+0x47/0x100
   alloc_pages_noprof+0x4b/0x80
   get_free_pages_noprof+0xc/0x40
   alloc_low_pages+0xc2/0x150
   phys_pud_init+0x82/0x390
   ... (everything from phys_pud_init and below was the same)

There's some additional context in a GitHub issue we opened on our side:
https://github.com/neondatabase/autoscaling/issues/1391

=== Reproducing / Testing ===

I was able to partially reproduce the original issue we saw by
modifying phys_pud_init() to simulate alloc_low_page() returning null
after boot, and then doing memory hotplug to trigger the "failure".
Something roughly like:

-	pmd = alloc_low_page();
+	if (!after_bootmem)
+		pmd = alloc_low_page();
+	else
+		pmd = 0;

To test recovery, I also tried simulating just one alloc_low_page()
failure after boot.
This change seemed to handle it at a basic level (virtio-mem hotplug
succeeded with the right amount of memory, after retrying), but I
didn't dig further.

We also plan to test this in our production environment (where we
should see the difference after a few days); as of 2025-06-13, we
haven't yet rolled that out.

Em Sharnoff (4):
  x86/mm: Update mapped addresses in phys_{pmd,pud}_init()
  x86/mm: Allow error returns from phys_*_init()
  x86/mm: Handle alloc failure in phys_*_init()
  x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot

 arch/x86/include/asm/pgtable.h |   3 +-
 arch/x86/mm/init.c             |  29 ++++++---
 arch/x86/mm/init_32.c          |   6 +-
 arch/x86/mm/init_64.c          | 116 ++++++++++++++++++++++-----------
 arch/x86/mm/mem_encrypt_amd.c  |   8 ++-
 arch/x86/mm/mm_internal.h      |  13 ++--
 6 files changed, 113 insertions(+), 62 deletions(-)

base-commit: 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3
--
2.39.5