Message-ID: <3e767aa4-c783-4857-b34e-fdf3f20bd94f@neon.tech>
Date: Fri, 13 Jun 2025 21:10:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v4 1/4] x86/mm: Update mapped addresses in phys_{pmd,pud}_init()
From: Em Sharnoff <sharnoff@neon.tech>
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Ingo Molnar, "H. Peter Anvin", Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Borislav Petkov, "Edgecombe, Rick P",
    Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
References: <7d0d307d-71eb-4913-8023-bccc7a8a4a3d@neon.tech>
Content-Language: en-US
In-Reply-To: <7d0d307d-71eb-4913-8023-bccc7a8a4a3d@neon.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Currently kernel_physical_mapping_init() and its dependents return the
last physical address mapped ('paddr_last'). This makes it harder to
cleanly handle allocation errors in those functions.

'paddr_last' is used to update 'pfn_mapped'/'max_pfn_mapped', so:

1. Introduce add_paddr_range_mapped() to do the update, translating from
   physical addresses to pfns.
2. Call add_paddr_range_mapped() in phys_pud_init() where 'paddr_last'
   would otherwise be updated because of 1GiB pages.
   - Note: this includes the places where we set 'paddr_last = paddr_next',
     as was added in commit 20167d3421a0 ("x86-64: Fix accounting in
     kernel_physical_mapping_init()").

add_paddr_range_mapped() is probably too expensive to be called every
time a page is updated, so instead phys_pte_init() continues to return
'paddr_last', and phys_pmd_init() calls add_paddr_range_mapped() only at
the end of its loop (i.e. roughly once per 1GiB mapped).
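To illustrate the intended semantics (not part of the patch): below is a
minimal, self-contained userspace sketch of add_paddr_range_mapped(). The
single min/max pair is a deliberate simplification of the kernel's
'pfn_mapped' range array, and main() is purely hypothetical; only the
shift-based paddr-to-pfn translation mirrors the real helper.

  #include <stdio.h>

  #define PAGE_SHIFT 12

  /* Simplified stand-in for the kernel's pfn_mapped bookkeeping. */
  static unsigned long min_pfn_mapped = ~0UL;
  static unsigned long max_pfn_mapped;

  static void add_pfn_range_mapped(unsigned long start_pfn, unsigned long end_pfn)
  {
          /* Overlapping or repeated ranges only ever grow the tracked span. */
          if (start_pfn < min_pfn_mapped)
                  min_pfn_mapped = start_pfn;
          if (end_pfn > max_pfn_mapped)
                  max_pfn_mapped = end_pfn;
  }

  static void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr)
  {
          /* Same translation as the patch: physical address -> pfn. */
          add_pfn_range_mapped(start_paddr >> PAGE_SHIFT, end_paddr >> PAGE_SHIFT);
  }

  int main(void)
  {
          add_paddr_range_mapped(0x0, 0x40000000);        /* first 1GiB */
          add_paddr_range_mapped(0x20000000, 0x80000000); /* overlap is harmless */
          printf("pfns [%lu, %lu) mapped\n", min_pfn_mapped, max_pfn_mapped);
          return 0;
  }

Calling it twice with overlapping ranges, as the page-table walkers may do
when revisiting an already-mapped region, leaves the bookkeeping
consistent; that idempotence is what the comments in the patch rely on.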
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>

---

Changelog:
- v4: Add this patch

---
 arch/x86/include/asm/pgtable.h |  3 +-
 arch/x86/mm/init.c             | 23 +++++----
 arch/x86/mm/init_32.c          |  6 ++-
 arch/x86/mm/init_64.c          | 88 +++++++++++++++++-----------------
 arch/x86/mm/mm_internal.h      | 13 +++--
 5 files changed, 69 insertions(+), 64 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7bd6bd6df4a1..138d55f48a4f 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1244,8 +1244,7 @@ extern int direct_gbpages;
 void init_mem_mapping(void);
 void early_alloc_pgt_buf(void);
 void __init poking_init(void);
-unsigned long init_memory_mapping(unsigned long start,
-                                  unsigned long end, pgprot_t prot);
+void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
 
 #ifdef CONFIG_X86_64
 extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bfa444a7dbb0..1461873b44f1 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -529,16 +529,24 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
        return false;
 }
 
+/*
+ * Update max_pfn_mapped and range_pfn_mapped with the range of physical
+ * addresses mapped. The range may overlap with previous calls to this function.
+ */
+void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr)
+{
+       add_pfn_range_mapped(start_paddr >> PAGE_SHIFT, end_paddr >> PAGE_SHIFT);
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
  */
-unsigned long __ref init_memory_mapping(unsigned long start,
-                                       unsigned long end, pgprot_t prot)
+void __ref init_memory_mapping(unsigned long start,
+                              unsigned long end, pgprot_t prot)
 {
        struct map_range mr[NR_RANGE_MR];
-       unsigned long ret = 0;
        int nr_range, i;
 
        pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
@@ -548,13 +556,10 @@ unsigned long __ref init_memory_mapping(unsigned long start,
        nr_range = split_mem_range(mr, 0, start, end);
 
        for (i = 0; i < nr_range; i++)
-               ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-                                                  mr[i].page_size_mask,
-                                                  prot);
+               kernel_physical_mapping_init(mr[i].start, mr[i].end,
+                                            mr[i].page_size_mask, prot);
 
-       add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
-
-       return ret >> PAGE_SHIFT;
+       return;
 }
 
 /*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index ad662cc4605c..4427ac433041 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -245,7 +245,7 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-unsigned long __init
+void __init
 kernel_physical_mapping_init(unsigned long start,
                             unsigned long end,
                             unsigned long page_size_mask,
@@ -382,7 +382,9 @@ kernel_physical_mapping_init(unsigned long start,
                mapping_iter = 2;
                goto repeat;
        }
-       return last_map_addr;
+
+       add_paddr_range_mapped(start, last_map_addr);
+       return;
 }
 
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7c4f6f591f2b..e729108bee30 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -502,13 +502,13 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 /*
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
              unsigned long page_size_mask, pgprot_t prot, bool init)
 {
        unsigned long pages = 0, paddr_next;
+       unsigned long paddr_first = paddr;
        unsigned long paddr_last = paddr_end;
 
        int i = pmd_index(paddr);
@@ -579,21 +579,25 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
                spin_unlock(&init_mm.page_table_lock);
        }
        update_page_count(PG_LEVEL_2M, pages);
-       return paddr_last;
+       /*
+        * In case of recovery from previous state, add_paddr_range_mapped() may
+        * be called with an overlapping range from previous operations.
+        * It is idempotent, so this is ok.
+        */
+       add_paddr_range_mapped(paddr_first, paddr_last);
+       return;
 }
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
              unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
        unsigned long pages = 0, paddr_next;
-       unsigned long paddr_last = paddr_end;
        unsigned long vaddr = (unsigned long)__va(paddr);
        int i = pud_index(vaddr);
 
@@ -619,10 +623,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                if (!pud_none(*pud)) {
                        if (!pud_leaf(*pud)) {
                                pmd = pmd_offset(pud, 0);
-                               paddr_last = phys_pmd_init(pmd, paddr,
-                                                          paddr_end,
-                                                          page_size_mask,
-                                                          prot, init);
+                               phys_pmd_init(pmd, paddr, paddr_end,
+                                             page_size_mask, prot, init);
                                continue;
                        }
                        /*
@@ -640,7 +642,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                if (page_size_mask & (1 << PG_LEVEL_1G)) {
                        if (!after_bootmem)
                                pages++;
-                       paddr_last = paddr_next;
+                       add_paddr_range_mapped(paddr, paddr_next);
                        continue;
                }
                prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -653,13 +655,13 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                                     pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
                                     init);
                        spin_unlock(&init_mm.page_table_lock);
-                       paddr_last = paddr_next;
+                       add_paddr_range_mapped(paddr, paddr_next);
                        continue;
                }
 
                pmd = alloc_low_page();
-               paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
-                                          page_size_mask, prot, init);
+               phys_pmd_init(pmd, paddr, paddr_end,
+                             page_size_mask, prot, init);
 
                spin_lock(&init_mm.page_table_lock);
                pud_populate_init(&init_mm, pud, pmd, init);
@@ -668,22 +670,23 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 
        update_page_count(PG_LEVEL_1G, pages);
 
-       return paddr_last;
+       return;
 }
 
-static unsigned long __meminit
+static void __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
              unsigned long page_size_mask, pgprot_t prot, bool init)
 {
-       unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+       unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
 
-       paddr_last = paddr_end;
        vaddr = (unsigned long)__va(paddr);
        vaddr_end = (unsigned long)__va(paddr_end);
 
-       if (!pgtable_l5_enabled())
-               return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-                                    page_size_mask, prot, init);
+       if (!pgtable_l5_enabled()) {
+               phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+                             page_size_mask, prot, init);
+               return;
+       }
 
        for (; vaddr < vaddr_end; vaddr = vaddr_next) {
                p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -705,33 +708,32 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
                if (!p4d_none(*p4d)) {
                        pud = pud_offset(p4d, 0);
-                       paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-                                                  page_size_mask, prot, init);
+                       phys_pud_init(pud, paddr, __pa(vaddr_end),
+                                     page_size_mask, prot, init);
                        continue;
                }
 
                pud = alloc_low_page();
-               paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-                                          page_size_mask, prot, init);
+               phys_pud_init(pud, paddr, __pa(vaddr_end),
+                             page_size_mask, prot, init);
 
                spin_lock(&init_mm.page_table_lock);
                p4d_populate_init(&init_mm, p4d, pud, init);
                spin_unlock(&init_mm.page_table_lock);
        }
 
-       return paddr_last;
+       return;
 }
 
-static unsigned long __meminit
+static void __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
                               unsigned long paddr_end,
                               unsigned long page_size_mask,
                               pgprot_t prot, bool init)
 {
        bool pgd_changed = false;
-       unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+       unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
 
-       paddr_last = paddr_end;
        vaddr = (unsigned long)__va(paddr_start);
        vaddr_end = (unsigned long)__va(paddr_end);
        vaddr_start = vaddr;
@@ -744,16 +746,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
                if (pgd_val(*pgd)) {
                        p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-                       paddr_last = phys_p4d_init(p4d, __pa(vaddr),
-                                                  __pa(vaddr_end),
-                                                  page_size_mask,
-                                                  prot, init);
+                       phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+                                     page_size_mask, prot, init);
                        continue;
                }
 
                p4d = alloc_low_page();
-               paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-                                          page_size_mask, prot, init);
+               phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+                             page_size_mask, prot, init);
 
                spin_lock(&init_mm.page_table_lock);
                if (pgtable_l5_enabled())
@@ -769,7 +769,7 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
        if (pgd_changed)
                sync_global_pgds(vaddr_start, vaddr_end - 1);
 
-       return paddr_last;
+       return;
 }
 
@@ -777,15 +777,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
                             unsigned long paddr_end,
                             unsigned long page_size_mask, pgprot_t prot)
 {
-       return __kernel_physical_mapping_init(paddr_start, paddr_end,
-                                             page_size_mask, prot, true);
+       __kernel_physical_mapping_init(paddr_start, paddr_end,
+                                      page_size_mask, prot, true);
 }
 
 /*
@@ -794,14 +794,14 @@ kernel_physical_mapping_init(unsigned long paddr_start,
  * when updating the mapping. The caller is responsible to flush the TLBs after
  * the function returns.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_change(unsigned long paddr_start,
                               unsigned long paddr_end,
                               unsigned long page_size_mask)
 {
-       return __kernel_physical_mapping_init(paddr_start, paddr_end,
-                                             page_size_mask, PAGE_KERNEL,
-                                             false);
+       __kernel_physical_mapping_init(paddr_start, paddr_end,
+                                      page_size_mask, PAGE_KERNEL,
+                                      false);
 }
 
 #ifndef CONFIG_NUMA
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 3f37b5c80bb3..6fea5f7edd48 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,13 +10,12 @@ static inline void *alloc_low_page(void)
 
 void early_ioremap_page_table_range_init(void);
 
-unsigned long kernel_physical_mapping_init(unsigned long start,
-                                           unsigned long end,
-                                           unsigned long page_size_mask,
-                                           pgprot_t prot);
-unsigned long kernel_physical_mapping_change(unsigned long start,
-                                             unsigned long end,
-                                             unsigned long page_size_mask);
+void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr);
+
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+                                  unsigned long page_size_mask, pgprot_t prot);
+void kernel_physical_mapping_change(unsigned long start, unsigned long end,
+                                    unsigned long page_size_mask);
 void zone_sizes_init(void);
 
 extern int after_bootmem;
-- 
2.39.5