From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 26 Apr 2021 09:25:18 -0700 (PDT)
Subject: Re: [PATCH] riscv: Fix 32b kernel caused by 64b kernel mapping moving outside linear mapping
In-Reply-To: <97819559-0af0-0422-5b6c-30872f759daa@ghiti.fr>
From: Palmer Dabbelt
To: alex@ghiti.fr
CC: anup@brainfault.org, corbet@lwn.net, Paul Walmsley, aou@eecs.berkeley.edu, Arnd Bergmann, aryabinin@virtuozzo.com, glider@google.com, dvyukov@google.com, linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-mm@kvack.org

On Fri, 23 Apr 2021 13:49:10 PDT (-0700), alex@ghiti.fr wrote:
>
>
> On 4/23/21 12:57 PM, Palmer Dabbelt wrote:
>> On Fri, 23 Apr 2021 01:34:02 PDT (-0700), alex@ghiti.fr wrote:
>>> On 4/20/21 12:18 AM, Anup Patel wrote:
>>>> On Sat, Apr 17, 2021 at 10:52 PM Alexandre Ghiti wrote:
>>>>>
>>>>> Fix multiple leftovers when moving the kernel mapping outside the
>>>>> linear mapping for 64b kernel that left the 32b kernel unusable.
>>>>>
>>>>> Fixes: 4b67f48da707 ("riscv: Move kernel mapping outside of linear
>>>>> mapping")
>>>>> Signed-off-by: Alexandre Ghiti
>>>>
>>>> Quite a few #ifdef but I don't see any better way at the moment.
>>>> Maybe we can clean this up later. Otherwise looks good to me.
>>
>> Agreed.  I'd recently sent out a patch set that got NACK'd because we're
>> supposed to be relying on the compiler to optimize away references that
>> can be statically determined not to be exercised, which is probably the
>> way forward to getting rid of a lot of preprocessor stuff.  That all
>> seems very fragile and is a bigger problem than this, though, so it's
>> probably best to do it as its own thing.
>>
>>>> Reviewed-by: Anup Patel
>>>
>>> Thanks Anup!
>>>
>>> @Palmer: This is not on for-next yet, so rv32 is broken. This does
>>> not apply immediately on top of for-next though, so if you need a new
>>> version, I can do that. But this squashes nicely with the patch it fixes
>>> if you prefer.
>>
>> Thanks.  I just hadn't gotten to this one yet, but as you pointed out
>> it's probably best to just squash it.  It's in the version on for-next
>> now; it caused a few conflicts but I think I got everything sorted out.
>>
>> Now that everything is in I'm going to stop rewriting this stuff, as it
>> touches pretty much the whole tree.  I don't have much of a patch backlog
>> as of right now, and as the new stuff will be on top of it that will
>> make everyone's lives easier.
>>
>>>
>>> Let me know, I can do that very quickly.
>>>
>>> Alex
>>>
>>>>
>>>> Regards,
>>>> Anup
>>>>
>>>>> ---
>>>>>  arch/riscv/include/asm/page.h    |  9 +++++++++
>>>>>  arch/riscv/include/asm/pgtable.h | 16 ++++++++++++----
>>>>>  arch/riscv/mm/init.c             | 25 ++++++++++++++++++++++++-
>>>>>  3 files changed, 45 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
>>>>> index 22cfb2be60dc..f64b61296c0c 100644
>>>>> --- a/arch/riscv/include/asm/page.h
>>>>> +++ b/arch/riscv/include/asm/page.h
>>>>> @@ -90,15 +90,20 @@ typedef struct page *pgtable_t;
>>>>>
>>>>>  #ifdef CONFIG_MMU
>>>>>  extern unsigned long va_pa_offset;
>>>>> +#ifdef CONFIG_64BIT
>>>>>  extern unsigned long va_kernel_pa_offset;
>>>>> +#endif
>>>>>  extern unsigned long pfn_base;
>>>>>  #define ARCH_PFN_OFFSET        (pfn_base)
>>>>>  #else
>>>>>  #define va_pa_offset           0
>>>>> +#ifdef CONFIG_64BIT
>>>>>  #define va_kernel_pa_offset    0
>>>>> +#endif
>>>>>  #define ARCH_PFN_OFFSET        (PAGE_OFFSET >> PAGE_SHIFT)
>>>>>  #endif /* CONFIG_MMU */
>>>>>
>>>>> +#ifdef CONFIG_64BIT
>
> This one is incorrect, as kernel_virt_addr is also used in the 32b kernel,
> which causes a 32b failure when CONFIG_DEBUG_VIRTUAL is set; the following
> diff fixes it:
>
> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
> index e280ba60cb34..6a7761c86ec2 100644
> --- a/arch/riscv/include/asm/page.h
> +++ b/arch/riscv/include/asm/page.h
> @@ -106,9 +106,9 @@ extern unsigned long pfn_base;
> #define ARCH_PFN_OFFSET (PAGE_OFFSET >> PAGE_SHIFT)
> #endif /* CONFIG_MMU */
>
> -#ifdef CONFIG_64BIT
> extern unsigned long kernel_virt_addr;
>
> +#ifdef CONFIG_64BIT
> #define linear_mapping_pa_to_va(x) ((void *)((unsigned long)(x) + va_pa_offset))
> #ifdef CONFIG_XIP_KERNEL
> #define kernel_mapping_pa_to_va(y) ({ \

Can you send a patch for this one? I'm trying to avoid rebasing any
more, as there's more stuff on top of this now.
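(For context, a rough illustration of why the extern has to stay visible on
rv32: with CONFIG_DEBUG_VIRTUAL the out-of-line address checks are built for
32b configs too, so any check written against kernel_virt_addr must still
compile there.  This is only a hypothetical sketch, not the actual
arch/riscv/mm/physaddr.c code; the helper name and exact bounds check are
made up for illustration.)

/* Hypothetical sketch only -- not the real riscv CONFIG_DEBUG_VIRTUAL code. */
#include <linux/types.h>	/* phys_addr_t */
#include <linux/mmdebug.h>	/* VIRTUAL_BUG_ON() */
#include <asm/page.h>		/* kernel_virt_addr, __va_to_pa_nodebug() */
#include <asm/sections.h>	/* _end */

static phys_addr_t debug_symbol_to_phys(unsigned long x)
{
	/*
	 * Compiled for rv32 as well whenever CONFIG_DEBUG_VIRTUAL is set,
	 * so kernel_virt_addr must be declared outside #ifdef CONFIG_64BIT.
	 */
	VIRTUAL_BUG_ON(x < kernel_virt_addr || x > (unsigned long)_end);

	return __va_to_pa_nodebug(x);
}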
>
>>>>>  extern unsigned long kernel_virt_addr;
>>>>>
>>>>>  #define linear_mapping_pa_to_va(x)     ((void *)((unsigned long)(x) + va_pa_offset))
>>>>> @@ -112,6 +117,10 @@ extern unsigned long kernel_virt_addr;
>>>>>          (_x < kernel_virt_addr) ?                                      \
>>>>>                  linear_mapping_va_to_pa(_x) : kernel_mapping_va_to_pa(_x);     \
>>>>>          })
>>>>> +#else
>>>>> +#define __pa_to_va_nodebug(x)  ((void *)((unsigned long) (x) + va_pa_offset))
>>>>> +#define __va_to_pa_nodebug(x)  ((unsigned long)(x) - va_pa_offset)
>>>>> +#endif
>>>>>
>>>>>  #ifdef CONFIG_DEBUG_VIRTUAL
>>>>>  extern phys_addr_t __virt_to_phys(unsigned long x);
>>>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>>>> index 80e63a93e903..5afda75cc2c3 100644
>>>>> --- a/arch/riscv/include/asm/pgtable.h
>>>>> +++ b/arch/riscv/include/asm/pgtable.h
>>>>> @@ -16,19 +16,27 @@
>>>>>  #else
>>>>>
>>>>>  #define ADDRESS_SPACE_END      (UL(-1))
>>>>> -/*
>>>>> - * Leave 2GB for kernel and BPF at the end of the address space
>>>>> - */
>>>>> +
>>>>> +#ifdef CONFIG_64BIT
>>>>> +/* Leave 2GB for kernel and BPF at the end of the address space */
>>>>>  #define KERNEL_LINK_ADDR       (ADDRESS_SPACE_END - SZ_2G + 1)
>>>>> +#else
>>>>> +#define KERNEL_LINK_ADDR       PAGE_OFFSET
>>>>> +#endif
>>>>>
>>>>>  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
>>>>>  #define VMALLOC_END      (PAGE_OFFSET - 1)
>>>>>  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
>>>>>
>>>>> -/* KASLR should leave at least 128MB for BPF after the kernel */
>>>>>  #define BPF_JIT_REGION_SIZE    (SZ_128M)
>>>>> +#ifdef CONFIG_64BIT
>>>>> +/* KASLR should leave at least 128MB for BPF after the kernel */
>>>>>  #define BPF_JIT_REGION_START   PFN_ALIGN((unsigned long)&_end)
>>>>>  #define BPF_JIT_REGION_END     (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
>>>>> +#else
>>>>> +#define BPF_JIT_REGION_START   (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
>>>>> +#define BPF_JIT_REGION_END     (VMALLOC_END)
>>>>> +#endif
>>>>>
>>>>>  /* Modules always live before the kernel */
>>>>>  #ifdef CONFIG_64BIT
>>>>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>>>>> index 093f3a96ecfc..dc9b988e0778 100644
>>>>> --- a/arch/riscv/mm/init.c
>>>>> +++ b/arch/riscv/mm/init.c
>>>>> @@ -91,8 +91,10 @@ static void print_vm_layout(void)
>>>>>                    (unsigned long)VMALLOC_END);
>>>>>         print_mlm("lowmem", (unsigned long)PAGE_OFFSET,
>>>>>                   (unsigned long)high_memory);
>>>>> +#ifdef CONFIG_64BIT
>>>>>         print_mlm("kernel", (unsigned long)KERNEL_LINK_ADDR,
>>>>>                   (unsigned long)ADDRESS_SPACE_END);
>>>>> +#endif
>>>>>  }
>>>>>  #else
>>>>>  static void print_vm_layout(void) { }
>>>>> @@ -165,9 +167,11 @@ static struct pt_alloc_ops pt_ops;
>>>>>  /* Offset between linear mapping virtual address and kernel load address */
>>>>>  unsigned long va_pa_offset;
>>>>>  EXPORT_SYMBOL(va_pa_offset);
>>>>> +#ifdef CONFIG_64BIT
>>>>>  /* Offset between kernel mapping virtual address and kernel load address */
>>>>>  unsigned long va_kernel_pa_offset;
>>>>>  EXPORT_SYMBOL(va_kernel_pa_offset);
>>>>> +#endif
>>>>>  unsigned long pfn_base;
>>>>>  EXPORT_SYMBOL(pfn_base);
>>>>>
>>>>> @@ -410,7 +414,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>>         load_sz = (uintptr_t)(&_end) - load_pa;
>>>>>
>>>>>         va_pa_offset = PAGE_OFFSET - load_pa;
>>>>> +#ifdef CONFIG_64BIT
>>>>>         va_kernel_pa_offset = kernel_virt_addr - load_pa;
>>>>> +#endif
>>>>>
>>>>>         pfn_base = PFN_DOWN(load_pa);
>>>>>
>>>>> @@ -469,12 +475,16 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>>                            pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL);
>>>>>         dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1));
>>>>>  #else /* CONFIG_BUILTIN_DTB */
>>>>> +#ifdef CONFIG_64BIT
>>>>>         /*
>>>>>          * __va can't be used since it would return a linear mapping address
>>>>>          * whereas dtb_early_va will be used before setup_vm_final installs
>>>>>          * the linear mapping.
>>>>>          */
>>>>>         dtb_early_va = kernel_mapping_pa_to_va(dtb_pa);
>>>>> +#else
>>>>> +       dtb_early_va = __va(dtb_pa);
>>>>> +#endif /* CONFIG_64BIT */
>>>>>  #endif /* CONFIG_BUILTIN_DTB */
>>>>>  #else
>>>>>  #ifndef CONFIG_BUILTIN_DTB
>>>>> @@ -486,7 +496,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>>                            pa + PGDIR_SIZE, PGDIR_SIZE, PAGE_KERNEL);
>>>>>         dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PGDIR_SIZE - 1));
>>>>>  #else /* CONFIG_BUILTIN_DTB */
>>>>> +#ifdef CONFIG_64BIT
>>>>>         dtb_early_va = kernel_mapping_pa_to_va(dtb_pa);
>>>>> +#else
>>>>> +       dtb_early_va = __va(dtb_pa);
>>>>> +#endif /* CONFIG_64BIT */
>>>>>  #endif /* CONFIG_BUILTIN_DTB */
>>>>>  #endif
>>>>>         dtb_early_pa = dtb_pa;
>>>>> @@ -571,12 +585,21 @@ static void __init setup_vm_final(void)
>>>>>                 for (pa = start; pa < end; pa += map_size) {
>>>>>                         va = (uintptr_t)__va(pa);
>>>>>                         create_pgd_mapping(swapper_pg_dir, va, pa,
>>>>> -                                          map_size, PAGE_KERNEL);
>>>>> +                                          map_size,
>>>>> +#ifdef CONFIG_64BIT
>>>>> +                                          PAGE_KERNEL
>>>>> +#else
>>>>> +                                          PAGE_KERNEL_EXEC
>>>>> +#endif
>>>>> +                                          );
>>>>> +
>>>>>                 }
>>>>>         }
>>>>>
>>>>> +#ifdef CONFIG_64BIT
>>>>>         /* Map the kernel */
>>>>>         create_kernel_page_table(swapper_pg_dir, PMD_SIZE);
>>>>> +#endif
>>>>>
>>>>>         /* Clear fixmap PTE and PMD mappings */
>>>>>         clear_fixmap(FIX_PTE);
>>>>> --
>>>>> 2.20.1
>>>>>
>
> I agree with you, too much #ifdef, it is hardly readable: I'll take a look
> at how I can make it simpler.
>
> Sorry for all those fixes,
>
> Alex
>
>>
>> _______________________________________________
>> linux-riscv mailing list
>> linux-riscv@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-riscv
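(As an aside on the "too much #ifdef" point: below is a minimal sketch of the
IS_ENABLED()-based style Palmer alludes to above, letting the compiler discard
statically dead references instead of hiding them behind the preprocessor.  It
is only an illustration of the general direction, not Alex's eventual cleanup;
the helper name is hypothetical, and it assumes va_kernel_pa_offset and
kernel_virt_addr are declared for all configs so the dead branch still
compiles.)

/* Sketch only, assuming the 64-bit-only symbols are declared unconditionally. */
#include <linux/kconfig.h>	/* IS_ENABLED() */
#include <linux/init.h>		/* __init */
#include <linux/types.h>	/* uintptr_t */
#include <asm/page.h>		/* PAGE_OFFSET, va_pa_offset, va_kernel_pa_offset */

static void __init setup_vm_offsets(uintptr_t load_pa)	/* hypothetical helper */
{
	va_pa_offset = PAGE_OFFSET - load_pa;

	/* On rv32 this branch is statically dead and is optimized away. */
	if (IS_ENABLED(CONFIG_64BIT))
		va_kernel_pa_offset = kernel_virt_addr - load_pa;
}

The same pattern would cover the dtb_early_va and PAGE_KERNEL/PAGE_KERNEL_EXEC
conditionals, e.g.:

	dtb_early_va = IS_ENABLED(CONFIG_64BIT) ? kernel_mapping_pa_to_va(dtb_pa)
						: __va(dtb_pa);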