From: Arnd Bergmann
Date: Wed, 22 Jul 2020 11:43:50 +0200
Subject: Re: [PATCH v5 1/4] riscv: Move kernel mapping to vmalloc zone
To: Palmer Dabbelt
Cc: Alexandre Ghiti, Albert Ou, Benjamin Herrenschmidt, Linux-MM,
 Michael Ellerman, Anup Patel, "linux-kernel@vger.kernel.org",
 Atish Patra, Paul Mackerras, Zong Li, Paul Walmsley, linux-riscv,
 linuxppc-dev
Content-Type: text/plain; charset="UTF-8"
On Tue, Jul 21, 2020 at 9:06 PM Palmer Dabbelt wrote:
>
> On Tue, 21 Jul 2020 11:36:10 PDT (-0700), alex@ghiti.fr wrote:
> > Let's try to make progress here: I add linux-mm in CC to get feedback on
> > this patch as it blocks sv48 support too.
>
> Sorry for being slow here. I haven't replied because I hadn't really fleshed
> out the design yet, but just so everyone's on the same page my problems with
> this are:
>
> * We waste vmalloc space on 32-bit systems, where there isn't a lot of it.

There is actually ongoing work to make 32-bit Arm kernels move vmlinux
into the vmalloc space, as part of the move to avoid highmem.

Overall, a 32-bit system would waste about 0.1% of its virtual address
space by having the kernel located in both the linear map and the
vmalloc area. It's not zero, but not that bad either.
With the typical split of 3072 MB user, 768 MB linear and 256 MB
vmalloc, it's also around 1.5% of the available vmalloc area (assuming
a 4 MB vmlinux in a typical 32-bit kernel), but the boundaries can be
changed arbitrarily if needed.

The eventual goal is to have a split of 3840 MB for either user or
linear map, plus 256 MB for vmalloc, including the kernel. Switching
between linear and user has a noticeable runtime overhead, but it
relaxes the limits for both user memory and lowmem, and it provides
somewhat stronger address space isolation.

Another potential idea would be to completely randomize the physical
addresses underneath the kernel by using a random permutation of the
pages in the kernel image. This adds even more overhead (virt_to_phys
may need to call vmalloc_to_page or similar) and may cause problems
with DMA into kernel .data across page boundaries.

> * Sort out how to maintain a linear map as the canonical hole moves around
> between the VA widths without adding a bunch of overhead to the virt2phys and
> friends. This is probably going to be the trickiest part, but I think if we
> just change the page table code to essentially lie about VAs when an sv39
> system runs an sv48+sv39 kernel we could make it work -- there'd be some
> logical complexity involved, but it would remain fast.

I assume you can't use the trick that x86 has, where all kernel
addresses are at the top of the 64-bit address space and user addresses
are at the bottom, regardless of the size of the page tables?

       Arnd