From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski
Date: Thu, 6 Dec 2018 10:53:50 -0800
Subject: Re: [PATCH 1/2] vmalloc: New flag for flush before releasing pages
References: <20181128000754.18056-1-rick.p.edgecombe@intel.com>
 <20181128000754.18056-2-rick.p.edgecombe@intel.com>
 <4883FED1-D0EC-41B0-A90F-1A697756D41D@gmail.com>
 <20181204160304.GB7195@arm.com>
 <51281e69a3722014f718a6840f43b2e6773eed90.camel@intel.com>
 <20181205114148.GA15160@arm.com>
To: Ard Biesheuvel
Cc: Andrew Lutomirski, Will Deacon, Rick Edgecombe, Nadav Amit, LKML,
 Daniel Borkmann, Jessica Yu, Steven Rostedt, Alexei Starovoitov, Linux-MM,
 Jann Horn, "Dock, Deneen T", Peter Zijlstra, Kristen Carlson Accardi,
 Andrew Morton, Ingo Molnar, Anil S Keshavamurthy, Kernel Hardening,
 Masami Hiramatsu, "Naveen N . Rao", "David S. Miller",
 Network Development, Dave Hansen

> On Dec 5, 2018, at 11:29 PM, Ard Biesheuvel wrote:
>
>> On Thu, 6 Dec 2018 at 00:16, Andy Lutomirski wrote:
>>
>>> On Wed, Dec 5, 2018 at 3:41 AM Will Deacon wrote:
>>>
>>>> On Tue, Dec 04, 2018 at 12:09:49PM -0800, Andy Lutomirski wrote:
>>>> On Tue, Dec 4, 2018 at 12:02 PM Edgecombe, Rick P wrote:
>>>>>
>>>>>> On Tue, 2018-12-04 at 16:03 +0000, Will Deacon wrote:
>>>>>> On Mon, Dec 03, 2018 at 05:43:11PM -0800, Nadav Amit wrote:
>>>>>>>> On Nov 27, 2018, at 4:07 PM, Rick Edgecombe wrote:
>>>>>>>>
>>>>>>>> Since vfree will lazily flush the TLB, but not lazily free the
>>>>>>>> underlying pages, it often leaves stale TLB entries pointing to
>>>>>>>> freed pages that could get re-used. This is undesirable for cases
>>>>>>>> where the memory being freed has special permissions, such as
>>>>>>>> executable.
>>>>>>>
>>>>>>> So I am trying to finish my patch-set for preventing transient W+X
>>>>>>> mappings from taking space, by handling kprobes & ftrace that I
>>>>>>> missed (thanks again for pointing it out).
>>>>>>>
>>>>>>> But all of a sudden, I don't understand why we have the problem
>>>>>>> that this (your) patch-set deals with at all. We already change the
>>>>>>> mappings to make the memory writable before freeing the memory, so
>>>>>>> why can't we make it non-executable at the same time? Actually, why
>>>>>>> do we make the module memory, including its data, executable before
>>>>>>> freeing it???
>>>>>>
>>>>>> Yeah, this is really confusing, but I have a suspicion it's a
>>>>>> combination of the various different configurations and hysterical
>>>>>> raisins. We can't rely on module_alloc() allocating from the vmalloc
>>>>>> area (see nios2), nor can we rely on disable_ro_nx() being available
>>>>>> at build time.
>>>>>>
>>>>>> If we *could* rely on module allocations always using vmalloc(),
>>>>>> then we could pass in Rick's new flag and drop disable_ro_nx()
>>>>>> altogether afaict -- who cares about the memory attributes of a
>>>>>> mapping that's about to disappear anyway?
>>>>>>
>>>>>> Is it just nios2 that does something different?
>>>>>>
>>>>> Yea, it is really intertwined. I think for x86, set_memory_nx
>>>>> everywhere would solve it as well; in fact, that was what I first
>>>>> thought the solution should be, until this was suggested. It's
>>>>> interesting that in the other thread Masami Hiramatsu referenced,
>>>>> set_memory_nx was suggested last year and would have inadvertently
>>>>> blocked this on x86. But on the other architectures, I have since
>>>>> learned, it is a bit different.
>>>>>
>>>>> It looks like most arches actually don't re-define set_memory_*, and
>>>>> so all of the frob_* functions are just no-ops. In that case,
>>>>> allocating RWX is needed to make it work at all, because that is what
>>>>> the allocation is going to stay at. So on those arches, set_memory_nx
>>>>> won't solve it, because it will do nothing.
>>>>>
>>>>> On x86, I think you cannot fully get rid of disable_ro_nx, because
>>>>> there is the changing of the permissions on the direct map as well.
>>>>> You don't want some other caller getting a page that was left RO when
>>>>> freed and then trying to write to it, if I understand this correctly.
>>>>>
>>>>
>>>> Exactly.
>>>
>>> Of course, I forgot about the linear mapping. On arm64, we've just
>>> queued support for reflecting changes to read-only permissions in the
>>> linear map [1]. So, whilst the linear map is always non-executable, we
>>> will need to make parts of it writable again when freeing the module.
>>>
>>>> After slightly more thought, I suggest renaming VM_IMMEDIATE_UNMAP to
>>>> VM_MAY_ADJUST_PERMS or similar.
>>>> It would have the semantics you want, but it would also call some arch
>>>> hooks to put back the direct map permissions before the flush. Does
>>>> that seem reasonable? It would need to be hooked up on the
>>>> architectures that implement set_memory_ro(), but that should be quite
>>>> easy. If nothing else, it could fall back to set_memory_ro() in the
>>>> absence of a better implementation.
>>>
>>> You mean set_memory_rw() here, right? Although, eliding the TLB
>>> invalidation would open up a window where the vmap mapping is
>>> executable and the linear mapping is writable, which is a bit rubbish.
>>>
>>
>> Right, and Rick pointed out the same issue. Instead, we should set the
>> direct map not-present or its ARM equivalent, then do the flush, then
>> make it RW. I assume this also works on arm and arm64, although I don't
>> know for sure. On x86, the CPU won't cache not-present PTEs.
>
> If we are going to unmap the linear alias, why not do it at vmalloc()
> time rather than vfree() time?

That's not totally nuts. Do we ever have code that expects __va() to work
on module data? Perhaps crypto code trying to encrypt static data, because
our APIs don't understand virtual addresses. I guess if highmem is ever
used for modules, then we should be fine.

RO instead of not-present might be safer. But I do like the idea of
renaming Rick's flag to something like VM_XPFO or VM_NO_DIRECT_MAP and
making it do all of this.

(It seems like some people call it the linear map and some people call it
the direct map. Is there any preference?)