Subject: Re: [PATCH v16 1/5] mm: add VM_DROPPABLE for designating always lazily freeable mappings
From: Frank van der Linden <fvdl@google.com>
Date: Tue, 28 May 2024 13:41:50 -0700
To: "Jason A. Donenfeld"
Cc: linux-kernel@vger.kernel.org, patches@lists.linux.dev, tglx@linutronix.de, linux-crypto@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, Greg Kroah-Hartman, Adhemerval Zanella Netto, "Carlos O'Donell", Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand, linux-mm@kvack.org
In-Reply-To: <20240528122352.2485958-2-Jason@zx2c4.com>
References: <20240528122352.2485958-1-Jason@zx2c4.com> <20240528122352.2485958-2-Jason@zx2c4.com>

On Tue, May 28, 2024 at 5:24 AM Jason A. Donenfeld wrote:
>
> The vDSO getrandom() implementation works with a buffer allocated with a
> new system call that has certain requirements:
>
> - It shouldn't be written to core dumps.
>   * Easy: VM_DONTDUMP.
> - It should be zeroed on fork.
>   * Easy: VM_WIPEONFORK.
>
> - It shouldn't be written to swap.
>   * Uh-oh: mlock is rlimited.
>   * Uh-oh: mlock isn't inherited by forks.
>
> - It shouldn't reserve actual memory, but it also shouldn't crash when
>   page faulting in memory if none is available
>   * Uh-oh: MAP_NORESERVE respects vm.overcommit_memory=2.
>   * Uh-oh: VM_NORESERVE means segfaults.
>
> It turns out that the vDSO getrandom() function has three really nice
> characteristics that we can exploit to solve this problem:
>
> 1) Due to being wiped during fork(), the vDSO code is already robust to
>    having the contents of the pages it reads zeroed out midway through
>    the function's execution.
>
> 2) In the absolute worst case of whatever contingency we're coding for,
>    we have the option to fallback to the getrandom() syscall, and
>    everything is fine.
>
> 3) The buffers the function uses are only ever useful for a maximum of
>    60 seconds -- a sort of cache, rather than a long term allocation.
>
> These characteristics mean that we can introduce VM_DROPPABLE, which
> has the following semantics:
>
> a) It never is written out to swap.
> b) Under memory pressure, mm can just drop the pages (so that they're
>    zero when read back again).
> c) If there's not enough memory to service a page fault, it's not fatal.
> d) It is inherited by fork.
> e) It doesn't count against the mlock budget, since nothing is locked.
>
> This is fairly simple to implement, with the one snag that we have to
> use 64-bit VM_* flags, but this shouldn't be a problem, since the only
> consumers will probably be 64-bit anyway.
>
> This way, allocations used by vDSO getrandom() can use:
>
>     VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
>
> And there will be no problem with OOMing, crashing on overcommitment,
> using memory when not in use, not wiping on fork(), coredumps, or
> writing out to swap.
>
> Cc: linux-mm@kvack.org
> Signed-off-by: Jason A. Donenfeld
> ---
>  fs/proc/task_mmu.c             | 3 +++
>  include/linux/mm.h             | 8 ++++++++
>  include/trace/events/mmflags.h | 7 +++++++
>  mm/Kconfig                     | 3 +++
>  mm/memory.c                    | 4 ++++
>  mm/mempolicy.c                 | 3 +++
>  mm/mprotect.c                  | 2 +-
>  mm/rmap.c                      | 8 +++++---
>  8 files changed, 34 insertions(+), 4 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e5a5f015ff03..b5a59e57bde1 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -706,6 +706,9 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
>  #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
>  #ifdef CONFIG_X86_USER_SHADOW_STACK
>  		[ilog2(VM_SHADOW_STACK)] = "ss",
> +#endif
> +#ifdef CONFIG_NEED_VM_DROPPABLE
> +		[ilog2(VM_DROPPABLE)] = "dp",
>  #endif
>  	};
>  	size_t i;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 9849dfda44d4..5978cb4cc21c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -321,12 +321,14 @@ extern unsigned int kobjsize(const void *objp);
>  #define VM_HIGH_ARCH_BIT_3	35	/* bit only usable on 64-bit architectures */
>  #define VM_HIGH_ARCH_BIT_4	36	/* bit only usable on 64-bit architectures */
>  #define VM_HIGH_ARCH_BIT_5	37	/* bit only usable on 64-bit architectures */
> +#define VM_HIGH_ARCH_BIT_6	38	/* bit only usable on 64-bit architectures */
>  #define VM_HIGH_ARCH_0	BIT(VM_HIGH_ARCH_BIT_0)
>  #define VM_HIGH_ARCH_1	BIT(VM_HIGH_ARCH_BIT_1)
>  #define VM_HIGH_ARCH_2	BIT(VM_HIGH_ARCH_BIT_2)
>  #define VM_HIGH_ARCH_3	BIT(VM_HIGH_ARCH_BIT_3)
>  #define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
>  #define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
> +#define VM_HIGH_ARCH_6	BIT(VM_HIGH_ARCH_BIT_6)
>  #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
>
>  #ifdef CONFIG_ARCH_HAS_PKEYS
> @@ -357,6 +359,12 @@ extern unsigned int kobjsize(const void *objp);
>  # define VM_SHADOW_STACK	VM_NONE
>  #endif
>
> +#ifdef CONFIG_NEED_VM_DROPPABLE
> +# define VM_DROPPABLE	VM_HIGH_ARCH_6
> +#else
> +# define VM_DROPPABLE	VM_NONE
> +#endif
> +
>  #if defined(CONFIG_X86)
>  # define VM_PAT	VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
>  #elif defined(CONFIG_PPC)
> diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> index e46d6e82765e..fab7848df50a 100644
> --- a/include/trace/events/mmflags.h
> +++ b/include/trace/events/mmflags.h
> @@ -165,6 +165,12 @@ IF_HAVE_PG_ARCH_X(arch_3)
>  # define IF_HAVE_UFFD_MINOR(flag, name)
>  #endif
>
> +#ifdef CONFIG_NEED_VM_DROPPABLE
> +# define IF_HAVE_VM_DROPPABLE(flag, name) {flag, name},
> +#else
> +# define IF_HAVE_VM_DROPPABLE(flag, name)
> +#endif
> +
>  #define __def_vmaflag_names \
>  	{VM_READ,	"read"	}, \
>  	{VM_WRITE,	"write"	}, \
> @@ -197,6 +203,7 @@ IF_HAVE_VM_SOFTDIRTY(VM_SOFTDIRTY,	"softdirty"	) \
>  	{VM_MIXEDMAP,	"mixedmap"	}, \
>  	{VM_HUGEPAGE,	"hugepage"	}, \
>  	{VM_NOHUGEPAGE,	"nohugepage"	}, \
> +IF_HAVE_VM_DROPPABLE(VM_DROPPABLE,	"droppable"	) \
>  	{VM_MERGEABLE,	"mergeable"	} \
>
>  #define show_vma_flags(flags) \
> diff --git a/mm/Kconfig b/mm/Kconfig
> index b4cb45255a54..6cd65ea4b3ad 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -1056,6 +1056,9 @@ config ARCH_USES_HIGH_VMA_FLAGS
>  	bool
>  config ARCH_HAS_PKEYS
>  	bool
> +config NEED_VM_DROPPABLE
> +	select ARCH_USES_HIGH_VMA_FLAGS
> +	bool
>
>  config ARCH_USES_PG_ARCH_X
>  	bool
> diff --git a/mm/memory.c b/mm/memory.c
> index b5453b86ec4b..57b03fc73159 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5689,6 +5689,10 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>
>  	lru_gen_exit_fault();
>
> +	/* If the mapping is droppable, then errors due to OOM aren't fatal. */
> +	if (vma->vm_flags & VM_DROPPABLE)
> +		ret &= ~VM_FAULT_OOM;
> +
>  	if (flags & FAULT_FLAG_USER) {
>  		mem_cgroup_exit_user_fault();
>  		/*
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index aec756ae5637..a66289f1d931 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2300,6 +2300,9 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct
>  	pgoff_t ilx;
>  	struct page *page;
>
> +	if (vma->vm_flags & VM_DROPPABLE)
> +		gfp |= __GFP_NOWARN | __GFP_NORETRY;
> +
>  	pol = get_vma_policy(vma, addr, order, &ilx);
>  	page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order,
>  				       pol, ilx, numa_node_id());
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 94878c39ee32..88ff3ecc08a1 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -622,7 +622,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
>  	    may_expand_vm(mm, oldflags, nrpages))
>  		return -ENOMEM;
>  	if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
> -			  VM_SHARED|VM_NORESERVE))) {
> +			  VM_SHARED|VM_NORESERVE|VM_DROPPABLE))) {
>  		charged = nrpages;
>  		if (security_vm_enough_memory_mm(mm, charged))
>  			return -ENOMEM;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index e8fc5ecb59b2..d873a3f06506 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1397,7 +1397,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>  	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
>  	VM_BUG_ON_VMA(address < vma->vm_start ||
>  			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> -	__folio_set_swapbacked(folio);
> +	if (!(vma->vm_flags & VM_DROPPABLE))
> +		__folio_set_swapbacked(folio);
>  	__folio_set_anon(folio, vma, address, true);
>
>  	if (likely(!folio_test_large(folio))) {
> @@ -1841,7 +1842,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 * plus the rmap(s) (dropped by discard:).
>  			 */
>  			if (ref_count == 1 + map_count &&
> -			    !folio_test_dirty(folio)) {
> +			    (!folio_test_dirty(folio) || (vma->vm_flags & VM_DROPPABLE))) {
>  				dec_mm_counter(mm, MM_ANONPAGES);
>  				goto discard;
>  			}
> @@ -1851,7 +1852,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 * discarded. Remap the page to page table.
>  			 */
>  			set_pte_at(mm, address, pvmw.pte, pteval);
> -			folio_set_swapbacked(folio);
> +			if (!(vma->vm_flags & VM_DROPPABLE))
> +				folio_set_swapbacked(folio);
>  			ret = false;
>  			page_vma_mapped_walk_done(&pvmw);
>  			break;
> --
> 2.44.0
>

This seems like an obvious question, but I can't seem to find a message asking this in the long history of this patchset: VM_DROPPABLE seems very close to MADV_FREE lazyfree memory. Could those functionalities be folded into one?

- Frank