From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Fares Mehanna <faresx@amazon.de>
Cc: nh-open-source@amazon.com, "Roman Kagan" <rkagan@amazon.de>,
"Marc Zyngier" <maz@kernel.org>,
"Oliver Upton" <oliver.upton@linux.dev>,
"James Morse" <james.morse@arm.com>,
"Suzuki K Poulose" <suzuki.poulose@arm.com>,
"Zenghui Yu" <yuzenghui@huawei.com>,
"Catalin Marinas" <catalin.marinas@arm.com>,
"Will Deacon" <will@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Kemeng Shi" <shikemeng@huaweicloud.com>,
"Pierre-Clément Tosi" <ptosi@google.com>,
"Ard Biesheuvel" <ardb@kernel.org>,
"Mark Rutland" <mark.rutland@arm.com>,
"Javier Martinez Canillas" <javierm@redhat.com>,
"Arnd Bergmann" <arnd@arndb.de>, "Fuad Tabba" <tabba@google.com>,
"Mark Brown" <broonie@kernel.org>,
"Joey Gouly" <joey.gouly@arm.com>,
"Kristina Martsenko" <kristina.martsenko@arm.com>,
"Randy Dunlap" <rdunlap@infradead.org>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Jean-Philippe Brucker" <jean-philippe@linaro.org>,
"Mike Rapoport (IBM)" <rppt@kernel.org>,
"David Hildenbrand" <david@redhat.com>,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<linux-arm-kernel@lists.infradead.org>,
"open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.linux.dev>,
"open list" <linux-kernel@vger.kernel.org>,
"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>
Subject: Re: [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges
Date: Thu, 12 Sep 2024 12:40:33 -0400
Message-ID: <zghnfw2vvrvlxenzx3oi55hzznxbx2nhcuwfk5zpe42bm4dxzv@zknjtfa2fu7n>
In-Reply-To: <20240911143421.85612-2-faresx@amazon.de>
* Fares Mehanna <faresx@amazon.de> [240911 10:36]:
> To make sure the kernel mm-local mapping is untouched by the user, we will seal
> the VMA before changing the protection to be used by the kernel.
>
> This will guarantee that userspace can't unmap or alter this VMA while it is
> being used by the kernel.
>
> After the kernel is done with the secret memory, it will unseal the VMA to be
> able to unmap and free it.
>
> Unseal operation is not exposed to userspace.
We can't use the mseal feature for this; sealing is supposed to be a
one-way transition, so adding an unseal path works against the design of
the feature.
Willy describes the feature best here [1].
It is not clear from the changelog above or from the cover letter why
you need to go this route instead of holding the mmap lock while the
kernel is using the mapping.
[1] https://lore.kernel.org/lkml/ZS%2F3GCKvNn5qzhC4@casper.infradead.org/
>
> Signed-off-by: Fares Mehanna <faresx@amazon.de>
> Signed-off-by: Roman Kagan <rkagan@amazon.de>
> ---
> mm/internal.h | 7 +++++
> mm/mseal.c | 81 ++++++++++++++++++++++++++++++++-------------------
> 2 files changed, 58 insertions(+), 30 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index b4d86436565b..cf7280d101e9 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1501,6 +1501,8 @@ bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> unsigned long end);
> bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> unsigned long end, int behavior);
> +/* mm's mmap write lock must be taken before seal/unseal operation */
> +int do_mseal(unsigned long start, unsigned long end, bool seal);
> #else
> static inline int can_do_mseal(unsigned long flags)
> {
> @@ -1518,6 +1520,11 @@ static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> {
> return true;
> }
> +
> +static inline int do_mseal(unsigned long start, unsigned long end, bool seal)
> +{
> + return -EINVAL;
> +}
> #endif
>
> #ifdef CONFIG_SHRINKER_DEBUG
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 15bba28acc00..aac9399ffd5d 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -26,6 +26,11 @@ static inline void set_vma_sealed(struct vm_area_struct *vma)
> vm_flags_set(vma, VM_SEALED);
> }
>
> +static inline void clear_vma_sealed(struct vm_area_struct *vma)
> +{
> + vm_flags_clear(vma, VM_SEALED);
> +}
> +
> /*
> * check if a vma is sealed for modification.
> * return true, if modification is allowed.
> @@ -117,7 +122,7 @@ bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long
>
> static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> struct vm_area_struct **prev, unsigned long start,
> - unsigned long end, vm_flags_t newflags)
> + unsigned long end, vm_flags_t newflags, bool seal)
> {
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
> @@ -131,7 +136,10 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> goto out;
> }
>
> - set_vma_sealed(vma);
> + if (seal)
> + set_vma_sealed(vma);
> + else
> + clear_vma_sealed(vma);
> out:
> *prev = vma;
> return ret;
> @@ -167,9 +175,9 @@ static int check_mm_seal(unsigned long start, unsigned long end)
> }
>
> /*
> - * Apply sealing.
> + * Apply sealing / unsealing.
> */
> -static int apply_mm_seal(unsigned long start, unsigned long end)
> +static int apply_mm_seal(unsigned long start, unsigned long end, bool seal)
> {
> unsigned long nstart;
> struct vm_area_struct *vma, *prev;
> @@ -191,11 +199,14 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> unsigned long tmp;
> vm_flags_t newflags;
>
> - newflags = vma->vm_flags | VM_SEALED;
> + if (seal)
> + newflags = vma->vm_flags | VM_SEALED;
> + else
> + newflags = vma->vm_flags & ~(VM_SEALED);
> tmp = vma->vm_end;
> if (tmp > end)
> tmp = end;
> - error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> + error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags, seal);
> if (error)
> return error;
> nstart = vma_iter_end(&vmi);
> @@ -204,6 +215,37 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> return 0;
> }
>
> +int do_mseal(unsigned long start, unsigned long end, bool seal)
> +{
> + int ret;
> +
> + if (end < start)
> + return -EINVAL;
> +
> + if (end == start)
> + return 0;
> +
> + /*
> + * First pass, this helps to avoid
> + * partial sealing in case of error in input address range,
> + * e.g. ENOMEM error.
> + */
> + ret = check_mm_seal(start, end);
> + if (ret)
> + goto out;
> +
> + /*
> + * Second pass, this should succeed, unless there are errors
> + * from vma_modify_flags, e.g. merge/split error, or process
> + * reaching the max supported VMAs, however, those cases shall
> + * be rare.
> + */
> + ret = apply_mm_seal(start, end, seal);
> +
> +out:
> + return ret;
> +}
> +
> /*
> * mseal(2) seals the VM's meta data from
> * selected syscalls.
> @@ -256,7 +298,7 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> *
> * unseal() is not supported.
> */
> -static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> +static int __do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> {
> size_t len;
> int ret = 0;
> @@ -277,33 +319,12 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> return -EINVAL;
>
> end = start + len;
> - if (end < start)
> - return -EINVAL;
> -
> - if (end == start)
> - return 0;
>
> if (mmap_write_lock_killable(mm))
> return -EINTR;
>
> - /*
> - * First pass, this helps to avoid
> - * partial sealing in case of error in input address range,
> - * e.g. ENOMEM error.
> - */
> - ret = check_mm_seal(start, end);
> - if (ret)
> - goto out;
> -
> - /*
> - * Second pass, this should success, unless there are errors
> - * from vma_modify_flags, e.g. merge/split error, or process
> - * reaching the max supported VMAs, however, those cases shall
> - * be rare.
> - */
> - ret = apply_mm_seal(start, end);
> + ret = do_mseal(start, end, true);
>
> -out:
> mmap_write_unlock(current->mm);
> return ret;
> }
> @@ -311,5 +332,5 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
> flags)
> {
> - return do_mseal(start, len, flags);
> + return __do_mseal(start, len, flags);
> }
> --
> 2.40.1
>
>
>
>
> Amazon Web Services Development Center Germany GmbH
> Krausenstr. 38
> 10117 Berlin
> Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
> Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
> Sitz: Berlin
> Ust-ID: DE 365 538 597
>
>