From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org,
jannh@google.com, sroettger@google.com, willy@infradead.org,
gregkh@linuxfoundation.org, torvalds@linux-foundation.org
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
linux-mm@kvack.org, surenb@google.com, alex.sierra@amd.com,
apopple@nvidia.com, aneesh.kumar@linux.ibm.com,
axelrasmussen@google.com, ben@decadent.org.uk,
catalin.marinas@arm.com, david@redhat.com, dwmw@amazon.co.uk,
ying.huang@intel.com, hughd@google.com, joey.gouly@arm.com,
corbet@lwn.net, wangkefeng.wang@huawei.com,
Liam.Howlett@oracle.com, lstoakes@gmail.com,
mawupeng1@huawei.com, linmiaohe@huawei.com, namit@vmware.com,
peterx@redhat.com, peterz@infradead.org, ryan.roberts@arm.com,
shr@devkernel.io, vbabka@suse.cz, xiujianfeng@huawei.com,
yu.ma@intel.com, zhangpeng362@huawei.com, dave.hansen@intel.com,
luto@kernel.org, linux-hardening@vger.kernel.org
Subject: [RFC PATCH v2 3/8] mseal: add can_modify_mm and can_modify_vma
Date: Tue, 17 Oct 2023 09:08:10 +0000 [thread overview]
Message-ID: <20231017090815.1067790-4-jeffxu@chromium.org> (raw)
In-Reply-To: <20231017090815.1067790-1-jeffxu@chromium.org>
From: Jeff Xu <jeffxu@google.com>
can_modify_mm:
  checks the sealing flags for the given memory range.

can_modify_vma:
  checks the sealing flags for the given vma.
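
For illustration, later patches in this series are expected to gate the
modifying syscalls (mprotect/munmap/mremap/mmap) on these helpers. A
minimal sketch of such a call site is shown below; MM_SEAL_MPROTECT is
used here only as an assumed per-syscall seal bit and the errno is
illustrative, the real call sites and return values are introduced by
the later patches:

	/*
	 * Illustrative sketch only (not part of this patch): reject the
	 * operation when any vma in [start, end) carries the relevant
	 * seal bit.
	 */
	if (!can_modify_mm(mm, start, end, MM_SEAL_MPROTECT))
		return -EACCES;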
Signed-off-by: Jeff Xu <jeffxu@google.com>
---
include/linux/mm.h | 26 ++++++++++++++++++++++++++
mm/mseal.c | 42 ++++++++++++++++++++++++++++++++++++++++--
2 files changed, 66 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b511932df033..b09df8501987 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3299,6 +3299,32 @@ static inline void mm_populate(unsigned long addr, unsigned long len)
static inline void mm_populate(unsigned long addr, unsigned long len) {}
#endif
+#ifdef CONFIG_MSEAL
+extern bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+ unsigned long end, unsigned long checkSeals);
+
+extern bool can_modify_vma(struct vm_area_struct *vma,
+ unsigned long checkSeals);
+
+static inline unsigned long vma_seals(struct vm_area_struct *vma)
+{
+ return (vma->vm_seals & MM_SEAL_ALL);
+}
+
+#else
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+ unsigned long end, unsigned long checkSeals)
+{
+ return true;
+}
+
+static inline bool can_modify_vma(struct vm_area_struct *vma,
+ unsigned long checkSeals)
+{
+ return true;
+}
+#endif
+
/* These take the mm semaphore themselves */
extern int __must_check vm_brk(unsigned long, unsigned long);
extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long);
diff --git a/mm/mseal.c b/mm/mseal.c
index ffe4c4c3f1bc..3e9d1c732c38 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -26,6 +26,44 @@ static bool can_do_mseal(unsigned long types, unsigned long flags)
return true;
}
+/*
+ * Check if a vma is sealed for modification.
+ * Return true if modification is allowed.
+ */
+bool can_modify_vma(struct vm_area_struct *vma,
+ unsigned long checkSeals)
+{
+ if (checkSeals & vma_seals(vma))
+ return false;
+
+ return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if it is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end,
+ unsigned long checkSeals)
+{
+ struct vm_area_struct *vma;
+
+ VMA_ITERATOR(vmi, mm, start);
+
+ if (!checkSeals)
+ return true;
+
+ /* Check each vma in the range. */
+ for_each_vma_range(vmi, vma, end) {
+ if (!can_modify_vma(vma, checkSeals))
+ return false;
+ }
+
+ /* Allow by default. */
+ return true;
+}
+
/*
* Check if a seal type can be added to VMA.
*/
@@ -33,7 +71,7 @@ static bool can_add_vma_seals(struct vm_area_struct *vma, unsigned long newSeals
{
/* When SEAL_MSEAL is set, reject if a new type of seal is added */
if ((vma->vm_seals & MM_SEAL_MSEAL) &&
- (newSeals & ~(vma->vm_seals & MM_SEAL_ALL)))
+ (newSeals & ~(vma_seals(vma))))
return false;
return true;
@@ -45,7 +83,7 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
{
int ret = 0;
- if (addtypes & ~(vma->vm_seals & MM_SEAL_ALL)) {
+ if (addtypes & ~(vma_seals(vma))) {
/*
* Handle split at start and end.
* Note: sealed VMA doesn't merge with other VMAs.
--
2.42.0.655.g421f12c284-goog