From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Dave Hansen <dave.hansen@linux.intel.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Kostya Serebryany <kcc@google.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Alexander Potapenko <glider@google.com>,
Taras Madan <tarasmadan@google.com>,
Dmitry Vyukov <dvyukov@google.com>,
"H . J . Lu" <hjl.tools@gmail.com>,
Andi Kleen <ak@linux.intel.com>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Bharata B Rao <bharata@amd.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>,
Ashok Raj <ashok.raj@intel.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv13 01/16] x86: Allow atomic MM_CONTEXT flags setting
Date: Tue, 27 Dec 2022 06:08:14 +0300
Message-ID: <20221227030829.12508-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20221227030829.12508-1-kirill.shutemov@linux.intel.com>
So far there has been no need for atomic setting of MM context flags in
mm_context_t::flags: the flags are set early in exec and never change
after that.

LAM enabling requires atomic flag setting.

Convert the field to unsigned long and switch the MM_CONTEXT_* accesses
over to __set_bit() and test_bit().
No functional changes.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
arch/x86/entry/vsyscall/vsyscall_64.c | 2 +-
arch/x86/include/asm/mmu.h | 6 +++---
arch/x86/include/asm/mmu_context.h | 2 +-
arch/x86/kernel/process_64.c | 4 ++--
4 files changed, 7 insertions(+), 7 deletions(-)
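For reference, a minimal sketch of the access pattern this conversion
enables. The helper function and the SKETCH_MM_FLAG bit below are made up
purely for illustration and are not part of this patch or the kernel; the
real atomically-set flag arrives later in the series.

#include <linux/bitops.h>
#include <linux/mm_types.h>
#include <linux/printk.h>

/* Stand-in bit number for a flag a later patch in the series adds: */
#define SKETCH_MM_FLAG 2

/* Illustrative only, not part of this patch. */
static void mm_flags_sketch(struct mm_struct *mm)
{
	/*
	 * Non-atomic __set_bit() is enough while the mm is still private
	 * to exec, matching the plain assignments this patch replaces:
	 */
	__set_bit(MM_CONTEXT_UPROBE_IA32, &mm->context.flags);

	/*
	 * With flags as unsigned long, a later patch can set a bit
	 * atomically on a live mm via the regular bitops API:
	 */
	set_bit(SKETCH_MM_FLAG, &mm->context.flags);

	/* Readers use test_bit() in both cases: */
	if (test_bit(MM_CONTEXT_HAS_VSYSCALL, &mm->context.flags))
		pr_debug("vsyscall page is accessible on this mm\n");
}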
diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index 4af81df133ee..ed320e145cf9 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -317,7 +317,7 @@ static struct vm_area_struct gate_vma __ro_after_init = {
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
#ifdef CONFIG_COMPAT
- if (!mm || !(mm->context.flags & MM_CONTEXT_HAS_VSYSCALL))
+ if (!mm || !test_bit(MM_CONTEXT_HAS_VSYSCALL, &mm->context.flags))
return NULL;
#endif
if (vsyscall_mode == NONE)
diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 5d7494631ea9..efa3eaee522c 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -9,9 +9,9 @@
#include <linux/bits.h>
/* Uprobes on this MM assume 32-bit code */
-#define MM_CONTEXT_UPROBE_IA32 BIT(0)
+#define MM_CONTEXT_UPROBE_IA32 0
/* vsyscall page is accessible on this MM */
-#define MM_CONTEXT_HAS_VSYSCALL BIT(1)
+#define MM_CONTEXT_HAS_VSYSCALL 1
/*
* x86 has arch-specific MMU state beyond what lives in mm_struct.
@@ -39,7 +39,7 @@ typedef struct {
#endif
#ifdef CONFIG_X86_64
- unsigned short flags;
+ unsigned long flags;
#endif
struct mutex lock;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index b8d40ddeab00..53ef591a6166 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -181,7 +181,7 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
static inline bool is_64bit_mm(struct mm_struct *mm)
{
return !IS_ENABLED(CONFIG_IA32_EMULATION) ||
- !(mm->context.flags & MM_CONTEXT_UPROBE_IA32);
+ !test_bit(MM_CONTEXT_UPROBE_IA32, &mm->context.flags);
}
#else
static inline bool is_64bit_mm(struct mm_struct *mm)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 4e34b3b68ebd..8b06034e8c70 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -671,7 +671,7 @@ void set_personality_64bit(void)
task_pt_regs(current)->orig_ax = __NR_execve;
current_thread_info()->status &= ~TS_COMPAT;
if (current->mm)
- current->mm->context.flags = MM_CONTEXT_HAS_VSYSCALL;
+ __set_bit(MM_CONTEXT_HAS_VSYSCALL, &current->mm->context.flags);
/* TBD: overwrites user setup. Should have two bits.
But 64bit processes have always behaved this way,
@@ -708,7 +708,7 @@ static void __set_personality_ia32(void)
* uprobes applied to this MM need to know this and
* cannot use user_64bit_mode() at that time.
*/
- current->mm->context.flags = MM_CONTEXT_UPROBE_IA32;
+ __set_bit(MM_CONTEXT_UPROBE_IA32, &current->mm->context.flags);
}
current->personality |= force_personality32;
--
2.38.2
Thread overview: 28+ messages
2022-12-27 3:08 [PATCHv13 00/16] Linear Address Masking enabling Kirill A. Shutemov
2022-12-27 3:08 ` Kirill A. Shutemov [this message]
2023-01-04 19:07 ` [PATCHv13 01/16] x86: Allow atomic MM_CONTEXT flags setting Dave Hansen
2022-12-27 3:08 ` [PATCHv13 02/16] x86: CPUID and CR3/CR4 flags for Linear Address Masking Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 03/16] x86/mm: Handle LAM on context switch Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 04/16] mm: Introduce untagged_addr_remote() Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 05/16] x86/uaccess: Provide untagged_addr() and remove tags before address check Kirill A. Shutemov
2022-12-27 19:10 ` Linus Torvalds
2022-12-31 0:10 ` Kirill A. Shutemov
2022-12-31 0:42 ` Linus Torvalds
2023-01-02 13:55 ` David Laight
2023-01-02 19:05 ` Linus Torvalds
2023-01-03 8:37 ` David Laight
2023-01-07 9:10 ` Kirill A. Shutemov
2023-01-07 17:28 ` Linus Torvalds
2022-12-27 3:08 ` [PATCHv13 06/16] x86/mm: Provide arch_prctl() interface for LAM Kirill A. Shutemov
2023-01-04 19:55 ` Edgecombe, Rick P
2023-01-10 6:17 ` Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 07/16] x86/mm: Reduce untagged_addr() overhead until the first LAM user Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 08/16] mm: Expose untagging mask in /proc/$PID/status Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 09/16] iommu/sva: Replace pasid_valid() helper with mm_valid_pasid() Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 10/16] x86/mm/iommu/sva: Make LAM and SVA mutually exclusive Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 11/16] selftests/x86/lam: Add malloc and tag-bits test cases for linear-address masking Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 12/16] selftests/x86/lam: Add mmap and SYSCALL " Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 13/16] selftests/x86/lam: Add io_uring " Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 14/16] selftests/x86/lam: Add inherit " Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 15/16] selftests/x86/lam: Add ARCH_FORCE_TAGGED_SVA " Kirill A. Shutemov
2022-12-27 3:08 ` [PATCHv13 16/16] selftests/x86/lam: Add test cases for LAM vs thread creation Kirill A. Shutemov