From: Andy Lutomirski <luto@kernel.org>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	dave.hansen@intel.com
Cc: ak@linux.intel.com, andreyknvl@gmail.com, ashok.raj@intel.com,
	bharata@amd.com, dave.hansen@linux.intel.com, dvyukov@google.com,
	glider@google.com, hjl.tools@gmail.com,
	jacob.jun.pan@linux.intel.com, kcc@google.com,
	kirill@shutemov.name, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, peterz@infradead.org,
	rick.p.edgecombe@intel.com, ryabinin.a.a@gmail.com,
	tarasmadan@google.com, x86@kernel.org
Subject: Re: [PATCHv11.1 04/16] x86/mm: Handle LAM on context switch
Date: Tue, 8 Nov 2022 19:54:35 -0800
Message-ID: <6782d309-5e4b-580c-fbbb-4388bda69bf3@kernel.org>
In-Reply-To: <20221107213558.27807-1-kirill.shutemov@linux.intel.com>

On 11/7/22 13:35, Kirill A. Shutemov wrote:
> Linear Address Masking (LAM) mode for userspace pointers is encoded in CR3 bits.
> The mode is selected per-process and stored in mm_context_t.
> 
> switch_mm_irqs_off() now respects selected LAM mode and constructs CR3
> accordingly.
> 
> The active LAM mode gets recorded in the tlb_state.
> 

> +static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
> +{
> +	return mm->context.lam_cr3_mask;

READ_ONCE -- otherwise this has a data race and might generate sanitizer 
complaints.

> +}
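
Something like this, perhaps (an untested sketch; the paired WRITE_ONCE()
would go on the side that sets the mask):

	static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
	{
		/* The mask can be updated by another thread enabling LAM. */
		return READ_ONCE(mm->context.lam_cr3_mask);
	}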

> @@ -491,6 +496,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   {
>   	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
>   	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
> +	unsigned long prev_lam = tlbstate_lam_cr3_mask();
> +	unsigned long new_lam = mm_lam_cr3_mask(next);

So I'm reading this again after drinking a cup of coffee.  new_lam is 
next's LAM mask according to mm_struct (and thus can change 
asynchronously due to a remote CPU).  prev_lam is based on tlbstate and 
can't change asynchronously, at least not with IRQs off.
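
Spelled out, I'd expect the snapshot to look roughly like this (a sketch,
not a demand):

	/* May be updated concurrently by a remote CPU enabling LAM; read it once. */
	unsigned long new_lam = READ_ONCE(next->context.lam_cr3_mask);

	/* Cached per-CPU in tlb_state; stable here because IRQs are off. */
	unsigned long prev_lam = tlbstate_lam_cr3_mask();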


>   	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
>   	unsigned cpu = smp_processor_id();
>   	u64 next_tlb_gen;
> @@ -520,7 +527,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   	 * isn't free.
>   	 */
>   #ifdef CONFIG_DEBUG_VM
> -	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
> +	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid, prev_lam))) {

So is the only purpose of tlbstate_lam_cr3_mask() to make this warning 
work?
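
For reference, my mental model of that helper is just a read-back of the
per-CPU cached value, roughly like below -- the field name is guessed from
the cover text, so treat it as a sketch:

	static inline unsigned long tlbstate_lam_cr3_mask(void)
	{
		/* LAM bits last folded into CR3 on this CPU. */
		return this_cpu_read(cpu_tlbstate.lam_cr3_mask);
	}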

>   		/*
>   		 * If we were to BUG here, we'd be very likely to kill
>   		 * the system so hard that we don't see the call trace.
> @@ -552,9 +559,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   	 * instruction.
>   	 */
>   	if (real_prev == next) {
> +		/* Not actually switching mm's */
>   		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>   			   next->context.ctx_id);
>   
> +		/*
> +		 * If this races with another thread that enables lam, 'new_lam'
> +		 * might not match 'prev_lam'.
> +		 */
> +

Indeed.

>   		/*
>   		 * Even in lazy TLB mode, the CPU should stay set in the
>   		 * mm_cpumask. The TLB shootdown code can figure out from
> @@ -622,15 +635,16 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   		barrier();
>   	}

> @@ -691,6 +705,10 @@ void initialize_tlbstate_and_flush(void)
>   	/* Assert that CR3 already references the right mm. */
>   	WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
>   
> +	/* LAM expected to be disabled in CR3 and init_mm */
> +	WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
> +	WARN_ON(mm_lam_cr3_mask(&init_mm));
> +

I think the callers all have init_mm selected, but the rest of this 
function is not really written with this assumption.  (But it does force 
ASID 0, which is at least a bizarre thing to do for non-init-mm.)

What's the purpose of this warning?  I'm okay with keeping it, but maybe 
also add a warning that fires if mm != &init_mm.
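
Concretely, something like this (untested):

	/* Everything else here assumes we are initializing for init_mm. */
	WARN_ON(mm != &init_mm);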



Thread overview: 29+ messages
2022-10-25  0:17 [PATCHv11 00/16] Linear Address Masking enabling Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 01/16] x86/mm: Fix CR3_ADDR_MASK Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 02/16] x86: CPUID and CR3/CR4 flags for Linear Address Masking Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 03/16] mm: Pass down mm_struct to untagged_addr() Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 04/16] x86/mm: Handle LAM on context switch Kirill A. Shutemov
2022-11-07 14:58   ` Andy Lutomirski
2022-11-07 17:14     ` Kirill A. Shutemov
2022-11-07 18:02       ` Dave Hansen
2022-11-07 21:35         ` [PATCHv11.1 " Kirill A. Shutemov
2022-11-09  3:54           ` Andy Lutomirski [this message]
2022-11-09  9:17             ` kirill
2022-10-25  0:17 ` [PATCHv11 05/16] x86/uaccess: Provide untagged_addr() and remove tags before address check Kirill A. Shutemov
2022-11-07 14:50   ` Andy Lutomirski
2022-11-07 17:33     ` Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 06/16] KVM: Serialize tagged address check against tagging enabling Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 07/16] x86/mm: Provide arch_prctl() interface for LAM Kirill A. Shutemov
2022-11-07 21:37   ` [PATCHv11.1 " Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 08/16] x86/mm: Reduce untagged_addr() overhead until the first LAM user Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 09/16] mm: Expose untagging mask in /proc/$PID/status Kirill A. Shutemov
2022-10-28 14:02   ` Catalin Marinas
2022-10-25  0:17 ` [PATCHv11 10/16] iommu/sva: Replace pasid_valid() helper with mm_valid_pasid() Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 11/16] x86/mm, iommu/sva: Make LAM and SVA mutually exclusive Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 12/16] selftests/x86/lam: Add malloc and tag-bits test cases for linear-address masking Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 13/16] selftests/x86/lam: Add mmap and SYSCALL " Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 14/16] selftests/x86/lam: Add io_uring " Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 15/16] selftests/x86/lam: Add inherit " Kirill A. Shutemov
2022-10-25  0:17 ` [PATCHv11 16/16] selftests/x86/lam: Add ARCH_FORCE_TAGGED_SVA " Kirill A. Shutemov
2022-11-07 11:25 ` [PATCHv11 00/16] Linear Address Masking enabling Kirill A. Shutemov
2022-11-07 14:59 ` Andy Lutomirski
