From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 29 Jun 2022 03:34:52 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andy Lutomirski
Cc: Dave Hansen, Peter Zijlstra, x86@kernel.org, Kostya Serebryany,
	Andrey Ryabinin, Andrey Konovalov, Alexander Potapenko,
	Dmitry Vyukov, "H. J. Lu", Andi Kleen, Rick Edgecombe,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv3 4/8] x86/mm: Handle LAM on context switch
Message-ID: <20220629003452.37yojljbcl7jjgu5@black.fi.intel.com>
References: <20220610143527.22974-1-kirill.shutemov@linux.intel.com>
 <20220610143527.22974-5-kirill.shutemov@linux.intel.com>
 <9efc4129-e82b-740f-3d6d-67f1468879bb@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <9efc4129-e82b-740f-3d6d-67f1468879bb@kernel.org>
On Tue, Jun 28, 2022 at 04:33:21PM -0700, Andy Lutomirski wrote:
> On 6/10/22 07:35, Kirill A. Shutemov wrote:
> > Linear Address Masking mode for userspace pointers encoded in CR3 bits.
> > The mode is selected per-thread. Add new thread features that indicate
> > that the thread has Linear Address Masking enabled.
> > 
> > switch_mm_irqs_off() now respects these flags and constructs CR3
> > accordingly.
> > 
> > The active LAM mode gets recorded in the tlb_state.
> > 
> > Signed-off-by: Kirill A. Shutemov
> > ---
> >  arch/x86/include/asm/mmu.h         |  1 +
> >  arch/x86/include/asm/mmu_context.h | 24 ++++++++++++
> >  arch/x86/include/asm/tlbflush.h    |  3 ++
> >  arch/x86/mm/tlb.c                  | 62 ++++++++++++++++++++++--------
> >  4 files changed, 75 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
> > index 5d7494631ea9..d150e92163b6 100644
> > --- a/arch/x86/include/asm/mmu.h
> > +++ b/arch/x86/include/asm/mmu.h
> > @@ -40,6 +40,7 @@ typedef struct {
> >  #ifdef CONFIG_X86_64
> >  	unsigned short flags;
> > +	u64 lam_cr3_mask;
> >  #endif
> >  	struct mutex lock;
> > diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> > index b8d40ddeab00..e6eac047c728 100644
> > --- a/arch/x86/include/asm/mmu_context.h
> > +++ b/arch/x86/include/asm/mmu_context.h
> > @@ -91,6 +91,29 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
> >  }
> >  #endif
> > 
> > +#ifdef CONFIG_X86_64
> > +static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
> > +{
> > +	return mm->context.lam_cr3_mask;
> > +}
> > +
> > +static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
> > +{
> > +	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
> > +}
> > +
> > +#else
> > +
> > +static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
> > +{
> > +}
> > +#endif
> 
> Do we really need the ifdeffery here?  I see no real harm in having the
> field exist on 32-bit -- we don't care much about performance for 32-bit
> kernels.

The waste doesn't feel right to me. I would rather keep it. But sure, I can
do this if needed.

> > -	if (real_prev == next) {
> > +	if (real_prev == next && prev_lam == new_lam) {
> >  		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
> >  			   next->context.ctx_id);
> 
> This looks wrong to me.  If we change threads within the same mm but lam
> changes (which is certainly possible by a race if nothing else) then this
> will go down the "we really are changing mms" path, not the "we're not
> changing but we might need to flush something" path.

If LAM gets enabled, we must write CR3 with the new LAM mode. Without this
change, the real_prev == next case will not do that in the !was_lazy case.

Note that currently enabling LAM is done by setting the LAM mode in the mmu
context and doing switch_mm(current->mm, current->mm, current), so this is a
very important case.

-- 
 Kirill A. Shutemov
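[For readers following the thread: the lam_cr3_mask being switched into CR3 above selects the LAM mode, and the user-visible effect is that the CPU ignores the tag bits of a pointer on dereference. Below is a minimal user-space sketch of the untag arithmetic for the LAM_U57 mode; the bit positions (62:57) follow the LAM definition, but the helper names are purely illustrative, not kernel or hardware API.]

```c
#include <stdint.h>

/* Illustrative only: mimic the LAM_U57 untagging rule in user space.
 * With LAM_U57 active, the CPU ignores bits 62:57 of a user linear
 * address on dereference, so software may store a 6-bit tag there. */
#define LAM_U57_TAG_SHIFT 57
#define LAM_U57_TAG_MASK  (0x3fULL << LAM_U57_TAG_SHIFT)

/* Pack a 6-bit tag into bits 62:57 of an (untagged) user pointer. */
static inline uint64_t lam_u57_tag(uint64_t ptr, uint64_t tag)
{
	return ptr | ((tag & 0x3f) << LAM_U57_TAG_SHIFT);
}

/* Clear bits 62:57, recovering the address the CPU would actually use. */
static inline uint64_t lam_u57_untag(uint64_t ptr)
{
	return ptr & ~LAM_U57_TAG_MASK;
}
```

Tagging and then untagging must round-trip to the original canonical user
address, which is why a thread toggling the mode mid-flight must still get
the CR3 update described above.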