Subject: Re: [RFC PATCH v3 12/24] x86/mm: Modify ptep_set_wrprotect and pmdp_set_wrprotect for _PAGE_DIRTY_SW
From: Dave Hansen
Date: Fri, 31 Aug 2018 10:52:02 -0700
To: Andy Lutomirski
Cc: Jann Horn, Yu-cheng Yu, the arch/x86 maintainers, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar, kernel list, linux-doc@vger.kernel.org, Linux-MM, linux-arch, Linux API, Arnd Bergmann, Balbir Singh, Cyrill Gorcunov, Florian Weimer, "H. J. Lu", Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, "Ravi V. Shankar", "Shanbhogue, Vedvyas"

On 08/31/2018 10:46 AM, Andy Lutomirski wrote:
> On Thu, Aug 30, 2018 at 11:55 AM, Dave Hansen wrote:
>> That little hunk will definitely need to get updated with something
>> like:
>>
>> On processors enumerating support for CET, the processor will only
>> set the dirty flag on paging-structure entries in which the W flag
>> is 1.
>
> Can we get something much stronger, perhaps?
> Like this:
>
> On processors enumerating support for CET, the processor will write to
> the accessed and/or dirty flags atomically, as if using the LOCK
> CMPXCHG instruction. The memory access, any cached entries in any
> paging-structure caches, and the values in the paging-structure entry
> before and after writing the A and/or D bits will all be consistent.

There's some talk of this already in SDM Section 8.1.2.1, "Automatic
Locking":

> When updating page-directory and page-table entries. When updating
> page-directory and page-table entries, the processor uses locked
> cycles to set the accessed and dirty flag in the page-directory and
> page-table entries.

As for the A/D consistency, I'll see if I can share that before it hits
the SDM for real and see if it's sufficient for everybody.