From: Peter Xu
Date: Wed, 20 Jul 2022 09:10:36 -0400
To: David Hildenbrand
Cc: Nadav Amit, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Mike Rapoport, Axel Rasmussen, Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin
Subject: Re: [RFC PATCH 01/14] userfaultfd: set dirty and young on writeprotect
In-Reply-To: <017facf0-7ef8-3faf-138d-3013a20b37db@redhat.com>
References: <20220718120212.3180-1-namit@vmware.com> <20220718120212.3180-2-namit@vmware.com> <017facf0-7ef8-3faf-138d-3013a20b37db@redhat.com>
On Wed, Jul 20, 2022 at 11:39:23AM +0200, David Hildenbrand wrote:
> On 19.07.22 22:47, Peter Xu wrote:
> > On Mon, Jul 18, 2022 at 05:01:59AM -0700, Nadav Amit wrote:
> >> From: Nadav Amit
> >>
> >> When userfaultfd makes a PTE writable, it can now change the PTE
> >> directly, in some cases, without triggering a page fault first.
> >> Yet, doing so might leave the PTE that was write-unprotected as old
> >> and clean. At least on x86, this would cause a >500 cycles overhead
> >> when the PTE is first accessed.
> >>
> >> Use MM_CP_WILL_NEED to set the PTE as young and dirty when
> >> userfaultfd gets a hint that the page is likely to be used. Avoid
> >> changing the PTE to young and dirty in other cases to avoid
> >> excessive writeback and messing with the page reclamation logic.
> >> Cc: Andrea Arcangeli
> >> Cc: Andrew Cooper
> >> Cc: Andrew Morton
> >> Cc: Andy Lutomirski
> >> Cc: Dave Hansen
> >> Cc: David Hildenbrand
> >> Cc: Peter Xu
> >> Cc: Peter Zijlstra
> >> Cc: Thomas Gleixner
> >> Cc: Will Deacon
> >> Cc: Yu Zhao
> >> Cc: Nick Piggin
> >> ---
> >>  include/linux/mm.h | 2 ++
> >>  mm/mprotect.c      | 9 ++++++++-
> >>  mm/userfaultfd.c   | 8 ++++++--
> >>  3 files changed, 16 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index 9cc02a7e503b..4afd75ce5875 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -1988,6 +1988,8 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
> >>  /* Whether this change is for write protecting */
> >>  #define  MM_CP_UFFD_WP			(1UL << 2) /* do wp */
> >>  #define  MM_CP_UFFD_WP_RESOLVE		(1UL << 3) /* Resolve wp */
> >> +/* Whether to try to mark entries as dirty as they are to be written */
> >> +#define  MM_CP_WILL_NEED		(1UL << 4)
> >>  #define  MM_CP_UFFD_WP_ALL		(MM_CP_UFFD_WP | \
> >>  					 MM_CP_UFFD_WP_RESOLVE)
> >>
> >> diff --git a/mm/mprotect.c b/mm/mprotect.c
> >> index 996a97e213ad..34c2dfb68c42 100644
> >> --- a/mm/mprotect.c
> >> +++ b/mm/mprotect.c
> >> @@ -82,6 +82,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
> >>  	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
> >>  	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
> >>  	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
> >> +	bool will_need = cp_flags & MM_CP_WILL_NEED;
> >>
> >>  	tlb_change_page_size(tlb, PAGE_SIZE);
> >>
> >> @@ -172,6 +173,9 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
> >>  				ptent = pte_clear_uffd_wp(ptent);
> >>  			}
> >>
> >> +			if (will_need)
> >> +				ptent = pte_mkyoung(ptent);
> >
> > For uffd path, UFFD_FLAGS_ACCESS_LIKELY|UFFD_FLAGS_WRITE_LIKELY are new
> > internal flags used with or without the new feature bit set. It means even
> > with !ACCESS_HINT we'll start to set young bit while we used not to?
> > Is that some kind of a light abi change?
> >
> > I'd suggest we only set will_need if ACCESS_HINT is set.
> >
> >> +
> >>  			/*
> >>  			 * In some writable, shared mappings, we might want
> >>  			 * to catch actual write access -- see
> >> @@ -187,8 +191,11 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
> >>  			 */
> >>  			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
> >>  			    !pte_write(ptent) &&
> >> -			    can_change_pte_writable(vma, addr, ptent))
> >> +			    can_change_pte_writable(vma, addr, ptent)) {
> >>  				ptent = pte_mkwrite(ptent);
> >> +				if (will_need)
> >> +					ptent = pte_mkdirty(ptent);
> >
> > Can we make this unconditional? IOW to cover both:
> >
> >   (1) When will_need is not set, or
> >   (2) mprotect() too
> >
> > David's patch is good in that we merged the unprotect and CoW. However
> > that's not complete because the dirty bit ops are missing.
> >
> > Here IMHO we should have a standalone patch to just add the dirty bit into
> > this logic when we'll grant write bit. IMHO it'll make the write+dirty
> > bits coherent again in all paths.
>
> I'm not sure I follow.
>
> We *surely* don't want to dirty random pages (especially once in the
> pagecache/swapcache) simply because we change protection.
>
> Just like we don't set all pages write+dirty in a writable VMA on a read
> fault.

IMO unprotect (in its generic mprotect form or its uffd form) is a stronger
sign that a page is being written than a read fault is, as many unprotects
happen precisely because the page was written and a message was generated.
But yeah, you have a point too; maybe we shouldn't assume such a condition.
Especially as long as we won't set write=1 without soft-dirty tracking
enabled, I think it should be safe.

-- 
Peter Xu
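
[Editorial note: the young/dirty policy debated above can be summarized as a
small user-space model. The bit values and the helper name `apply_hints` are
illustrative only; this is not the kernel's pte API, just a sketch of the
decision logic in the patched change_pte_range().]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative PTE flag bits -- NOT the kernel's real encodings. */
#define PTE_YOUNG 0x1u
#define PTE_DIRTY 0x2u
#define PTE_WRITE 0x4u

/*
 * Model of the patched flow: will_need mirrors MM_CP_WILL_NEED;
 * grant_write is true when the MM_CP_TRY_CHANGE_WRITABLE path
 * succeeds (i.e. can_change_pte_writable() returned true).
 */
unsigned apply_hints(unsigned pte, bool will_need, bool grant_write)
{
	if (will_need)
		pte |= PTE_YOUNG;		/* pte_mkyoung() */
	if (grant_write) {
		pte |= PTE_WRITE;		/* pte_mkwrite() */
		if (will_need)
			pte |= PTE_DIRTY;	/* pte_mkdirty(), hint-gated */
	}
	return pte;
}
```

Peter's suggestion amounts to dropping the inner `will_need` check so the
dirty bit is set whenever write is granted; David's objection is that doing
so would dirty pages that are never actually written.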