From: "Huang, Ying" <ying.huang@intel.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Minchan Kim, David Hildenbrand, Nadav Amit, Andrew Morton, Hugh Dickins, Vlastimil Babka, Andrea Arcangeli, Andi Kleen, "Kirill A . Shutemov"
Subject: Re: [PATCH v3 4/7] mm/thp: Carry over dirty bit when thp splits on pmd
References: <20220809220100.20033-1-peterx@redhat.com> <20220809220100.20033-5-peterx@redhat.com>
Date: Wed, 10 Aug 2022 14:24:33 +0800
In-Reply-To: <20220809220100.20033-5-peterx@redhat.com> (Peter Xu's message of "Tue, 9 Aug 2022 18:00:57 -0400")
Message-ID: <877d3gfwf2.fsf@yhuang6-desk2.ccr.corp.intel.com>
Peter Xu <peterx@redhat.com> writes:

> Carry over the dirty bit from pmd to pte when a huge pmd splits.  It
> shouldn't be a correctness issue since when pmd_dirty() we'll have the page
> marked dirty anyway, however having dirty bit carried over helps the next
> initial writes of split ptes on some archs like x86.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/huge_memory.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 0611b2fd145a..e8e78d1bac5f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2005,7 +2005,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  	pgtable_t pgtable;
>  	pmd_t old_pmd, _pmd;
>  	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
> -	bool anon_exclusive = false;
> +	bool anon_exclusive = false, dirty = false;
>  	unsigned long addr;
>  	int i;
>
> @@ -2098,6 +2098,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  			SetPageDirty(page);
>  		write = pmd_write(old_pmd);
>  		young = pmd_young(old_pmd);
> +		dirty = pmd_dirty(old_pmd);

Nitpick: this can be put under

	if (pmd_dirty(old_pmd))
		SetPageDirty(page);

Not a big deal.

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

>  		soft_dirty = pmd_soft_dirty(old_pmd);
>  		uffd_wp = pmd_uffd_wp(old_pmd);
>
> @@ -2161,6 +2162,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  			entry = pte_wrprotect(entry);
>  		if (!young)
>  			entry = pte_mkold(entry);
> +		/* NOTE: this may set soft-dirty too on some archs */
> +		if (dirty)
> +			entry = pte_mkdirty(entry);
>  		if (soft_dirty)
>  			entry = pte_mksoft_dirty(entry);
>  		if (uffd_wp)