Date: Thu, 17 Mar 2022 12:23:13 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Bibo Mao <maobibo@loongson.cn>
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Anshuman Khandual
Subject: Re: [PATCH v2] mm: add access/dirty bit on numa page fault
References: <20220317065033.2635123-1-maobibo@loongson.cn>
In-Reply-To: <20220317065033.2635123-1-maobibo@loongson.cn>

On Thu, Mar 17, 2022 at 02:50:33AM -0400, Bibo Mao wrote:
> On platforms like x86/arm which support hardware page walking, the
> access and dirty bits are set by hardware. On platforms without such
> hardware support, the access and dirty bits are set by software in
> the next trap.
>
> During a NUMA page fault, the dirty bit can be added to the old pte
> if migration fails on a write fault.
> And if migration succeeds, the access bit can be added to the
> migrated new pte, and the dirty bit can also be added on a write
> fault.

Is this a correctness problem, in which case this will need to be
backported, or is this a performance problem, in which case can you
share some numbers?

> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
> ---
>  mm/memory.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c125c4969913..65813bec9c06 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>  		page_nid = target_nid;
>  		flags |= TNF_MIGRATED;
> +
> +		/*
> +		 * Update the pte with the access bit, and with the
> +		 * dirty bit for a write fault.
> +		 */
> +		spin_lock(vmf->ptl);
> +		pte = *vmf->pte;
> +		pte = pte_mkyoung(pte);
> +		if (was_writable) {
> +			pte = pte_mkwrite(pte);
> +			if (vmf->flags & FAULT_FLAG_WRITE)
> +				pte = pte_mkdirty(pte);
> +		}
> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> +		update_mmu_cache(vma, vmf->address, vmf->pte);
> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  	} else {
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>  	pte = pte_mkyoung(pte);
> -	if (was_writable)
> +	if (was_writable) {
>  		pte = pte_mkwrite(pte);
> +		if (vmf->flags & FAULT_FLAG_WRITE)
> +			pte = pte_mkdirty(pte);
> +	}
>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> --
> 2.31.1
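
To make the commit message's premise concrete: on an architecture with
software-managed access/dirty bits, a pte installed without
pte_young()/pte_dirty() costs one more trap on the next touch of the
page, and the handler for that trap does nothing except set the bits
and retry. Below is a minimal sketch of such a fixup path. This is not
code from the patch or from any real architecture: the helper name
fixup_software_ad_bits() is invented, real arches do this in their
TLB-refill or fault entry code, and the page-table lock is elided for
brevity.

	#include <linux/mm.h>

	/*
	 * Hypothetical software access/dirty-bit fixup, of the kind
	 * run on the "next trap" the commit message refers to.
	 * Returns true if the trap was only a bit fixup and the
	 * faulting instruction can simply be retried.
	 */
	static bool fixup_software_ad_bits(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep,
					   bool write)
	{
		pte_t pte = *ptep;

		/* Not present: a real fault, not a bit fixup. */
		if (!pte_present(pte))
			return false;

		/* Record the access ... */
		pte = pte_mkyoung(pte);
		/* ... and the write, if the pte permits one. */
		if (write && pte_write(pte))
			pte = pte_mkdirty(pte);

		set_pte_at(vma->vm_mm, addr, ptep, pte);
		update_mmu_cache(vma, addr, ptep);
		return true;
	}

Setting pte_mkyoung()/pte_mkdirty() in do_numa_page() itself, as the
patch does, means this round trip through the fault handler never
happens for the pte that was just installed.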