From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Jul 2020 08:34:04 +0100
From: Will Deacon
To: Peter Xu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
	John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli,
	Linus Torvalds, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v4 05/26] mm/arm64: Use general page fault accounting
Message-ID: <20200701073403.GA14692@willie-the-truck>
References: <20200630204514.38711-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200630204514.38711-1-peterx@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Jun 30, 2020 at 04:45:14PM -0400, Peter Xu wrote:
> Use the general page fault accounting by passing regs into handle_mm_fault().
> It naturally solves the issue of multiple page fault accounting when a page
> fault retry happens. To do this, we pass the pt_regs pointer into
> __do_page_fault().
>
> CC: Catalin Marinas
> CC: Will Deacon
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Peter Xu
> ---
>  arch/arm64/mm/fault.c | 29 ++++++-----------------------
>  1 file changed, 6 insertions(+), 23 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index be29f4076fe3..f07333e86c2f 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -404,7 +404,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
>  #define VM_FAULT_BADACCESS	0x020000
>
>  static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
> -			   unsigned int mm_flags, unsigned long vm_flags)
> +			   unsigned int mm_flags, unsigned long vm_flags,
> +			   struct pt_regs *regs)
>  {
>  	struct vm_area_struct *vma = find_vma(mm, addr);
>
> @@ -428,7 +429,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
>  	 */
>  	if (!(vma->vm_flags & vm_flags))
>  		return VM_FAULT_BADACCESS;
> -	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
> +	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
>  }
>
>  static bool is_el0_instruction_abort(unsigned int esr)
> @@ -450,7 +451,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  {
>  	const struct fault_info *inf;
>  	struct mm_struct *mm = current->mm;
> -	vm_fault_t fault, major = 0;
> +	vm_fault_t fault;
>  	unsigned long vm_flags = VM_ACCESS_FLAGS;
>  	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
>
> @@ -516,8 +517,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  #endif
>  	}
>
> -	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
> -	major |= fault & VM_FAULT_MAJOR;
> +	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
>
>  	/* Quick path to respond to signals */
>  	if (fault_signal_pending(fault, regs)) {
> @@ -538,25 +538,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  	 * Handle the "normal" (no error) case first.
>  	 */
>  	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
> -			      VM_FAULT_BADACCESS)))) {
> -		/*
> -		 * Major/minor page fault accounting is only done
> -		 * once. If we go through a retry, it is extremely
> -		 * likely that the page will be found in page cache at
> -		 * that point.
> -		 */
> -		if (major) {
> -			current->maj_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
> -				      addr);
> -		} else {
> -			current->min_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
> -				      addr);
> -		}
> -
> +			      VM_FAULT_BADACCESS))))
>  		return 0;
> -	}

Thanks, looks good to me:

Acked-by: Will Deacon

Will
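
For readers following the series: the "general page fault accounting" referred to above is performed inside handle_mm_fault() itself once a non-NULL pt_regs pointer is passed down, which is why the arch-specific maj_flt/min_flt block removed in the hunk above is no longer needed. The sketch below shows roughly how that common helper behaves; the helper name mm_account_fault() comes from the wider series, and the exact conditions shown here are an approximation rather than the final upstream code.

/*
 * Rough sketch (approximate, not the exact upstream code) of the common
 * accounting that handle_mm_fault() performs when it is given a regs pointer.
 */
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

static inline void mm_account_fault(struct pt_regs *regs, unsigned long address,
				    unsigned int flags, vm_fault_t ret)
{
	/* Callers that cannot provide registers opt out of accounting. */
	if (!regs)
		return;

	/*
	 * Failed faults are not accounted, and a fault that must be retried
	 * is only accounted once it finally completes, so a retried fault is
	 * never counted twice.
	 */
	if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
		return;

	if ((ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED)) {
		current->maj_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	} else {
		current->min_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
	}
}

With something like this in the core mm code, do_page_fault() only has to pass regs through to __do_page_fault(), as the patch does.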