From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Ellerman <mpe@ellerman.id.au>
To: Kefeng Wang, akpm@linux-foundation.org
Cc: Russell King, Catalin Marinas, Will Deacon, Nicholas Piggin,
	Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Alexander Gordeev, Gerald Schaefer, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, x86@kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, surenb@google.com, linux-mm@kvack.org,
	Kefeng Wang
Subject: Re: [PATCH v2 4/7] powerpc: mm: accelerate pagefault when badaccess
In-Reply-To: <20240403083805.1818160-5-wangkefeng.wang@huawei.com>
References: <20240403083805.1818160-1-wangkefeng.wang@huawei.com>
	<20240403083805.1818160-5-wangkefeng.wang@huawei.com>
Date: Tue, 09 Apr 2024 18:56:49 +1000
Message-ID: <871q7ec3se.fsf@mail.lhotse>
MIME-Version: 1.0
Content-Type: text/plain
Kefeng Wang writes:
> The access_[pkey]_error() of vma already checked under per-VMA lock, if
> it is a bad access, directly handle error, no need to retry with
> mmap_lock again. In order to release the correct lock, pass the
> mm_struct into bad_access_pkey()/bad_access(), if mm is NULL, release
> vma lock, or release mmap_lock. Since the page fault is handled under
> per-VMA lock, count it as a vma lock event with VMA_LOCK_SUCCESS.
>
> Signed-off-by: Kefeng Wang
> ---
>  arch/powerpc/mm/fault.c | 33 ++++++++++++++++++++-------------
>  1 file changed, 20 insertions(+), 13 deletions(-)

I thought there might be a nicer way to do this, plumbing the mm and vma
down through all those levels is a bit of a pain (vma->vm_mm exists
after all).

But I couldn't come up with anything obviously better, without doing
lots of refactoring first, which would be a pain to integrate into this
series.

So anyway, if the series goes ahead:

Acked-by: Michael Ellerman (powerpc)

cheers

> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 53335ae21a40..215690452495 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -71,23 +71,26 @@ static noinline int bad_area_nosemaphore(struct pt_regs *regs, unsigned long add
>  	return __bad_area_nosemaphore(regs, address, SEGV_MAPERR);
>  }
>  
> -static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code)
> +static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
> +		      struct mm_struct *mm, struct vm_area_struct *vma)
>  {
> -	struct mm_struct *mm = current->mm;
>
>  	/*
>  	 * Something tried to access memory that isn't in our memory map..
>  	 * Fix it, but check if it's kernel or user first..
>  	 */
> -	mmap_read_unlock(mm);
> +	if (mm)
> +		mmap_read_unlock(mm);
> +	else
> +		vma_end_read(vma);
>  
>  	return __bad_area_nosemaphore(regs, address, si_code);
>  }
>  
>  static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
> +				    struct mm_struct *mm,
>  				    struct vm_area_struct *vma)
>  {
> -	struct mm_struct *mm = current->mm;
>  	int pkey;
>  
>  	/*
> @@ -109,7 +112,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
>  	 */
>  	pkey = vma_pkey(vma);
>  
> -	mmap_read_unlock(mm);
> +	if (mm)
> +		mmap_read_unlock(mm);
> +	else
> +		vma_end_read(vma);
>  
>  	/*
>  	 * If we are in kernel mode, bail out with a SEGV, this will
> @@ -124,9 +130,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
>  	return 0;
>  }
>  
> -static noinline int bad_access(struct pt_regs *regs, unsigned long address)
> +static noinline int bad_access(struct pt_regs *regs, unsigned long address,
> +			       struct mm_struct *mm, struct vm_area_struct *vma)
>  {
> -	return __bad_area(regs, address, SEGV_ACCERR);
> +	return __bad_area(regs, address, SEGV_ACCERR, mm, vma);
>  }
>  
>  static int do_sigbus(struct pt_regs *regs, unsigned long address,
> @@ -479,13 +486,13 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
>  
>  	if (unlikely(access_pkey_error(is_write, is_exec,
>  				       (error_code & DSISR_KEYFAULT), vma))) {
> -		vma_end_read(vma);
> -		goto lock_mmap;
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		return bad_access_pkey(regs, address, NULL, vma);
>  	}
>  
>  	if (unlikely(access_error(is_write, is_exec, vma))) {
> -		vma_end_read(vma);
> -		goto lock_mmap;
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		return bad_access(regs, address, NULL, vma);
>  	}
>  
>  	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
> @@ -521,10 +528,10 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
>  
>  	if (unlikely(access_pkey_error(is_write, is_exec,
>  				       (error_code & DSISR_KEYFAULT),
>  				       vma)))
> -		return bad_access_pkey(regs, address, vma);
> +		return bad_access_pkey(regs, address, mm, vma);
>  
>  	if (unlikely(access_error(is_write, is_exec, vma)))
> -		return bad_access(regs, address);
> +		return bad_access(regs, address, mm, vma);
>  
>  	/*
>  	 * If for any reason at all we couldn't handle the fault,
> -- 
> 2.27.0
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv