From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Martin Cracauer, Mike Rapoport, Hugh Dickins, Jerome Glisse,
    peterx@redhat.com, Kirill A. Shutemov, Matthew Wilcox,
    Pavel Emelyanov, Brian Geffon, Maya Gokhale, Denis Plotnikov,
    Andrea Arcangeli, Johannes Weiner, Dr. David Alan Gilbert,
    Linus Torvalds, Mike Kravetz, Marty McFadden, David Hildenbrand,
    Bobby Powers, Mel Gorman
Subject: [PATCH v6 03/16] mm: Introduce fault_signal_pending()
Date: Thu, 20 Feb 2020 09:54:19 -0500
Message-Id: <20200220145432.4561-4-peterx@redhat.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200220145432.4561-1-peterx@redhat.com>
References: <20200220145432.4561-1-peterx@redhat.com>

For most architectures, we have a quick path to detect a fatal signal
after handle_mm_fault(). Introduce a helper for that quick path.

It cleans up the current code a bit so we don't need to duplicate the
same check across architectures. More importantly, it gives us one
unified place where the signal is handled immediately after an
interrupted page fault, which will make it much easier to change the
signal-handling behavior for all the architectures later on.

Note that currently only some of the architectures use the new helper,
because the others have their own way to handle signals. The follow-up
patches will apply the helper to the rest of them.

Also note that the "regs" parameter of the new helper is not used yet;
it will be used very soon. It is kept in this patch only to avoid
touching all the architectures again in the follow-up patches.
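For illustration only (this sketch is not part of the patch): after
conversion, a typical architecture fault handler ends up with the
pattern below, mirroring the hunks in the diff that follows. The
function name and the surrounding setup are hypothetical; only
handle_mm_fault() and the new fault_signal_pending() come from the
kernel / this patch.

    #include <linux/mm.h>
    #include <linux/sched/signal.h>

    /* Hypothetical example of a converted arch fault handler. */
    static void example_do_page_fault(struct vm_area_struct *vma,
                                      unsigned long address,
                                      unsigned int flags,
                                      struct pt_regs *regs)
    {
            vm_fault_t fault;

            fault = handle_mm_fault(vma, address, flags);

            /*
             * If the fault was interrupted (VM_FAULT_RETRY) while a
             * fatal signal is pending, stop the fault path early; the
             * signal is handled on the way back to user space.
             */
            if (fault_signal_pending(fault, regs))
                    return;

            /* ... normal VM_FAULT_ERROR and retry handling follows ... */
    }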
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c        |  2 +-
 arch/arm/mm/fault.c          |  2 +-
 arch/hexagon/mm/vm_fault.c   |  2 +-
 arch/ia64/mm/fault.c         |  2 +-
 arch/m68k/mm/fault.c         |  2 +-
 arch/microblaze/mm/fault.c   |  2 +-
 arch/mips/mm/fault.c         |  2 +-
 arch/nds32/mm/fault.c        |  2 +-
 arch/nios2/mm/fault.c        |  2 +-
 arch/openrisc/mm/fault.c     |  2 +-
 arch/parisc/mm/fault.c       |  2 +-
 arch/riscv/mm/fault.c        |  2 +-
 arch/s390/mm/fault.c         |  3 +--
 arch/sparc/mm/fault_32.c     |  2 +-
 arch/sparc/mm/fault_64.c     |  2 +-
 arch/unicore32/mm/fault.c    |  2 +-
 arch/xtensa/mm/fault.c       |  2 +-
 include/linux/sched/signal.h | 13 +++++++++++++
 18 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index 741e61ef9d3f..aea33b599037 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -150,7 +150,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	   the fault.  */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index bd0f4821f7e1..937b81ff8649 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -295,7 +295,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	 * signal first. We do not need to release the mmap_sem because
 	 * it would already be released in __lock_page_or_retry in
 	 * mm/filemap.c. */
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			goto no_context;
 		return 0;
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index b3bc71680ae4..d19beaf11b4c 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -91,7 +91,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	/* The most common case -- we are done. */
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index c2f299fe9e04..211b4f439384 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -141,7 +141,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index e9b1d7585b43..a455e202691b 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -138,7 +138,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	fault = handle_mm_fault(vma, address, flags);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return 0;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index e6a810b0c7ad..cdde01dcdfc3 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -217,7 +217,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 1e8d00793784..0ee6fafc57bc 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -154,7 +154,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index 906dfb25353c..0e63f81eff5b 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -214,7 +214,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * signal first. We do not need to release the mmap_sem because it
 	 * would already be released in __lock_page_or_retry in mm/filemap.c.
 	 */
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			goto no_context;
 		return;
diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index 6a2e716b959f..704ace8ca0ee 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -133,7 +133,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 5d4d3a9691d0..85c7eb0c0186 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -161,7 +161,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index adbd5e2144a3..f9be1d1cb43f 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -304,7 +304,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index cf7248e07f43..1d3869e9ddef 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -117,7 +117,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * signal first. We do not need to release the mmap_sem because it
 	 * would already be released in __lock_page_or_retry in mm/filemap.c.
 	 */
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(tsk))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 7b0bb475c166..179cf92a56e5 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -480,8 +480,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * the fault.
 	 */
 	fault = handle_mm_fault(vma, address, flags);
-	/* No reason to continue if interrupted by SIGKILL. */
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			goto out_up;
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 89976c9b936c..6efbeb227644 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -237,7 +237,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 8b7ddbd14b65..dd1ed6555831 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -425,7 +425,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		goto exit_exception;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 76342de9cf8c..59d0e6ec2cfc 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -250,7 +250,7 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	 * signal first. We do not need to release the mmap_sem because
 	 * it would already be released in __lock_page_or_retry in
 	 * mm/filemap.c. */
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return 0;
 
 	if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) {
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index bee30a77cd70..59515905d4ad 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -110,7 +110,7 @@ void do_page_fault(struct pt_regs *regs)
 	 */
 	fault = handle_mm_fault(vma, address, flags);
 
-	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
+	if (fault_signal_pending(fault, regs))
 		return;
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 88050259c466..4c87ffce64d1 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -369,6 +369,19 @@ static inline int signal_pending_state(long state, struct task_struct *p)
 	return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
 }
 
+/*
+ * This should only be used in fault handlers to decide whether we
+ * should stop the current fault routine to handle the signals
+ * instead, especially with the case where we've got interrupted with
+ * a VM_FAULT_RETRY.
+ */
+static inline bool fault_signal_pending(unsigned int fault_flags,
+					struct pt_regs *regs)
+{
+	return unlikely((fault_flags & VM_FAULT_RETRY) &&
+			fatal_signal_pending(current));
+}
+
 /*
  * Reevaluate whether the task has signals pending delivery.
  * Wake the task if so.
-- 
2.24.1