From: Michel Lespinasse <walken@google.com>
To: Peter Zijlstra <peterz@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
Laurent Dufour <ldufour@linux.ibm.com>,
Vlastimil Babka <vbabka@suse.cz>,
Matthew Wilcox <willy@infradead.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Jerome Glisse <jglisse@redhat.com>,
Davidlohr Bueso <dave@stgolabs.net>,
David Rientjes <rientjes@google.com>
Cc: linux-mm <linux-mm@kvack.org>, Michel Lespinasse <walken@google.com>
Subject: [RFC PATCH 11/24] x86 fault handler: merge bad_area() functions
Date: Mon, 24 Feb 2020 12:30:44 -0800
Message-ID: <20200224203057.162467-12-walken@google.com>
In-Reply-To: <20200224203057.162467-1-walken@google.com>
This merges the bad_area(), bad_area_access_error(), and underlying
__bad_area() functions into a single unified function.
Passing a NULL vma triggers the prior bad_area() behavior, while
passing a non-NULL vma triggers the prior bad_area_access_error() behavior.
The control flow is very similar in all cases, and we now release the
mmap_sem read lock in a single place rather than three.
Text size is reduced by 356 bytes.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
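Note (not part of the commit): for readability, here is a sketch of the
merged function as it reads after this patch, reconstructed from the hunks
below. The only liberty taken is folding the !vma goto into an if block;
all identifiers, including the mm_read_unlock() wrapper introduced earlier
in this series, are as they appear in the diff.

static noinline void
bad_area(struct pt_regs *regs, unsigned long error_code,
         unsigned long address, struct vm_area_struct *vma)
{
        u32 pkey = 0;
        int si_code = SEGV_MAPERR;      /* NULL vma: prior bad_area() path */

        if (vma) {
                /* Non-NULL vma: prior bad_area_access_error() path. */
                if (bad_area_access_from_pkeys(error_code, vma)) {
                        pkey = vma_pkey(vma);
                        si_code = SEGV_PKUERR;
                } else {
                        si_code = SEGV_ACCERR;
                }
        }

        /* The mmap_sem read side is now released in exactly one place. */
        mm_read_unlock(current->mm);

        __bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
}

Callers in do_user_addr_fault() pass NULL on the no-vma paths and the vma
pointer on the access-error path, as the diff shows.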
arch/x86/mm/fault.c | 54 ++++++++++++++++++++-------------------------
1 file changed, 24 insertions(+), 30 deletions(-)
diff --git arch/x86/mm/fault.c arch/x86/mm/fault.c
index a8ce9e160b72..adbd2b03fcf9 100644
--- arch/x86/mm/fault.c
+++ arch/x86/mm/fault.c
@@ -919,26 +919,6 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
__bad_area_nosemaphore(regs, error_code, address, 0, SEGV_MAPERR);
}
-static void
-__bad_area(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, u32 pkey, int si_code)
-{
- struct mm_struct *mm = current->mm;
- /*
- * Something tried to access memory that isn't in our memory map..
- * Fix it, but check if it's kernel or user first..
- */
- mm_read_unlock(mm);
-
- __bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
-}
-
-static noinline void
-bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
-{
- __bad_area(regs, error_code, address, 0, SEGV_MAPERR);
-}
-
static inline bool bad_area_access_from_pkeys(unsigned long error_code,
struct vm_area_struct *vma)
{
@@ -957,9 +937,15 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
}
static noinline void
-bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, struct vm_area_struct *vma)
+bad_area(struct pt_regs *regs, unsigned long error_code,
+ unsigned long address, struct vm_area_struct *vma)
{
+ u32 pkey = 0;
+ int si_code = SEGV_MAPERR;
+
+ if (!vma)
+ goto unlock;
+
/*
* This OSPKE check is not strictly necessary at runtime.
* But, doing it this way allows compiler optimizations
@@ -986,12 +972,20 @@ bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
* 6. T1 : reaches here, sees vma_pkey(vma)=5, when we really
* faulted on a pte with its pkey=4.
*/
- u32 pkey = vma_pkey(vma);
-
- __bad_area(regs, error_code, address, pkey, SEGV_PKUERR);
+ pkey = vma_pkey(vma);
+ si_code = SEGV_PKUERR;
} else {
- __bad_area(regs, error_code, address, 0, SEGV_ACCERR);
+ si_code = SEGV_ACCERR;
}
+
+unlock:
+ /*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+ mm_read_unlock(current->mm);
+
+ __bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
}
static void
@@ -1401,17 +1395,17 @@ void do_user_addr_fault(struct pt_regs *regs,
vma = find_vma(mm, address);
if (unlikely(!vma)) {
- bad_area(regs, hw_error_code, address);
+ bad_area(regs, hw_error_code, address, NULL);
return;
}
if (likely(vma->vm_start <= address))
goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
- bad_area(regs, hw_error_code, address);
+ bad_area(regs, hw_error_code, address, NULL);
return;
}
if (unlikely(expand_stack(vma, address))) {
- bad_area(regs, hw_error_code, address);
+ bad_area(regs, hw_error_code, address, NULL);
return;
}
@@ -1421,7 +1415,7 @@ void do_user_addr_fault(struct pt_regs *regs,
*/
good_area:
if (unlikely(access_error(hw_error_code, vma))) {
- bad_area_access_error(regs, hw_error_code, address, vma);
+ bad_area(regs, hw_error_code, address, vma);
return;
}
--
2.25.0.341.g760bfbb309-goog