From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71])
	by kanga.kvack.org (Postfix) with ESMTP id 874796B027E
	for ; Sun, 4 Feb 2018 20:29:26 -0500 (EST)
Received: by mail-pg0-f71.google.com with SMTP id v7so18549420pgo.8
	for ; Sun, 04 Feb 2018 17:29:26 -0800 (PST)
Received: from mx2.suse.de (mx2.suse.de. [195.135.220.15])
	by mx.google.com with ESMTPS id b69si6079814pfl.359.2018.02.04.17.28.06
	for (version=TLS1 cipher=AES128-SHA bits=128/128);
	Sun, 04 Feb 2018 17:28:06 -0800 (PST)
From: Davidlohr Bueso 
Subject: [PATCH 49/64] arch/xtensa: use mm locking wrappers
Date: Mon, 5 Feb 2018 02:27:39 +0100
Message-Id: <20180205012754.23615-50-dbueso@wotan.suse.de>
In-Reply-To: <20180205012754.23615-1-dbueso@wotan.suse.de>
References: <20180205012754.23615-1-dbueso@wotan.suse.de>
Sender: owner-linux-mm@kvack.org
List-ID: 
To: akpm@linux-foundation.org, mingo@kernel.org
Cc: peterz@infradead.org, ldufour@linux.vnet.ibm.com, jack@suse.cz,
	mhocko@kernel.org, kirill.shutemov@linux.intel.com,
	mawilcox@microsoft.com, mgorman@techsingularity.net, dave@stgolabs.net,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Davidlohr Bueso

From: Davidlohr Bueso 

This becomes quite straightforward with the mmrange in place.

Signed-off-by: Davidlohr Bueso 
---
 arch/xtensa/mm/fault.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 6f8e3e7cccb5..5e783e5583b6 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -75,7 +75,7 @@ void do_page_fault(struct pt_regs *regs)
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 retry:
-	down_read(&mm->mmap_sem);
+	mm_read_lock(mm, &mmrange);
 	vma = find_vma(mm, address);
 
 	if (!vma)
@@ -141,7 +141,7 @@ void do_page_fault(struct pt_regs *regs)
 		}
 	}
 
-	up_read(&mm->mmap_sem);
+	mm_read_unlock(mm, &mmrange);
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 	if (flags & VM_FAULT_MAJOR)
 		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
@@ -154,7 +154,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * Fix it, but check if it's kernel or user first..
 	 */
 bad_area:
-	up_read(&mm->mmap_sem);
+	mm_read_unlock(mm, &mmrange);
 	if (user_mode(regs)) {
 		current->thread.bad_vaddr = address;
 		current->thread.error_code = is_write;
@@ -173,7 +173,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * us unable to handle the page fault gracefully.
 	 */
 out_of_memory:
-	up_read(&mm->mmap_sem);
+	mm_read_unlock(mm, &mmrange);
 	if (!user_mode(regs))
 		bad_page_fault(regs, address, SIGKILL);
 	else
@@ -181,7 +181,7 @@ void do_page_fault(struct pt_regs *regs)
 	return;
 
 do_sigbus:
-	up_read(&mm->mmap_sem);
+	mm_read_unlock(mm, &mmrange);
 
 	/* Send a sigbus, regardless of whether we were in kernel
 	 * or user mode.
-- 
2.13.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
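
For context, mm_read_lock()/mm_read_unlock() are the mm locking wrappers
introduced earlier in this series; this patch only converts the call sites.
As a rough sketch only (the wrapper bodies below are an assumption, not the
series' actual definition; only the names and the mmrange argument come from
the patch itself, and the range token is assumed to be a struct range_lock
declared earlier in do_page_fault()), the simplest shape such wrappers could
take is to forward to the existing mmap_sem rwsem and merely carry the range
for later use:

static inline void mm_read_lock(struct mm_struct *mm,
				struct range_lock *mmrange)
{
	/* Assumed minimal behaviour: plain rwsem read lock; the range
	 * token is unused until mmap_sem is backed by a range lock.
	 */
	down_read(&mm->mmap_sem);
}

static inline void mm_read_unlock(struct mm_struct *mm,
				  struct range_lock *mmrange)
{
	up_read(&mm->mmap_sem);
}

With wrappers of this shape the conversion above is behaviour-preserving,
which matches the changelog's claim that the change is straightforward once
the mmrange is in place.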