From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 06 Apr 2020 20:03:47 -0700
From: Andrew Morton
To: acme@kernel.org, akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
 anshuman.khandual@arm.com, arnd@arndb.de, benh@kernel.crashing.org,
 dalias@libc.org, dave.hansen@linux.intel.com, geert@linux-m68k.org,
 guoren@kernel.org, linux-mm@kvack.org, luto@kernel.org, mgorman@suse.de,
 mingo@redhat.com, mm-commits@vger.kernel.org, mpe@ellerman.id.au,
 npiggin@gmail.com, paulburton@kernel.org, paulus@ozlabs.org,
 paulus@samba.org, peterz@infradead.org, ralf@linux-mips.org,
 rostedt@goodmis.org, tglx@linutronix.de, torvalds@linux-foundation.org,
 vbabka@suse.cz, viro@zeniv.linux.org.uk, will@kernel.org,
 ysato@users.sourceforge.jp
Subject: [patch 006/166] mm/vma: make vma_is_accessible() available for general use
Message-ID: <20200407030347.TAKOqg6UF%akpm@linux-foundation.org>
In-Reply-To: <20200406200254.a69ebd9e08c4074e41ddebaf@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Anshuman Khandual
Subject: mm/vma: make vma_is_accessible() available for general use

Let's move the vma_is_accessible() helper to include/linux/mm.h, which
makes it available for general use.  While here, replace all remaining
open-coded VMA access checks with vma_is_accessible().

Link: http://lkml.kernel.org/r/1582520593-30704-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Acked-by: Geert Uytterhoeven
Acked-by: Guo Ren
Acked-by: Vlastimil Babka
Cc: Guo Ren
Cc: Geert Uytterhoeven
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: Mel Gorman
Cc: Alexander Viro
Cc: "Aneesh Kumar K.V"
Cc: Arnaldo Carvalho de Melo
Cc: Arnd Bergmann
Cc: Nick Piggin
Cc: Paul Mackerras
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 arch/csky/mm/fault.c    |    2 +-
 arch/m68k/mm/fault.c    |    2 +-
 arch/mips/mm/fault.c    |    2 +-
 arch/powerpc/mm/fault.c |    2 +-
 arch/sh/mm/fault.c      |    2 +-
 arch/x86/mm/fault.c     |    2 +-
 include/linux/mm.h      |    6 ++++++
 kernel/sched/fair.c     |    2 +-
 mm/gup.c                |    2 +-
 mm/memory.c             |    5 -----
 mm/mempolicy.c          |    3 +--
 mm/mmap.c               |    5 ++---
 12 files changed, 17 insertions(+), 18 deletions(-)

--- a/arch/csky/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/csky/mm/fault.c
@@ -141,7 +141,7 @@ good_area:
 		if (!(vma->vm_flags & VM_WRITE))
 			goto bad_area;
 	} else {
-		if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+		if (!vma_is_accessible(vma))
 			goto bad_area;
 	}
 
--- a/arch/m68k/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/m68k/mm/fault.c
@@ -125,7 +125,7 @@ good_area:
 	case 1:		/* read, present */
 		goto acc_err;
 	case 0:		/* read, not present */
-		if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+		if (!vma_is_accessible(vma))
 			goto acc_err;
 	}
 
--- a/arch/mips/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/mips/mm/fault.c
@@ -142,7 +142,7 @@ good_area:
 				goto bad_area;
 			}
 		} else {
-			if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+			if (!vma_is_accessible(vma))
 				goto bad_area;
 		}
 	}
--- a/arch/powerpc/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/powerpc/mm/fault.c
@@ -314,7 +314,7 @@ static bool access_error(bool is_write,
 		return false;
 	}
 
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return true;
 	/*
 	 * We should ideally do the vma pkey access check here. But in the
--- a/arch/sh/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/sh/mm/fault.c
@@ -355,7 +355,7 @@ static inline int access_error(int error
 		return 1;
 
 	/* read, not present: */
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return 1;
 
 	return 0;
--- a/arch/x86/mm/fault.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/arch/x86/mm/fault.c
@@ -1222,7 +1222,7 @@ access_error(unsigned long error_code, s
 		return 1;
 
 	/* read, not present: */
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return 1;
 
 	return 0;
--- a/include/linux/mm.h~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/include/linux/mm.h
@@ -629,6 +629,12 @@ static inline bool vma_is_foreign(struct
 
 	return false;
 }
+
+static inline bool vma_is_accessible(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+}
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
--- a/kernel/sched/fair.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/kernel/sched/fair.c
@@ -2799,7 +2799,7 @@ static void task_numa_work(struct callba
 		 * Skip inaccessible VMAs to avoid any confusion between
 		 * PROT_NONE and NUMA hinting ptes
 		 */
-		if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+		if (!vma_is_accessible(vma))
 			continue;
 
 		do {
--- a/mm/gup.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/mm/gup.c
@@ -1416,7 +1416,7 @@ long populate_vma_page_range(struct vm_a
 	 * We want mlock to succeed for regions that have any permissions
 	 * other than PROT_NONE.
 	 */
-	if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
+	if (vma_is_accessible(vma))
 		gup_flags |= FOLL_FORCE;
 
 	/*
--- a/mm/memory.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/mm/memory.c
@@ -3964,11 +3964,6 @@ static inline vm_fault_t wp_huge_pmd(str
 	return VM_FAULT_FALLBACK;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
-{
-	return vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE);
-}
-
 static vm_fault_t create_huge_pud(struct vm_fault *vmf)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
--- a/mm/mempolicy.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/mm/mempolicy.c
@@ -678,8 +678,7 @@ static int queue_pages_test_walk(unsigne
 
 	if (flags & MPOL_MF_LAZY) {
 		/* Similar to task_numa_work, skip inaccessible VMAs */
-		if (!is_vm_hugetlb_page(vma) &&
-			(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)) &&
+		if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) &&
 			!(vma->vm_flags & VM_MIXEDMAP))
 			change_prot_numa(vma, start, endvma);
 		return 1;
--- a/mm/mmap.c~mm-vma-make-vma_is_accessible-available-for-general-use
+++ a/mm/mmap.c
@@ -2358,8 +2358,7 @@ int expand_upwards(struct vm_area_struct
 		gap_addr = TASK_SIZE;
 
 	next = vma->vm_next;
-	if (next && next->vm_start < gap_addr &&
-			(next->vm_flags & (VM_WRITE|VM_READ|VM_EXEC))) {
+	if (next && next->vm_start < gap_addr && vma_is_accessible(next)) {
 		if (!(next->vm_flags & VM_GROWSUP))
 			return -ENOMEM;
 		/* Check that both stack segments have the same anon_vma? */
@@ -2440,7 +2439,7 @@ int expand_downwards(struct vm_area_stru
 	prev = vma->vm_prev;
 	/* Check that both stack segments have the same anon_vma? */
 	if (prev && !(prev->vm_flags & VM_GROWSDOWN) &&
-			(prev->vm_flags & (VM_WRITE|VM_READ|VM_EXEC))) {
+			vma_is_accessible(prev)) {
 		if (address - prev->vm_end < stack_guard_gap)
 			return -ENOMEM;
 	}
_
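To see in isolation what the helper tests, here is a minimal standalone
sketch (plain userspace C, not kernel code): a VMA counts as accessible when
any of the read/write/execute permission bits is set in its vm_flags, so only
PROT_NONE-style mappings fail the check.  The struct and flag definitions
below are simplified stand-ins for the kernel's struct vm_area_struct and
VM_* constants; only the vma_is_accessible() logic itself comes from the
patch above.

	/* Standalone illustration only -- simplified stand-ins for kernel types. */
	#include <stdbool.h>
	#include <stdio.h>

	#define VM_READ		0x00000001UL	/* mirrors the kernel's VM_READ  */
	#define VM_WRITE	0x00000002UL	/* mirrors the kernel's VM_WRITE */
	#define VM_EXEC		0x00000004UL	/* mirrors the kernel's VM_EXEC  */

	struct vm_area_struct {
		unsigned long vm_flags;		/* only the field this check needs */
	};

	/* Same logic as the helper moved into include/linux/mm.h by this patch. */
	static inline bool vma_is_accessible(struct vm_area_struct *vma)
	{
		return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
	}

	int main(void)
	{
		struct vm_area_struct readonly  = { .vm_flags = VM_READ };
		struct vm_area_struct prot_none = { .vm_flags = 0 };	/* PROT_NONE-style */

		printf("read-only VMA accessible: %d\n", vma_is_accessible(&readonly));	/* 1 */
		printf("PROT_NONE VMA accessible: %d\n", vma_is_accessible(&prot_none));	/* 0 */
		return 0;
	}

Note that the helper deliberately checks only the permission bits; call sites
that need more (VM_MIXEDMAP in mm/mempolicy.c, VM_GROWSUP/VM_GROWSDOWN in
mm/mmap.c) keep those extra flag tests alongside vma_is_accessible().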