Subject: [PATCH 2/4] mm: replace pud_write with pud_access_permitted in fault + gup paths
From: Dan Williams
Date: Fri, 10 Nov 2017 16:44:36 -0800
Message-ID: <151036107621.32713.2965839916788844402.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151036106541.32713.16875776773735515483.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <151036106541.32713.16875776773735515483.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID:
To: akpm@linux-foundation.org
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Dave Hansen, "David S. Miller", "Kirill A. Shutemov"

The 'access_permitted' helper is used in the gup-fast path and goes
beyond the simple _PAGE_RW check to also:

* validate that the mapping is writable from a protection keys
  standpoint

* validate that the pte has _PAGE_USER set, since all fault paths where
  pud_write() is checked must be referencing user memory

Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Kirill A. Shutemov
Signed-off-by: Dan Williams
---
 arch/sparc/mm/gup.c |    2 +-
 mm/huge_memory.c    |    2 +-
 mm/memory.c         |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index 5335ba3c850e..5ae2d0a01a70 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -114,7 +114,7 @@ static int gup_huge_pud(pud_t *pudp, pud_t pud, unsigned long addr,
 	if (!(pud_val(pud) & _PAGE_VALID))
 		return 0;
 
-	if (write && !pud_write(pud))
+	if (!pud_access_permitted(pud, write))
 		return 0;
 
 	refs = 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1981ed697dab..1e4e11275856 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1022,7 +1022,7 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 
 	assert_spin_locked(pud_lockptr(mm, pud));
 
-	if (flags & FOLL_WRITE && !pud_write(*pud))
+	if (!pud_access_permitted(*pud, flags & FOLL_WRITE))
 		return NULL;
 
 	if (pud_present(*pud) && pud_devmap(*pud))
diff --git a/mm/memory.c b/mm/memory.c
index a728bed16c20..64f86beadcca 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3987,7 +3987,7 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 
 		/* NUMA case for anonymous PUDs would go here */
 
-		if (dirty && !pud_write(orig_pud)) {
+		if (dirty && !pud_access_permitted(orig_pud, WRITE)) {
 			ret = wp_huge_pud(&vmf, orig_pud);
 			if (!(ret & VM_FAULT_FALLBACK))
 				return ret;
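
For reference, a small self-contained sketch of what the stricter check
buys over a bare pud_write() test. The bit positions mirror the x86 PTE
layout, but the helpers below are simplified stand-ins for the real arch
implementation (the protection-keys check in particular is reduced to a
stub), not the kernel code itself:

/* Illustrative stand-in for the arch pud_access_permitted() logic.
 * Bit positions mirror the x86 PTE layout; the pkey check is stubbed.
 */
#include <stdbool.h>
#include <stdio.h>

#define _PAGE_PRESENT	(1UL << 0)	/* entry is valid */
#define _PAGE_RW	(1UL << 1)	/* writable */
#define _PAGE_USER	(1UL << 2)	/* accessible from user space */

/* Stand-in for the protection-keys policy check (always allows here). */
static bool pkey_allows_access(unsigned long pudval, bool write)
{
	(void)pudval;
	(void)write;
	return true;
}

/* What the old check looked at: only the write bit. */
static bool pud_write(unsigned long pudval)
{
	return pudval & _PAGE_RW;
}

/* Sketch of the stricter check: present + user (+ write when requested),
 * then the protection-keys policy.
 */
static bool pud_access_permitted(unsigned long pudval, bool write)
{
	unsigned long need = _PAGE_PRESENT | _PAGE_USER;

	if (write)
		need |= _PAGE_RW;
	if ((pudval & need) != need)
		return false;
	return pkey_allows_access(pudval, write);
}

int main(void)
{
	/* A kernel-only mapping: writable, but _PAGE_USER is clear. */
	unsigned long kernel_pud = _PAGE_PRESENT | _PAGE_RW;

	printf("pud_write: %d\n", pud_write(kernel_pud));		/* 1 */
	printf("pud_access_permitted: %d\n",
	       pud_access_permitted(kernel_pud, true));			/* 0 */
	return 0;
}

On architectures that do not provide their own definition, the generic
fallback is essentially pud_present(pud) && (!(write) || pud_write(pud)),
so converting these call sites costs nothing there beyond the added
present check.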