From: Jason Gunthorpe
To: Jerome Glisse, Ralph Campbell, Felix.Kuehling@amd.com
Cc: linux-mm@kvack.org, John Hubbard, dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org, Christoph Hellwig, Philip Yang,
	Jason Gunthorpe
Subject: [PATCH v2 hmm 2/9] mm/hmm: return the fault type from hmm_pte_need_fault()
Date: Fri, 27 Mar 2020 17:00:14 -0300
Message-Id: <20200327200021.29372-3-jgg@ziepe.ca>
In-Reply-To: <20200327200021.29372-1-jgg@ziepe.ca>
References: <20200327200021.29372-1-jgg@ziepe.ca>
X-Mailer: git-send-email 2.25.2

From: Jason Gunthorpe

Using two bools instead of a flags return value is unnecessary and leads
to bugs. Returning a value is easier for the compiler to check and easier
to pass around the code flow.

Convert the two bools into flags and push the change to all callers.

Signed-off-by: Jason Gunthorpe
---
 mm/hmm.c | 183 ++++++++++++++++++++++++-------------------------------
 1 file changed, 81 insertions(+), 102 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 3a2610e0713329..d208ddd351066f 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -32,6 +32,12 @@ struct hmm_vma_walk {
 	unsigned int		flags;
 };
 
+enum {
+	HMM_NEED_FAULT = 1 << 0,
+	HMM_NEED_WRITE_FAULT = 1 << 1,
+	HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
+};
+
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 		struct hmm_range *range, enum hmm_pfn_value_e value)
 {
@@ -49,8 +55,7 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end,
  * hmm_vma_fault() - fault in a range lacking valid pmd or pte(s)
  * @addr: range virtual start address (inclusive)
  * @end: range virtual end address (exclusive)
- * @fault: should we fault or not ?
- * @write_fault: write fault ?
+ * @required_fault: HMM_NEED_* flags
  * @walk: mm_walk structure
  * Return: -EBUSY after page fault, or page fault error
  *
@@ -58,8 +63,7 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end,
  * or whenever there is no page directory covering the virtual address range.
  */
 static int hmm_vma_fault(unsigned long addr, unsigned long end,
-			 bool fault, bool write_fault,
-			 struct mm_walk *walk)
+			 unsigned int required_fault, struct mm_walk *walk)
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
@@ -68,13 +72,13 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 	unsigned int fault_flags = FAULT_FLAG_REMOTE;
 
-	WARN_ON_ONCE(!fault && !write_fault);
+	WARN_ON_ONCE(!required_fault);
 	hmm_vma_walk->last = addr;
 
 	if (!vma)
 		goto out_error;
 
-	if (write_fault) {
+	if (required_fault & HMM_NEED_WRITE_FAULT) {
 		if (!(vma->vm_flags & VM_WRITE))
 			return -EPERM;
 		fault_flags |= FAULT_FLAG_WRITE;
@@ -91,14 +95,13 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 	return -EFAULT;
 }
 
-static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
-				      uint64_t pfns, uint64_t cpu_flags,
-				      bool *fault, bool *write_fault)
+static unsigned int hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
+				       uint64_t pfns, uint64_t cpu_flags)
 {
 	struct hmm_range *range = hmm_vma_walk->range;
 
 	if (hmm_vma_walk->flags & HMM_FAULT_SNAPSHOT)
-		return;
+		return 0;
 
 	/*
 	 * So we not only consider the individual per page request we also
@@ -114,37 +117,37 @@ static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
 
 	/* We aren't ask to do anything ... */
 	if (!(pfns & range->flags[HMM_PFN_VALID]))
-		return;
+		return 0;
 
-	/* If CPU page table is not valid then we need to fault */
-	*fault = !(cpu_flags & range->flags[HMM_PFN_VALID]);
 	/* Need to write fault ? */
 	if ((pfns & range->flags[HMM_PFN_WRITE]) &&
-	    !(cpu_flags & range->flags[HMM_PFN_WRITE])) {
-		*write_fault = true;
-		*fault = true;
-	}
+	    !(cpu_flags & range->flags[HMM_PFN_WRITE]))
+		return HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT;
+
+	/* If CPU page table is not valid then we need to fault */
+	if (!(cpu_flags & range->flags[HMM_PFN_VALID]))
+		return HMM_NEED_FAULT;
+	return 0;
 }
 
-static void hmm_range_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
-				 const uint64_t *pfns, unsigned long npages,
-				 uint64_t cpu_flags, bool *fault,
-				 bool *write_fault)
+static unsigned int
+hmm_range_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
+		     const uint64_t *pfns, unsigned long npages,
+		     uint64_t cpu_flags)
 {
+	unsigned int required_fault = 0;
 	unsigned long i;
 
-	if (hmm_vma_walk->flags & HMM_FAULT_SNAPSHOT) {
-		*fault = *write_fault = false;
-		return;
-	}
+	if (hmm_vma_walk->flags & HMM_FAULT_SNAPSHOT)
+		return 0;
 
-	*fault = *write_fault = false;
 	for (i = 0; i < npages; ++i) {
-		hmm_pte_need_fault(hmm_vma_walk, pfns[i], cpu_flags,
-				   fault, write_fault);
-		if ((*write_fault))
-			return;
+		required_fault |=
+			hmm_pte_need_fault(hmm_vma_walk, pfns[i], cpu_flags);
+		if (required_fault == HMM_NEED_ALL_BITS)
+			return required_fault;
 	}
+	return required_fault;
 }
 
 static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
@@ -152,17 +155,16 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	bool fault, write_fault;
+	unsigned int required_fault;
 	unsigned long i, npages;
 	uint64_t *pfns;
 
 	i = (addr - range->start) >> PAGE_SHIFT;
 	npages = (end - addr) >> PAGE_SHIFT;
 	pfns = &range->pfns[i];
-	hmm_range_need_fault(hmm_vma_walk, pfns, npages,
-			     0, &fault, &write_fault);
-	if (fault || write_fault)
-		return hmm_vma_fault(addr, end, fault, write_fault, walk);
+	required_fault = hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0);
+	if (required_fault)
+		return hmm_vma_fault(addr, end, required_fault, walk);
 	hmm_vma_walk->last = addr;
 	return hmm_pfns_fill(addr, end, range, HMM_PFN_NONE);
 }
@@ -183,16 +185,15 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
 	unsigned long pfn, npages, i;
-	bool fault, write_fault;
+	unsigned int required_fault;
 	uint64_t cpu_flags;
 
 	npages = (end - addr) >> PAGE_SHIFT;
 	cpu_flags = pmd_to_hmm_pfn_flags(range, pmd);
-	hmm_range_need_fault(hmm_vma_walk, pfns, npages, cpu_flags,
-			     &fault, &write_fault);
-
-	if (fault || write_fault)
-		return hmm_vma_fault(addr, end, fault, write_fault, walk);
+	required_fault =
+		hmm_range_need_fault(hmm_vma_walk, pfns, npages, cpu_flags);
+	if (required_fault)
+		return hmm_vma_fault(addr, end, required_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
@@ -229,18 +230,15 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	bool fault, write_fault;
+	unsigned int required_fault;
 	uint64_t cpu_flags;
 	pte_t pte = *ptep;
 	uint64_t orig_pfn = *pfn;
 
 	*pfn = range->values[HMM_PFN_NONE];
-	fault = write_fault = false;
-
 	if (pte_none(pte)) {
-		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0,
-				   &fault, &write_fault);
-		if (fault || write_fault)
+		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
+		if (required_fault)
 			goto fault;
 		return 0;
 	}
@@ -261,9 +259,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			return 0;
 		}
 
-		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0, &fault,
-				   &write_fault);
-		if (!fault && !write_fault)
+		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
+		if (!required_fault)
 			return 0;
 
 		if (!non_swap_entry(entry))
@@ -283,9 +280,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	}
 
 	cpu_flags = pte_to_hmm_pfn_flags(range, pte);
-	hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags, &fault,
-			   &write_fault);
-	if (fault || write_fault)
+	required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags);
+	if (required_fault)
 		goto fault;
 
 	/*
@@ -293,9 +289,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	 * fall through and treat it like a normal page.
 	 */
 	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
-		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0, &fault,
-				   &write_fault);
-		if (fault || write_fault) {
+		if (hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0)) {
 			pte_unmap(ptep);
 			return -EFAULT;
 		}
@@ -309,7 +303,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 fault:
 	pte_unmap(ptep);
 	/* Fault any virtual address we were asked to fault */
-	return hmm_vma_fault(addr, end, fault, write_fault, walk);
+	return hmm_vma_fault(addr, end, required_fault, walk);
 }
 
 static int hmm_vma_walk_pmd(pmd_t *pmdp,
@@ -322,7 +316,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	uint64_t *pfns = &range->pfns[(start - range->start) >> PAGE_SHIFT];
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
 	unsigned long addr = start;
-	bool fault, write_fault;
 	pte_t *ptep;
 	pmd_t pmd;
 
@@ -332,9 +325,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
-		hmm_range_need_fault(hmm_vma_walk, pfns, npages,
-				     0, &fault, &write_fault);
-		if (fault || write_fault) {
+		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0)) {
 			hmm_vma_walk->last = addr;
 			pmd_migration_entry_wait(walk->mm, pmdp);
 			return -EBUSY;
@@ -343,9 +334,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	}
 
 	if (!pmd_present(pmd)) {
-		hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0, &fault,
-				     &write_fault);
-		if (fault || write_fault)
+		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0))
 			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 	}
@@ -375,9 +364,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	 * recover.
 	 */
 	if (pmd_bad(pmd)) {
-		hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0, &fault,
-				     &write_fault);
-		if (fault || write_fault)
+		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0))
 			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 	}
@@ -434,8 +421,8 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 
 	if (pud_huge(pud) && pud_devmap(pud)) {
 		unsigned long i, npages, pfn;
+		unsigned int required_fault;
 		uint64_t *pfns, cpu_flags;
-		bool fault, write_fault;
 
 		if (!pud_present(pud)) {
 			spin_unlock(ptl);
@@ -447,12 +434,11 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		pfns = &range->pfns[i];
 
 		cpu_flags = pud_to_hmm_pfn_flags(range, pud);
-		hmm_range_need_fault(hmm_vma_walk, pfns, npages,
-				     cpu_flags, &fault, &write_fault);
-		if (fault || write_fault) {
+		required_fault = hmm_range_need_fault(hmm_vma_walk, pfns,
+						      npages, cpu_flags);
+		if (required_fault) {
 			spin_unlock(ptl);
-			return hmm_vma_fault(addr, end, fault, write_fault,
-					     walk);
+			return hmm_vma_fault(addr, end, required_fault, walk);
 		}
 
 		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
@@ -484,7 +470,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	struct hmm_range *range = hmm_vma_walk->range;
 	struct vm_area_struct *vma = walk->vma;
 	uint64_t orig_pfn, cpu_flags;
-	bool fault, write_fault;
+	unsigned int required_fault;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -495,12 +481,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	orig_pfn = range->pfns[i];
 	range->pfns[i] = range->values[HMM_PFN_NONE];
 	cpu_flags = pte_to_hmm_pfn_flags(range, entry);
-	fault = write_fault = false;
-	hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
-			   &fault, &write_fault);
-	if (fault || write_fault) {
+	required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags);
+	if (required_fault) {
 		spin_unlock(ptl);
-		return hmm_vma_fault(addr, end, fault, write_fault, walk);
+		return hmm_vma_fault(addr, end, required_fault, walk);
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
@@ -522,37 +506,32 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 	struct hmm_range *range = hmm_vma_walk->range;
 	struct vm_area_struct *vma = walk->vma;
 
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) &&
+	    vma->vm_flags & VM_READ)
+		return 0;
+
 	/*
-	 * Skip vma ranges that don't have struct page backing them or map I/O
-	 * devices directly.
+	 * vma ranges that don't have struct page backing them or map I/O
+	 * devices directly cannot be handled by hmm_range_fault().
 	 *
 	 * If the vma does not allow read access, then assume that it does not
 	 * allow write access either. HMM does not support architectures that
 	 * allow write without read.
+	 *
+	 * If a fault is requested for an unsupported range then it is a hard
+	 * failure.
 	 */
-	if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) ||
-	    !(vma->vm_flags & VM_READ)) {
-		bool fault, write_fault;
-
-		/*
-		 * Check to see if a fault is requested for any page in the
-		 * range.
-		 */
-		hmm_range_need_fault(hmm_vma_walk, range->pfns +
-				     ((start - range->start) >> PAGE_SHIFT),
-				     (end - start) >> PAGE_SHIFT,
-				     0, &fault, &write_fault);
-		if (fault || write_fault)
-			return -EFAULT;
-
-		hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
-		hmm_vma_walk->last = end;
+	if (hmm_range_need_fault(hmm_vma_walk,
+				 range->pfns +
+					 ((start - range->start) >> PAGE_SHIFT),
+				 (end - start) >> PAGE_SHIFT, 0))
+		return -EFAULT;
 
-		/* Skip this vma and continue processing the next vma. */
-		return 1;
-	}
+	hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	hmm_vma_walk->last = end;
 
-	return 0;
+	/* Skip this vma and continue processing the next vma. */
+	return 1;
 }
 
 static const struct mm_walk_ops hmm_walk_ops = {
-- 
2.25.2
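The conversion the commit message describes, replacing two bool out-parameters with a single flags return value, can be sketched outside the kernel tree as a small standalone C program. The names below (need_fault_old, need_fault_new, NEED_FAULT, NEED_WRITE_FAULT) are hypothetical and simplified; they are not the functions touched by the patch:

	/*
	 * Illustrative sketch only -- not part of the patch above. It contrasts
	 * the bug-prone two-bool-out-parameter style with a flags return value.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	enum {
		NEED_FAULT	 = 1 << 0,
		NEED_WRITE_FAULT = 1 << 1,
	};

	/* Old style: callers must remember to initialize both out-parameters. */
	static void need_fault_old(bool valid, bool want_write, bool have_write,
				   bool *fault, bool *write_fault)
	{
		*fault = !valid;
		if (want_write && !have_write) {
			*fault = true;
			*write_fault = true;
		}
	}

	/* New style: the result is a value, so it cannot be left uninitialized. */
	static unsigned int need_fault_new(bool valid, bool want_write,
					   bool have_write)
	{
		if (want_write && !have_write)
			return NEED_FAULT | NEED_WRITE_FAULT;
		if (!valid)
			return NEED_FAULT;
		return 0;
	}

	int main(void)
	{
		bool fault = false, write_fault = false;
		unsigned int required_fault;

		need_fault_old(false, true, false, &fault, &write_fault);
		printf("old: fault=%d write_fault=%d\n", fault, write_fault);

		required_fault = need_fault_new(false, true, false);
		printf("new: flags=%#x\n", required_fault);
		return 0;
	}

In the flags version the caller simply tests the returned value (or ORs values together across pages, as hmm_range_need_fault() does in the patch), which is easier for the compiler to check than two address-taken bools.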