From: Ralph Campbell <rcampbell@nvidia.com>
To: linux-mm@kvack.org
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Ben Skeggs, Andrew Morton, Shuah Khan, Ralph Campbell
Subject: [PATCH 4/6] mm/hmm: add output flag for compound page mapping
Date: Fri, 8 May 2020 12:20:07 -0700
Message-ID: <20200508192009.15302-5-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200508192009.15302-1-rcampbell@nvidia.com>
References: <20200508192009.15302-1-rcampbell@nvidia.com>

hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process's page tables. The
PFN can be used to get the struct page with hmm_pfn_to_page(), and the
page size order can be determined with compound_order(page), but if the
page is larger than order 0 (PAGE_SIZE), there is no indication that
the page is mapped using a larger page size.

To be fully general, hmm_range_fault() would need to return the mapping
size itself in order to handle cases like a 1GB compound page being
mapped with 2MB PMD entries. However, in the most common case the
mapping size is the same as the underlying compound page size. Add a
new output flag, HMM_PFN_COMPOUND, to indicate this so that callers
know it is safe to use a large device page table mapping if one is
available.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/hmm.h |  4 +++-
 mm/hmm.c            | 10 +++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)
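As an illustrative note for reviewers (not part of the patch): a driver
consuming the new flag might walk the pfns array roughly as below after
hmm_range_fault() returns. The device_map_order() helper is a
hypothetical stand-in for driver-specific device page table code, and
the sketch assumes range->start is aligned to the compound page size.

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Illustrative sketch only: mirror CPU mappings into a device,
 * using a large device mapping when HMM_PFN_COMPOUND says the CPU
 * mapped the full compound page.
 */
static void sketch_mirror_range(struct hmm_range *range)
{
	unsigned long addr = range->start;
	unsigned long i = 0;

	while (addr < range->end) {
		unsigned long entry = range->hmm_pfns[i];
		unsigned int order = 0;

		if (entry & HMM_PFN_VALID) {
			struct page *page = hmm_pfn_to_page(entry);

			if (entry & HMM_PFN_COMPOUND)
				order = compound_order(compound_head(page));

			/* Hypothetical driver helper: map 2^order pages. */
			device_map_order(addr, page, order,
					 entry & HMM_PFN_WRITE);
		}
		/* Skip the pfns covered by the mapping just created. */
		addr += PAGE_SIZE << order;
		i += 1UL << order;
	}
}
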
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index e912b9dc4633..f2d38af421e7 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -41,12 +41,14 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 4),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_COMPOUND,
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 41673a6d8d46..a9dd06e190a1 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -170,7 +170,9 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (pmd_protnone(pmd))
 		return 0;
-	return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pmd_write(pmd) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -389,7 +391,9 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (!pud_present(pud))
 		return 0;
-	return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pud_write(pud) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -484,7 +488,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+		range->hmm_pfns[i] = pfn | cpu_flags | HMM_PFN_COMPOUND;
 
 	spin_unlock(ptl);
 	return 0;
-- 
2.20.1