From: Christoph Hellwig
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
	x86@kernel.org, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 2/6] mm: add a vmap_pfn function
Date: Fri, 18 Sep 2020 18:37:20 +0200
Message-Id: <20200918163724.2511-3-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0

Add a proper helper to remap PFNs into kernel virtual space so that
drivers don't have to abuse alloc_vm_area and open coded PTE
manipulation for it.

Signed-off-by: Christoph Hellwig
---
 include/linux/vmalloc.h |  1 +
 mm/Kconfig              |  3 +++
 mm/vmalloc.c            | 45 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1a3..8ecd92a947ee0c 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -121,6 +121,7 @@ extern void vfree_atomic(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
 			unsigned long flags, pgprot_t prot);
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
 extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
diff --git a/mm/Kconfig b/mm/Kconfig
index 6c974888f86f97..6fa7ba1199eb1e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -815,6 +815,9 @@ config DEVICE_PRIVATE
 	  memory; i.e., memory that is only accessible from the device (or
 	  group of devices). You likely also want to select HMM_MIRROR.
 
+config VMAP_PFN
+	bool
+
 config FRAME_VECTOR
 	bool
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index be4724b916b3e7..59f2afcf26c312 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2407,6 +2407,51 @@ void *vmap(struct page **pages, unsigned int count,
 }
 EXPORT_SYMBOL(vmap);
 
+#ifdef CONFIG_VMAP_PFN
+struct vmap_pfn_data {
+	unsigned long	*pfns;
+	pgprot_t	prot;
+	unsigned int	idx;
+};
+
+static int vmap_pfn_apply(pte_t *pte, unsigned long addr, void *private)
+{
+	struct vmap_pfn_data *data = private;
+
+	if (WARN_ON_ONCE(pfn_valid(data->pfns[data->idx])))
+		return -EINVAL;
+	*pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
+	return 0;
+}
+
+/**
+ * vmap_pfn - map an array of PFNs into virtually contiguous space
+ * @pfns: array of PFNs
+ * @count: number of pages to map
+ * @prot: page protection for the mapping
+ *
+ * Maps @count PFNs from @pfns into contiguous kernel virtual space and returns
+ * the start address of the mapping.
+ */
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
+{
+	struct vmap_pfn_data data = { .pfns = pfns, .prot = pgprot_nx(prot) };
+	struct vm_struct *area;
+
+	area = get_vm_area_caller(count * PAGE_SIZE, VM_IOREMAP,
+			__builtin_return_address(0));
+	if (!area)
+		return NULL;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+			count * PAGE_SIZE, vmap_pfn_apply, &data)) {
+		free_vm_area(area);
+		return NULL;
+	}
+	return area->addr;
+}
+EXPORT_SYMBOL_GPL(vmap_pfn);
+#endif /* CONFIG_VMAP_PFN */
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, int node)
 {
-- 
2.28.0
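
As a usage illustration (not part of the patch itself): below is a minimal,
hypothetical sketch of how a driver could use the new helper to map PFNs that
are not backed by struct pages, e.g. a PCI BAR.  Note that vmap_pfn()
deliberately rejects pfn_valid() PFNs, so it only applies to such memory.  The
example_map_bar()/example_unmap_bar() names, the contiguous BAR layout and the
pgprot choice are assumptions made up for this sketch; only vmap_pfn() and
vunmap() are the interfaces added/used by this series.

#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/pfn.h>
#include <linux/io.h>

/*
 * Hypothetical helper: map @count pages of device memory starting at
 * @bar_start (memory without struct pages) into one contiguous kernel
 * virtual range using vmap_pfn().
 */
static void __iomem *example_map_bar(phys_addr_t bar_start, unsigned int count)
{
	unsigned long *pfns;
	unsigned int i;
	void *vaddr;

	pfns = kmalloc_array(count, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	/* Build the PFN array; here the pages happen to be contiguous. */
	for (i = 0; i < count; i++)
		pfns[i] = PHYS_PFN(bar_start) + i;

	/* Uncached protection is a plausible choice for device memory. */
	vaddr = vmap_pfn(pfns, count, pgprot_noncached(PAGE_KERNEL));

	/* vmap_pfn() has consumed the array into the PTEs by now. */
	kfree(pfns);
	return (void __iomem *)vaddr;
}

static void example_unmap_bar(void __iomem *vaddr)
{
	/* Tear the mapping down like any other vmap()-style mapping. */
	vunmap((void __force *)vaddr);
}

For a single physically contiguous range ioremap() would of course be the
simpler choice; the point of the sketch is only the calling convention for an
arbitrary array of PFNs, which is what callers that gather scattered,
non-struct-page PFNs would need.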