From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
	Christoph Hellwig, Christophe Leroy
Subject: [PATCH v7 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
Date: Wed, 26 Aug 2020 00:57:50 +1000
Message-Id: <20200825145753.529284-10-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

This is a generic kernel virtual memory mapper, not specific to ioremap.

Signed-off-by: Nicholas Piggin
---
 include/linux/vmalloc.h |   3 +
 mm/ioremap.c            | 197 ----------------------------------------
 mm/vmalloc.c            | 196 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 199 insertions(+), 197 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 3f6bba4cc9bc..15adb9a14fb6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -177,6 +177,9 @@ extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
 #ifdef CONFIG_MMU
+int vmap_range(unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift);
 extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
 				    pgprot_t prot, struct page **pages);
 int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
diff --git a/mm/ioremap.c b/mm/ioremap.c
index c67f91164401..d1dcc7e744ac 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -28,203 +28,6 @@ early_param("nohugeiomap", set_nohugeiomap);
 static const bool iomap_max_page_shift = PAGE_SHIFT;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
-{
-	pte_t *pte;
-	u64 pfn;
-
-	pfn = phys_addr >> PAGE_SHIFT;
-	pte = pte_alloc_kernel_track(pmd, addr, mask);
-	if (!pte)
-		return -ENOMEM;
-	do {
-		BUG_ON(!pte_none(*pte));
-		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
-		pfn++;
-	} while (pte++, addr += PAGE_SIZE, addr != end);
-	*mask |= PGTBL_PTE_MODIFIED;
-	return 0;
-}
-
-static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
-{
-	if (max_page_shift < PMD_SHIFT)
-		return 0;
-
-	if (!arch_vmap_pmd_supported(prot))
-		return 0;
-
-	if ((end - addr) != PMD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, PMD_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
-		return 0;
-
-	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
-		return 0;
-
-	return pmd_set_huge(pmd, phys_addr, prot);
-}
-
-static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
-	pmd_t *pmd;
-	unsigned long next;
-
-	pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
-	if (!pmd)
-		return -ENOMEM;
-	do {
-		next = pmd_addr_end(addr, end);
-
-		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_PMD_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
-			return -ENOMEM;
-	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
-{
-	if (max_page_shift < PUD_SHIFT)
-		return 0;
-
-	if (!arch_vmap_pud_supported(prot))
-		return 0;
-
-	if ((end - addr) != PUD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, PUD_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
-		return 0;
-
-	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
-		return 0;
-
-	return pud_set_huge(pud, phys_addr, prot);
-}
-
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
-	pud_t *pud;
-	unsigned long next;
-
-	pud = pud_alloc_track(&init_mm, p4d, addr, mask);
-	if (!pud)
-		return -ENOMEM;
-	do {
-		next = pud_addr_end(addr, end);
-
-		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_PUD_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
-			return -ENOMEM;
-	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
-{
-	if (max_page_shift < P4D_SHIFT)
-		return 0;
-
-	if (!arch_vmap_p4d_supported(prot))
-		return 0;
-
-	if ((end - addr) != P4D_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, P4D_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
-		return 0;
-
-	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
-		return 0;
-
-	return p4d_set_huge(p4d, phys_addr, prot);
-}
-
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
-	p4d_t *p4d;
-	unsigned long next;
-
-	p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
-	if (!p4d)
-		return -ENOMEM;
-	do {
-		next = p4d_addr_end(addr, end);
-
-		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_P4D_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
-			return -ENOMEM;
-	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_range(unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
-{
-	pgd_t *pgd;
-	unsigned long start;
-	unsigned long next;
-	int err;
-	pgtbl_mod_mask mask = 0;
-
-	might_sleep();
-	BUG_ON(addr >= end);
-
-	start = addr;
-	pgd = pgd_offset_k(addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
-		if (err)
-			break;
-	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
-
-	flush_cache_vmap(start, end);
-
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
-	return err;
-}
-
 int ioremap_page_range(unsigned long addr, unsigned long end,
 		       phys_addr_t phys_addr, pgprot_t prot)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 45cd80ec7eeb..256554d598e6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -70,6 +70,202 @@ static void free_work(struct work_struct *w)
 }
 
 /*** Page table manipulation functions ***/
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
+{
+	pte_t *pte;
+	u64 pfn;
+
+	pfn = phys_addr >> PAGE_SHIFT;
+	pte = pte_alloc_kernel_track(pmd, addr, mask);
+	if (!pte)
+		return -ENOMEM;
+	do {
+		BUG_ON(!pte_none(*pte));
+		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
+		pfn++;
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+	*mask |= PGTBL_PTE_MODIFIED;
+	return 0;
+}
+
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pmd_supported(prot))
+		return 0;
+
+	if ((end - addr) != PMD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, PMD_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
+		return 0;
+
+	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
+		return 0;
+
+	return pmd_set_huge(pmd, phys_addr, prot);
+}
+
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
+	if (!pmd)
+		return -ENOMEM;
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_PMD_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+			return -ENOMEM;
+	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pud_supported(prot))
+		return 0;
+
+	if ((end - addr) != PUD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, PUD_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
+		return 0;
+
+	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
+		return 0;
+
+	return pud_set_huge(pud, phys_addr, prot);
+}
+
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_alloc_track(&init_mm, p4d, addr, mask);
+	if (!pud)
+		return -ENOMEM;
+	do {
+		next = pud_addr_end(addr, end);
+
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_PUD_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
+			return -ENOMEM;
+	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if (!arch_vmap_p4d_supported(prot))
+		return 0;
+
+	if ((end - addr) != P4D_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, P4D_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
+		return 0;
+
+	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
+		return 0;
+
+	return p4d_set_huge(p4d, phys_addr, prot);
+}
+
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
+	if (!p4d)
+		return -ENOMEM;
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_P4D_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
+			return -ENOMEM;
+	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+int vmap_range(unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	pgd_t *pgd;
+	unsigned long start;
+	unsigned long next;
+	int err;
+	pgtbl_mod_mask mask = 0;
+
+	might_sleep();
+	BUG_ON(addr >= end);
+
+	start = addr;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
+		if (err)
+			break;
+	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
+
+	flush_cache_vmap(start, end);
+
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, end);
+
+	return err;
+}
 
 static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
-- 
2.23.0
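
For readers skimming the series: with this move, vmap_range() becomes callable from anywhere that includes <linux/vmalloc.h> under CONFIG_MMU. A minimal sketch of a caller follows, assuming base-page mappings only; the helper name is hypothetical and not part of this patch.

	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	/*
	 * Sketch only: map the kernel virtual range [addr, end) to the
	 * physical range starting at phys_addr.  Passing PAGE_SHIFT as
	 * max_page_shift keeps every vmap_try_huge_*() check from firing,
	 * so only base-page PTEs are installed; a caller that can tolerate
	 * huge TLB entries would pass PMD_SHIFT or higher instead.
	 */
	static int example_vmap_io(unsigned long addr, unsigned long end,
				   phys_addr_t phys_addr, pgprot_t prot)
	{
		return vmap_range(addr, end, phys_addr, prot, PAGE_SHIFT);
	}

In this series, ioremap_page_range() in mm/ioremap.c is the natural in-tree caller of vmap_range(), with its max_page_shift governed by CONFIG_HAVE_ARCH_HUGE_VMAP and the "nohugeiomap" boot parameter seen in the hunk context above.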