From: Robin Murphy <robin.murphy@arm.com>
To: joro@8bytes.org, will@kernel.org
Cc: iommu@lists.linux-foundation.org, suravee.suthikulpanit@amd.com,
    baolu.lu@linux.intel.com, willy@infradead.org,
    linux-kernel@vger.kernel.org, john.garry@huawei.com,
    linux-mm@kvack.org, hch@lst.de
Subject: [PATCH v3 5/9] iommu/amd: Use put_pages_list
Date: Fri, 17 Dec 2021 15:30:59 +0000
Message-Id: <73af128f651aaa1f38f69e586c66765a88ad2de0.1639753638.git.robin.murphy@arm.com>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

page->freelist is for the use of slab. We already have the ability to
free a list of pages in the core mm, but it requires the use of a
list_head and for the pages to be chained together through page->lru.
Switch the AMD IOMMU code over to using put_pages_list().
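The pattern the commit message describes can be seen outside the kernel too. Below is a minimal, self-contained userspace sketch of the idea: each object embeds a doubly-linked node (analogous to `page->lru`), callers gather objects onto a `list_head`-style list, and a single helper frees the whole list at once (analogous to `put_pages_list()`). All type and function names here (`fake_page`, `put_fake_pages_list`, etc.) are hypothetical stand-ins, not the kernel's real API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Intrusive doubly-linked list node, like the kernel's struct list_head. */
struct list_node {
	struct list_node *prev, *next;
};

/* Stand-in for struct page: payload plus an embedded node (like page->lru). */
struct fake_page {
	int order;             /* arbitrary payload */
	struct list_node lru;  /* hook used to chain the page onto a list */
};

static void list_init(struct list_node *head)
{
	head->prev = head->next = head;
}

static void list_add_tail_node(struct list_node *node, struct list_node *head)
{
	node->prev = head->prev;
	node->next = head;
	head->prev->next = node;
	head->prev = node;
}

/* Analogue of put_pages_list(): free every queued entry, leave the list empty. */
static int put_fake_pages_list(struct list_node *head)
{
	int freed = 0;
	struct list_node *cur = head->next;

	while (cur != head) {
		struct list_node *next = cur->next;
		/* Recover the containing object from its embedded node. */
		struct fake_page *p = (struct fake_page *)
			((char *)cur - offsetof(struct fake_page, lru));
		free(p);
		freed++;
		cur = next;
	}
	list_init(head);
	return freed;
}
```

With this shape, queueing a page for deferred freeing is a single `list_add_tail()` on the embedded node, and the eventual free is one call over the whole list, which is exactly what the patch below exploits.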
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[rm: split from original patch, cosmetic tweaks]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/amd/io_pgtable.c | 50 ++++++++++++----------------------
 1 file changed, 18 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 4165e1372b6e..b1bf4125b0f7 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -74,26 +74,14 @@ static u64 *first_pte_l7(u64 *pte, unsigned long *page_size,
  *
  ****************************************************************************/
 
-static void free_page_list(struct page *freelist)
-{
-	while (freelist != NULL) {
-		unsigned long p = (unsigned long)page_address(freelist);
-
-		freelist = freelist->freelist;
-		free_page(p);
-	}
-}
-
-static struct page *free_pt_page(u64 *pt, struct page *freelist)
+static void free_pt_page(u64 *pt, struct list_head *freelist)
 {
 	struct page *p = virt_to_page(pt);
 
-	p->freelist = freelist;
-
-	return p;
+	list_add_tail(&p->lru, freelist);
 }
 
-static struct page *free_pt_lvl(u64 *pt, struct page *freelist, int lvl)
+static void free_pt_lvl(u64 *pt, struct list_head *freelist, int lvl)
 {
 	u64 *p;
 	int i;
@@ -114,22 +102,22 @@ static struct page *free_pt_lvl(u64 *pt, struct page *freelist, int lvl)
 		 */
 		p = IOMMU_PTE_PAGE(pt[i]);
 		if (lvl > 2)
-			freelist = free_pt_lvl(p, freelist, lvl - 1);
+			free_pt_lvl(p, freelist, lvl - 1);
 		else
-			freelist = free_pt_page(p, freelist);
+			free_pt_page(p, freelist);
 	}
 
-	return free_pt_page(pt, freelist);
+	free_pt_page(pt, freelist);
 }
 
-static struct page *free_sub_pt(u64 *root, int mode, struct page *freelist)
+static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
 {
 	switch (mode) {
 	case PAGE_MODE_NONE:
 	case PAGE_MODE_7_LEVEL:
 		break;
 	case PAGE_MODE_1_LEVEL:
-		freelist = free_pt_page(root, freelist);
+		free_pt_page(root, freelist);
 		break;
 	case PAGE_MODE_2_LEVEL:
 	case PAGE_MODE_3_LEVEL:
@@ -141,8 +129,6 @@ static struct page *free_sub_pt(u64 *root, int mode, struct page *freelist)
 	default:
 		BUG();
 	}
-
-	return freelist;
 }
 
 void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
@@ -350,7 +336,7 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
 	return pte;
 }
 
-static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
+static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *freelist)
 {
 	u64 *pt;
 	int mode;
@@ -361,12 +347,12 @@ static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
 	}
 
 	if (!IOMMU_PTE_PRESENT(pteval))
-		return freelist;
+		return;
 
 	pt   = IOMMU_PTE_PAGE(pteval);
 	mode = IOMMU_PTE_MODE(pteval);
 
-	return free_sub_pt(pt, mode, freelist);
+	free_sub_pt(pt, mode, freelist);
 }
 
 /*
@@ -380,7 +366,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
-	struct page *freelist = NULL;
+	LIST_HEAD(freelist);
 	bool updated = false;
 	u64 __pte, *pte;
 	int ret, i, count;
@@ -400,9 +386,9 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 		goto out;
 
 	for (i = 0; i < count; ++i)
-		freelist = free_clear_pte(&pte[i], pte[i], freelist);
+		free_clear_pte(&pte[i], pte[i], &freelist);
 
-	if (freelist != NULL)
+	if (!list_empty(&freelist))
 		updated = true;
 
 	if (count > 1) {
@@ -437,7 +423,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 	}
 
 	/* Everything flushed out, free pages now */
-	free_page_list(freelist);
+	put_pages_list(&freelist);
 
 	return ret;
 }
@@ -499,7 +485,7 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 {
 	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
 	struct protection_domain *dom;
-	struct page *freelist = NULL;
+	LIST_HEAD(freelist);
 
 	if (pgtable->mode == PAGE_MODE_NONE)
 		return;
@@ -516,9 +502,9 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 	BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
 	       pgtable->mode > PAGE_MODE_6_LEVEL);
 
-	freelist = free_sub_pt(pgtable->root, pgtable->mode, freelist);
+	free_sub_pt(pgtable->root, pgtable->mode, &freelist);
 
-	free_page_list(freelist);
+	put_pages_list(&freelist);
 }
 
 static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
-- 
2.28.0.dirty
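The control flow the patch preserves in `free_pt_lvl()` and `v1_free_pgtable()` is "gather, then free": walk the multi-level table bottom-up, queue every table page on one list, and free them all in a single pass only afterwards (in the driver, after the IOTLB flush). The toy userspace model below mirrors that shape; the types and names (`toy_table`, `gather_lvl`, `free_gathered`, the 4-slot tables) are illustrative inventions, not the driver's real structures.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define SLOTS 4

/* Intrusive list node, standing in for struct list_head. */
struct node { struct node *prev, *next; };

/* Toy "page table": a queueing hook (like page->lru) plus child slots. */
struct toy_table {
	struct node lru;
	struct toy_table *slot[SLOTS]; /* non-NULL entries point at children */
};

static void node_init(struct node *h) { h->prev = h->next = h; }

static void add_tail(struct node *n, struct node *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static struct toy_table *table_alloc(void)
{
	return calloc(1, sizeof(struct toy_table));
}

/* Analogue of free_pt_lvl(): recurse into children, then queue this table.
 * Nothing is freed yet -- pages only accumulate on the caller's list. */
static void gather_lvl(struct toy_table *t, struct node *freelist, int lvl)
{
	if (lvl > 1)
		for (int i = 0; i < SLOTS; i++)
			if (t->slot[i])
				gather_lvl(t->slot[i], freelist, lvl - 1);
	add_tail(&t->lru, freelist);
}

/* Analogue of put_pages_list(): free everything gathered, in one pass. */
static int free_gathered(struct node *head)
{
	int n = 0;
	for (struct node *cur = head->next, *next; cur != head; cur = next) {
		next = cur->next;
		free((struct toy_table *)((char *)cur -
			offsetof(struct toy_table, lru)));
		n++;
	}
	node_init(head);
	return n;
}
```

Separating the walk from the free is what lets the real code delay `put_pages_list()` until after the flush, so no table page is recycled while the IOMMU might still walk it.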