From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 16/33] s390: Convert various pgalloc functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:31 -0700
Message-Id: <20230417205048.15870-17-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors
with ptdesc equivalents, convert various page table functions to use
ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
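A rough sketch of the pattern this patch applies, for review purposes
only (the crst_alloc_before()/crst_alloc_after() wrappers below are
illustrative and do not exist in the tree; the ptdesc helpers are the
ones used in the diff):

	/* before: a CRST table allocated and addressed via struct page */
	static unsigned long *crst_alloc_before(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);

		if (!page)
			return NULL;
		return (unsigned long *) page_to_virt(page);
	}

	/* after: the same allocation expressed through a ptdesc */
	static unsigned long *crst_alloc_after(void)
	{
		struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);

		if (!ptdesc)
			return NULL;
		return (unsigned long *) ptdesc_to_virt(ptdesc);
	}

Freeing follows the same shape: free_pages() on the table address
becomes ptdesc_free(virt_to_ptdesc(table)).
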
 arch/s390/include/asm/pgalloc.h |   4 +-
 arch/s390/include/asm/tlb.h     |   4 +-
 arch/s390/mm/pgalloc.c          | 108 ++++++++++++++++----------------
 3 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..9841481560ae 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!ptdesc_pmd_ctor(virt_to_ptdesc(table))) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
@@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	if (mm_pmd_folded(mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	crst_table_free(mm, (unsigned long *) pmd);
 }
 
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index b91f4a9b044c..1388c819b467 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 {
 	if (mm_pmd_folded(tlb->mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	__tlb_adjust_range(tlb, address, PAGE_SIZE);
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_puds = 1;
-	tlb_remove_table(tlb, pmd);
+	tlb_remove_ptdesc(tlb, pmd);
 }
 
 /*
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 6b99932abc66..16a29d2cfe85 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl);
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	arch_set_page_dat(page, CRST_ALLOC_ORDER);
-	return (unsigned long *) page_to_virt(page);
+	arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER);
+	return (unsigned long *) ptdesc_to_virt(ptdesc);
 }
 
 void crst_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 static void __crst_table_upgrade(void *arg)
@@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
 
 struct page *page_table_alloc_pgste(struct mm_struct *mm)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	u64 *table;
 
-	page = alloc_page(GFP_KERNEL);
-	if (page) {
-		table = (u64 *)page_to_virt(page);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (ptdesc) {
+		table = (u64 *)ptdesc_to_virt(ptdesc);
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return page;
+	return ptdesc_page(ptdesc);
 }
 
 void page_table_free_pgste(struct page *page)
 {
-	__free_page(page);
+	ptdesc_free(page_ptdesc(page));
 }
 
 #endif /* CONFIG_PGSTE */
@@ -230,7 +230,7 @@ void page_table_free_pgste(struct page *page)
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned int mask, bit;
 
 	/* Try to get a fragment of a 4K page as a 2K page table */
@@ -238,9 +238,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		table = NULL;
 		spin_lock_bh(&mm->context.lock);
 		if (!list_empty(&mm->context.pgtable_list)) {
-			page = list_first_entry(&mm->context.pgtable_list,
-						struct page, lru);
-			mask = atomic_read(&page->pt_frag_refcount);
+			ptdesc = list_first_entry(&mm->context.pgtable_list,
+						struct ptdesc, pt_list);
+			mask = atomic_read(&ptdesc->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
@@ -253,13 +253,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			 */
 			mask = (mask | (mask >> 4)) & 0x03U;
 			if (mask != 0x03U) {
-				table = (unsigned long *) page_to_virt(page);
+				table = (unsigned long *) ptdesc_to_virt(ptdesc);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->pt_frag_refcount,
+				atomic_xor_bits(&ptdesc->pt_frag_refcount,
 							0x01U << bit);
-				list_del(&page->lru);
+				list_del(&ptdesc->pt_list);
 			}
 		}
 		spin_unlock_bh(&mm->context.lock);
@@ -267,27 +267,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			return table;
 	}
 	/* Allocate a fresh page */
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	arch_set_page_dat(page, 0);
+	arch_set_page_dat(ptdesc_page(ptdesc), 0);
 	/* Initialize page table */
-	table = (unsigned long *) page_to_virt(page);
+	table = (unsigned long *) ptdesc_to_virt(ptdesc);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
-		list_add(&page->lru, &mm->context.pgtable_list);
+		list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -309,9 +309,8 @@ static void page_table_release_check(struct page *page, void *table,
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	unsigned int mask, bit, half;
-	struct page *page;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	page = virt_to_page(table);
 	if (!mm_alloc_pgste(mm)) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
@@ -321,39 +320,38 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
-			list_add(&page->lru, &mm->context.pgtable_list);
+			list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		else
-			list_del(&page->lru);
+			list_del(&ptdesc->pt_list);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
 	struct mm_struct *mm;
-	struct page *page;
 	unsigned int bit, mask;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	mm = tlb->mm;
-	page = virt_to_page(table);
 	if (mm_alloc_pgste(mm)) {
 		gmap_unlink(mm, table, vmaddr);
 		table = (unsigned long *) ((unsigned long)table | 0x03U);
-		tlb_remove_table(tlb, table);
+		tlb_remove_ptdesc(tlb, table);
 		return;
 	}
 	bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
@@ -363,11 +361,11 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+	mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
+		list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list);
 	else
-		list_del(&page->lru);
+		list_del(&ptdesc->pt_list);
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
 	tlb_remove_table(tlb, table);
@@ -377,7 +375,7 @@ void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
 	void *table = (void *)((unsigned long) _table ^ mask);
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	switch (half) {
 	case 0x00U:	/* pmd, pud, or p4d */
@@ -385,18 +383,18 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 /*
@@ -424,16 +422,20 @@ static void base_pgt_free(unsigned long *table)
 static unsigned long *base_crst_alloc(unsigned long val)
 {
 	unsigned long *table;
+	struct ptdesc *ptdesc;
 
-	table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
-	if (table)
-		crst_table_init(table, val);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
+	if (!ptdesc)
+		return NULL;
+	table = ptdesc_address(ptdesc);
+
+	crst_table_init(table, val);
 	return table;
 }
 
 static void base_crst_free(unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 #define BASE_ADDR_END_FUNC(NAME, SIZE)					\
-- 
2.39.2