Date: Wed, 14 Jun 2023 16:48:08 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)"
Cc: Andrew Morton, Matthew Wilcox, linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Hugh Dickins
Subject: Re: [PATCH v4 05/34] mm: add utility functions for ptdesc
Message-ID: <20230614134808.GD52412@kernel.org>
References: <20230612210423.18611-1-vishal.moola@gmail.com>
 <20230612210423.18611-6-vishal.moola@gmail.com>
In-Reply-To: <20230612210423.18611-6-vishal.moola@gmail.com>

On Mon, Jun 12, 2023 at 02:03:54PM -0700, Vishal Moola (Oracle) wrote:
> Introduce utility functions setting the foundation for ptdescs. These
> will also assist in the splitting out of ptdesc from struct page.
>
> Functions that focus on the descriptor are prefixed with ptdesc_* while
> functions that focus on the pagetable are prefixed with pagetable_*.
>
> pagetable_alloc() is defined to allocate new ptdesc pages as compound
> pages. This is to standardize ptdescs by allowing for one allocation
> and one free function, in contrast to 2 allocation and 2 free functions.
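
To make the "one allocation and one free function" point concrete, a
converted call site could end up looking roughly like the sketch below
(illustrative only: the example_* names are made up here, and
GFP_PGTABLE_USER is the existing flag combination from
<asm-generic/pgalloc.h>):

	/* illustrative only, not part of this patch */
	static pmd_t *example_pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		struct ptdesc *ptdesc;

		/* one allocation covers the compound page and its descriptor */
		ptdesc = pagetable_alloc(GFP_PGTABLE_USER, 0);
		if (!ptdesc)
			return NULL;

		return (pmd_t *)ptdesc_address(ptdesc);
	}

	static void example_pmd_free_one(struct mm_struct *mm, pmd_t *pmd)
	{
		/* one free releases the descriptor and the page tables together */
		pagetable_free(virt_to_ptdesc(pmd));
	}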
>
> Signed-off-by: Vishal Moola (Oracle)
> ---
>  include/asm-generic/tlb.h | 11 +++++++
>  include/linux/mm.h        | 61 +++++++++++++++++++++++++++++++++++++++
>  include/linux/pgtable.h   | 12 ++++++++
>  3 files changed, 84 insertions(+)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index b46617207c93..6bade9e0e799 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
>  }
>  
> +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
> +{
> +	tlb_remove_table(tlb, pt);
> +}
> +
> +/* Like tlb_remove_ptdesc, but for page-like page directories. */
> +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
> +{
> +	tlb_remove_page(tlb, ptdesc_page(pt));
> +}
> +
>  static inline void tlb_change_page_size(struct mmu_gather *tlb,
>  					unsigned int page_size)
>  {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0db09639dd2d..f184f1eba85d 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2766,6 +2766,62 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
>  }
>  #endif /* CONFIG_MMU */
>  
> +static inline struct ptdesc *virt_to_ptdesc(const void *x)
> +{
> +	return page_ptdesc(virt_to_page(x));
> +}
> +
> +static inline void *ptdesc_to_virt(const struct ptdesc *pt)
> +{
> +	return page_to_virt(ptdesc_page(pt));
> +}
> +
> +static inline void *ptdesc_address(const struct ptdesc *pt)
> +{
> +	return folio_address(ptdesc_folio(pt));
> +}
> +
> +static inline bool pagetable_is_reserved(struct ptdesc *pt)
> +{
> +	return folio_test_reserved(ptdesc_folio(pt));
> +}
> +
> +/**
> + * pagetable_alloc - Allocate pagetables
> + * @gfp: GFP flags
> + * @order: desired pagetable order
> + *
> + * pagetable_alloc allocates a page table descriptor as well as all pages
> + * described by it.

I think the order should be switched here to emphasize that primarily this
method allocates memory for page tables. How about

	pagetable_alloc allocates memory for the page tables as well as a
	page table descriptor that describes the allocated memory

> + *
> + * Return: The ptdesc describing the allocated page tables.
> + */
> +static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
> +{
> +	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> +
> +	return page_ptdesc(page);
> +}
> +
> +/**
> + * pagetable_free - Free pagetables
> + * @pt: The page table descriptor
> + *
> + * pagetable_free frees a page table descriptor as well as all page
> + * tables described by said ptdesc.

Similarly here.

> + */
> +static inline void pagetable_free(struct ptdesc *pt)
> +{
> +	struct page *page = ptdesc_page(pt);
> +
> +	__free_pages(page, compound_order(page));
> +}
> +
> +static inline void pagetable_clear(void *x)
> +{
> +	clear_page(x);
> +}
> +
>  #if USE_SPLIT_PTE_PTLOCKS
>  #if ALLOC_SPLIT_PTLOCKS
>  void __init ptlock_cache_init(void);
> @@ -2992,6 +3048,11 @@ static inline void mark_page_reserved(struct page *page)
>  	adjust_managed_page_count(page, -1);
>  }
>  
> +static inline void free_reserved_ptdesc(struct ptdesc *pt)
> +{
> +	free_reserved_page(ptdesc_page(pt));
> +}
> +
>  /*
>   * Default method to free all the __init memory into the buddy system.
>   * The freed pages will be poisoned with pattern "poison" if it's within
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 330de96ebfd6..c405f74d3875 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1026,6 +1026,18 @@ TABLE_MATCH(ptl, ptl);
>  #undef TABLE_MATCH
>  static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
>  
> +#define ptdesc_page(pt)			(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct page *)(pt),	\
> +	struct ptdesc *:		(struct page *)(pt)))
> +
> +#define ptdesc_folio(pt)		(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct folio *)(pt),	\
> +	struct ptdesc *:		(struct folio *)(pt)))
> +
> +#define page_ptdesc(p)			(_Generic((p),			\
> +	const struct page *:		(const struct ptdesc *)(p),	\
> +	struct page *:			(struct ptdesc *)(p)))
> +
>  /*
>   * No-op macros that just return the current protection value. Defined here
>   * because these macros can be used even if CONFIG_MMU is not defined.
> -- 
> 2.40.1
>
>

-- 
Sincerely yours,
Mike.