From: Zi Yan
To: linux-mm@kvack.org
Cc: "Kirill A . Shutemov", Roman Gushchin, Rik van Riel, Matthew Wilcox,
 Shakeel Butt, Yang Shi, Jason Gunthorpe, Mike Kravetz, Michal Hocko,
 David Hildenbrand, William Kucharski, Andrea Arcangeli, John Hubbard,
 David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH v2 04/30] mm: add new helper functions to allocate one PMD page with 512 PTE pages.
Date: Mon, 28 Sep 2020 13:54:02 -0400
Message-Id: <20200928175428.4110504-5-zi.yan@sent.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200928175428.4110504-1-zi.yan@sent.com>
References: <20200928175428.4110504-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

This prepares for PUD THP support, which allocates 512 such PMD pages
when creating a PUD THP. These page table pages will be withdrawn during
THP split.
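On x86_64 the number of PTE page table pages deposited under one PMD page
table page is 1 << (HPAGE_PUD_ORDER - HPAGE_PMD_ORDER)
= 1 << ((30 - 12) - (21 - 12)) = 1 << 9 = 512.

As a rough sketch of the intended calling convention (illustration only,
not part of this patch; the caller and the example_map_pud_thp() check are
made up, only pmd_alloc_one_page_with_ptes() and pmd_free_page_with_ptes()
are introduced by this series):

	/*
	 * Hypothetical caller: get one PMD page table page with its 512
	 * PTE page table pages pre-deposited, and release all of them
	 * again on the error path.
	 */
	static int example_prepare_pud_thp_pgtables(struct mm_struct *mm,
						    unsigned long haddr)
	{
		pmd_t *pmd = pmd_alloc_one_page_with_ptes(mm, haddr);

		if (!pmd)
			return -ENOMEM;

		if (!example_map_pud_thp(mm, haddr, pmd)) {
			/* withdraws the deposited PTE pages, then frees the PMD page */
			pmd_free_page_with_ptes(mm, pmd);
			return -ENOMEM;
		}
		return 0;
	}
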
Signed-off-by: Zi Yan
---
 arch/x86/include/asm/pgalloc.h | 60 ++++++++++++++++++++++++++++++++++
 arch/x86/mm/pgtable.c          | 25 ++++++++++++++
 include/linux/huge_mm.h        |  3 ++
 3 files changed, 88 insertions(+)

diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index 62ad61d6fefc..b24284522973 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -52,6 +52,19 @@ extern pgd_t *pgd_alloc(struct mm_struct *);
 extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
 
 extern pgtable_t pte_alloc_one(struct mm_struct *);
+extern pgtable_t pte_alloc_order(struct mm_struct *mm, unsigned long address,
+		int order);
+
+static inline void pte_free_order(struct mm_struct *mm, struct page *pte,
+		int order)
+{
+	int i;
+
+	for (i = 0; i < (1<<order); i++) {
+		pgtable_pte_page_dtor(&pte[i]);
+		__free_page(&pte[i]);
+	}
+}
 
 extern void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte);
 
@@ -87,6 +100,53 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
 #if CONFIG_PGTABLE_LEVELS > 2
+static inline pmd_t *pmd_alloc_one_page_with_ptes(struct mm_struct *mm, unsigned long addr)
+{
+	pgtable_t pte_pgtables;
+	pmd_t *pmd;
+	spinlock_t *pmd_ptl;
+	int i;
+
+	pte_pgtables = pte_alloc_order(mm, addr,
+			HPAGE_PUD_ORDER - HPAGE_PMD_ORDER);
+	if (!pte_pgtables)
+		return NULL;
+
+	pmd = pmd_alloc_one(mm, addr);
+	if (unlikely(!pmd)) {
+		pte_free_order(mm, pte_pgtables,
+				HPAGE_PUD_ORDER - HPAGE_PMD_ORDER);
+		return NULL;
+	}
+	pmd_ptl = pmd_lock(mm, pmd);
+
+	for (i = 0; i < (1<<(HPAGE_PUD_ORDER - HPAGE_PMD_ORDER)); i++)
+		pgtable_trans_huge_deposit(mm, pmd, pte_pgtables + i);
+
+	spin_unlock(pmd_ptl);
+
+	return pmd;
+}
+
+static inline void pmd_free_page_with_ptes(struct mm_struct *mm, pmd_t *pmd)
+{
+	spinlock_t *pmd_ptl;
+	int i;
+
+	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
+	pmd_ptl = pmd_lock(mm, pmd);
+
+	for (i = 0; i < (1<<(HPAGE_PUD_ORDER - HPAGE_PMD_ORDER)); i++) {
+		pgtable_t pte_pgtable;
+
+		pte_pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+		pte_free(mm, pte_pgtable);
+	}
+
+	spin_unlock(pmd_ptl);
+	pmd_free(mm, pmd);
+}
+
 extern void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd);
 
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index dfd82f51ba66..7be73aee6183 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -33,6 +33,31 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 	return __pte_alloc_one(mm, __userpte_alloc_gfp);
 }
 
+pgtable_t pte_alloc_order(struct mm_struct *mm, unsigned long address, int order)
+{
+	struct page *pte;
+	int i;
+
+	pte = alloc_pages(__userpte_alloc_gfp, order);
+	if (!pte)
+		return NULL;
+	split_page(pte, order);
+	for (i = 1; i < (1 << order); i++)
+		set_page_private(pte + i, 0);
+
+	for (i = 0; i < (1<<order); i++) {
+		if (!pgtable_pte_page_ctor(&pte[i])) {
+			__free_page(&pte[i]);
+			while (--i >= 0) {
+				pgtable_pte_page_dtor(&pte[i]);
+				__free_page(&pte[i]);
+			}
+			return NULL;
+		}
+	}
+	return pte;
+}
+
 static int __init setup_userpte(char *arg)
 {
 	if (!arg)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 8a8bc46a2432..e9d228d4fc69 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -115,6 +115,9 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
 #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
 
+#define HPAGE_PUD_ORDER (HPAGE_PUD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PUD_NR (1<<HPAGE_PUD_ORDER)
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define HPAGE_PMD_SHIFT PMD_SHIFT
 #define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
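A note on the locking in the two new helpers: pgtable_trans_huge_deposit()
and pgtable_trans_huge_withdraw() assert that the PMD page table lock is
held, which is why both helpers wrap their deposit/withdraw loops in
pmd_lock()/spin_unlock(). The generic deposit helper in
mm/pgtable-generic.c looks approximately like this (paraphrased from
kernels of this era; the exact code may differ in the tree this series
applies to):

	void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
					pgtable_t pgtable)
	{
		assert_spin_locked(pmd_lockptr(mm, pmdp));

		/* FIFO */
		if (!pmd_huge_pte(mm, pmdp))
			INIT_LIST_HEAD(&pgtable->lru);
		else
			list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->lru);
		pmd_huge_pte(mm, pmdp) = pgtable;
	}

pgtable_trans_huge_withdraw() takes pages back off the same per-PMD list,
so the 512 PTE page table pages deposited by pmd_alloc_one_page_with_ptes()
are recovered one at a time during THP split or in
pmd_free_page_with_ptes().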