From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 13 Oct 2020 16:53:22 -0700
From: Andrew Morton
To: abdhalee@linux.vnet.ibm.com, akpm@linux-foundation.org,
 anders.roxell@linaro.org, arnd@arndb.de, christophe.leroy@csgroup.eu,
 jcmvbkbc@gmail.com, joro@8bytes.org, linux-mm@kvack.org, luto@kernel.org,
 mm-commits@vger.kernel.org, naresh.kamboju@linaro.org, peterz@infradead.org,
 rppt@linux.ibm.com, sathnaga@linux.vnet.ibm.com, shorne@gmail.com,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 092/181] mm: account PMD tables like PTE tables
Message-ID: <20201013235322.pSdRSYom5%akpm@linux-foundation.org>
In-Reply-To: <20201013164658.3bfd96cc224d8923e66a9f4e@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: owner-linux-mm@kvack.org
Precedence: bulk

From: Matthew Wilcox
Subject: mm: account PMD tables like PTE tables

We account the PTE level of the page tables to the process in order to
make smarter OOM decisions and help diagnose why memory is fragmented.
For these same reasons, we should account pages allocated for PMDs.  With
larger process address spaces and ASLR, the number of PMDs in use is
higher than it used to be so the inaccuracy is starting to matter.

[rppt@linux.ibm.com: arm: __pmd_free_tlb(): call page table destructor]
  Link: https://lkml.kernel.org/r/20200825111303.GB69694@linux.ibm.com
Link: http://lkml.kernel.org/r/20200627184642.GF25039@casper.infradead.org
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport
Cc: Abdul Haleem
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Christophe Leroy
Cc: Joerg Roedel
Cc: Max Filippov
Cc: Peter Zijlstra
Cc: Satheesh Rajendran
Cc: Stafford Horne
Cc: Naresh Kamboju
Cc: Anders Roxell
Signed-off-by: Andrew Morton
---

 arch/arm/include/asm/tlb.h |    1 +
 include/linux/mm.h         |   24 ++++++++++++++++++++----
 2 files changed, 21 insertions(+), 4 deletions(-)

--- a/arch/arm/include/asm/tlb.h~mm-account-pmd-tables-like-pte-tables
+++ a/arch/arm/include/asm/tlb.h
@@ -59,6 +59,7 @@ __pmd_free_tlb(struct mmu_gather *tlb, p
 #ifdef CONFIG_ARM_LPAE
 	struct page *page = virt_to_page(pmdp);
 
+	pgtable_pmd_page_dtor(page);
 	tlb_remove_table(tlb, page);
 #endif
 }
--- a/include/linux/mm.h~mm-account-pmd-tables-like-pte-tables
+++ a/include/linux/mm.h
@@ -2254,7 +2254,7 @@ static inline spinlock_t *pmd_lockptr(st
 	return ptlock_ptr(pmd_to_page(pmd));
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool pmd_ptlock_init(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	page->pmd_huge_pte = NULL;
@@ -2262,7 +2262,7 @@ static inline bool pgtable_pmd_page_ctor
 	return ptlock_init(page);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
+static inline void pmd_ptlock_free(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
@@ -2279,8 +2279,8 @@ static inline spinlock_t *pmd_lockptr(st
 	return &mm->page_table_lock;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
-static inline void pgtable_pmd_page_dtor(struct page *page) {}
+static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2293,6 +2293,22 @@ static inline spinlock_t *pmd_lock(struc
 	return ptl;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	if (!pmd_ptlock_init(page))
+		return false;
+	__SetPageTable(page);
+	inc_zone_page_state(page, NR_PAGETABLE);
+	return true;
+}
+
+static inline void pgtable_pmd_page_dtor(struct page *page)
+{
+	pmd_ptlock_free(page);
+	__ClearPageTable(page);
+	dec_zone_page_state(page, NR_PAGETABLE);
+}
+
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be
_
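
[Editor's note] For readers unfamiliar with this API: pgtable_pmd_page_ctor()
is meant to be called by each architecture when it allocates a page for a PMD
table, and pgtable_pmd_page_dtor() when it frees one, so that the PageTable
flag and the NR_PAGETABLE counter (surfaced as "PageTables" in /proc/meminfo)
stay balanced.  Below is a minimal sketch of such a caller, loosely modeled
on the generic pmd_alloc_one()/pmd_free() helpers in asm-generic/pgalloc.h;
the exact GFP flags and structure are illustrative assumptions, not part of
this patch:

/* Illustrative sketch only -- not part of this patch. */
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	struct page *page;
	gfp_t gfp = GFP_PGTABLE_USER;

	if (mm == &init_mm)
		gfp = GFP_PGTABLE_KERNEL;
	page = alloc_pages(gfp, 0);
	if (!page)
		return NULL;
	/* Marks the page PageTable and bumps NR_PAGETABLE. */
	if (!pgtable_pmd_page_ctor(page)) {
		__free_pages(page, 0);
		return NULL;
	}
	return (pmd_t *)page_address(page);
}

static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
	/* Clears PageTable and drops NR_PAGETABLE before the page goes back. */
	pgtable_pmd_page_dtor(virt_to_page(pmd));
	free_pages((unsigned long)pmd, 0);
}

Note that the ctor can fail (ptlock allocation under CONFIG_SPLIT_PMD_PTLOCKS),
which is why allocation paths must check its return value and free the page on
failure, as above.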