From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Mike Rapoport
Cc: akpm@linux-foundation.org, willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2 2/3] mm: Add PUD level pagetable account
Date: Thu, 23 Jun 2022 11:32:37 +0800

On 6/22/2022 10:38 PM, Mike Rapoport wrote:
> On Wed, Jun 22, 2022 at 04:58:53PM +0800, Baolin Wang wrote:
>> Now the PUD level ptes are always protected by mm->page_table_lock,
>> which means no split pagetable lock is needed. So the generic PUD level
>> pagetable page allocation will not call pgtable_pte_page_ctor/dtor(),
>> which means we will miss accounting PUD level pagetable pages.
>>
>> Add pagetable accounting by calling pgtable_set_and_inc() or
>> pgtable_clear_and_dec() when allocating or freeing PUD level pagetable
>> pages, to get accurate pagetable accounting.
>>
>> Moreover this patch also marks the PUD level pagetable pages with the
>> PG_table flag, which helps sanity validation in unpoison_memory() and
>> gives more accurate pagetable accounting via the /proc/kpageflags
>> interface.
>>
>> Meanwhile, convert the architectures using the generic PUD pagetable
>> allocation to add the corresponding pgtable_set_and_inc() or
>> pgtable_clear_and_dec() calls to account PUD level pagetables.
>>
>> Signed-off-by: Baolin Wang
>> ---
>>  arch/arm64/include/asm/tlb.h         |  5 ++++-
>>  arch/loongarch/include/asm/pgalloc.h | 11 ++++++++---
>>  arch/mips/include/asm/pgalloc.h      | 11 ++++++++---
>>  arch/s390/include/asm/tlb.h          |  1 +
>>  arch/x86/mm/pgtable.c                |  5 ++++-
>>  include/asm-generic/pgalloc.h        | 12 ++++++++++--
>>  6 files changed, 35 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
>> index c995d1f..47e0623 100644
>> --- a/arch/arm64/include/asm/tlb.h
>> +++ b/arch/arm64/include/asm/tlb.h
>> @@ -94,7 +94,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
>>  static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
>>  				  unsigned long addr)
>>  {
>> -	tlb_remove_table(tlb, virt_to_page(pudp));
>> +	struct page *page = virt_to_page(pudp);
>> +
>> +	pgtable_clear_and_dec(page);
>> +	tlb_remove_table(tlb, page);
>>  }
>>  #endif
>>
>> diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
>> index b0a57b2..50a896f 100644
>> --- a/arch/loongarch/include/asm/pgalloc.h
>> +++ b/arch/loongarch/include/asm/pgalloc.h
>> @@ -89,10 +89,15 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
>>  static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
>>  {
>>  	pud_t *pud;
>> +	struct page *pg;
>
> struct page *page;
>
> looks better IMO.

Sure.

>
>> +
>> +	pg = alloc_pages(GFP_KERNEL & ~__GFP_HIGHMEM, PUD_ORDER);
>> +	if (!pg)
>> +		return NULL;
>>
>> -	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_ORDER);
>> -	if (pud)
>> -		pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);
>> +	pgtable_set_and_inc(pg);
>> +	pud = (pud_t *)page_address(pg);
>
> I don't think __get_free_pages() should be replaced with alloc_pages()
> here, just call pgtable_set_and_inc() with virt_to_page(pud).
>
> The same applies for the cases below.

Sure. Will do in next version. Thanks.
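
For reference, the loongarch change I have in mind for the next version is
roughly the following. This is only a rough sketch of your suggestion, keeping
__get_free_pages() as-is; it assumes pgtable_set_and_inc() from patch 1/3 takes
a struct page * and does the PG_table marking plus NR_PAGETABLE accounting:

static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
{
	pud_t *pud;

	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_ORDER);
	if (!pud)
		return NULL;

	/* Account the PUD page and set PG_table before the table is used. */
	pgtable_set_and_inc(virt_to_page(pud));
	pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);

	return pud;
}

The other cases below would keep their existing allocation calls in the same
way and only gain the pgtable_set_and_inc()/pgtable_clear_and_dec() calls.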