Subject: Re: [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Wed, 4 Dec 2024 17:07:53 +0000
To: Wenchao Hao, David Hildenbrand, Andrew Morton, Matthew Wilcox,
 Oscar Salvador, Muhammad Usama Anjum, Andrii Nakryiko, Peter Xu,
 Barry Song <21cnbao@gmail.com>, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Dev Jain
References: <20241203134949.2588947-1-haowenchao22@gmail.com>
 <926c6f86-82c6-41bb-a24d-5418163d5c5e@redhat.com>

+ Dev Jain

On 04/12/2024 14:30, Wenchao Hao wrote:
> On 2024/12/3 22:17, David Hildenbrand wrote:
>> On 03.12.24 14:49, Wenchao Hao wrote:
>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages for
>>> each VMA, but it does not include large pages smaller than PMD size.
>>>
>>> This patch adds the statistics of anonymous huge pages allocated by
>>> mTHP, which are smaller than PMD size, to the AnonHugePages field in
>>> smaps.
>>>
>>> Signed-off-by: Wenchao Hao
>>> ---
>>>   fs/proc/task_mmu.c | 6 ++++++
>>>   1 file changed, 6 insertions(+)
>>>
>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>> index 38a5a3e9cba2..b655011627d8 100644
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>           if (!folio_test_swapbacked(folio) && !dirty &&
>>>               !folio_test_dirty(folio))
>>>               mss->lazyfree += size;
>>> +
>>> +        /*
>>> +         * Count large pages smaller than PMD size to anonymous_thp
>>> +         */
>>> +        if (!compound && PageHead(page) && folio_order(folio))
>>> +            mss->anonymous_thp += folio_size(folio);
>>>       }
>>>
>>>         if (folio_test_ksm(folio))
>>
>> I think we decided to leave this (and /proc/meminfo) as one of the last
>> interfaces where this is only concerned with PMD-sized ones:
>>
>
> Could you explain why?
>
> When analyzing the impact of mTHP on performance, we need to understand
> how many pages in the process are actually present as large pages.
> By comparing this value with the actual memory usage of the process,
> we can analyze the large page allocation success rate of the process,
> and further investigate the situation of khugepaged. If the actual

Note that khugepaged does not yet support collapse to mTHP sizes. Dev Jain
(CCed) is working on an implementation for that now. If you are planning to
look at this area, you might want to chat with him first.

> proportion of large pages is low, the performance of the process may
> be affected, which could be directly reflected in a high number of
> TLB misses and page faults.
>
> However, currently only PMD-sized large pages are being counted,
> which is insufficient.
>
>> Documentation/admin-guide/mm/transhuge.rst:
>>
>> The number of PMD-sized anonymous transparent huge pages currently used by the
>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>> To identify what applications are using PMD-sized anonymous transparent huge
>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>> PMD-sized THP for historical reasons and should have been called
>> AnonHugePmdMapped).
>>
>
> Maybe rename this field, so that AnonHugePages can also include huge pages
> allocated as mTHP?
>
> Thanks,
> wenchao
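
FWIW, if the immediate goal is just to gauge how often mTHP allocations
succeed, the per-size counters under
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/ (e.g.
anon_fault_alloc and anon_fault_fallback, as described in
Documentation/admin-guide/mm/transhuge.rst) may already get you part of the
way, although they are system-wide rather than per-process. A minimal,
untested user-space sketch, assuming that sysfs layout:

/*
 * Untested sketch: dump the per-order mTHP anonymous fault counters.
 * Paths are assumed to follow Documentation/admin-guide/mm/transhuge.rst;
 * adjust if your kernel exposes a different layout.
 */
#include <glob.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

static long read_stat(const char *dir, const char *name)
{
	char path[PATH_MAX];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "%s/stats/%s", dir, name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *pat = "/sys/kernel/mm/transparent_hugepage/hugepages-*kB";
	glob_t g;
	size_t i;

	if (glob(pat, 0, NULL, &g))
		return 1;

	for (i = 0; i < g.gl_pathc; i++) {
		long alloc = read_stat(g.gl_pathv[i], "anon_fault_alloc");
		long fallback = read_stat(g.gl_pathv[i], "anon_fault_fallback");

		if (alloc < 0 && fallback < 0)
			continue;
		/* successful vs. fallen-back anon faults per folio size */
		printf("%-16s alloc=%ld fallback=%ld\n",
		       strrchr(g.gl_pathv[i], '/') + 1, alloc, fallback);
	}

	globfree(&g);
	return 0;
}

That obviously doesn't give the per-VMA view this patch is after, but it can
tell you whether allocations are falling back before you start digging into a
particular process.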