From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20240716135907.4047689-1-ryan.roberts@arm.com> <20240716135907.4047689-4-ryan.roberts@arm.com>
In-Reply-To: <20240716135907.4047689-4-ryan.roberts@arm.com>
From: Barry Song
Date: Fri, 19 Jul 2024 12:12:15 +1200
Message-ID:
Subject: Re: [PATCH v2 3/3] mm: mTHP stats for pagecache folio allocations
To: Ryan Roberts
Cc: Andrew Morton, Hugh Dickins, Jonathan Corbet, "Matthew Wilcox (Oracle)", David Hildenbrand, Lance Yang, Baolin Wang, Gavin Shan, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"
On Wed, Jul 17, 2024 at 1:59 AM Ryan Roberts wrote:
>
> Expose 3 new mTHP stats for file (pagecache) folio allocations:
>
> /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_alloc
> /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback
> /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback_charge
>
> This will provide some insight on the sizes of large folios being
> allocated for file-backed memory, and how often allocation is failing.
>
> All non-order-0 (and most order-0) folio allocations are currently done
> through filemap_alloc_folio(), and folios are charged in a subsequent
> call to filemap_add_folio(). So count file_fallback when allocation
> fails in filemap_alloc_folio() and count file_alloc or
> file_fallback_charge in filemap_add_folio(), based on whether charging
> succeeded or not. There are some users of filemap_add_folio() that
> allocate their own order-0 folio by other means, so we would not count
> an allocation failure in this case, but we also don't care about order-0
> allocations. This approach feels like it should be good enough and
> doesn't require any (impractically large) refactoring.
>
> The existing mTHP stats interface is reused to provide consistency to
> users. And because we are reusing the same interface, we can reuse the
> same infrastructure on the kernel side.
>
> Signed-off-by: Ryan Roberts
> ---
>  Documentation/admin-guide/mm/transhuge.rst | 13 +++++++++++++
>  include/linux/huge_mm.h                    |  3 +++
>  include/linux/pagemap.h                    | 16 ++++++++++++++--
>  mm/filemap.c                               |  6 ++++--
>  mm/huge_memory.c                           |  7 +++++++
>  5 files changed, 41 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 058485daf186..d4857e457add 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -512,6 +512,19 @@ shmem_fallback_charge
>         falls back to using small pages even though the allocation was
>         successful.
>
> +file_alloc
> +       is incremented every time a file huge page is successfully
> +       allocated.
> +
> +file_fallback
> +       is incremented if a file huge page is attempted to be allocated
> +       but fails and instead falls back to using small pages.
> +
> +file_fallback_charge
> +       is incremented if a file huge page cannot be charged and instead
> +       falls back to using small pages even though the allocation was
> +       successful.
> +

I realized that when we talk about fallback, it doesn't necessarily
mean small pages; it could also refer to smaller huge pages.

anon_fault_alloc
        is incremented every time a huge page is successfully
        allocated and charged to handle a page fault.

anon_fault_fallback
        is incremented if a page fault fails to allocate or charge
        a huge page and instead falls back to using huge pages with
        lower orders or small pages.

anon_fault_fallback_charge
        is incremented if a page fault fails to charge a huge page and
        instead falls back to using huge pages with lower orders or
        small pages even though the allocation was successful.

This also applies to files, right?
        do {
                gfp_t alloc_gfp = gfp;

                err = -ENOMEM;
                if (order > 0)
                        alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
                folio = filemap_alloc_folio(alloc_gfp, order);
                if (!folio)
                        continue;

                /* Init accessed so avoid atomic mark_page_accessed later */
                if (fgp_flags & FGP_ACCESSED)
                        __folio_set_referenced(folio);

                err = filemap_add_folio(mapping, folio, index, gfp);
                if (!err)
                        break;
                folio_put(folio);
                folio = NULL;
        } while (order-- > 0);

> split
>         is incremented every time a huge page is successfully split into
>         smaller orders. This can happen for a variety of reasons but a
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index b8c63c3e967f..4f9109fcdded 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -123,6 +123,9 @@ enum mthp_stat_item {
>         MTHP_STAT_SHMEM_ALLOC,
>         MTHP_STAT_SHMEM_FALLBACK,
>         MTHP_STAT_SHMEM_FALLBACK_CHARGE,
> +       MTHP_STAT_FILE_ALLOC,
> +       MTHP_STAT_FILE_FALLBACK,
> +       MTHP_STAT_FILE_FALLBACK_CHARGE,
>         MTHP_STAT_SPLIT,
>         MTHP_STAT_SPLIT_FAILED,
>         MTHP_STAT_SPLIT_DEFERRED,
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 6e2f72d03176..95a147b5d117 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -562,14 +562,26 @@ static inline void *detach_page_private(struct page *page)
>  }
>
>  #ifdef CONFIG_NUMA
> -struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
> +struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
>  #else
> -static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
> +static inline struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
>  {
>         return folio_alloc_noprof(gfp, order);
>  }
>  #endif
>
> +static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
> +{
> +       struct folio *folio;
> +
> +       folio = __filemap_alloc_folio_noprof(gfp, order);
> +
> +       if (!folio)
> +               count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
> +
> +       return folio;
> +}

Do we need to add and export __filemap_alloc_folio_noprof()? Is there
any case where we would want to only allocate the folio without
calling count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK)?

> +
>  #define filemap_alloc_folio(...)                               \
>         alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 53d5d0410b51..131d514fca29 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -963,6 +963,8 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
>         int ret;
>
>         ret = mem_cgroup_charge(folio, NULL, gfp);
> +       count_mthp_stat(folio_order(folio),
> +                       ret ? MTHP_STAT_FILE_FALLBACK_CHARGE : MTHP_STAT_FILE_ALLOC);
>         if (ret)
>                 return ret;

Would the following be better?

        ret = mem_cgroup_charge(folio, NULL, gfp);
        if (ret) {
                count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
                return ret;
        }
        count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);

Anyway, it's up to you. The code just feels a bit off to me :-)

>
> @@ -990,7 +992,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
>  EXPORT_SYMBOL_GPL(filemap_add_folio);
>
>  #ifdef CONFIG_NUMA
> -struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
> +struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
>  {
>         int n;
>         struct folio *folio;
> @@ -1007,7 +1009,7 @@ struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
>         }
>         return folio_alloc_noprof(gfp, order);
>  }
> -EXPORT_SYMBOL(filemap_alloc_folio_noprof);
> +EXPORT_SYMBOL(__filemap_alloc_folio_noprof);
>  #endif
>
>  /*
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 578ac212c172..26d558e3e80f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -608,7 +608,14 @@ static struct attribute_group anon_stats_attr_grp = {
>         .attrs = anon_stats_attrs,
>  };
>
> +DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
> +DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
> +DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
> +
>  static struct attribute *file_stats_attrs[] = {
> +       &file_alloc_attr.attr,
> +       &file_fallback_attr.attr,
> +       &file_fallback_charge_attr.attr,
>  #ifdef CONFIG_SHMEM
>         &shmem_alloc_attr.attr,
>         &shmem_fallback_attr.attr,
> --
> 2.43.0
>

Thanks
Barry