References: <20240710095503.3193901-1-ryan.roberts@arm.com>
In-Reply-To: <20240710095503.3193901-1-ryan.roberts@arm.com>
From: Barry Song
Date: Wed, 10 Jul 2024 22:08:18 +0800
Message-ID:
Subject: Re: [PATCH v2] mm: shmem: Rename mTHP shmem counters
To: Ryan Roberts
Cc: Andrew Morton, Hugh Dickins, Jonathan Corbet, David Hildenbrand,
 Baolin Wang, Lance Yang, Matthew Wilcox, Zi Yan, Daniel Gomez,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Wed, Jul 10, 2024 at 5:55 PM Ryan Roberts wrote:
>
> The legacy PMD-sized THP counters at /proc/vmstat include
> thp_file_alloc, thp_file_fallback and thp_file_fallback_charge, which
> rather confusingly refer to shmem THP and do not include any other types
> of file pages. This is inconsistent since in most other places in the
> kernel, THP counters are explicitly separated for anon, shmem and file
> flavours. However, we are stuck with it since it constitutes a user ABI.
>
> Recently, commit 66f44583f9b6 ("mm: shmem: add mTHP counters for
> anonymous shmem") added equivalent mTHP stats for shmem, keeping the
> same "file_" prefix in the names. But in future, we may want to add
> extra stats to cover actual file pages, at which point, it would all
> become very confusing.
>
> So let's take the opportunity to rename these new counters "shmem_"
> before the change makes it upstream and the ABI becomes immutable. While
> we are at it, let's improve the documentation for the legacy counters to
> make it clear that they count shmem pages only.
>
> Signed-off-by: Ryan Roberts
> Reviewed-by: Baolin Wang
> Reviewed-by: Lance Yang

Reviewed-by: Barry Song

> ---
>
> Hi All,
>
> Applies on top of yesterday's mm-unstable (2073cda629a4) and tested with mm
> selftests; no regressions observed.
>
> The backstory here is that I'd like to introduce some counters for regular file
> folio allocations to observe how often large folio allocation succeeds, but
> these shmem counters are named "file" which is going to make things confusing.
> So hoping to solve that before commit 66f44583f9b6 ("mm: shmem: add mTHP
> counters for anonymous shmem") goes upstream (it is currently in mm-stable).
>
> Changes since v1 [1]
> ====================
>   - Updated documentation for existing legacy "file_" counters to make it clear
>     they only count shmem pages.
>
> [1] https://lore.kernel.org/linux-mm/20240708112445.2690631-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
>  Documentation/admin-guide/mm/transhuge.rst | 29 ++++++++++++----------
>  include/linux/huge_mm.h                    |  6 ++---
>  mm/huge_memory.c                           | 12 ++++-----
>  mm/shmem.c                                 |  8 +++---
>  4 files changed, 29 insertions(+), 26 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 747c811ee8f1..3528daa1f239 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -412,20 +412,23 @@ thp_collapse_alloc_failed
>          the allocation.
>
>  thp_file_alloc
> -        is incremented every time a file huge page is successfully
> -        allocated.
> +        is incremented every time a shmem huge page is successfully
> +        allocated (Note that despite being named after "file", the counter
> +        measures only shmem).
>
>  thp_file_fallback
> -        is incremented if a file huge page is attempted to be allocated
> -        but fails and instead falls back to using small pages.
> +        is incremented if a shmem huge page is attempted to be allocated
> +        but fails and instead falls back to using small pages. (Note that
> +        despite being named after "file", the counter measures only shmem).
>
>  thp_file_fallback_charge
> -        is incremented if a file huge page cannot be charged and instead
> +        is incremented if a shmem huge page cannot be charged and instead
>          falls back to using small pages even though the allocation was
> -        successful.
> +        successful. (Note that despite being named after "file", the
> +        counter measures only shmem).
>
>  thp_file_mapped
> -        is incremented every time a file huge page is mapped into
> +        is incremented every time a file or shmem huge page is mapped into
>          user address space.
>
>  thp_split_page
> @@ -496,16 +499,16 @@ swpout_fallback
>          Usually because failed to allocate some continuous swap space
>          for the huge page.
>
> -file_alloc
> -        is incremented every time a file huge page is successfully
> +shmem_alloc
> +        is incremented every time a shmem huge page is successfully
>          allocated.
>
> -file_fallback
> -        is incremented if a file huge page is attempted to be allocated
> +shmem_fallback
> +        is incremented if a shmem huge page is attempted to be allocated
>          but fails and instead falls back to using small pages.
>
> -file_fallback_charge
> -        is incremented if a file huge page cannot be charged and instead
> +shmem_fallback_charge
> +        is incremented if a shmem huge page cannot be charged and instead
>          falls back to using small pages even though the allocation was
>          successful.
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index acb6ac24a07e..cff002be83eb 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -269,9 +269,9 @@ enum mthp_stat_item {
>          MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>          MTHP_STAT_SWPOUT,
>          MTHP_STAT_SWPOUT_FALLBACK,
> -        MTHP_STAT_FILE_ALLOC,
> -        MTHP_STAT_FILE_FALLBACK,
> -        MTHP_STAT_FILE_FALLBACK_CHARGE,
> +        MTHP_STAT_SHMEM_ALLOC,
> +        MTHP_STAT_SHMEM_FALLBACK,
> +        MTHP_STAT_SHMEM_FALLBACK_CHARGE,
>          MTHP_STAT_SPLIT,
>          MTHP_STAT_SPLIT_FAILED,
>          MTHP_STAT_SPLIT_DEFERRED,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9ec64aa2be94..f9696c94e211 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -568,9 +568,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
> -DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
> -DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
> -DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
> +DEFINE_MTHP_STAT_ATTR(shmem_alloc, MTHP_STAT_SHMEM_ALLOC);
> +DEFINE_MTHP_STAT_ATTR(shmem_fallback, MTHP_STAT_SHMEM_FALLBACK);
> +DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
>  DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
>  DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
>  DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
> @@ -581,9 +581,9 @@ static struct attribute *stats_attrs[] = {
>          &anon_fault_fallback_charge_attr.attr,
>          &swpout_attr.attr,
>          &swpout_fallback_attr.attr,
> -        &file_alloc_attr.attr,
> -        &file_fallback_attr.attr,
> -        &file_fallback_charge_attr.attr,
> +        &shmem_alloc_attr.attr,
> +        &shmem_fallback_attr.attr,
> +        &shmem_fallback_charge_attr.attr,
>          &split_attr.attr,
>          &split_failed_attr.attr,
>          &split_deferred_attr.attr,
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 921d59c3d669..f24dfbd387ba 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1777,7 +1777,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>                          if (pages == HPAGE_PMD_NR)
>                                  count_vm_event(THP_FILE_FALLBACK);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -                        count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
> +                        count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
>  #endif
>                          order = next_order(&suitable_orders, order);
>                  }
> @@ -1804,8 +1804,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>                                  count_vm_event(THP_FILE_FALLBACK_CHARGE);
>                          }
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -                        count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK);
> -                        count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
> +                        count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
> +                        count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
>  #endif
>                  }
>                  goto unlock;
> @@ -2181,7 +2181,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>                  if (folio_test_pmd_mappable(folio))
>                          count_vm_event(THP_FILE_ALLOC);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -                count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);
> +                count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
>  #endif
>                  goto alloced;
>          }
> --
> 2.43.0
>

Thanks
Barry
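
P.S. In case it helps anyone checking this from the userspace side, here is a
rough, untested sketch (not part of this patch) that prints the legacy,
shmem-only thp_file_* lines from /proc/vmstat next to the renamed per-size
shmem_* counters under /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/,
as described in transhuge.rst. The page-size list in it is only an illustrative
assumption; a given kernel exposes only the orders it actually supports.

/*
 * Rough sketch: dump the legacy (shmem-only, despite the name) thp_file_*
 * counters from /proc/vmstat and the per-size shmem_* mTHP counters from
 * sysfs.  The sizes_kb[] list below is illustrative only.
 */
#include <stdio.h>
#include <string.h>

static void dump_vmstat_thp_file(void)
{
        char line[128];
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f)
                return;
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "thp_file_", 9))
                        fputs(line, stdout);
        fclose(f);
}

static void dump_mthp_shmem_stats(void)
{
        static const int sizes_kb[] = { 16, 32, 64, 128, 256, 512, 1024, 2048 };
        static const char * const stats[] = {
                "shmem_alloc", "shmem_fallback", "shmem_fallback_charge",
        };
        char path[128], val[64];

        for (size_t i = 0; i < sizeof(sizes_kb) / sizeof(sizes_kb[0]); i++) {
                for (size_t j = 0; j < sizeof(stats) / sizeof(stats[0]); j++) {
                        FILE *f;

                        snprintf(path, sizeof(path),
                                 "/sys/kernel/mm/transparent_hugepage/hugepages-%dkB/stats/%s",
                                 sizes_kb[i], stats[j]);
                        f = fopen(path, "r");
                        if (!f)
                                continue;       /* order or stat not exposed on this kernel */
                        if (fgets(val, sizeof(val), f))
                                printf("hugepages-%dkB %s: %s", sizes_kb[i], stats[j], val);
                        fclose(f);
                }
        }
}

int main(void)
{
        dump_vmstat_thp_file();
        dump_mthp_shmem_stats();
        return 0;
}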