From: Barry Song <21cnbao@gmail.com>
Date: Fri, 12 Apr 2024 21:43:20 +1200
Subject: Re: [PATCH v5 1/4] mm: add per-order mTHP anon_fault_alloc and anon_fault_fallback counters
To: Ryan Roberts
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, cerasuolodomenico@gmail.com, chrisl@kernel.org, david@redhat.com, kasong@tencent.com, linux-kernel@vger.kernel.org, peterx@redhat.com, surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org, yosryahmed@google.com, yuzhao@google.com, corbet@lwn.net
References: <20240412073740.294272-1-21cnbao@gmail.com> <20240412073740.294272-2-21cnbao@gmail.com>
On Fri, Apr 12, 2024 at 9:27 PM Ryan Roberts wrote:
>
> Hi Barry,
>
> 2 remaining comments - otherwise looks good. (same comments I just made in the
> v4 conversation).
>
> On 12/04/2024 08:37, Barry Song wrote:
> > From: Barry Song
> >
> > Profiling a system blindly with mTHP has become challenging due to the
> > lack of visibility into its operations. Presenting the success rate of
> > mTHP allocations appears to be a pressing need.
> >
> > Recently, I've been experiencing significant difficulty debugging
> > performance improvements and regressions without these figures. It's
> > crucial for us to understand the true effectiveness of mTHP in real-world
> > scenarios, especially in systems with fragmented memory.
> >
> > This patch establishes the framework for per-order mTHP
> > counters. It begins by introducing the anon_fault_alloc and
> > anon_fault_fallback counters. Additionally, to maintain consistency
> > with thp_fault_fallback_charge in /proc/vmstat, this patch also tracks
> > anon_fault_fallback_charge when mem_cgroup_charge fails for mTHP.
> > Incorporating additional counters should now be straightforward as well.
> >
> > Signed-off-by: Barry Song
> > Cc: Chris Li
> > Cc: David Hildenbrand
> > Cc: Domenico Cerasuolo
> > Cc: Kairui Song
> > Cc: Matthew Wilcox (Oracle)
> > Cc: Peter Xu
> > Cc: Ryan Roberts
> > Cc: Suren Baghdasaryan
> > Cc: Yosry Ahmed
> > Cc: Yu Zhao
> > ---
> >  include/linux/huge_mm.h | 51 ++++++++++++++++++++++++++++++++++
> >  mm/huge_memory.c        | 61 +++++++++++++++++++++++++++++++++++++++++
> >  mm/memory.c             |  3 ++
> >  mm/page_alloc.c         |  4 +++
> >  4 files changed, 119 insertions(+)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index e896ca4760f6..c5beb54b97cb 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -264,6 +264,57 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> >  					  enforce_sysfs, orders);
> >  }
> >
> > +enum mthp_stat_item {
> > +	MTHP_STAT_ANON_FAULT_ALLOC,
> > +	MTHP_STAT_ANON_FAULT_FALLBACK,
> > +	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> > +	__MTHP_STAT_COUNT
> > +};
> > +
> > +struct mthp_stat {
> > +	unsigned long stats[0][__MTHP_STAT_COUNT];
> > +};
> > +
> > +extern struct mthp_stat __percpu *mthp_stats;
> > +
> > +static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> > +{
> > +	if (order <= 0 || order > PMD_ORDER || !mthp_stats)
> > +		return;
> > +
> > +	this_cpu_inc(mthp_stats->stats[order][item]);
> > +}
> > +
> > +static inline void count_mthp_stats(int order, enum mthp_stat_item item, long delta)
> > +{
> > +	if (order <= 0 || order > PMD_ORDER || !mthp_stats)
> > +		return;
> > +
> > +	this_cpu_add(mthp_stats->stats[order][item], delta);
> > +}
> > +
> > +/*
> > + * Fold the foreign cpu mthp stats into our own.
> > + *
> > + * This is adding to the stats on one processor
> > + * but keeps the global counts constant.
> > + */
> > +static inline void mthp_stats_fold_cpu(int cpu)
> > +{
> > +	struct mthp_stat *fold_stat;
> > +	int i, j;
> > +
> > +	if (!mthp_stats)
> > +		return;
> > +	fold_stat = per_cpu_ptr(mthp_stats, cpu);
> > +	for (i = 1; i <= PMD_ORDER; i++) {
> > +		for (j = 0; j < __MTHP_STAT_COUNT; j++) {
> > +			count_mthp_stats(i, j, fold_stat->stats[i][j]);
> > +			fold_stat->stats[i][j] = 0;
> > +		}
> > +	}
> > +}
>
> This is a pretty horrible hack; I'm pretty sure just summing for all *possible*
> cpus should work.
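A minimal sketch of that "sum over all possible CPUs" alternative, assuming the mthp_stats layout introduced in the patch above; the helper name is illustrative and not part of the patch. Reading with for_each_possible_cpu() means a dead CPU's contribution is never lost, so no fold step is needed on CPU hotplug:

/*
 * Sketch only: read-side sum over every possible CPU, making a
 * fold in page_alloc_cpu_dead() unnecessary. Assumes the same
 * per-cpu mthp_stats array as defined in the hunk above.
 */
static unsigned long sum_mthp_stat_possible(int order, enum mthp_stat_item item)
{
	unsigned long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu) {
		struct mthp_stat *this = per_cpu_ptr(mthp_stats, cpu);

		sum += this->stats[order][item];
	}

	return sum;
}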
>
> > +
> >  #define transparent_hugepage_use_zero_page()				\
> >  	(transparent_hugepage_flags &					\
> >  	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index dc30139590e6..21c4ac74b484 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -526,6 +526,50 @@ static const struct kobj_type thpsize_ktype = {
> >  	.sysfs_ops = &kobj_sysfs_ops,
> >  };
> >
> > +struct mthp_stat __percpu *mthp_stats;
> > +
> > +static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
> > +{
> > +	unsigned long sum = 0;
> > +	int cpu;
> > +
> > +	cpus_read_lock();
> > +	for_each_online_cpu(cpu) {
> > +		struct mthp_stat *this = per_cpu_ptr(mthp_stats, cpu);
> > +
> > +		sum += this->stats[order][item];
> > +	}
> > +	cpus_read_unlock();
> > +
> > +	return sum;
> > +}
> > +
> > +#define DEFINE_MTHP_STAT_ATTR(_name, _index)					\
> > +static ssize_t _name##_show(struct kobject *kobj,			\
> > +			struct kobj_attribute *attr, char *buf)		\
> > +{									\
> > +	int order = to_thpsize(kobj)->order;				\
> > +									\
> > +	return sysfs_emit(buf, "%lu\n", sum_mthp_stat(order, _index));	\
> > +}									\
> > +static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
> > +
> > +DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
> > +DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> > +DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> > +
> > +static struct attribute *stats_attrs[] = {
> > +	&anon_fault_alloc_attr.attr,
> > +	&anon_fault_fallback_attr.attr,
> > +	&anon_fault_fallback_charge_attr.attr,
> > +	NULL,
> > +};
> > +
> > +static struct attribute_group stats_attr_group = {
> > +	.name = "stats",
> > +	.attrs = stats_attrs,
> > +};
> > +
> >  static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >  {
> >  	unsigned long size = (PAGE_SIZE << order) / SZ_1K;
> > @@ -549,6 +593,12 @@ static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >  		return ERR_PTR(ret);
> >  	}
> >
> > +	ret = sysfs_create_group(&thpsize->kobj, &stats_attr_group);
> > +	if (ret) {
> > +		kobject_put(&thpsize->kobj);
> > +		return ERR_PTR(ret);
> > +	}
> > +
> >  	thpsize->order = order;
> >  	return thpsize;
> >  }
> > @@ -691,6 +741,11 @@ static int __init hugepage_init(void)
> >  	 */
> >  	MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER < 2);
> >
> > +	mthp_stats = __alloc_percpu((PMD_ORDER + 1) * sizeof(mthp_stats->stats[0]),
> > +			sizeof(unsigned long));
>
> Personally I think it would be cleaner to allocate statically using
> ilog2(MAX_PTRS_PER_PTE) instead of PMD_ORDER.

Hi Ryan,

I don't understand why MAX_PTRS_PER_PTE is the correct size.

For ARM64,

#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)

#define MAX_PTRS_PER_PTE PTRS_PER_PTE

#define PTRS_PER_PTE	(1 << (PAGE_SHIFT - 3))

When PAGE_SIZE is 16KiB or 64KiB, PTRS_PER_PTE can be a huge number.
Am I missing something?
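Spelling out the arithmetic behind that question (illustrative values only, following the arm64 definitions quoted above):

/*
 * PTRS_PER_PTE = 1 << (PAGE_SHIFT - 3) on arm64, so:
 *
 *   16KiB pages: PAGE_SHIFT = 14  ->  PTRS_PER_PTE = 1 << 11 = 2048
 *   64KiB pages: PAGE_SHIFT = 16  ->  PTRS_PER_PTE = 1 << 13 = 8192
 *
 * i.e. the "huge number" referred to above.
 */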
>
> > +	if (!mthp_stats)
> > +		return -ENOMEM;
> > +
> >  	err = hugepage_init_sysfs(&hugepage_kobj);
> >  	if (err)
> >  		goto err_sysfs;
> > @@ -725,6 +780,8 @@ static int __init hugepage_init(void)
> >  err_slab:
> >  	hugepage_exit_sysfs(hugepage_kobj);
> >  err_sysfs:
> > +	free_percpu(mthp_stats);
> > +	mthp_stats = NULL;
> >  	return err;
> >  }
> >  subsys_initcall(hugepage_init);
> > @@ -880,6 +937,8 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> >  		folio_put(folio);
> >  		count_vm_event(THP_FAULT_FALLBACK);
> >  		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
> > +		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> > +		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> >  		return VM_FAULT_FALLBACK;
> >  	}
> >  	folio_throttle_swaprate(folio, gfp);
> > @@ -929,6 +988,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> >  		mm_inc_nr_ptes(vma->vm_mm);
> >  		spin_unlock(vmf->ptl);
> >  		count_vm_event(THP_FAULT_ALLOC);
> > +		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
> >  		count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
> >  	}
> >
> > @@ -1050,6 +1110,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> >  	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> >  	if (unlikely(!folio)) {
> >  		count_vm_event(THP_FAULT_FALLBACK);
> > +		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> >  		return VM_FAULT_FALLBACK;
> >  	}
> >  	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 649a547fe8e3..06048af7cf9a 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4368,6 +4368,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> >  			folio = vma_alloc_folio(gfp, order, vma, addr, true);
> >  			if (folio) {
> >  				if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> > +					count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> >  					folio_put(folio);
> >  					goto next;
> >  				}
> > @@ -4376,6 +4377,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> >  				return folio;
> >  			}
> >  next:
> > +			count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
> >  			order = next_order(&orders, order);
> >  		}
> >
> > @@ -4485,6 +4487,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >
> >  	folio_ref_add(folio, nr_pages - 1);
> >  	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > +	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
> >  	folio_add_new_anon_rmap(folio, vma, addr);
> >  	folio_add_lru_vma(folio, vma);
> >  setpte:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b51becf03d1e..3135b5ca2457 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5840,6 +5840,10 @@ static int page_alloc_cpu_dead(unsigned int cpu)
> >  	 */
> >  	vm_events_fold_cpu(cpu);
> >
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +	mthp_stats_fold_cpu(cpu);
> > +#endif
> > +
> >  	/*
> >  	 * Zero the differential counters of the dead processor
> >  	 * so that the vm statistics are consistent.
>
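As a usage sketch (not part of the patch): once the "stats" group is registered on each per-size kobject, the counters should be readable from user space. The path below assumes the existing hugepages-<size>kB layout under /sys/kernel/mm/transparent_hugepage/, with 64kB chosen purely as an example size.

/* Illustrative user-space read of one of the new per-order counters. */
#include <stdio.h>

int main(void)
{
	/* Hypothetical path; pick a size directory that exists on the running kernel. */
	const char *path = "/sys/kernel/mm/transparent_hugepage/"
			   "hugepages-64kB/stats/anon_fault_alloc";
	unsigned long val;
	FILE *f = fopen(path, "r");

	if (!f)
		return 1;
	if (fscanf(f, "%lu", &val) == 1)
		printf("anon_fault_alloc: %lu\n", val);
	fclose(f);
	return 0;
}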