From: Barry Song <21cnbao@gmail.com>
Date: Wed, 3 Apr 2024 11:14:39 +1300
Subject: Re: [PATCH v2] mm: add per-order mTHP alloc_success and alloc_fail counters
To: Ryan Roberts
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, willy@infradead.org,
 david@redhat.com, cerasuolodomenico@gmail.com, kasong@tencent.com,
 surenb@google.com, v-songbaohua@oppo.com, yosryahmed@google.com,
 yuzhao@google.com, chrisl@kernel.org, peterx@redhat.com
In-Reply-To: <16ac20eb-41fc-4024-a701-de576ff63c09@arm.com>
References: <20240328095139.143374-1-21cnbao@gmail.com> <16ac20eb-41fc-4024-a701-de576ff63c09@arm.com>
On Tue, Apr 2, 2024 at 11:44 PM Ryan Roberts wrote:
>
> On 02/04/2024 10:40, Barry Song wrote:
> > On Tue, Apr 2, 2024 at 9:58 PM Ryan Roberts wrote:
> >>
> >> On 28/03/2024 09:51, Barry Song wrote:
> >>> From: Barry Song
> >>>
> >>> Profiling a system blindly with mTHP has become challenging due
> >>> to the lack of visibility into its operations. Presenting the
> >>> success rate of mTHP allocations appears to be a pressing need.
> >>>
> >>> Recently, I've been experiencing significant difficulty debugging
> >>> performance improvements and regressions without these figures.
> >>> It's crucial for us to understand the true effectiveness of
> >>> mTHP in real-world scenarios, especially in systems with
> >>> fragmented memory.
> >>>
> >>> This patch sets up the framework for per-order mTHP counters,
> >>> starting with the introduction of alloc_success and alloc_fail
> >>> counters. Incorporating additional counters should now be
> >>> straightforward as well.
> >>>
> >>> The initial two unsigned longs for each event are unused, given
> >>> that order-0 and order-1 are not mTHP. Nonetheless, this refinement
> >>> improves code clarity.
> >>
> >> Overall, this approach looks good to me - thanks for doing this!
> >
> > Thank you for reviewing!
> >
> >>
> >>>
> >>> Signed-off-by: Barry Song
> >>> ---
> >>> -v2:
> >>>  * move to sysfs and provide per-order counters; David, Ryan, Willy
> >>> -v1:
> >>>  https://lore.kernel.org/linux-mm/20240326030103.50678-1-21cnbao@gmail.com/
> >>>
> >>>  include/linux/huge_mm.h | 17 +++++++++++++
> >>>  mm/huge_memory.c        | 54 ++++++++++++++++++++++++++++++++++++++++++
> >>>  mm/memory.c             |  3 +++
> >>>  3 files changed, 74 insertions(+)
> >>>
> >>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>> index e896ca4760f6..27fa26a22a8f 100644
> >>> --- a/include/linux/huge_mm.h
> >>> +++ b/include/linux/huge_mm.h
> >>> @@ -264,6 +264,23 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> >>>                                           enforce_sysfs, orders);
> >>>  }
> >>>
> >>> +enum thp_event_item {
> >>> +	THP_ALLOC_SUCCESS,
> >>> +	THP_ALLOC_FAIL,
> >>> +	NR_THP_EVENT_ITEMS
> >>> +};
> >>> +
> >>> +struct thp_event_state {
> >>> +	unsigned long event[PMD_ORDER + 1][NR_THP_EVENT_ITEMS];
> >>> +};
> >>> +
> >>> +DECLARE_PER_CPU(struct thp_event_state, thp_event_states);
> >>> +
> >>> +static inline void count_thp_event(int order, enum thp_event_item item)
> >>
> >> We previously concluded that we will likely want a "currently allocated THPs"
> >> counter, which will need both increment and decrement. So perhaps
>
> > I intend to handle this via incremental patches, but it's improbable
> > to occur this
> > month due to my packed schedule with multiple tasks.
> > Presently, my top priority
> > regarding counting is the fallback ratio for allocations. Following
> > that, it would
> > likely be the SWAP allocation and fallback ratio to determine if the swapfile is
> > highly fragmented.
>
> Yes indeed. I wasn't suggesting that the extra counters should be added
> immediately, just that having a view on the eventual need for "instantaneous"
> counters might influence the name of this function.

Makes sense. "Count" remains the most suitable name for variables that
consistently increase by one. We might introduce "inc" and "dec" later and
encapsulate them within "count," as you proposed:

thp_stat_inc(int order, enum thp_stat_item item);
thp_stat_dec(int order, enum thp_stat_item item);

// Wrapper for increment-only events
static inline void count_thp_event(int order, enum thp_stat_item item)
{
	thp_stat_inc(order, item);
}

> >
> >> "thp_event_inc()" (and eventual "thp_event_dec()") are more appropriate? And
> >> perhaps "event" isn't the right description either since it implies always
> >> counting upwards (to me at least). Although I guess you have borrowed from the
> >> vmstat pattern? That file has some instantaneous counters, so how does it decrement?
> >>
> >
> > Counting represents just one aspect of vmstat. In this scenario, the values are
> > consistently increasing. For those metrics that could fluctuate in
> > either direction,
> > an entirely different mechanism is employed. These are typically managed through
> > global atomic operations.
>
> OK, but why do these need to be global atomic? I would have thought we could
> just define the per-cpu values as signed and allow both inc and dec? Then when
> you sum them, you get the correct answer.
> You could have:
>
> long stat[PMD_ORDER + 1][NR_THP_EVENT_ITEMS];
>
> thp_stat_inc(int order, enum thp_stat_item item);
> thp_stat_dec(int order, enum thp_stat_item item);
>
> // Wrapper for increment-only events
> static inline void count_thp_event(int order, enum thp_stat_item item)
> {
> 	thp_stat_inc(order, item);
> }
>
> Just a thought. Perhaps it's better to follow the established pattern.
>

I think your proposed approach is exactly what the existing pattern might
be using; there is a combination of an atomic variable and per-cpu inc/dec,
for example:

void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
{
	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
	s8 __percpu *p = pcp->vm_stat_diff + item;
	s8 v, t;

	/* See __mod_node_page_state */
	preempt_disable_nested();

	v = __this_cpu_inc_return(*p);
	t = __this_cpu_read(pcp->stat_threshold);
	if (unlikely(v > t)) {
		s8 overstep = t >> 1;

		zone_page_state_add(v + overstep, zone, item);
		__this_cpu_write(*p, -overstep);
	}

	preempt_enable_nested();
}

void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
{
	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
	s8 __percpu *p = pcp->vm_node_stat_diff + item;
	s8 v, t;

	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));

	/* See __mod_node_page_state */
	preempt_disable_nested();

	v = __this_cpu_inc_return(*p);
	t = __this_cpu_read(pcp->stat_threshold);
	if (unlikely(v > t)) {
		s8 overstep = t >> 1;

		node_page_state_add(v + overstep, pgdat, item);
		__this_cpu_write(*p, -overstep);
	}

	preempt_enable_nested();
}

zone_page_state_add() and node_page_state_add() finally update the atomic
variables:

static inline void zone_page_state_add(long x, struct zone *zone,
				 enum zone_stat_item item)
{
	atomic_long_add(x, &zone->vm_stat[item]);
	atomic_long_add(x, &vm_zone_stat[item]);
}

static inline void node_page_state_add(long x, struct pglist_data *pgdat,
				 enum node_stat_item item)
{
	atomic_long_add(x, &pgdat->vm_stat[item]);
	atomic_long_add(x, &vm_node_stat[item]);
}

I'll dig into the details while bringing up dual-direction counters. It's
likely we can simplify the implementation compared to the complexity of
zone and node states.

> >
> > /*
> >  * Zone and node-based page accounting with per cpu differentials.
> >  */
> > extern atomic_long_t vm_zone_stat[NR_VM_ZONE_STAT_ITEMS];
> > extern atomic_long_t vm_node_stat[NR_VM_NODE_STAT_ITEMS];
> > extern atomic_long_t vm_numa_event[NR_VM_NUMA_EVENT_ITEMS];
> >
> > void __mod_zone_page_state(struct zone *, enum zone_stat_item item, long);
> > void __inc_zone_page_state(struct page *, enum zone_stat_item);
> > void __dec_zone_page_state(struct page *, enum zone_stat_item);
> >
> > void __mod_node_page_state(struct pglist_data *, enum node_stat_item
> > item, long);
> > void __inc_node_page_state(struct page *, enum node_stat_item);
> > void __dec_node_page_state(struct page *, enum node_stat_item);
> >
> > static inline void zone_page_state_add(long x, struct zone *zone,
> > 				 enum zone_stat_item item)
> > {
> > 	atomic_long_add(x, &zone->vm_stat[item]);
> > 	atomic_long_add(x, &vm_zone_stat[item]);
> > }
> >
> > Counting is likely most efficiently handled with per-CPU copies and a summation
> > function to aggregate values from all CPUs.
> >
> >>> +{
> >>> +	this_cpu_inc(thp_event_states.event[order][item]);
> >>
> >> Could we declare as "PMD_ORDER - 1" then do [order - 2] here? That would save 16
> >> bytes per CPU. (8K in a system with 512 CPUs). Possibly with an order_index()
> >> helper since the *_show() functions will need this too.
> >
> > I actually put some words in commit messages
> >
> > "The initial two unsigned longs for each event are unused, given
> > that order-0 and order-1 are not mTHP. Nonetheless, this refinement
> > improves code clarity."
>
> Yes I saw that. I was just observing that we can have both memory efficiency and
> code clarity fairly easily. Not a strong opinion though.
Achieving this can be as simple as modifying three lines of code. I prefer
the approach below over using "order_index", as order_index's name feels
overly broad; on the other hand, THP_MIN_ORDER can potentially be reused in
more scenarios.

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 27fa26a22a8f..f62a8ecf1bda 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -70,6 +70,7 @@ extern struct kobj_attribute shmem_enabled_attr;
  * (which is a limitation of the THP implementation).
  */
 #define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
+#define THP_MIN_ORDER	2

 /*
  * Mask of all large folio orders supported for file THP.
@@ -271,14 +272,14 @@ enum thp_event_item {
 };

 struct thp_event_state {
-	unsigned long event[PMD_ORDER + 1][NR_THP_EVENT_ITEMS];
+	unsigned long event[PMD_ORDER + 1 - THP_MIN_ORDER][NR_THP_EVENT_ITEMS];
 };

 DECLARE_PER_CPU(struct thp_event_state, thp_event_states);

 static inline void count_thp_event(int order, enum thp_event_item item)
 {
-	this_cpu_inc(thp_event_states.event[order][item]);
+	this_cpu_inc(thp_event_states.event[order - THP_MIN_ORDER][item]);
 }

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e3d35c2fa563..3414ac2cf0dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -536,7 +536,7 @@ static unsigned long sum_thp_events(int order, enum thp_event_item item)
 	for_each_online_cpu(cpu) {
 		struct thp_event_state *this = &per_cpu(thp_event_states, cpu);

-		sum += this->event[order][item];
+		sum += this->event[order - THP_MIN_ORDER][item];
 	}

> >
> > However, if conserving memory is a higher priority, I can utilize
> > PMD_ORDER - 1 in version 3.
> >
> >>
> >>> +}
> >>> +
> >>>  #define transparent_hugepage_use_zero_page()				\
> >>>  	(transparent_hugepage_flags &					\
> >>>  	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
> >>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>> index 1683de78c313..addd093d8410 100644
> >>> --- a/mm/huge_memory.c
> >>> +++ b/mm/huge_memory.c
> >>> @@ -526,6 +526,52 @@ static const struct kobj_type thpsize_ktype = {
> >>>  	.sysfs_ops = &kobj_sysfs_ops,
> >>>  };
> >>>
> >>> +DEFINE_PER_CPU(struct thp_event_state, thp_event_states) = {{{0}}};
> >>> +
> >>> +static unsigned long sum_thp_events(int order, enum thp_event_item item)
> >>> +{
> >>> +	unsigned long sum = 0;
> >>> +	int cpu;
> >>> +
> >>> +	for_each_online_cpu(cpu) {
> >>> +		struct thp_event_state *this = &per_cpu(thp_event_states, cpu);
> >>> +
> >>> +		sum += this->event[order][item];
> >>> +	}
> >>> +
> >>> +	return sum;
> >>> +}
> >>> +
> >>> +static ssize_t alloc_success_show(struct kobject *kobj,
> >>> +				  struct kobj_attribute *attr, char *buf)
> >>> +{
> >>> +	int order = to_thpsize(kobj)->order;
> >>> +
> >>> +	return sysfs_emit(buf, "%lu\n", sum_thp_events(order, THP_ALLOC_SUCCESS));
> >>> +}
> >>> +
> >>> +static ssize_t alloc_fail_show(struct kobject *kobj,
> >>> +			       struct kobj_attribute *attr, char *buf)
> >>> +{
> >>> +	int order = to_thpsize(kobj)->order;
> >>> +
> >>> +	return sysfs_emit(buf, "%lu\n", sum_thp_events(order, THP_ALLOC_FAIL));
> >>> +}
> >>> +
> >>> +static struct kobj_attribute alloc_success_attr = __ATTR_RO(alloc_success);
> >>> +static struct kobj_attribute alloc_fail_attr = __ATTR_RO(alloc_fail);
> >>> +
> >>> +static struct attribute *stats_attrs[] = {
> >>> +	&alloc_success_attr.attr,
> >>> +	&alloc_fail_attr.attr,
> >>> +	NULL,
> >>> +};
> >>> +
> >>> +static struct attribute_group stats_attr_group = {
> >>> +	.name = "stats",
> >>> +	.attrs = stats_attrs,
> >>> +};
> >>> +
> >>>  static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >>>  {
> >>>  	unsigned long size = (PAGE_SIZE << order) / SZ_1K;
> >>> @@ -549,6 +595,12 @@ static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >>>  		return ERR_PTR(ret);
> >>>  	}
> >>>
> >>> +	ret = sysfs_create_group(&thpsize->kobj, &stats_attr_group);
> >>> +	if (ret) {
> >>> +		kobject_put(&thpsize->kobj);
> >>> +		return ERR_PTR(ret);
> >>> +	}
> >>> +
> >>>  	thpsize->order = order;
> >>>  	return thpsize;
> >>>  }
> >>> @@ -1050,8 +1102,10 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> >>>  	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> >>>  	if (unlikely(!folio)) {
> >>>  		count_vm_event(THP_FAULT_FALLBACK);
> >>> +		count_thp_event(HPAGE_PMD_ORDER, THP_ALLOC_FAIL);
> >>>  		return VM_FAULT_FALLBACK;
> >>>  	}
> >>> +	count_thp_event(HPAGE_PMD_ORDER, THP_ALLOC_SUCCESS);
> >>>  	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
> >>>  }
> >>>
> >>> diff --git a/mm/memory.c b/mm/memory.c
> >>> index 984b138f85b4..c9c1031c2ecb 100644
> >>> --- a/mm/memory.c
> >>> +++ b/mm/memory.c
> >>> @@ -4365,7 +4365,10 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> >>>  			}
> >>>  			folio_throttle_swaprate(folio, gfp);
> >>>  			clear_huge_page(&folio->page, vmf->address, 1 << order);
> >>> +			count_thp_event(order, THP_ALLOC_SUCCESS);
> >>>  			return folio;
> >>> +		} else {
> >>
> >> Do we need this else, given the last statement in the "if" clause is return?
> >
> > I included this "else" statement because we encounter a scenario where
> > we "goto next" even if allocation succeeds.
> >
> > 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> > 		if (folio) {
> > 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> > 				folio_put(folio);
> > 				goto next;
> > 			}
> >
> > After revisiting the code for a second time, I noticed that the SUCCESS
> > counter is incremented after the "goto" statement, so I realize that I won't
> > need this "else" clause.
> >
> > 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> > 		if (folio) {
> > 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> > 				folio_put(folio);
> > 				goto next;
> > 			}
> > 			folio_throttle_swaprate(folio, gfp);
> > 			clear_huge_page(&folio->page, vmf->address, 1 << order);
> > 			count_thp_event(order, THP_ALLOC_SUCCESS);
> > 			return folio;
> >
> > So, I have two options:
> >
> > 1. Move the count_thp_event(order, THP_ALLOC_SUCCESS); statement ahead of
> >    mem_cgroup_charge(folio, vma->vm_mm, gfp) and retain the else clause.
> > 2. Keep THP_ALLOC_SUCCESS unchanged and remove the else clause.
> >
> > Since mem_cgroup_charge() rarely fails, option 2 should suffice.
>
> I guess it depends exactly on what the semantics of THP_ALLOC_SUCCESS and
> THP_ALLOC_FAIL are supposed to be. For me, this makes most sense:
>
> 	while (orders) {
> 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> 		if (folio) {
> 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> 				folio_put(folio);
> 				goto next;
> 			}
> 			folio_throttle_swaprate(folio, gfp);
> 			clear_huge_page(&folio->page, vmf->address, 1 << order);
> 			count_thp_event(order, THP_ALLOC_SUCCESS);
> 			return folio;
> 		}
> next:
> 		count_thp_event(order, THP_ALLOC_FAIL);
> 		order = next_order(&orders, order);
> 	}
>
> But this way you will end up with more THP_ALLOC_FAIL events than page faults.
> Let's say you have 1M, 512K, 256K and 128K mTHP enabled, and you fail to
> allocate all except 128K; you will end up with 3 FAILs and 1 SUCCESS for a
> single fault. Is that a problem?

I don't see this as an issue. Users are familiar with its configuration in
sysfs. The key focus here is the extent to which we encounter difficulties
in allocating the mTHP we prefer due to fragmented buddy memory.

But I would like to move count_thp_event(order, THP_ALLOC_FAIL) before "next:",
so that a mem_cgroup_charge() failure is not counted as THP_ALLOC_FAIL.
>
> I guess an alternative approach would be to mark SUCCESS per-size. But have a
> single, global FAIL counter, which just tells you how many faults end up getting
> a 4K page. (or have that counter in addition to the per-size FAIL counters).
>
> It feels a bit odd that you get a per-size FAIL if the allocation fails, but not
> if the VMA is congested; if the VMA is congested, you will have already filtered
> out the larger orders and therefore never see the allocation FAIL.
>
> What question are you usually trying to answer with these stats?

The crux of the matter is the allocation success rate when setting mTHP to,
for instance, 64KiB. If it's just 5%, mTHP is merely posing as a helpful
ally. At 50%, mTHP demonstrates its reliability as a dependable companion.
Attaining an 80% allocation success rate solidifies mTHP's status as a
steadfast friend, consistently supporting us.

>
> Perhaps I'm making this more complicated than it needs to be... :)
>

Indeed, sticking to the simplest approach seems prudent for now:

/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/alloc_success
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/alloc_fail

Users aren't likely to be oblivious to what's happening, and the kernel may
not be so adept at manipulating data to aid user comprehension; it could
potentially distort the data instead.

> >
> > By the way, why do we proceed to the next order after
> > mem_cgroup_charge() fails?
> > I assume mem_cgroup_charge() is still likely to fail even after we move to
> > the next order?
>
> That bit was added by Kefeng Wang, but my understanding is that
> mem_cgroup_charge() will fail if we are bumping up against the quota limit? So I
> guess the thinking is we may not be able to allocate 1M but 32K may be fine.
>
> >
> >>
> >>> +		count_thp_event(order, THP_ALLOC_FAIL);
> >>>  		}
> >>>  next:
> >>>  		order = next_order(&orders, order);
> >>
> >> Thanks,
> >> Ryan
> >>

Thanks
Barry