From mboxrd@z Thu Jan  1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Fri, 12 Apr 2024 10:40:10 +1200
Subject: Re: [PATCH v4 1/2] mm: add per-order mTHP anon_alloc and anon_alloc_fallback counters
To: Ryan Roberts
Cc: david@redhat.com, akpm@linux-foundation.org, linux-mm@kvack.org,
	cerasuolodomenico@gmail.com, chrisl@kernel.org, kasong@tencent.com,
	peterx@redhat.com, surenb@google.com, v-songbaohua@oppo.com,
	willy@infradead.org, yosryahmed@google.com, yuzhao@google.com,
	linux-kernel@vger.kernel.org
In-Reply-To: <7cf0a47b-0347-4e81-956f-34bef4ef794a@arm.com>
References: <20240405102704.77559-1-21cnbao@gmail.com> <20240405102704.77559-2-21cnbao@gmail.com> <7cf0a47b-0347-4e81-956f-34bef4ef794a@arm.com>
On Fri, Apr 12, 2024 at 4:38 AM Ryan Roberts wrote:
>
> On 05/04/2024 11:27, Barry Song wrote:
> > From: Barry Song
> >
> > Profiling a system blindly with mTHP has become challenging due to the
> > lack of visibility into its operations.
> > Presenting the success rate of mTHP allocations appears to be a
> > pressing need.
> >
> > Recently, I've been experiencing significant difficulty debugging
> > performance improvements and regressions without these figures.
> > It's crucial for us to understand the true effectiveness of mTHP in
> > real-world scenarios, especially in systems with fragmented memory.
> >
> > This patch sets up the framework for per-order mTHP counters, starting
> > with the introduction of anon_alloc and anon_alloc_fallback counters.
> > Incorporating additional counters should now be straightforward as well.
> >
> > Signed-off-by: Barry Song
> > ---
> >  include/linux/huge_mm.h | 19 ++++++++++++++++
> >  mm/huge_memory.c        | 48 +++++++++++++++++++++++++++++++++++++++++
> >  mm/memory.c             |  2 ++
> >  3 files changed, 69 insertions(+)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index e896ca4760f6..c5d33017a4dd 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -264,6 +264,25 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> >                                         enforce_sysfs, orders);
> >  }
> >
> > +enum mthp_stat_item {
> > +	MTHP_STAT_ANON_ALLOC,
> > +	MTHP_STAT_ANON_ALLOC_FALLBACK,
> > +	__MTHP_STAT_COUNT
> > +};
> > +
> > +struct mthp_stat {
> > +	unsigned long stats[PMD_ORDER + 1][__MTHP_STAT_COUNT];
>
> I saw a fix for this, allocating dynamically, because powerpc's PMD_ORDER is
> not constant. I wonder if ilog2(MAX_PTRS_PER_PTE) would help here?

It's a possibility. However, since we've passed all the build tests using
dynamic allocation, it might not be worth the effort to attempt static
allocation again. Who knows what will happen next :-)

> > +};
> > +
> > +DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
> > +
> > +static inline void count_mthp_stat(int order, enum mthp_stat_item item)
>
> I thought we were going to call this always-counting-up type of stat an
> "event"? "count_mthp_event"?
But I'm happy with it as is, personally.

> > +{
> > +	if (unlikely(order > PMD_ORDER))
> > +		return;
>
> I'm wondering if it also makes sense to ignore order == 0? Although I guess if
> called for order-0 it's safe, since the storage exists and sum_mthp_stat() is
> never called for 0. Ignore this comment :)

Agreed. I'd like to change it to ignore order 0.

> > +	this_cpu_inc(mthp_stats.stats[order][item]);
> > +}
> > +
> >  #define transparent_hugepage_use_zero_page()				\
> >  	(transparent_hugepage_flags &					\
> >  	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 9d4b2fbf6872..5b875f0fc923 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -526,6 +526,46 @@ static const struct kobj_type thpsize_ktype = {
> >  	.sysfs_ops = &kobj_sysfs_ops,
> >  };
> >
> > +DEFINE_PER_CPU(struct mthp_stat, mthp_stats) = {{{0}}};
> > +
> > +static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
> > +{
> > +	unsigned long sum = 0;
> > +	int cpu;
> > +
> > +	for_each_online_cpu(cpu) {
>
> What happens if a cpu that was online and collected a bunch of stats gets
> offlined? The user will see stats get smaller?
>
> Perhaps this should be for_each_possible_cpu()? Although I'm not sure what
> happens to percpu data when a cpu goes offline? Is the data preserved? Or
> wiped, or unmapped? dunno. Might we need to rescue stats into a global
> counter at offline-time?

Good catch. I see /proc/vmstat always uses for_each_online_cpu() yet doesn't
have this issue, but the mTHP counters do have the problem.
* step 1: cat the current thp_swpout value before running a test program
which does swpout;

/ # cat /proc/vmstat | grep thp_swpout
thp_swpout 0
/ # cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/anon_swpout
0

* step 2: run the test program on cpu2;

/ # taskset -c 2 /home/barry/develop/linux/swpcache-2m

* step 3: cat the current thp_swpout value after running a test program
which does swpout;

/ # cat /proc/vmstat | grep thp_swpout
thp_swpout 98
/ # cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/anon_swpout
98

* step 4: offline cpu2 and read thp_swpout;

/ # echo 0 > /sys/devices/system/cpu/cpu2/online
[  339.058661] psci: CPU2 killed (polled 0 ms)
/ # cat /proc/vmstat | grep thp_swpout
thp_swpout 98
/ # cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/anon_swpout
0

* step 5: online cpu2 and read thp_swpout

/ # echo 1 > /sys/devices/system/cpu/cpu2/online
[  791.642058] CPU2: Booted secondary processor 0x0000000002 [0x000f0510]
/ # cat /proc/vmstat | grep thp_swpout
thp_swpout 98
/ # cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/anon_swpout
98

As you can see, in step 4, /proc/vmstat is all good but the mTHP counters
become zero. The reason is that /proc/vmstat folds the offline cpu's events
into an online cpu, while the mTHP counters lack this:

/*
 * Fold the foreign cpu events into our own.
 *
 * This is adding to the events on one processor
 * but keeps the global counts constant.
 */
void vm_events_fold_cpu(int cpu)
{
	struct vm_event_state *fold_state = &per_cpu(vm_event_states, cpu);
	int i;

	for (i = 0; i < NR_VM_EVENT_ITEMS; i++) {
		count_vm_events(i, fold_state->event[i]);
		fold_state->event[i] = 0;
	}
}

static int page_alloc_cpu_dead(unsigned int cpu)
{
	...
	/*
	 * Spill the event counters of the dead processor
	 * into the current processors event counters.
	 * This artificially elevates the count of the current
	 * processor.
	 */
	vm_events_fold_cpu(cpu);
	...
	return 0;
}

So I will do the same thing for the mTHP counters: fold the offline cpu's
counters into an online one.

> > +		struct mthp_stat *this = &per_cpu(mthp_stats, cpu);
> > +
> > +		sum += this->stats[order][item];
> > +	}
> > +
> > +	return sum;
> > +}
> > +
> > +#define DEFINE_MTHP_STAT_ATTR(_name, _index)				\
> > +static ssize_t _name##_show(struct kobject *kobj,			\
> > +			struct kobj_attribute *attr, char *buf)		\
> > +{									\
> > +	int order = to_thpsize(kobj)->order;				\
> > +									\
> > +	return sysfs_emit(buf, "%lu\n", sum_mthp_stat(order, _index));	\
> > +}									\
> > +static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
>
> Very nice!

Right. I had duplicated copy-paste and a bad smell in the code, so I wrote
this macro.

> > +
> > +DEFINE_MTHP_STAT_ATTR(anon_alloc, MTHP_STAT_ANON_ALLOC);
> > +DEFINE_MTHP_STAT_ATTR(anon_alloc_fallback, MTHP_STAT_ANON_ALLOC_FALLBACK);
> > +
> > +static struct attribute *stats_attrs[] = {
> > +	&anon_alloc_attr.attr,
> > +	&anon_alloc_fallback_attr.attr,
> > +	NULL,
> > +};
> > +
> > +static struct attribute_group stats_attr_group = {
> > +	.name = "stats",
> > +	.attrs = stats_attrs,
> > +};
> > +
> >  static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >  {
> >  	unsigned long size = (PAGE_SIZE << order) / SZ_1K;
> > @@ -549,6 +589,12 @@ static struct thpsize *thpsize_create(int order, struct kobject *parent)
> >  		return ERR_PTR(ret);
> >  	}
> >
> > +	ret = sysfs_create_group(&thpsize->kobj, &stats_attr_group);
> > +	if (ret) {
> > +		kobject_put(&thpsize->kobj);
> > +		return ERR_PTR(ret);
> > +	}
> > +
> >  	thpsize->order = order;
> >  	return thpsize;
> >  }
> > @@ -1050,8 +1096,10 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> >  	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> >  	if (unlikely(!folio)) {
> >  		count_vm_event(THP_FAULT_FALLBACK);
> > +		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_ALLOC_FALLBACK);
>
> I think we should aim for the PMD-order
> MTHP_STAT_ANON_ALLOC and MTHP_STAT_ANON_ALLOC_FALLBACK to match
> THP_FAULT_ALLOC and THP_FAULT_FALLBACK. It's not currently set up this
> way...

Right. I also realized this and asked for your comments on it in another
thread.

> >  		return VM_FAULT_FALLBACK;
> >  	}
> > +	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_ALLOC);
> >  	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
> >  }
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 649e3ed94487..1723c8ddf9cb 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4374,8 +4374,10 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> >  			}
> >  			folio_throttle_swaprate(folio, gfp);
> >  			clear_huge_page(&folio->page, vmf->address, 1 << order);
> > +			count_mthp_stat(order, MTHP_STAT_ANON_ALLOC);
> >  			return folio;
> >  		}
> > +		count_mthp_stat(order, MTHP_STAT_ANON_ALLOC_FALLBACK);
>
> ...And we should follow the same usage pattern for the smaller mTHP here too,
> which means MTHP_STAT_ANON_ALLOC_FALLBACK should be after the next: label. We

The only difference is the case

	if (mem_cgroup_charge(folio, vma->vm_mm, gfp))
		goto next;

but vmstat counts this as a fallback, so I feel good about moving it after
next:

	if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
		folio_put(folio);
		count_vm_event(THP_FAULT_FALLBACK);
		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
		return VM_FAULT_FALLBACK;
	}

> could introduce a MTHP_STAT_ANON_ALLOC_FALLBACK_CHARGE which would only
> trigger on a fallback due to charge failure, just like
> THP_FAULT_FALLBACK_CHARGE?

It is fine to add this THP_FAULT_FALLBACK_CHARGE, though it is not that
useful for profiling buddy fragmentation.

> >  next:
> >  		order = next_order(&orders, order);
> >  	}
>

Thanks
Barry