linux-mm.kvack.org archive mirror
From: Barry Song <21cnbao@gmail.com>
To: Yu Zhao <yuzhao@google.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	willy@infradead.org,  david@redhat.com, ryan.roberts@arm.com,
	cerasuolodomenico@gmail.com,  kasong@tencent.com,
	surenb@google.com, v-songbaohua@oppo.com,  yosryahmed@google.com,
	chrisl@kernel.org, peterx@redhat.com
Subject: Re: [PATCH v2] mm: add per-order mTHP alloc_success and alloc_fail counters
Date: Tue, 2 Apr 2024 09:40:41 +1300
Message-ID: <CAGsJ_4wQt04nTJZK_JR+w04y5ubuV73c=Xw+jZtzOFpFoaDVWQ@mail.gmail.com>
In-Reply-To: <CAOUHufY+CqX8b5JGvxLUuXAjbiNbSk=KPMeFPpeE9hgGE2fk=Q@mail.gmail.com>

On Tue, Apr 2, 2024 at 3:46 AM Yu Zhao <yuzhao@google.com> wrote:
>
> On Thu, Mar 28, 2024 at 5:51 AM Barry Song <21cnbao@gmail.com> wrote:
> >
> > From: Barry Song <v-songbaohua@oppo.com>
> >
> > Profiling a system blindly with mTHP has become challenging due
> > to the lack of visibility into its operations. Presenting the
> > success rate of mTHP allocations appears to be a pressing need.
> >
> > Recently, I've been experiencing significant difficulty debugging
> > performance improvements and regressions without these figures.
> > It's crucial for us to understand the true effectiveness of
> > mTHP in real-world scenarios, especially in systems with
> > fragmented memory.
> >
> > This patch sets up the framework for per-order mTHP counters,
> > starting with the introduction of alloc_success and alloc_fail
> > counters.  Incorporating additional counters should now be
> > straightforward as well.
> >
> > The first two unsigned longs for each event are unused, since
> > order-0 and order-1 are not mTHP. Nonetheless, indexing the array
> > directly by order keeps the code clearer.
> >
> > Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> > ---
> >  -v2:
> >  * move to sysfs and provide per-order counters; David, Ryan, Willy
> >  -v1:
> >  https://lore.kernel.org/linux-mm/20240326030103.50678-1-21cnbao@gmail.com/
> >
> >  include/linux/huge_mm.h | 17 +++++++++++++
> >  mm/huge_memory.c        | 54 +++++++++++++++++++++++++++++++++++++++++
> >  mm/memory.c             |  3 +++
> >  3 files changed, 74 insertions(+)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index e896ca4760f6..27fa26a22a8f 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -264,6 +264,23 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> >                                           enforce_sysfs, orders);
> >  }
> >
> > +enum thp_event_item {
> > +       THP_ALLOC_SUCCESS,
> > +       THP_ALLOC_FAIL,
> > +       NR_THP_EVENT_ITEMS
> > +};
> > +
> > +struct thp_event_state {
> > +       unsigned long event[PMD_ORDER + 1][NR_THP_EVENT_ITEMS];
> > +};
> > +
> > +DECLARE_PER_CPU(struct thp_event_state, thp_event_states);
>
> Do we have existing per-CPU counters that cover all possible THP
> orders? I.e., foo_counter[PMD_ORDER + 1][BAR_ITEMS]. I don't think we
> do but I want to double check.

Right.

The existing counters in vm_event_state only cover PMD-mapped THP, so
we currently have no per-order counters.
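
For reference, here is a minimal sketch (illustrative only, not
necessarily the exact patch code; the helper names are mine) of how
the per-order, per-CPU counters declared above could be bumped and
later summed when the sysfs files are read:

static inline void count_thp_event(int order, enum thp_event_item item)
{
	/* Cheap per-CPU increment on the hot allocation path. */
	this_cpu_inc(thp_event_states.event[order][item]);
}

static unsigned long sum_thp_events(int order, enum thp_event_item item)
{
	unsigned long sum = 0;
	int cpu;

	/* Fold the per-CPU counts only when a reader asks for them. */
	for_each_possible_cpu(cpu)
		sum += per_cpu(thp_event_states, cpu).event[order][item];

	return sum;
}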

>
> This might be fine if BAR_ITEMS is global, not per memcg. Otherwise on
> larger systems, e.g., 512 CPUs which is not uncommon, we'd have high
> per-CPU memory overhead. For Google's datacenters, per-CPU memory
> overhead has been a problem.

Right. I don't feel a strong need for per-memcg counters, and the
/sys/kernel/mm/transparent_hugepage/hugepages-<size> interface is also
global.

>
> I'm not against this patch since NR_THP_EVENT_ITEMS is not per memcg.
> Alternatively, we could make the per-CPU counters track only one
> order and flush the local counter to a global atomic counter if the
> new order doesn't match the existing order stored in the local
> counter. WDYT?

The code assumes the worst case, where users enable multiple orders
at once. We therefore need a lightweight approach that avoids
frequently flushing per-CPU counts into atomic counters.
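
To illustrate the trade-off, here is a hypothetical sketch of the
single-order alternative suggested above (names and layout are mine,
purely illustrative). With several orders enabled and interleaving,
the flush loop runs on nearly every event:

struct thp_local_events {
	int order;	/* the order currently tracked on this CPU */
	unsigned long event[NR_THP_EVENT_ITEMS];
};

static DEFINE_PER_CPU(struct thp_local_events, thp_local_events);
static atomic_long_t thp_global_events[PMD_ORDER + 1][NR_THP_EVENT_ITEMS];

static void count_thp_event_alt(int order, enum thp_event_item item)
{
	struct thp_local_events *local = get_cpu_ptr(&thp_local_events);
	int i;

	if (local->order != order) {
		/* Flush the previously tracked order to the global atomics. */
		for (i = 0; i < NR_THP_EVENT_ITEMS; i++) {
			atomic_long_add(local->event[i],
					&thp_global_events[local->order][i]);
			local->event[i] = 0;
		}
		local->order = order;
	}
	local->event[item]++;
	put_cpu_ptr(&thp_local_events);
}

With the plain per-order per-CPU array, by contrast, every event stays
a single this_cpu_inc() and summation only happens when the counters
are read.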

Thanks
Barry

