From: "Huang, Ying" <ying.huang@intel.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	Arjan Van De Ven <arjan@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	David Hildenbrand <david@redhat.com>,
	Johannes Weiner <jweiner@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
Date: Mon, 17 Jul 2023 17:16:11 +0800	[thread overview]
Message-ID: <87pm4qdhk4.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <20230714140710.5xbesq6xguhcbyvi@techsingularity.net> (Mel Gorman's message of "Fri, 14 Jul 2023 15:07:10 +0100")

Mel Gorman <mgorman@techsingularity.net> writes:

> On Thu, Jul 13, 2023 at 04:56:54PM +0800, Huang, Ying wrote:
>> Mel Gorman <mgorman@techsingularity.net> writes:
>> 
>> > On Tue, Jul 11, 2023 at 01:19:46PM +0200, Michal Hocko wrote:
>> >> On Mon 10-07-23 14:53:25, Huang Ying wrote:
>> >> > To tune PCP high for each CPU automatically, this patch implements
>> >> > an allocation/freeing depth based PCP high auto-tuning algorithm.
>> >> >
>> >> > The basic idea behind the algorithm is to detect repetitive
>> >> > allocation and freeing patterns over a short enough period (about 1
>> >> > second).  The period needs to be short to respond quickly to changes
>> >> > in the allocation and freeing pattern and to limit the memory wasted
>> >> > by unnecessary caching.
>> >> 
>> >> 1s is an eternity from the allocation POV. Is time based sampling
>> >> really a good choice? I would have expected a natural allocation/freeing
>> >> feedback mechanism, i.e. double the batch size when the batch is
>> >> consumed and needs to be refilled, and shrink it under memory
>> >> pressure (GFP_NOWAIT allocation fails) or when the surplus grows too
>> >> high over batch (e.g. twice as much).  Have you considered something as
>> >> simple as that?
>> >> Quite honestly I am not sure a time based approach is a good choice
>> >> because memory consumption tends to be quite bulky (e.g. application
>> >> starts or workload transitions based on requests).
>> >>  
>> >
>> > I tend to agree. Tuning based on the recent allocation pattern without
>> > frees would make more sense and also be symmetric with how free_factor
>> > works.
>> 
>> This sounds good to me.  I will give it a try to tune the PCP batch.  I
>> have some questions about how to tune PCP high based on that.  Details
>> are in the following.
>> 
>> > I suspect that time-based may be heavily orientated around the
>> > will-it-scale benchmark.
>> 
>> I totally agree that will-it-scale isn't a real workload.  So, I tried
>> to find some more practical ones.  I found that a pattern of repetitively
>> allocating and freeing several hundred MB of pages exists in the kernel
>> building and netperf/SCTP_STREAM_MANY workloads.  More details can be
>> found in my reply to Michal as follows,
>> 
>> https://lore.kernel.org/linux-mm/877cr4dydo.fsf@yhuang6-desk2.ccr.corp.intel.com/
>> 
>> > While I only glanced at this, a few things jumped out
>> >
>> > 1. Time-based heuristics are not ideal. congestion_wait() and
>> >    friends were an obvious case where time-based heuristics fell apart even
>> >    before the event they waited on was removed. For congestion, it happened to
>> >    work for slow storage for a while but that was about it.  For allocation
>> >    stream detection, it has a similar problem. If a process is allocating
>> >    heavily, then fine; if it allocates in bursts of less than a second that
>> >    are more than one second apart, then it will not adapt. While I do not
>> >    think it is explicitly mentioned anywhere, my understanding was that
>> >    heuristics like this within mm/ should be driven by explicit events as
>> >    much as possible and not time.
>> 
>> The proposed heuristic can be changed so that it is not time-based.  When
>> the allocation/freeing depth is found to be larger than the previous
>> value, PCP high can be increased immediately.  We used a time-based
>> implementation to try to reduce the overhead.  And we mainly targeted
>> long-lived allocation patterns before.
>> 
>
> Time simply has too many corner cases. When it's reset for example, all
> state is lost, so patterns that are longer than the time window are
> unpredictable. It tends to work slightly better when time decays state
> rather than resets it, but it gets very hand-wavy.
>
>> > 2. If time was to be used, it would be cheaper to have the simplest possible
>> >    state tracking in the fast paths and decay any resizing of the PCP
>> >    within the vmstat updates (reuse pcp->expire except it applies to local
>> >    pcps). Even this is less than ideal as the PCP may be too large for short
>> >    periods of time, but it may also act as a backstop for worst-case behaviour.
>> 
>> This sounds reasonable.  Thanks!  We will try this if we choose to use a
>> time-based implementation.
>> 
>> > 3. free_factor is an existing mechanism for detecting recent patterns
>> >    and adapting the PCP sizes. The allocation side should be symmetric,
>> >    and the events that should drive it are "refills" on the alloc side and
>> >    "drains" on the free side. Initially it might be easier to have a single
>> >    parameter that scales batch and high up to a limit.
>> 
>> For example, when a workload starts, several GB of pages will be
>> allocated.  We will observe many "refills" and almost no "drains".  So,
>> we will scale batch and high up to the limit.  When the workload exits,
>> a large number of its pages will be put into the PCP because the PCP
>> high has been increased.  When should we decrease the PCP batch and high?
>> 
>
> Honestly, I'm not 100% certain as I haven't spent time with paper to
> sketch out the different combinations with "all allocs" at one end, "all
> frees" at the other and "ratio of alloc:free" in the middle. Intuitively
> I would expect the time to shrink to be when there is a mix.
>
> All allocs -- maximise batch and high
> All frees  -- maximise batch and high
> Mix        -- adjust high to approximate the minimum value of high such
> 	      that a drain/refill does not occur or occurs only rarely
>
> Batch should have a much lower maximum than high because it's a deferred cost
> that gets assigned to an arbitrary task. The worst case is where a process
> that is a light user of the allocator incurs the full cost of a refill/drain.
>
> Again, intuitively this may be a PID control problem for the "Mix" case
> to estimate the size of high required to minimise drains/allocs, as each
> drain/alloc is potentially a lock contention. The catchall for corner
> cases would be to decay high from vmstat context based on pcp->expire. The
> decay would prevent "high" being pinned at an artificially high value
> without any zone lock contention for prolonged periods of time and also
> mitigate the worst case due to state being per-cpu. The downside is that
> "high" would also oscillate for a continuous steady allocation pattern, as
> the PID control might pick an ideal value suitable for a long period of
> time with the "decay" disrupting that ideal value.

Maybe we can track the minimal value of pcp->count.  If it has been small
enough recently, we can avoid decaying pcp->high, because the pages in the
PCP are being used for allocations instead of sitting idle.
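
A rough sketch of what I mean is below.  The "count_min" field and the
helper names are hypothetical, only for illustration; they are not
existing fields or functions.

/*
 * Illustration only: "count_min" would be a new field in
 * struct per_cpu_pages, tracking the lowest pcp->count seen since the
 * last periodic check.
 */

/* Called from the allocation fast path after taking pages from the PCP. */
static inline void pcp_track_min_count(struct per_cpu_pages *pcp)
{
	if (pcp->count < pcp->count_min)
		pcp->count_min = pcp->count;
}

/* Called from the periodic vmstat update on the local CPU. */
static void pcp_maybe_decay_high(struct per_cpu_pages *pcp)
{
	if (pcp->count_min <= pcp->batch) {
		/*
		 * The PCP was nearly drained recently, so its pages were
		 * really used for allocations.  Keep high as it is.
		 */
		pcp->count_min = pcp->count;
		return;
	}

	/* The pages sat idle; decay high gradually toward batch * 4. */
	pcp->high = max(pcp->high - (pcp->high >> 3), pcp->batch * 4);
	pcp->count_min = pcp->count;
}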

Another question is as follows.

For example, a large number of pages are freed on CPU A, so we maximize
batch and high and a large number of pages are put into the PCP of CPU A.
Then the possible situations are,

a) a large number of pages are allocated on CPU A after some time
b) a large number of pages are allocated on another CPU B

For a), we want the pages to be kept in the PCP of CPU A for as long as
possible.  For b), we want them to be kept there for as short a time as
possible.  I think that we need to balance between the two.  What is a
reasonable time to keep pages in a PCP that sees few allocations?
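
One possible compromise, following your suggestion to decay from vmstat
context, might look roughly like the sketch below.  The "alloc_since_check"
and "idle_rounds" fields are made up for illustration and are not existing
members of struct per_cpu_pages.

/*
 * Sketch: the allocation fast path sets alloc_since_check; the periodic
 * vmstat update clears it and asks the caller to drain the PCP once it
 * has been idle for a few intervals.  Case a) keeps its pages for as
 * long as it keeps allocating, while case b) gets the pages back to the
 * buddy allocator after a bounded number of intervals.
 */
#define PCP_IDLE_ROUNDS_MAX	3	/* roughly 3 vmstat intervals */

/* Return true when the caller should drain this PCP back to the buddy. */
static bool pcp_idle_should_drain(struct per_cpu_pages *pcp)
{
	if (pcp->alloc_since_check) {
		pcp->alloc_since_check = false;
		pcp->idle_rounds = 0;
		return false;
	}

	return ++pcp->idle_rounds >= PCP_IDLE_ROUNDS_MAX && pcp->count;
}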

>> > 4. The amount of state tracked seems excessive and increases the size of
>> >    the per-cpu structure by more than 1 cache line. That in itself may not
>> >    be a problem, but the state is tracked on every page alloc/free that goes
>> >    through the fast path and it's relatively complex to track.  That is
>> >    a constant penalty in fast paths that may or may not be relevant to the
>> >    workload, and only sustained bursty allocation streams may offset the
>> >    cost.
>> 
>> Yes.  Thanks for pointing this out.  We will optimize this if the other
>> aspects of the basic idea are accepted.
>> 
>
> I'm not opposed to having an adaptive pcp->high in concept. I think it would
> be best to disable adaptive tuning if percpu_pagelist_high_fraction is set
> though. I expect that users of that tunable are rare and that if it *is*
> used that there is a very good reason for it.

OK.  Will do that in a future version.

>> > 5. Memory pressure and reclaim activity does not appear to be accounted
>> >    for and it's not clear if pcp->high is bounded or if it's possible for
>> >    a single PCP to hide a large number of pages from other CPUs sharing the
>> >    same node. The max size of the PCP should probably be explicitly clamped.
>> 
>> As in my reply to Michal's email, my understanding was that
>> ZONE_RECLAIM_ACTIVE will be set during kswapd reclaim, and PCP high will
>> effectively be decreased to (batch * 4).  Or did I miss something?
>> 
>
> I don't think you did, but at minimum it deserves a big comment in the
> tuning code, and potentially adaptive tuning should even be disabled
> entirely if reclaim is active.

Sure.  Will do that.
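
For example, in the tuning path the check could look roughly like the
fragment below; this only illustrates the intent and is not the actual
patch code.

	/*
	 * Sketch: do not grow pcp->high while reclaim is active in this
	 * zone; fall back to the conservative default instead and add a
	 * big comment explaining why.
	 */
	if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) {
		pcp->high = pcp->batch * 4;
		return;
	}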

Best Regards,
Huang, Ying



Thread overview: 22+ messages
2023-07-10  6:53 [RFC 0/2] mm: " Huang Ying
2023-07-10  6:53 ` [RFC 1/2] mm: add framework for " Huang Ying
2023-07-11 11:07   ` Michal Hocko
2023-07-12  7:45     ` Huang, Ying
2023-07-14  8:59       ` Michal Hocko
2023-07-17  8:19         ` Huang, Ying
2023-07-10  6:53 ` [RFC 2/2] mm: alloc/free depth based " Huang Ying
2023-07-11 11:19   ` Michal Hocko
2023-07-12  9:05     ` Mel Gorman
2023-07-13  8:56       ` Huang, Ying
2023-07-14 14:07         ` Mel Gorman
2023-07-17  9:16           ` Huang, Ying [this message]
2023-07-17 13:50             ` Mel Gorman
2023-07-18  0:55               ` Huang, Ying
2023-07-18 12:34                 ` Mel Gorman
2023-07-19  5:59                   ` Huang, Ying
2023-07-19  9:05                     ` Mel Gorman
2023-07-21  7:28                       ` Huang, Ying
2023-07-21  9:21                         ` Mel Gorman
2023-07-24  1:09                           ` Huang, Ying
2023-07-14 11:41       ` Michal Hocko
2023-07-13  8:11     ` Huang, Ying
