From: "Huang, Ying" <ying.huang@intel.com>
To: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Arjan Van De Ven <arjan@linux.intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
David Hildenbrand <david@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Pavel Tatashin <pasha.tatashin@soleen.com>,
Matthew Wilcox <willy@infradead.org>
Subject: Re: [RFC 1/2] mm: add framework for PCP high auto-tuning
Date: Mon, 17 Jul 2023 16:19:12 +0800
Message-ID: <87wmyzc5mn.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <ZLEOZ7ScEwnNpS0e@dhcp22.suse.cz> (Michal Hocko's message of "Fri, 14 Jul 2023 10:59:19 +0200")

Michal Hocko <mhocko@suse.com> writes:

> On Wed 12-07-23 15:45:58, Huang, Ying wrote:
>> Michal Hocko <mhocko@suse.com> writes:
>>
>> > On Mon 10-07-23 14:53:24, Huang Ying wrote:
>> >> The page allocation performance requirements of different workloads
>> >> are usually different, so we often need to tune PCP (per-CPU
>> >> pageset) high to optimize the workload page allocation performance.
>> >> There is a system-wide sysctl knob (percpu_pagelist_high_fraction)
>> >> to tune PCP high by hand, but it's hard to find the best value by
>> >> hand, and one global configuration may not work well for all the
>> >> different workloads that run on the same system. One solution to
>> >> these issues is to tune the PCP high of each CPU automatically.
>> >>
>> >> This patch adds the framework for PCP high auto-tuning. With it,
>> >> pcp->high will be changed automatically by the tuning algorithm at
>> >> runtime. Its default value (pcp->high_def) is the original PCP high
>> >> value, calculated from the low watermark pages or from the
>> >> percpu_pagelist_high_fraction sysctl knob. To avoid putting too many
>> >> pages in the PCP, the minimum allowed value of the
>> >> percpu_pagelist_high_fraction sysctl knob,
>> >> MIN_PERCPU_PAGELIST_HIGH_FRACTION, is used to calculate the max PCP
>> >> high value (pcp->high_max).
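
To make the relationship between those values concrete, here is a rough
sketch of the per-CPU pageset fields involved (illustrative only; the
exact fields and calculation in the patch may differ):

struct per_cpu_pages {
	/* ... other existing fields ... */
	int high;	/* current PCP high, auto-tuned at runtime */
	int high_def;	/* default high, from the low watermark pages or the
			 * percpu_pagelist_high_fraction sysctl knob */
	int high_max;	/* upper bound, derived from
			 * MIN_PERCPU_PAGELIST_HIGH_FRACTION */
	/* ... */
};

The tuning always keeps pcp->high within [pcp->high_def, pcp->high_max].
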
>> >
>> > It would have been very helpful to describe the basic entry points to
>> > the auto-tuning. AFAICS the central place of the tuning is tune_pcp_high
>> > which is called from the freeing path. Why? Is this really a good place
>> > considering this is a hot path? What about the allocation path? Isn't
>> > that a good spot to watch for the allocation demand?
>>
>> Yes. The main entry point to the auto-tuning is tune_pcp_high(), which
>> is called from the freeing path because pcp->high is only used by page
>> freeing. It's possible to call it in the allocation path instead. The
>> drawback is that pcp->high may then be updated a little later in some
>> situations, for example, if there is a lot of page freeing but no page
>> allocation for quite a long time. But I don't think this is a serious
>> problem.
>
> I consider it a serious flaw in the framework as it cannot cope with the
> transition of the allocation pattern (e.g. increasing the allocation
> pressure).

Sorry, my previous words were misleading. What I really wanted to say
is that the problem may be only theoretical. Anyway, I will try to
avoid this problem in a future version.
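
For example, the same tuning entry point could be driven from both the
freeing and the allocation events, so that rising allocation pressure
is noticed promptly. A minimal user-space model of that idea
(hypothetical names and a deliberately simple demand metric, not code
from the patch):

struct pcp_model {
	int high;	/* currently tuned high */
	int high_def;	/* original (default or manually configured) high */
	int high_max;	/* cap derived from MIN_PERCPU_PAGELIST_HIGH_FRACTION */
	int demand;	/* recently observed allocation demand */
};

static void tune_high(struct pcp_model *p)
{
	int high = p->high_def + p->demand;

	/* Always keep the tuned value within [high_def, high_max]. */
	p->high = high > p->high_max ? p->high_max : high;
}

/* Run the tuning on both paths, not only on page freeing. */
static void on_alloc(struct pcp_model *p)
{
	p->demand++;
	tune_high(p);
}

static void on_free(struct pcp_model *p)
{
	if (p->demand > 0)
		p->demand--;
	tune_high(p);
}
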
>> > Also this framework seems to be enabled by default. Is this really
>> > desirable? What about workloads tuning the pcp batch size manually?
>> > Shouldn't they override any auto-tuning?
>>
>> In the current implementation, pcp->high will be tuned between the
>> original pcp high (default or tuned manually) and the max pcp high (via
>> MIN_PERCPU_PAGELIST_HIGH_FRACTION). So the high value tuned manually is
>> respected to some degree.
>>
>> So you think that it's better to disable auto-tuning if PCP high is
>> tuned manually?
>
> Yes, I think this is a much safer option, for two reasons: 1) it is less
> surprising to setups which know what they are doing when configuring the
> batching, and 2) the auto-tuning needs a way to be disabled in case
> there are pathological behavior patterns.

OK.
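
Something along these lines should be enough to respect a manual
setting (a sketch only; the helper name is hypothetical, but
percpu_pagelist_high_fraction is the existing sysctl variable and
stays 0 unless the administrator sets it):

/*
 * Sketch only: skip the auto-tuning when the administrator has
 * configured PCP high manually; the sysctl variable stays 0 unless it
 * has been set explicitly.
 */
static bool pcp_high_auto_tune_enabled(void)
{
	return percpu_pagelist_high_fraction == 0;
}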

Best Regards,
Huang, Ying