linux-mm.kvack.org archive mirror
From: Yafang Shao <laoar.shao@gmail.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net,
	linux-mm@kvack.org
Subject: Re: [PATCH 2/3] mm/page_alloc: Avoid changing pcp->high decaying when adjusting CONFIG_PCP_BATCH_SCALE_MAX
Date: Wed, 10 Jul 2024 10:07:09 +0800	[thread overview]
Message-ID: <CALOAHbBsUb3amayaz3+DBaukgeP16e=xjf3ZvWKiRJwjbq9FTw@mail.gmail.com> (raw)
In-Reply-To: <87h6cyau9w.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Wed, Jul 10, 2024 at 9:53 AM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yafang Shao <laoar.shao@gmail.com> writes:
>
> > When adjusting the CONFIG_PCP_BATCH_SCALE_MAX configuration from its
> > default value of 5 to a lower value, such as 0, it's important to ensure
> > that the pcp->high decaying is not inadvertently slowed down. Similarly,
> > when increasing CONFIG_PCP_BATCH_SCALE_MAX to a larger value, like 6, we
> > must avoid inadvertently increasing the number of pages freed in
> > free_pcppages_bulk() as a result of this change.
> >
> > So the following improvements are made:
> > - hardcode the default value of 5 to avoid modifying pcp->high
> > - refactor free_pcppages_bulk() into multiple steps, with each step
> >   processing a fixed batch size of pages
>
> This is confusing.  You don't change free_pcppages_bulk() itself.  I
> guess what you mean is "change free_pcppages_bulk() calling into
> multiple steps".

will change it.

>
> >
> > Suggested-by: "Huang, Ying" <ying.huang@intel.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> >  mm/page_alloc.c | 15 +++++++++++----
> >  1 file changed, 11 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 8e2f4e1ab4f2..2b76754a48e0 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2247,7 +2247,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> >  int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
> >  {
> >       int high_min, to_drain, batch;
> > -     int todo = 0;
> > +     int todo = 0, count = 0;
> >
> >       high_min = READ_ONCE(pcp->high_min);
> >       batch = READ_ONCE(pcp->batch);
> > @@ -2257,18 +2257,25 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
> >        * control latency.  This caps pcp->high decrement too.
> >        */
> >       if (pcp->high > high_min) {
> > -             pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
> > +             /* When tuning the pcp batch scale value, we want to ensure that
> > +              * the pcp->high decay rate is not slowed down. Therefore, we
> > +              * hard-code the historical default scale value of 5 here to
> > +              * prevent any unintended effects.
> > +              */
>
> This is good description for history.  But, in the result code, it's
> not easy for people to connect the code with pcp batch scale directly.
> How about something as follows,
>
> We will decay 1/8 of pcp->high each time in general, so that idle PCP
> pages can be returned to the buddy system in a timely manner.  To
> control the max latency of the decay, we also constrain the number of
> pages freed each time.

Thanks for your suggestion.
will change it.
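For reference, the decay step under discussion can be sketched as standalone C. This is not the kernel code itself: `max3i()` and `decay_high()` are hypothetical stand-ins (the kernel uses its `max3()` macro and operates on `struct per_cpu_pages`), with the historical batch scale of 5 hard-coded as in the patch.

```c
/* Standalone sketch of the pcp->high decay step discussed above:
 * shrink high by 1/8 per round, never below high_min, and cap how
 * far it may drop below the current page count in one round using
 * the historical batch scale of 5 (batch << 5). */
static int max3i(int a, int b, int c)
{
	int m = a > b ? a : b;

	return m > c ? m : c;
}

static int decay_high(int count, int high, int high_min, int batch)
{
	return max3i(count - (batch << 5),
		     high - (high >> 3),
		     high_min);
}
```

With high = 800 and high_min = 100, one round leaves 700 (a 1/8 decay); once high is within 1/8 of high_min, the high_min floor wins instead.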

>
> > +             pcp->high = max3(pcp->count - (batch << 5),
> >                                pcp->high - (pcp->high >> 3), high_min);
> >               if (pcp->high > high_min)
> >                       todo++;
> >       }
> >
> >       to_drain = pcp->count - pcp->high;
> > -     if (to_drain > 0) {
> > +     while (count < to_drain) {
> >               spin_lock(&pcp->lock);
> > -             free_pcppages_bulk(zone, to_drain, pcp, 0);
> > +             free_pcppages_bulk(zone, batch, pcp, 0);
>
> "to_drain - count" may be < batch.

Nice catch. will fix it.
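A minimal sketch of the corrected loop, with a hypothetical stand-in for free_pcppages_bulk() so the batching can be checked in isolation (the pcp lock and cond_resched() are omitted here):

```c
/* Record what the hypothetical bulk-free helper was asked to do. */
static int last_freed;
static int total_freed;

static void fake_free_bulk(int n)
{
	last_freed = n;
	total_freed += n;
}

/* Drain to_drain pages in steps of at most batch, clamping the final
 * step so it never frees more pages than remain -- the case flagged
 * above where "to_drain - count" may be smaller than batch.  Returns
 * the number of steps taken, mirroring todo. */
static int drain_in_batches(int to_drain, int batch)
{
	int count = 0, todo = 0;

	while (count < to_drain) {
		int step = to_drain - count;

		if (step > batch)
			step = batch;
		fake_free_bulk(step);	/* free_pcppages_bulk() in the kernel */
		count += step;
		todo++;
	}
	return todo;
}
```

Draining 10 pages with batch = 4 takes three steps of 4, 4 and 2 pages, rather than three full batches of 4.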

>
> >               spin_unlock(&pcp->lock);
> > +             count += batch;
> >               todo++;
> > +             cond_resched();
> >       }
> >
> >       return todo;
>
> --
> Best Regards,
> Huang, Ying



-- 
Regards
Yafang



Thread overview: 41+ messages
2024-07-07  9:49 [PATCH 0/3] mm/page_alloc: Introduce a new sysctl knob vm.pcp_batch_scale_max Yafang Shao
2024-07-07  9:49 ` [PATCH 1/3] mm/page_alloc: A minor fix to the calculation of pcp->free_count Yafang Shao
2024-07-10  1:52   ` Huang, Ying
2024-07-07  9:49 ` [PATCH 2/3] mm/page_alloc: Avoid changing pcp->high decaying when adjusting CONFIG_PCP_BATCH_SCALE_MAX Yafang Shao
2024-07-10  1:51   ` Huang, Ying
2024-07-10  2:07     ` Yafang Shao [this message]
2024-07-07  9:49 ` [PATCH 3/3] mm/page_alloc: Introduce a new sysctl knob vm.pcp_batch_scale_max Yafang Shao
2024-07-10  2:49   ` Huang, Ying
2024-07-11  2:21     ` Yafang Shao
2024-07-11  6:42       ` Huang, Ying
2024-07-11  7:25         ` Yafang Shao
2024-07-11  8:18           ` Huang, Ying
2024-07-11  9:51             ` Yafang Shao
2024-07-11 10:49               ` Huang, Ying
2024-07-11 12:45                 ` Yafang Shao
2024-07-12  1:19                   ` Huang, Ying
2024-07-12  2:25                     ` Yafang Shao
2024-07-12  3:05                       ` Huang, Ying
2024-07-12  3:44                         ` Yafang Shao
2024-07-12  5:25                           ` Huang, Ying
2024-07-12  5:41                             ` Yafang Shao
2024-07-12  6:16                               ` Huang, Ying
2024-07-12  6:41                                 ` Yafang Shao
2024-07-12  7:04                                   ` Huang, Ying
2024-07-12  7:36                                     ` Yafang Shao
2024-07-12  8:24                                       ` Huang, Ying
2024-07-12  8:49                                         ` Yafang Shao
2024-07-12  9:10                                           ` Huang, Ying
2024-07-12  9:24                                             ` Yafang Shao
2024-07-12  9:46                                               ` Yafang Shao
2024-07-15  1:09                                                 ` Huang, Ying
2024-07-15  4:32                                                   ` Yafang Shao
2024-07-10  3:00 ` [PATCH 0/3] " Huang, Ying
2024-07-11  2:25   ` Yafang Shao
2024-07-11  6:38     ` Huang, Ying
2024-07-11  7:21       ` Yafang Shao
2024-07-11  8:36         ` Huang, Ying
2024-07-11  9:40           ` Yafang Shao
2024-07-11 11:03             ` Huang, Ying
2024-07-11 12:40               ` Yafang Shao
2024-07-12  2:32                 ` Huang, Ying
