From: Zhaoyang Huang <huangzhaoyang@gmail.com>
To: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>,
	kernel-team@fb.com, Qian Cai <cai@lca.pw>,
	 Vlastimil Babka <vbabka@suse.cz>,
	Mel Gorman <mgorman@techsingularity.net>,
	 Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Roman Gushchin <guro@fb.com>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 ke.wang@unisoc.com
Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
Date: Fri, 5 May 2023 16:02:12 +0800	[thread overview]
Message-ID: <CAGWkznEqH3b5VbgEa0CFKcZPharKx=wrT0iEea3fGfEoVf0n-A@mail.gmail.com> (raw)
In-Reply-To: <1683194994-3070-1-git-send-email-zhaoyang.huang@unisoc.com>

Adding more reviewers.

On Thu, May 4, 2023 at 6:11 PM zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Consider the series of scenarios below, with WMARK_LOW=25MB, WMARK_MIN=5MB
> and 1.9GB of managed pages. The current 'fixed 1/2 ratio' policy only
> starts using CMA at scenario C, by which point U&R (the free pages
> available to Unmovable & Reclaimable allocations, i.e. free pages outside
> CMA) has already dropped below WMARK_LOW. This works against the current
> memory policy: U&R should either stay around WMARK_LOW when there is no
> allocation pressure, or trigger reclaim by entering the slowpath.
>
> free_cma/free_pages (MB)     A (12/30)    B (12/25)    C (12/20)
> fixed 1/2 ratio                  N            N            Y
> this commit                      Y            Y            Y
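
To spell out the arithmetic behind the table: the fixed 1/2 ratio uses CMA
only when free_cma > free_pages / 2, and U&R here is free_pages - free_cma:

  A: 12 > 30/2 = 15?   no;  U&R = 30 - 12 = 18MB, already below WMARK_LOW (25MB)
  B: 12 > 25/2 = 12.5? no;  U&R = 25 - 12 = 13MB, below WMARK_LOW
  C: 12 > 20/2 = 10?   yes; U&R = 20 - 12 = 8MB, approaching WMARK_MIN (5MB)

The patch answers Y in all three cases because the watermark check without
ALLOC_CMA (i.e. on U&R alone) already fails against WMARK_LOW.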
>
> Suggested-by: Roman Gushchin <roman.gushchin@linux.dev>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> v2: do the proportion check only when zone_watermark_ok() passes; update
> the commit message
> ---
>  mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
>  1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..d0baeab 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,34 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>
>  }
>
> +#ifdef CONFIG_CMA
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +       unsigned long cma_proportion = 0;
> +       unsigned long cma_free_proportion = 0;
> +       unsigned long watermark = 0;
> +       long count = 0;
> +       bool cma_first = false;
> +
> +       watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +       /* did GFP_MOVABLE pass the previous watermark check only via CMA? */
> +       if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
> +               /* WMARK_LOW failed without CMA, so use CMA first; this helps
> +                * U&R stay around low while being drained by GFP_MOVABLE
> +                */
> +               cma_first = true;
> +       else {
> +               /* watermark ok: decide by the CMA proportion instead */
> +               count = atomic_long_read(&zone->managed_pages);
> +               cma_proportion = zone->cma_pages * 100 / count;
> +               cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> +                       / max(zone_page_state(zone, NR_FREE_PAGES), 1UL);
> +               cma_first = (cma_free_proportion >= cma_proportion * 2
> +                               || cma_free_proportion >= 50);
> +       }
> +       return cma_first;
> +}
> +#endif
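
A minimal userspace sketch of the proportion heuristic above, with made-up
zone numbers (the 1900MB/400MB split is a hypothetical example, not taken
from the patch):

#include <stdbool.h>
#include <stdio.h>

/* same decision rule as the else-branch of __if_use_cma_first() */
static bool cma_first_by_proportion(unsigned long cma_pages,
				    unsigned long managed_pages,
				    unsigned long free_cma,
				    unsigned long free_pages)
{
	unsigned long cma_prop = cma_pages * 100 / managed_pages;
	unsigned long cma_free_prop = free_cma * 100 / free_pages;

	/* use CMA first when its share of free memory is at least twice
	 * its share of the zone, or at least half of all free pages */
	return cma_free_prop >= cma_prop * 2 || cma_free_prop >= 50;
}

int main(void)
{
	/* hypothetical zone: 1900MB managed, 400MB of it CMA (~21%) */
	printf("%d\n", cma_first_by_proportion(400, 1900, 100, 500)); /* 20% of free in CMA -> 0 */
	printf("%d\n", cma_first_by_proportion(400, 1900, 300, 500)); /* 60% of free in CMA -> 1 */
	return 0;
}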
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3087,10 +3115,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>                  * allocating from CMA when over half of the zone's free memory
>                  * is in the CMA area.
>                  */
> -               if (alloc_flags & ALLOC_CMA &&
> -                   zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -                   zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -                       page = __rmqueue_cma_fallback(zone, order);
> +               if (migratetype == MIGRATE_MOVABLE) {
> +                       bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
> +
> +                       page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
>                         if (page)
>                                 return page;
>                 }
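
Restating the call-site change in straight-line form (try_cma_first() is a
stand-in name, not a real function): the old guard was

	if (alloc_flags & ALLOC_CMA && free_cma > free_pages / 2)
		try_cma_first();

and the new one is

	if (migratetype == MIGRATE_MOVABLE &&
	    __if_use_cma_first(zone, order, alloc_flags))
		try_cma_first();

Note that the explicit ALLOC_CMA test is gone from the call site; the helper
only masks ALLOC_CMA out of its watermark probe, so a MOVABLE request without
ALLOC_CMA set could now reach __rmqueue_cma_fallback().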
> --
> 1.9.1
>


Thread overview: 6+ messages
2023-05-04 10:09 zhaoyang.huang
2023-05-04 16:48 ` kernel test robot
2023-05-05  8:02 ` Zhaoyang Huang [this message]
2023-05-05 21:25 ` Andrew Morton
2023-05-05 22:28 ` Roman Gushchin
2023-05-06  2:44   ` Zhaoyang Huang
