Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
From: Zhaoyang Huang
Date: Fri, 5 May 2023 16:02:12 +0800
To: "zhaoyang.huang" , kernel-team@fb.com, Qian Cai , Vlastimil Babka , Mel Gorman , Anshuman Khandual
Cc: Andrew Morton , Roman Gushchin , Roman Gushchin , linux-mm@kvack.org, linux-kernel@vger.kernel.org, ke.wang@unisoc.com
In-Reply-To: <1683194994-3070-1-git-send-email-zhaoyang.huang@unisoc.com>
References: <1683194994-3070-1-git-send-email-zhaoyang.huang@unisoc.com>

add more reviewer

On Thu, May 4, 2023 at 6:11 PM zhaoyang.huang wrote:
>
> From: Zhaoyang Huang
>
> Consider the series of scenarios below with WMARK_LOW=25MB and
> WMARK_MIN=5MB (managed pages 1.9GB). The current 'fixed 1/2 ratio'
> policy does not start using CMA until scenario C, by which point U&R
> (unmovable and reclaimable free pages) have already dropped below
> WMARK_LOW. This goes against the current memory policy: U&R should
> either stay around WMARK_LOW when there is no allocation pressure, or
> trigger reclaim by entering the slowpath.
>
> free_cma/free_pages(MB)    A(12/30)    B(12/25)    C(12/20)
> fixed 1/2 ratio                N           N           Y
> this commit                    Y           Y           Y
>
> Suggested-by: Roman Gushchin
> Signed-off-by: Zhaoyang Huang
> ---
> v2: do proportion check when zone_watermark_ok, update commit message
> ---
> ---
>  mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
>  1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..d0baeab 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,34 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>
>  }
>
> +#ifdef CONFIG_CMA
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +	unsigned long cma_proportion = 0;
> +	unsigned long cma_free_proportion = 0;
> +	unsigned long watermark = 0;
> +	long count = 0;
> +	bool cma_first = false;
> +
> +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +	/* check if GFP_MOVABLE passed the previous watermark check with the help of CMA */
> +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
> +		/* WMARK_LOW failed, so use CMA first; this helps U&R stay
> +		 * around low when being drained by GFP_MOVABLE
> +		 */
> +		cma_first = true;
> +	else {
> +		/* check proportion when zone_watermark_ok */
> +		count = atomic_long_read(&zone->managed_pages);
> +		cma_proportion = zone->cma_pages * 100 / count;
> +		cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> +				/ zone_page_state(zone, NR_FREE_PAGES);
> +		cma_first = (cma_free_proportion >= cma_proportion * 2
> +				|| cma_free_proportion >= 50);
> +	}
> +	return cma_first;
> +}
> +#endif
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3087,10 +3115,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  		 * allocating from CMA when over half of the zone's free memory
>  		 * is in the CMA area.
>  		 */
> -		if (alloc_flags & ALLOC_CMA &&
> -		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -			page = __rmqueue_cma_fallback(zone, order);
> +		if (migratetype == MIGRATE_MOVABLE) {
> +			bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
> +
> +			page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
>  			if (page)
>  				return page;
>  		}
> --
> 1.9.1
>