From: Zhaoyang Huang
Date: Sat, 6 May 2023 10:44:28 +0800
Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
To: Roman Gushchin
Cc: "zhaoyang.huang", Andrew Morton, Roman Gushchin, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ke.wang@unisoc.com

On Sat, May 6, 2023 at 6:29 AM Roman Gushchin wrote:
>
> On Thu, May 04, 2023 at 06:09:54PM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang
> >
> > Let us look at the series of scenarios below with WMARK_LOW=25MB,
> > WMARK_MIN=5MB (managed pages 1.9GB). We can see that the current
> > 'fixed 1/2 ratio' only starts to use CMA at C, which has already pushed
> > U&R below WMARK_LOW (this should be deemed as against the current memory
> > policy, that is, U&R should either stay around WMARK_LOW when there is no
> > allocation pressure, or do reclaim via the slowpath).
> >
> > free_cma/free_pages(MB)    A(12/30)    B(12/25)    C(12/20)
> > fixed 1/2 ratio               N           N           Y
> > this commit                   Y           Y           Y
> >
> > Suggested-by: Roman Gushchin
>
> I didn't suggest it in this form, please, drop this tag.
>
> > Signed-off-by: Zhaoyang Huang
> > ---
> > v2: do proportion check when zone_watermark_ok, update commit message
> > ---
> > ---
> >  mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
> >  1 file changed, 32 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 0745aed..d0baeab 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3071,6 +3071,34 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> >
> >  }
> >
> > +#ifdef CONFIG_CMA
> > +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> > +{
> > +	unsigned long cma_proportion = 0;
> > +	unsigned long cma_free_proportion = 0;
> > +	unsigned long watermark = 0;
> > +	long count = 0;
> > +	bool cma_first = false;
> > +
> > +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> > +	/* check if GFP_MOVABLE passed the previous watermark check with the help of CMA */
> > +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
> > +		/* a failed WMARK_LOW check leads to using CMA first; this helps
> > +		 * U&R stay around low when being drained by GFP_MOVABLE
> > +		 */
> > +		cma_first = true;
>
> This part looks reasonable to me.
>
> > +	else {
> > +		/* check the proportion when zone_watermark_ok */
> > +		count = atomic_long_read(&zone->managed_pages);
> > +		cma_proportion = zone->cma_pages * 100 / count;
> > +		cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> > +					/ zone_page_state(zone, NR_FREE_PAGES);
> > +		cma_first = (cma_free_proportion >= cma_proportion * 2
>
> Why *2? Please, explain.

It is a magic number which aims at avoiding a late switch to CMA when free
pages get close to WMARK_LOW, by periodically using CMA pages in advance.

>
> > +				|| cma_free_proportion >= 50);
>
> It will heavily boost the use of cma at early stages of uptime, when there is a lot of !cma
> memory, making contiguous (e.g. hugetlb) allocations fail more often. Not a good idea.

Actually, it is equivalent to "zone_page_state(zone, NR_FREE_CMA_PAGES) >
zone_page_state(zone, NR_FREE_PAGES) / 2".

>
> Thanks!
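
For reference, below is a minimal userspace sketch (not kernel code; the
helper names are illustrative only) of how the two heuristics classify the
A/B/C scenarios from the commit message. It assumes that, for these order-0
scenarios, the proposed policy reduces to the watermark branch of
__if_use_cma_first(), i.e. "use CMA first once free pages minus free CMA
drop below WMARK_LOW":

/*
 * Userspace sketch: evaluate the A/B/C scenarios from the commit message
 * under the existing and the proposed heuristic. Values are in MB and
 * WMARK_LOW = 25MB, as in the commit message.
 */
#include <stdbool.h>
#include <stdio.h>

#define WMARK_LOW 25 /* MB, from the commit message */

/* existing heuristic: use CMA first when free CMA exceeds 1/2 of free pages */
static bool fixed_half_ratio(long free_cma, long free_pages)
{
	return free_cma > free_pages / 2;
}

/* proposed heuristic, reduced to these scenarios: use CMA first as soon as
 * U&R (free pages minus free CMA) would no longer satisfy WMARK_LOW */
static bool proposed_policy(long free_cma, long free_pages)
{
	return free_pages - free_cma < WMARK_LOW;
}

int main(void)
{
	static const struct { const char *name; long cma, free; } s[] = {
		{ "A", 12, 30 }, { "B", 12, 25 }, { "C", 12, 20 },
	};

	for (int i = 0; i < 3; i++)
		printf("%s(%ld/%ld): fixed 1/2 ratio=%c, this commit=%c\n",
		       s[i].name, s[i].cma, s[i].free,
		       fixed_half_ratio(s[i].cma, s[i].free) ? 'Y' : 'N',
		       proposed_policy(s[i].cma, s[i].free) ? 'Y' : 'N');
	return 0;
}

Run as-is, this prints N/N/Y for the fixed 1/2 ratio and Y/Y/Y for the
proposed policy, matching the two rows of the table in the commit message.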