From: Dong Aisheng <dongas86@gmail.com>
Date: Fri, 28 Jan 2022 20:20:03 +0800
Subject: Re: [PATCH v2 2/2] mm: cma: try next MAX_ORDER_NR_PAGES during retry
To: David Hildenbrand
Cc: Dong Aisheng, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, jason.hui.liu@nxp.com, leoyang.li@nxp.com, abel.vesa@nxp.com, shawnguo@kernel.org, linux-imx@nxp.com, akpm@linux-foundation.org, m.szyprowski@samsung.com, lecopzer.chen@mediatek.com, vbabka@suse.cz, stable@vger.kernel.org, shijie.qin@nxp.com
In-Reply-To: <517e1ea1-f826-228b-16a0-da1dc76017cc@redhat.com>
References: <20220112131552.3329380-1-aisheng.dong@nxp.com> <20220112131552.3329380-3-aisheng.dong@nxp.com> <517e1ea1-f826-228b-16a0-da1dc76017cc@redhat.com>
On Wed, Jan 26, 2022 at 12:33 AM David Hildenbrand wrote:
>
> On 12.01.22 14:15, Dong Aisheng wrote:
> > On an ARMv7 platform with 32M pageblock (MAX_ORDER 14), we observed a
>
> Did you actually intend to talk about pageblocks here (and below)?
>
> I assume you have to be clearer here that you talk about the maximum
> allocation granularity, which is usually bigger than the actual
> pageblock size.
>

I'm talking about the ARM32 case where pageblock_order is equal to
MAX_ORDER - 1:

/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
#define pageblock_order		(MAX_ORDER - 1)

To be clearer, maybe I can add this info to the commit message too.

> > huge number of repeated retries of CMA allocation (1k+) during booting
> > when allocating one page for each of 3 mmc instance probes.
> >
> > This is caused by CMA now supporting concurrent allocation since commit
> > a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock").
> > The pageblock or (MAX_ORDER - 1) range from which we are trying to
> > allocate memory may have already been acquired and isolated by others.
> > Current cma_alloc() will then retry the next area by a step of
> > bitmap_no + mask + 1, which is very likely within the same isolated
> > range and will fail again. So when the pageblock or MAX_ORDER range is
> > big (e.g. 8192 pages), retrying in such small steps becomes
> > meaningless, because the retries are known to fail a huge number of
> > times while the pageblock stays isolated by others, especially when
> > allocating only one or two pages.
> >
> > Instead of looping within the same pageblock and wasting a lot of CPU
> > cycles, especially on systems with big pageblocks (e.g. 16M or 32M),
> > we try the next MAX_ORDER_NR_PAGES directly.
> >
> > Doing it this way can greatly mitigate the situation.
> >
> > Below is the original error log during booting:
> > [    2.004804] cma: cma_alloc(cma (ptrval), count 1, align 0)
> > [    2.010318] cma: cma_alloc(cma (ptrval), count 1, align 0)
> > [    2.010776] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > [    2.010785] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > [    2.010793] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > [    2.010800] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > [    2.010807] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > [    2.010814] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
> > .... (+1K retries)
> >
> > After the fix, the 1200+ retries can be reduced to 0.
> > Another test running 8 VPU decoders in parallel shows that 1500+
> > retries dropped to ~145.
> >
> > IOW this patch can improve the CMA allocation speed a lot when there is
> > enough CMA memory, by reducing retries significantly.
> >
> > Cc: Andrew Morton
> > Cc: Marek Szyprowski
> > Cc: Lecopzer Chen
> > Cc: David Hildenbrand
> > Cc: Vlastimil Babka
> > CC: stable@vger.kernel.org # 5.11+
> > Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
> > Signed-off-by: Dong Aisheng
> > ---
> > v1->v2:
> >  * change to align with MAX_ORDER_NR_PAGES instead of pageblock_nr_pages
> > ---
> >  mm/cma.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 1c13a729d274..1251f65e2364 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -500,7 +500,9 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >  		trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
> >  					   count, align);
> >  		/* try again with a bit different memory target */
> > -		start = bitmap_no + mask + 1;
> > +		start = ALIGN(bitmap_no + mask + 1,
> > +			      MAX_ORDER_NR_PAGES >> cma->order_per_bit);
>
> Mind giving the reader a hint in the code why we went for
> MAX_ORDER_NR_PAGES?
>

Yes, good suggestion. I could add one more line of code comments as
follows:

"As alloc_contig_range() will isolate all pageblocks within the range
which are aligned with max_t(MAX_ORDER_NR_PAGES, pageblock_nr_pages),
here we align with MAX_ORDER_NR_PAGES, which is usually bigger than the
actual pageblock size."

Does this look ok to you?

> What would happen if the CMA granularity is bigger than
> MAX_ORDER_NR_PAGES? I'd assume no harm done, as we'd try aligning to 0.
>

I think yes.

Regards
Aisheng

> --
> Thanks,
>
> David / dhildenb
>