From: Sergii Piatakov <sergii.piatakov@globallogic.com>
Date: Sat, 6 May 2023 18:46:01 +0300
Subject: Re: [PATCH mm/cma] mm/cma: retry allocation of dedicated area on EBUSY
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Steffen Zachaeus, Gotthard Voellmeke, Yaroslav Parkhomenko
References: <20230419083851.2555096-1-sergii.piatakov@globallogic.com>
> That's out of expectation. Every CMA client should expect that CMA
> allocation can be failed since there are a lot of reasons CMA can fail.

Understood, thank you for the clarification!

> Can't we also consider the request size is greater than half the size of
> CMA as well if we want to go this approach?

Actually, my original intention was to introduce retrying only for cases
when the whole region is requested.

But I agree that there could potentially be several branches with optimal
handling for some specific cases and one fallback branch with a generic
approach. I tried to emphasize this idea in the following comment:

> > +		 * Here we have different options depending on each particular case.

> Furthermore, what happens if the CMA is shared with others and remains
> free memory up to only the requested size? In the case, it also returns
> without further retrial

I think such cases could be covered in a dedicated branch (I mean an
if-else branch).

> I am thinking how we can generalize if we want
> to add retrial option to increase success ratio not only entire range
> request but also other cases

By the way, based on my personal observation, moving the requested page
range may reduce the success ratio in cases when allocation fails due to
the isolation test. The pages are updated in order from the lower indexes
to the higher ones, so if page N doesn't meet the isolation requirements,
page N+1 likely doesn't meet them either; moreover, page N+1 will be
updated and pass the isolation test later than page N. Moving the
requested range in the same direction (from lower to higher indexes) may
therefore reduce the success ratio. In my understanding, if allocation
fails due to the isolation test, it would be better to request the same
region again without any shift.
Please keep in mind that my experience is based on one particular use
case, so I may be wrong!

> At a quick look, I think the CMA client need to handle the failure.
> If they request entire range, they should try harder(e.g., multiple attempts)
> (Just FYI, folks had tried such a retry option multiple times even though
> it was not entire range request since CMA allocation is fragile)

Thank you for providing this comment, I really appreciate it! I understand
now that it is not guaranteed that CMA is allocated on the first attempt,
so a module should retry the allocation by itself. We will apply the
suggested approach!

Just one comment from my side. In my opinion, retrying the allocation in
the module would be a perfect solution if the module knew the exact reason
why the allocation failed (EBUSY, ENOMEM, etc.). Based on the actual error
code, the module could choose the proper handling for each particular
case. Without knowing the exact error code (having only a NULL pointer),
retrying looks like a workaround rather than a proper solution.

On Thu, May 4, 2023 at 7:30 PM Minchan Kim <minchan@kernel.org> wrote:
> On Wed, Apr 19, 2023 at 11:38:51AM +0300, Sergii Piatakov wrote:
> > Sometimes continuous page range can't be successfully allocated, because
> > some pages in the range may not pass the isolation test. In this case,
> > the CMA allocator gets an EBUSY error and retries allocation again (in
> > the slightly shifted range). During this procedure, a user may see
> > messages like:
> >     alloc_contig_range: [70000, 80000) PFNs busy
> > But in most cases, everything will be OK, because isolation test failure
> > is a recoverable issue and the CMA allocator takes care of it (retrying
> > allocation again and again).
> >
> > This approach works well while a small piece of memory is allocated from
> > a big CMA region. But there are cases when the caller needs to allocate
> > the entire CMA region at once.
>
> I agree that's valid use case.
> >
> > For example, when a module requires a lot of CMA memory and a region
> > with the requested size is binded to the module in the DTS file. When
> > the module tries to allocate the entire its own region at once and the
> > isolation test fails, the situation will be different than usual due to
> > the following:
> >  - it is not possible to allocate pages in another range from the CMA
> >    region (because the module requires the whole range from the
> >    beginning to the end);
> >  - the module (from the client's point of view) doesn't expect its
> >    request will be rejected (because it has its own dedicated CMA region
> >    declared in the DTS).
>
> That's out of expectation. Every CMA client should expect that CMA
> allocation can be failed since there are a lot of reasons CMA can fail.
>
> >
> > This issue should be handled on the CMA allocator layer as this is the
> > lowest layer when the reason for failure can be distinguished. Because
> > the allocator doesn't return an error code, but instead it just returns
> > a pointer (to a page structure). And when the caller gets a NULL it
> > can't realize what kind of problem happens inside (EBUSY, ENOMEM, or
> > something else).
> >
> > To avoid cases when CMA region has enough room to allocate the requested
> > pages, but returns NULL due to failed isolation test it is proposed:
> >  - add a separate branch to handle cases when the entire region is
> >    requested;
>
> Can't we also consider the request size is greater than half the size of
> CMA as well if we want to go this approach?
>
> Furthermore, what happens if the CMA is shared with others and remains
> free memory up to only the requested size? In the case, it also returns
> without further retrial(I am thinking how we can generalize if we want
> to add retrial option to increase success ratio not only entire range
> request but also other cases).
> >  - as an initial solution, retry allocation several times (in the setup
> >    where the issue was observed this solution helps).
>
> At a quick look, I think the CMA client need to handle the failure.
> If they request entire range, they should try harder(e.g., multiple attempts)
> (Just FYI, folks had tried such a retry option multiple times even though
> it was not entire range request since CMA allocation is fragile)
>
> >
> > Signed-off-by: Sergii Piatakov <sergii.piatakov@globallogic.com>
> > ---
> >  mm/cma.c | 23 +++++++++++++++++++++--
> >  1 file changed, 21 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index a7263aa02c92..37e2bc34391b 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -431,6 +431,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >  	unsigned long i;
> >  	struct page *page = NULL;
> >  	int ret = -ENOMEM;
> > +	int retry = 0;
> >
> >  	if (!cma || !cma->count || !cma->bitmap)
> >  		goto out;
> > @@ -487,8 +488,26 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >
> >  		trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
> >  					   count, align);
> > -		/* try again with a bit different memory target */
> > -		start = bitmap_no + mask + 1;
> > +
> > +		/*
> > +		 * The region has enough free space, but it can't be provided right now
> > +		 * because the underlying layer is busy and can't perform allocation.
> > +		 * Here we have different options depending on each particular case.
> > +		 */
> > +
> > +		if (!start && !offset && bitmap_maxno == bitmap_count) {
> > +			/*
> > +			 * If the whole region is requested it means that:
> > +			 *  - there is no room to retry the allocation in another range;
> > +			 *  - most likely somebody tries to allocate a dedicated CMA region.
> > +			 * So in this case we can just retry allocation several times with the
> > +			 * same parameters.
> > +			 */
> > +			if (retry++ >= 5/*maxretry*/)
> > +				break;
> > +		} else
> > +			/* In other cases try again with a bit different memory target */
> > +			start = bitmap_no + mask + 1;
> >  	}
> >
> >  	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);
> > --
> > 2.25.1
> >
> >