linux-mm.kvack.org archive mirror
From: jane.chu@oracle.com
To: Kefeng Wang <wangkefeng.wang@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Muchun Song <muchun.song@linux.dev>
Cc: sidhartha.kumar@oracle.com, Zi Yan <ziy@nvidia.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	linux-mm@kvack.org
Subject: Re: [PATCH v2 7/9] mm: cma: add alloc flags for __cma_alloc()
Date: Mon, 8 Sep 2025 17:19:02 -0700	[thread overview]
Message-ID: <18d3f516-5197-4c02-adc0-b4fa03f7c191@oracle.com> (raw)
In-Reply-To: <20250902124820.3081488-8-wangkefeng.wang@huawei.com>


On 9/2/2025 5:48 AM, Kefeng Wang wrote:
> In order to support frozen page allocation in the following changes,
> add an alloc_flags argument to __cma_alloc().
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>   mm/cma.c | 15 +++++++++------
>   1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/cma.c b/mm/cma.c
> index e56ec64d0567..3f3c96be67f7 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -778,7 +778,8 @@ static void cma_debug_show_areas(struct cma *cma)
>   
>   static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
>   				unsigned long count, unsigned int align,
> -				struct page **pagep, gfp_t gfp)
> +				struct page **pagep, gfp_t gfp,
> +				acr_flags_t alloc_flags)
>   {
>   	unsigned long mask, offset;
>   	unsigned long pfn = -1;
> @@ -823,7 +824,7 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
>   
>   		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
>   		mutex_lock(&cma->alloc_mutex);
> -		ret = alloc_contig_range(pfn, pfn + count, ACR_FLAGS_CMA, gfp);
> +		ret = alloc_contig_range(pfn, pfn + count, alloc_flags, gfp);
>   		mutex_unlock(&cma->alloc_mutex);
>   		if (ret == 0) {
>   			page = pfn_to_page(pfn);
> @@ -848,7 +849,7 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
>   }
>   
>   static struct page *__cma_alloc(struct cma *cma, unsigned long count,
> -		       unsigned int align, gfp_t gfp)
> +		       unsigned int align, gfp_t gfp, acr_flags_t alloc_flags)
>   {
>   	struct page *page = NULL;
>   	int ret = -ENOMEM, r;
> @@ -870,7 +871,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
>   		page = NULL;
>   
>   		ret = cma_range_alloc(cma, &cma->ranges[r], count, align,
> -				       &page, gfp);
> +				       &page, gfp, alloc_flags);
>   		if (ret != -EBUSY || page)
>   			break;
>   	}
> @@ -918,7 +919,9 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
>   struct page *cma_alloc(struct cma *cma, unsigned long count,
>   		       unsigned int align, bool no_warn)
>   {
> -	return __cma_alloc(cma, count, align, GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +	return __cma_alloc(cma, count, align,
> +			   GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0),
> +			   ACR_FLAGS_CMA);
>   }
>   
>   struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
> @@ -928,7 +931,7 @@ struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
>   	if (WARN_ON(!order || !(gfp & __GFP_COMP)))
>   		return NULL;
>   
> -	page = __cma_alloc(cma, 1 << order, order, gfp);
> +	page = __cma_alloc(cma, 1 << order, order, gfp, ACR_FLAGS_CMA);
>   
>   	return page ? page_folio(page) : NULL;
>   }
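
With the new parameter, a follow-up caller only needs to pass a different
flag to reach a frozen allocation path.  A minimal sketch of such a caller,
assuming a hypothetical ACR_FLAGS_FROZEN flag and cma_alloc_frozen() helper
(names invented here for illustration; the series may instead route this
through alloc_contig_frozen_pages() from patch 6):

	/*
	 * Sketch only, not taken from this series.  ACR_FLAGS_FROZEN and
	 * cma_alloc_frozen() are hypothetical.  Since __cma_alloc() is
	 * static, such a wrapper would live in mm/cma.c next to it.
	 */
	static struct page *cma_alloc_frozen(struct cma *cma,
					     unsigned long count,
					     unsigned int align)
	{
		/* Same path as cma_alloc(); only the acr flag differs. */
		return __cma_alloc(cma, count, align,
				   GFP_KERNEL | __GFP_NOWARN,
				   ACR_FLAGS_CMA | ACR_FLAGS_FROZEN);
	}

Existing callers keep today's behaviour, since cma_alloc() and
cma_alloc_folio() now pass ACR_FLAGS_CMA explicitly.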

Looks good!
Reviewed-by: Jane Chu <jane.chu@oracle.com>

-jane


Thread overview: 42+ messages
2025-09-02 12:48 [PATCH v2 0/9] mm: hugetlb: cleanup and allocate frozen hugetlb folio Kefeng Wang
2025-09-02 12:48 ` [PATCH v2 1/9] mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio() Kefeng Wang
2025-09-08  9:21   ` Oscar Salvador
2025-09-08 12:59     ` Kefeng Wang
2025-09-09  0:54   ` Zi Yan
2025-09-02 12:48 ` [PATCH v2 2/9] mm: hugetlb: convert to account_new_hugetlb_folio() Kefeng Wang
2025-09-08  9:26   ` Oscar Salvador
2025-09-08 13:20     ` Kefeng Wang
2025-09-08 13:38       ` Oscar Salvador
2025-09-08 13:40   ` Oscar Salvador
2025-09-09  7:04     ` Kefeng Wang
2025-09-09  0:59   ` Zi Yan
2025-09-02 12:48 ` [PATCH v2 3/9] mm: hugetlb: directly pass order when allocate a hugetlb folio Kefeng Wang
2025-09-08  9:29   ` Oscar Salvador
2025-09-09  1:11   ` Zi Yan
2025-09-09  7:11     ` Kefeng Wang
2025-09-02 12:48 ` [PATCH v2 4/9] mm: hugetlb: remove struct hstate from init_new_hugetlb_folio() Kefeng Wang
2025-09-08  9:31   ` Oscar Salvador
2025-09-09  1:13   ` Zi Yan
2025-09-02 12:48 ` [PATCH v2 5/9] mm: hugeltb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio() Kefeng Wang
2025-09-08  9:34   ` Oscar Salvador
2025-09-09  1:16   ` Zi Yan
2025-09-02 12:48 ` [PATCH v2 6/9] mm: page_alloc: add alloc_contig_frozen_pages() Kefeng Wang
2025-09-09  0:21   ` jane.chu
2025-09-09  1:44   ` Zi Yan
2025-09-09  7:29     ` Kefeng Wang
2025-09-09  8:11   ` Oscar Salvador
2025-09-09 18:55   ` Matthew Wilcox
2025-09-09 19:08     ` Zi Yan
2025-09-10  2:05       ` Kefeng Wang
2025-09-02 12:48 ` [PATCH v2 7/9] mm: cma: add alloc flags for __cma_alloc() Kefeng Wang
2025-09-09  0:19   ` jane.chu [this message]
2025-09-09  2:03   ` Zi Yan
2025-09-09  8:05   ` Oscar Salvador
2025-09-02 12:48 ` [PATCH v2 8/9] mm: cma: add __cma_release() Kefeng Wang
2025-09-09  0:15   ` jane.chu
2025-09-02 12:48 ` [PATCH v2 9/9] mm: hugetlb: allocate frozen pages in alloc_gigantic_folio() Kefeng Wang
2025-09-09  1:48   ` jane.chu
2025-09-09  7:33     ` Kefeng Wang
2025-09-09  2:02   ` Zi Yan
2025-09-09  7:34     ` Kefeng Wang
2025-09-02 13:51 ` [PATCH v2 0/9] mm: hugetlb: cleanup and allocate frozen hugetlb folio Oscar Salvador
