From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <6d19d790-92d2-40f9-9797-cc08bf3921fe@huawei.com>
Date: Thu, 15 Aug 2024 22:40:51 +0800
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: Re: [PATCH mm-unstable v1 2/3] mm/cma: add cma_alloc_folio()
To: Yu Zhao, Andrew Morton, Muchun Song
"Matthew Wilcox (Oracle)" , Zi Yan , , References: <20240811212129.3074314-1-yuzhao@google.com> <20240811212129.3074314-3-yuzhao@google.com> From: Kefeng Wang In-Reply-To: <20240811212129.3074314-3-yuzhao@google.com> Content-Type: text/plain; charset="UTF-8"; format=flowed Content-Transfer-Encoding: 7bit X-Originating-IP: [10.174.177.243] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemf100008.china.huawei.com (7.185.36.138) X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: C8DDC4000D X-Stat-Signature: 1xfgbdiaiiju8g3o591si13pazy38r46 X-HE-Tag: 1723732856-174218 X-HE-Meta: U2FsdGVkX19HERy7JYz4N+RTp6QG9Bi7h9+9WMKlrcz1y0NQ4NSpwD00R1SAdaP84FekjtPV7NEw10tNtO84cpLi3tYkLNCgDAy9vGXTRi02x9ByYqkYJV3H6IgjGlvpLWSbTnKCjr7a8r8//sVYbfG3LSvMG1mjWPnWI7oJOcLbMPIm5yKfLDFrJ49y1fBCNZiiQgibGTtzmN1dYPxq4SBvogJOgWpL6FVeQ1FJ1b4s9gMSrMjECQLfYsJvKo3eA+xvZ0Ql27re7snbgJti3aoZsJB68V+er0XyYoowvv93P3NrN9IB3QTIxfAw/jevEqS1pR6auATMpbQhjUB26VvECmI4cF6C4boRjwdbcORYoyQKRk6kYHm061KK3AjH8/Xz/GEfRA3L9KZwymA6wQ73lyXRIj+kZNdzp0B2NZZVQ7H9Saf0aFu55J0/yU2N95TEnubQwzEH6UxI/d82tOm/gRFEb7qbagoDkah3DddzF9yxTCzWRuwfHBc1LodIkxUTLCXoY7bqgvRtG+RIsoKcgTwoN0vGP0Q6q8I1cXn7704iPjnyeh4kziZZ4SWrc322ORYxSD7kSmQrKxK2idh3cJEzW+yoz6v8d/6o4qWm5RJciLOevQT9LaR3Wu8huEBRlZk865cmnOysQQlOcvff9wYlGM8noh0Szhz/M4nPaYvIo5hoSCm6DRzyfP33kA2m5VnAiKG317us9WM2JLmZg0JmrRaVPfzNQge5DUFH5c96PqPBwKRX0kXv34UTuIaW2SU0LOoy1Igic5v3zrHtxF7cv4yxIwnIz0Kydit+rTf/uDDXjrkx+gMxZ7uYAa6xb4zA+gN2IwgydNCbK+Bw79XkU31Lg+aFqNmcIgxwVd44EN+3Zx1aduykqqBvduMnuebpcEN67Xe0ERsV4ON1YSNISy0QQWypijKRraLMWnnQh/Md4NGsfLFV+YgK+SQmSOcJWAVcXBq7dCu zrxu/8Co zdx++T5IkEsZrtO5qVqkPhO9LahqRmecP6DiJ7Bw5V/u1fs2/sxJSCmsxVXc98VnYu/zcMjWOKykWHA7hmLXyyFJWkm9xVd6h5o21OlGCWNEvNL02YxdLkjn9/pagopjiOtLjE/aKD+/CZi2yNC7m3tE3rSdyHJvCj9GtVl9Jy/CkrVCPiGLH6a9Jk1JRamA807plSpN+LpjjAzUFcA4MPD+/4sGcJwugrQmFK+9l7xKB49E= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 2024/8/12 5:21, Yu Zhao wrote: > With alloc_contig_range() and free_contig_range() supporting large > folios, CMA can allocate and free large folios too, by > cma_alloc_folio() and cma_release(). > > Signed-off-by: Yu Zhao > --- > include/linux/cma.h | 1 + > mm/cma.c | 47 ++++++++++++++++++++++++++++++--------------- > 2 files changed, 33 insertions(+), 15 deletions(-) > > diff --git a/include/linux/cma.h b/include/linux/cma.h > index 9db877506ea8..086553fbda73 100644 > --- a/include/linux/cma.h > +++ b/include/linux/cma.h > @@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, > struct cma **res_cma); > extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align, > bool no_warn); > +extern struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp); > extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count); > extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count); > > diff --git a/mm/cma.c b/mm/cma.c > index 95d6950e177b..46feb06db8e7 100644 > --- a/mm/cma.c > +++ b/mm/cma.c > @@ -403,18 +403,8 @@ static void cma_debug_show_areas(struct cma *cma) > spin_unlock_irq(&cma->lock); > } > > -/** > - * cma_alloc() - allocate pages from contiguous area > - * @cma: Contiguous memory region for which the allocation is performed. > - * @count: Requested number of pages. > - * @align: Requested alignment of pages (in PAGE_SIZE order). 
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 9db877506ea8..086553fbda73 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  					struct cma **res_cma);
>  extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
>  			      bool no_warn);
> +extern struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp);
>  extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
>  extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>  
> diff --git a/mm/cma.c b/mm/cma.c
> index 95d6950e177b..46feb06db8e7 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -403,18 +403,8 @@ static void cma_debug_show_areas(struct cma *cma)
>  	spin_unlock_irq(&cma->lock);
>  }
>  
> -/**
> - * cma_alloc() - allocate pages from contiguous area
> - * @cma: Contiguous memory region for which the allocation is performed.
> - * @count: Requested number of pages.
> - * @align: Requested alignment of pages (in PAGE_SIZE order).
> - * @no_warn: Avoid printing message about failed allocation
> - *
> - * This function allocates part of contiguous memory on specific
> - * contiguous memory area.
> - */
> -struct page *cma_alloc(struct cma *cma, unsigned long count,
> -		       unsigned int align, bool no_warn)
> +static struct page *__cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned int align, gfp_t gfp)
>  {
>  	unsigned long mask, offset;
>  	unsigned long pfn = -1;
> @@ -463,8 +453,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  
>  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>  		mutex_lock(&cma_mutex);
> -		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
> -					 GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp);
>  		mutex_unlock(&cma_mutex);
>  		if (ret == 0) {
>  			page = pfn_to_page(pfn);
> @@ -494,7 +483,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  			page_kasan_tag_reset(nth_page(page, i));
>  	}
>  
> -	if (ret && !no_warn) {
> +	if (ret && !(gfp & __GFP_NOWARN)) {
>  		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
>  				   __func__, cma->name, count, ret);
>  		cma_debug_show_areas(cma);
> @@ -513,6 +502,34 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	return page;
>  }
>  
> +/**
> + * cma_alloc() - allocate pages from contiguous area
> + * @cma: Contiguous memory region for which the allocation is performed.
> + * @count: Requested number of pages.
> + * @align: Requested alignment of pages (in PAGE_SIZE order).
> + * @no_warn: Avoid printing message about failed allocation
> + *
> + * This function allocates part of contiguous memory on specific
> + * contiguous memory area.
> + */
> +struct page *cma_alloc(struct cma *cma, unsigned long count,
> +		       unsigned int align, bool no_warn)
> +{
> +	return __cma_alloc(cma, count, align, GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +}
> +
> +struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
> +{
> +	struct page *page;
> +
> +	if (WARN_ON(order && !(gfp & __GFP_COMP)))
> +		return NULL;
> +
> +	page = __cma_alloc(cma, 1 << order, order, gfp);
> +
> +	return page ? page_folio(page) : NULL;

We don't set large_rmappable on the folio allocated here, which is
inconsistent with the other folio allocation paths, e.g.
folio_alloc()/folio_alloc_mpol(). That is fine for HugeTLB, since a
HugeTLB folio must not be large_rmappable, but once this is used for
mTHP/THP it will need some extra handling. Maybe we should set
large_rmappable here and clear it in init_new_hugetlb_folio()? A rough
sketch of that idea follows at the end of this mail.

> +}
> +
>  bool cma_pages_valid(struct cma *cma, const struct page *pages,
>  		     unsigned long count)
>  {
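
To illustrate the suggestion above, an untested sketch (assuming
mm/cma.c can use page_rmappable_folio() from mm/internal.h; the
init_new_hugetlb_folio() body is abbreviated):

	/* mm/cma.c: mirror what folio_alloc()/folio_alloc_mpol() do */
	struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
	{
		struct page *page;

		if (WARN_ON(order && !(gfp & __GFP_COMP)))
			return NULL;

		page = __cma_alloc(cma, 1 << order, order, gfp);

		/* page_rmappable_folio() sets large_rmappable on large folios */
		return page ? page_rmappable_folio(page) : NULL;
	}

	/* mm/hugetlb.c: a HugeTLB folio must not be large_rmappable */
	static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
	{
		if (folio_test_large_rmappable(folio))
			folio_clear_large_rmappable(folio);
		/* ... existing initialization ... */
	}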