From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Zi Yan, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH mm-unstable v2 2/3] mm/cma: add cma_{alloc,free}_folio()
Date: Thu, 22 Aug 2024 11:24:14 -0600
In-Reply-To: <20240814035451.773331-3-yuzhao@google.com>
References: <20240814035451.773331-1-yuzhao@google.com> <20240814035451.773331-3-yuzhao@google.com>

On Tue, Aug 13, 2024 at 09:54:50PM -0600, Yu Zhao wrote:
> With alloc_contig_range() and free_contig_range() supporting large
> folios, CMA can allocate and free large folios too, by
> cma_alloc_folio() and cma_free_folio().
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  include/linux/cma.h | 16 +++++++++++++
>  mm/cma.c            | 55 ++++++++++++++++++++++++++++++++-------------
>  2 files changed, 56 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 9db877506ea8..d15b64f51336 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -52,4 +52,20 @@ extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long
>  extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
>
>  extern void cma_reserve_pages_on_error(struct cma *cma);
> +
> +#ifdef CONFIG_CMA
> +struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp);
> +bool cma_free_folio(struct cma *cma, const struct folio *folio);
> +#else
> +static inline struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
> +{
> +	return NULL;
> +}
> +
> +static inline bool cma_free_folio(struct cma *cma, const struct folio *folio)
> +{
> +	return false;
> +}
> +#endif
> +
>  #endif
> diff --git a/mm/cma.c b/mm/cma.c
> index 95d6950e177b..4354823d28cf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -403,18 +403,8 @@ static void cma_debug_show_areas(struct cma *cma)
>  	spin_unlock_irq(&cma->lock);
>  }
>
> -/**
> - * cma_alloc() - allocate pages from contiguous area
> - * @cma: Contiguous memory region for which the allocation is performed.
> - * @count: Requested number of pages.
> - * @align: Requested alignment of pages (in PAGE_SIZE order).
> - * @no_warn: Avoid printing message about failed allocation
> - *
> - * This function allocates part of contiguous memory on specific
> - * contiguous memory area.
> - */
> -struct page *cma_alloc(struct cma *cma, unsigned long count,
> -		       unsigned int align, bool no_warn)
> +static struct page *__cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned int align, gfp_t gfp)
>  {
>  	unsigned long mask, offset;
>  	unsigned long pfn = -1;
> @@ -463,8 +453,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>
>  	pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>  	mutex_lock(&cma_mutex);
> -	ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
> -				 GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +	ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp);
>  	mutex_unlock(&cma_mutex);
>  	if (ret == 0) {
>  		page = pfn_to_page(pfn);
> @@ -494,7 +483,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  		page_kasan_tag_reset(nth_page(page, i));
>  	}
>
> -	if (ret && !no_warn) {
> +	if (ret && !(gfp & __GFP_NOWARN)) {
>  		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
>  				   __func__, cma->name, count, ret);
>  		cma_debug_show_areas(cma);
> @@ -513,6 +502,34 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	return page;
>  }
>
> +/**
> + * cma_alloc() - allocate pages from contiguous area
> + * @cma: Contiguous memory region for which the allocation is performed.
> + * @count: Requested number of pages.
> + * @align: Requested alignment of pages (in PAGE_SIZE order).
> + * @no_warn: Avoid printing message about failed allocation
> + *
> + * This function allocates part of contiguous memory on specific
> + * contiguous memory area.
> + */
> +struct page *cma_alloc(struct cma *cma, unsigned long count,
> +		       unsigned int align, bool no_warn)
> +{
> +	return __cma_alloc(cma, count, align, GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +}
> +
> +struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
> +{
> +	struct page *page;
> +
> +	if (WARN_ON(!order || !(gfp | __GFP_COMP)))

And here too. Thank you.

diff --git a/mm/cma.c b/mm/cma.c
index 4354823d28cf..2d9fae939283 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -522,7 +522,7 @@ struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
 {
 	struct page *page;

-	if (WARN_ON(!order || !(gfp | __GFP_COMP)))
+	if (WARN_ON(!order || !(gfp & __GFP_COMP)))
 		return NULL;

 	page = __cma_alloc(cma, 1 << order, order, gfp);
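
For anyone puzzled by why the original WARN_ON never fired: bitwise OR
with a non-zero constant can never yield zero, so !(gfp | __GFP_COMP)
is false for every gfp, making the check a no-op; only bitwise AND
actually tests the flag. A minimal standalone userspace sketch of the
two predicates (the flag value below is illustrative, not the kernel's
real __GFP_COMP bit):

	#include <stdio.h>

	#define GFP_COMP_EXAMPLE 0x4000u	/* illustrative bit only */

	int main(void)
	{
		unsigned int gfp = 0;	/* caller forgot __GFP_COMP */

		/* Buggy: OR-ing in a non-zero constant is never zero,
		 * so the negation is always false -- a no-op check. */
		printf("buggy: %d\n", !(gfp | GFP_COMP_EXAMPLE));	/* prints 0 */

		/* Fixed: AND tests whether the bit is actually set. */
		printf("fixed: %d\n", !(gfp & GFP_COMP_EXAMPLE));	/* prints 1 */

		return 0;
	}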
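For completeness, a sketch of how a caller might use the new pair once
the fix lands. This is illustrative only: grab_cma_folio() and
drop_cma_folio() are hypothetical helpers, and the struct cma pointer
is assumed to come from an earlier reservation, e.g. one made at boot
via cma_declare_contiguous_nid().

	/* Hypothetical helper; not part of the patch. */
	static struct folio *grab_cma_folio(struct cma *cma, int order)
	{
		/* order must be non-zero and __GFP_COMP is mandatory:
		 * with the fix above, cma_alloc_folio() WARNs and
		 * returns NULL otherwise, since a folio is a compound
		 * page. */
		return cma_alloc_folio(cma, order, GFP_KERNEL | __GFP_COMP);
	}

	/* Hypothetical helper; not part of the patch. */
	static void drop_cma_folio(struct cma *cma, struct folio *folio)
	{
		/* cma_free_folio() returns false if the folio does not
		 * belong to this CMA area; a real caller would need a
		 * fallback path for that case. */
		WARN_ON(!cma_free_folio(cma, folio));
	}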