Subject: Re: [PATCH] mm: use proper type for cma_[alloc|release]
To: Minchan Kim, Andrew Morton
Cc: linux-mm, LKML, joaodias@google.com, Matthew Wilcox
References: <20210331164018.710560-1-minchan@kernel.org>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <5af695c6-4698-9945-1d2d-164665c056f6@redhat.com>
Date: Wed, 31 Mar 2021 19:34:39 +0200
In-Reply-To: <20210331164018.710560-1-minchan@kernel.org>

On 31.03.21 18:40, Minchan Kim wrote:
> size_t in cma_alloc is confusing since it makes people think
> it's byte count, not pages. Change it to unsigned long[1].
> 
> The unsigned int in cma_release is also not right so change it.
> Since we have unsigned long in cma_release, free_contig_range
> should also respect it.
> 
> [1] 67a2e213e7e9, mm: cma: fix incorrect type conversion for size during dma allocation
> Link: https://lore.kernel.org/linux-mm/20210324043434.GP1719932@casper.infradead.org/
> Cc: Matthew Wilcox
> Cc: David Hildenbrand
> Signed-off-by: Minchan Kim
> ---
>  include/linux/cma.h        |  4 ++--
>  include/linux/gfp.h        |  2 +-
>  include/trace/events/cma.h | 22 +++++++++++-----------
>  mm/cma.c                   | 17 +++++++++--------
>  mm/page_alloc.c            |  6 +++---
>  5 files changed, 26 insertions(+), 25 deletions(-)
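
(Illustrative aside, not part of Minchan's patch: a minimal caller sketch, assuming a hypothetical CMA area pointer "my_cma" and hypothetical wrapper names, showing that the count passed to cma_alloc()/cma_release() is a number of pages rather than a byte count -- exactly the bytes-vs-pages confusion the type change above is meant to avoid.)

/*
 * Hypothetical caller sketch: cma_alloc()/cma_release() take a page
 * count, not a byte count, so a byte size must be converted through
 * PAGE_SHIFT before it is passed in.
 */
#include <linux/cma.h>
#include <linux/mm.h>

static struct page *my_alloc_from_cma(struct cma *my_cma, size_t size_bytes)
{
	unsigned long nr_pages = PAGE_ALIGN(size_bytes) >> PAGE_SHIFT;

	/* align == 0: no alignment requirement beyond a single page */
	return cma_alloc(my_cma, nr_pages, 0, false);
}

static bool my_release_to_cma(struct cma *my_cma, const struct page *pages,
			      size_t size_bytes)
{
	return cma_release(my_cma, pages, PAGE_ALIGN(size_bytes) >> PAGE_SHIFT);
}
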
> 
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 217999c8a762..53fd8c3cdbd0 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -44,9 +44,9 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  					unsigned int order_per_bit,
>  					const char *name,
>  					struct cma **res_cma);
> -extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
>  			      bool no_warn);
> -extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
> +extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>  
>  extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
>  #endif
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 0a88f84b08f4..529c27c6cb15 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -639,7 +639,7 @@ extern int alloc_contig_range(unsigned long start, unsigned long end,
>  extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
>  				       int nid, nodemask_t *nodemask);
>  #endif
> -void free_contig_range(unsigned long pfn, unsigned int nr_pages);
> +void free_contig_range(unsigned long pfn, unsigned long nr_pages);
>  
>  #ifdef CONFIG_CMA
>  /* CMA stuff */
> diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
> index 5cf385ae7c08..c3d354702cb0 100644
> --- a/include/trace/events/cma.h
> +++ b/include/trace/events/cma.h
> @@ -11,7 +11,7 @@
>  DECLARE_EVENT_CLASS(cma_alloc_class,
>  
>  	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
> -		 unsigned int count, unsigned int align),
> +		 unsigned long count, unsigned int align),
>  
>  	TP_ARGS(name, pfn, page, count, align),
>  
> @@ -19,7 +19,7 @@ DECLARE_EVENT_CLASS(cma_alloc_class,
>  		__string(name, name)
>  		__field(unsigned long, pfn)
>  		__field(const struct page *, page)
> -		__field(unsigned int, count)
> +		__field(unsigned long, count)
>  		__field(unsigned int, align)
>  	),
>  
> @@ -31,7 +31,7 @@ DECLARE_EVENT_CLASS(cma_alloc_class,
>  		__entry->align = align;
>  	),
>  
> -	TP_printk("name=%s pfn=%lx page=%p count=%u align=%u",
> +	TP_printk("name=%s pfn=%lx page=%p count=%lu align=%u",
>  		  __get_str(name),
>  		  __entry->pfn,
>  		  __entry->page,
> @@ -42,7 +42,7 @@
>  TRACE_EVENT(cma_release,
>  
>  	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
> -		 unsigned int count),
> +		 unsigned long count),
>  
>  	TP_ARGS(name, pfn, page, count),
>  
> @@ -50,7 +50,7 @@ TRACE_EVENT(cma_release,
>  		__string(name, name)
>  		__field(unsigned long, pfn)
>  		__field(const struct page *, page)
> -		__field(unsigned int, count)
> +		__field(unsigned long, count)
>  	),
>  
>  	TP_fast_assign(
> @@ -60,7 +60,7 @@ TRACE_EVENT(cma_release,
>  		__entry->count = count;
>  	),
>  
> -	TP_printk("name=%s pfn=%lx page=%p count=%u",
> +	TP_printk("name=%s pfn=%lx page=%p count=%lu",
>  		  __get_str(name),
>  		  __entry->pfn,
>  		  __entry->page,
> @@ -69,13 +69,13 @@ TRACE_EVENT(cma_release,
>  
>  TRACE_EVENT(cma_alloc_start,
>  
> -	TP_PROTO(const char *name, unsigned int count, unsigned int align),
> +	TP_PROTO(const char *name, unsigned long count, unsigned int align),
>  
>  	TP_ARGS(name, count, align),
>  
>  	TP_STRUCT__entry(
>  		__string(name, name)
> -		__field(unsigned int, count)
> +		__field(unsigned long, count)
>  		__field(unsigned int, align)
>  	),
>  
> @@ -85,7 +85,7 @@ TRACE_EVENT(cma_alloc_start,
>  		__entry->align = align;
>  	),
>  
> -	TP_printk("name=%s count=%u align=%u",
> +	TP_printk("name=%s count=%lu align=%u",
>  		  __get_str(name),
>  		  __entry->count,
>  		  __entry->align)
> @@ -94,7 +94,7 @@ TRACE_EVENT(cma_alloc_start,
>  DEFINE_EVENT(cma_alloc_class, cma_alloc_finish,
>  
>  	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
> -		 unsigned int count, unsigned int align),
> +		 unsigned long count, unsigned int align),
>  
>  	TP_ARGS(name, pfn, page, count, align)
>  );
> @@ -102,7 +102,7 @@ DEFINE_EVENT(cma_alloc_class, cma_alloc_finish,
>  DEFINE_EVENT(cma_alloc_class, cma_alloc_busy_retry,
>  
>  	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
> -		 unsigned int count, unsigned int align),
> +		 unsigned long count, unsigned int align),
>  
>  	TP_ARGS(name, pfn, page, count, align)
>  );
> diff --git a/mm/cma.c b/mm/cma.c
> index de6b9f01be53..f3bca4178c7f 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -80,7 +80,7 @@ static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
>  }
>  
>  static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
> -			     unsigned int count)
> +			     unsigned long count)
>  {
>  	unsigned long bitmap_no, bitmap_count;
>  
> @@ -423,21 +423,21 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
>   * This function allocates part of contiguous memory on specific
>   * contiguous memory area.
>   */
> -struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> -		       bool no_warn)
> +struct page *cma_alloc(struct cma *cma, unsigned long count,
> +		       unsigned int align, bool no_warn)
>  {
>  	unsigned long mask, offset;
>  	unsigned long pfn = -1;
>  	unsigned long start = 0;
>  	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
> -	size_t i;
> +	unsigned long i;
>  	struct page *page = NULL;
>  	int ret = -ENOMEM;
>  
>  	if (!cma || !cma->count || !cma->bitmap)
>  		goto out;
>  
> -	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
> +	pr_debug("%s(cma %p, count %lu, align %d)\n", __func__, (void *)cma,
>  		 count, align);
>  
>  	if (!count)
> @@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  	}
>  
>  	if (ret && !no_warn) {
> -		pr_err_ratelimited("%s: %s: alloc failed, req-size: %zu pages, ret: %d\n",
> +		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
>  				   __func__, cma->name, count, ret);
>  		cma_debug_show_areas(cma);
>  	}
> @@ -534,14 +534,15 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>   * It returns false when provided pages do not belong to contiguous area and
>   * true otherwise.
>   */
> -bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
> +bool cma_release(struct cma *cma, const struct page *pages,
> +		 unsigned long count)
>  {
>  	unsigned long pfn;
>  
>  	if (!cma || !pages)
>  		return false;
>  
> -	pr_debug("%s(page %p, count %u)\n", __func__, (void *)pages, count);
> +	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>  
>  	pfn = page_to_pfn(pages);
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c53fe4fa10bf..21540fb29b0d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8817,9 +8817,9 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
>  }
>  #endif /* CONFIG_CONTIG_ALLOC */
>  
> -void free_contig_range(unsigned long pfn, unsigned int nr_pages)
> +void free_contig_range(unsigned long pfn, unsigned long nr_pages)
>  {
> -	unsigned int count = 0;
> +	unsigned long count = 0;
>  
>  	for (; nr_pages--; pfn++) {
>  		struct page *page = pfn_to_page(pfn);
> @@ -8827,7 +8827,7 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
>  		count += page_count(page) != 1;
>  		__free_page(page);
>  	}
> -	WARN(count != 0, "%d pages are still in use!\n", count);
> +	WARN(count != 0, "%lu pages are still in use!\n", count);
>  }
>  EXPORT_SYMBOL(free_contig_range);
> 

Reviewed-by: David Hildenbrand

-- 
Thanks,

David / dhildenb