From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
	Pratyush Yadav, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 1/3] cma: move __cma_declare_contiguous_nid() before its usage
Date: Thu, 3 Jul 2025 21:47:09 +0300
Message-ID: <20250703184711.3485940-2-rppt@kernel.org>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250703184711.3485940-1-rppt@kernel.org>
References: <20250703184711.3485940-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

and kill static declaration

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador
Acked-by: David Hildenbrand
---
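
A note on the shape of the change (standalone illustration only; the
helper names below are invented): once a static function is defined
above its only caller, the forward declaration becomes redundant and
can be deleted.

	/* Before: the caller precedes the definition, so a declaration is needed. */
	static int helper(int x);

	int entry(int x)
	{
		return helper(x);
	}

	static int helper(int x)
	{
		return x + 1;
	}

	/* After: the definition moved above its caller; the declaration is gone. */
	static int helper(int x)
	{
		return x + 1;
	}

	int entry(int x)
	{
		return helper(x);
	}

This patch does the same at file scope: the body of
__cma_declare_contiguous_nid() moves above cma_declare_contiguous_nid()
and the static declaration at the top of mm/cma.c is dropped.
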
 mm/cma.c | 294 +++++++++++++++++++++++++++----------------------------
 1 file changed, 144 insertions(+), 150 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 397567883a10..9bf95f8f0f33 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -35,12 +35,6 @@
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned int cma_area_count;
 
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
-			phys_addr_t size, phys_addr_t limit,
-			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma,
-			int nid);
-
 phys_addr_t cma_get_base(const struct cma *cma)
 {
 	WARN_ON_ONCE(cma->nranges != 1);
@@ -358,6 +352,150 @@ static void __init list_insert_sorted(
 	}
 }
 
+static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
+			phys_addr_t size, phys_addr_t limit,
+			phys_addr_t alignment, unsigned int order_per_bit,
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid)
+{
+	phys_addr_t memblock_end = memblock_end_of_DRAM();
+	phys_addr_t highmem_start, base = *basep;
+	int ret;
+
+	/*
+	 * We can't use __pa(high_memory) directly, since high_memory
+	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+	 * complain. Find the boundary by adding one to the last valid
+	 * address.
+	 */
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		highmem_start = __pa(high_memory - 1) + 1;
+	else
+		highmem_start = memblock_end_of_DRAM();
+	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
+		__func__, &size, &base, &limit, &alignment);
+
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	if (alignment && !is_power_of_2(alignment))
+		return -EINVAL;
+
+	if (!IS_ENABLED(CONFIG_NUMA))
+		nid = NUMA_NO_NODE;
+
+	/* Sanitise input arguments. */
+	alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
+	if (fixed && base & (alignment - 1)) {
+		pr_err("Region at %pa must be aligned to %pa bytes\n",
+			&base, &alignment);
+		return -EINVAL;
+	}
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+
+	if (!base)
+		fixed = false;
+
+	/* size should be aligned with order_per_bit */
+	if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	/*
+	 * If allocating at a fixed base the request region must not cross the
+	 * low/high memory boundary.
+	 */
+	if (fixed && base < highmem_start && base + size > highmem_start) {
+		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+			&base, &highmem_start);
+		return -EINVAL;
+	}
+
+	/*
+	 * If the limit is unspecified or above the memblock end, its effective
+	 * value will be the memblock end. Set it explicitly to simplify further
+	 * checks.
+	 */
+	if (limit == 0 || limit > memblock_end)
+		limit = memblock_end;
+
+	if (base + size > limit) {
+		pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
+			&size, &base, &limit);
+		return -EINVAL;
+	}
+
+	/* Reserve memory */
+	if (fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			return -EBUSY;
+		}
+	} else {
+		phys_addr_t addr = 0;
+
+		/*
+		 * If there is enough memory, try a bottom-up allocation first.
+		 * It will place the new cma area close to the start of the node
+		 * and guarantee that the compaction is moving pages out of the
+		 * cma area and not into it.
+		 * Avoid using first 4GB to not interfere with constrained zones
+		 * like DMA/DMA32.
+		 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
+			memblock_set_bottom_up(true);
+			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
+							limit, nid, true);
+			memblock_set_bottom_up(false);
+		}
+#endif
+
+		/*
+		 * All pages in the reserved area must come from the same zone.
+		 * If the requested region crosses the low/high memory boundary,
+		 * try allocating from high memory first and fall back to low
+		 * memory in case of failure.
+		 */
+		if (!addr && base < highmem_start && limit > highmem_start) {
+			addr = memblock_alloc_range_nid(size, alignment,
+					highmem_start, limit, nid, true);
+			limit = highmem_start;
+		}
+
+		if (!addr) {
+			addr = memblock_alloc_range_nid(size, alignment, base,
+					limit, nid, true);
+			if (!addr)
+				return -ENOMEM;
+		}
+
+		/*
+		 * kmemleak scans/reads tracked objects for pointers to other
+		 * objects but this address isn't mapped and accessible
+		 */
+		kmemleak_ignore_phys(addr);
+		base = addr;
+	}
+
+	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+	if (ret) {
+		memblock_phys_free(base, size);
+		return ret;
+	}
+
+	(*res_cma)->nid = nid;
+	*basep = base;
+
+	return 0;
+}
+
 /*
  * Create CMA areas with a total size of @total_size. A normal allocation
  * for one area is tried first. If that fails, the biggest memblock
@@ -593,150 +731,6 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	return ret;
 }
 
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
-			phys_addr_t size, phys_addr_t limit,
-			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma,
-			int nid)
-{
-	phys_addr_t memblock_end = memblock_end_of_DRAM();
-	phys_addr_t highmem_start, base = *basep;
-	int ret;
-
-	/*
-	 * We can't use __pa(high_memory) directly, since high_memory
-	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
-	 * complain. Find the boundary by adding one to the last valid
-	 * address.
-	 */
-	if (IS_ENABLED(CONFIG_HIGHMEM))
-		highmem_start = __pa(high_memory - 1) + 1;
-	else
-		highmem_start = memblock_end_of_DRAM();
-	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
-		__func__, &size, &base, &limit, &alignment);
-
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	if (alignment && !is_power_of_2(alignment))
-		return -EINVAL;
-
-	if (!IS_ENABLED(CONFIG_NUMA))
-		nid = NUMA_NO_NODE;
-
-	/* Sanitise input arguments. */
-	alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
-	if (fixed && base & (alignment - 1)) {
-		pr_err("Region at %pa must be aligned to %pa bytes\n",
-			&base, &alignment);
-		return -EINVAL;
-	}
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	if (!base)
-		fixed = false;
-
-	/* size should be aligned with order_per_bit */
-	if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
-		return -EINVAL;
-
-	/*
-	 * If allocating at a fixed base the request region must not cross the
-	 * low/high memory boundary.
-	 */
-	if (fixed && base < highmem_start && base + size > highmem_start) {
-		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
-			&base, &highmem_start);
-		return -EINVAL;
-	}
-
-	/*
-	 * If the limit is unspecified or above the memblock end, its effective
-	 * value will be the memblock end. Set it explicitly to simplify further
-	 * checks.
-	 */
-	if (limit == 0 || limit > memblock_end)
-		limit = memblock_end;
-
-	if (base + size > limit) {
-		pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
-			&size, &base, &limit);
-		return -EINVAL;
-	}
-
-	/* Reserve memory */
-	if (fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			return -EBUSY;
-		}
-	} else {
-		phys_addr_t addr = 0;
-
-		/*
-		 * If there is enough memory, try a bottom-up allocation first.
-		 * It will place the new cma area close to the start of the node
-		 * and guarantee that the compaction is moving pages out of the
-		 * cma area and not into it.
-		 * Avoid using first 4GB to not interfere with constrained zones
-		 * like DMA/DMA32.
-		 */
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
-		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
-			memblock_set_bottom_up(true);
-			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
-							limit, nid, true);
-			memblock_set_bottom_up(false);
-		}
-#endif
-
-		/*
-		 * All pages in the reserved area must come from the same zone.
-		 * If the requested region crosses the low/high memory boundary,
-		 * try allocating from high memory first and fall back to low
-		 * memory in case of failure.
-		 */
-		if (!addr && base < highmem_start && limit > highmem_start) {
-			addr = memblock_alloc_range_nid(size, alignment,
-					highmem_start, limit, nid, true);
-			limit = highmem_start;
-		}
-
-		if (!addr) {
-			addr = memblock_alloc_range_nid(size, alignment, base,
-					limit, nid, true);
-			if (!addr)
-				return -ENOMEM;
-		}
-
-		/*
-		 * kmemleak scans/reads tracked objects for pointers to other
-		 * objects but this address isn't mapped and accessible
-		 */
-		kmemleak_ignore_phys(addr);
-		base = addr;
-	}
-
-	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
-	if (ret) {
-		memblock_phys_free(base, size);
-		return ret;
-	}
-
-	(*res_cma)->nid = nid;
-	*basep = base;
-
-	return 0;
-}
-
 static void cma_debug_show_areas(struct cma *cma)
 {
 	unsigned long next_zero_bit, next_set_bit, nr_zero;
-- 
2.47.2
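
For readers coming to this code fresh, a sketch of how the public wrapper
is typically used: its parameter list mirrors __cma_declare_contiguous_nid()
above, with base passed by value rather than through *basep. The "demo"
names are hypothetical and only illustrate the call:

	#include <linux/cma.h>
	#include <linux/numa.h>
	#include <linux/sizes.h>

	static struct cma *demo_cma;

	/* Must run early, while memblock is still the boot-time allocator. */
	static int __init demo_cma_reserve(void)
	{
		/*
		 * 16 MiB anywhere in memory: base 0, no limit, default
		 * alignment, order_per_bit 0, not a fixed placement, no
		 * preferred node.
		 */
		return cma_declare_contiguous_nid(0, SZ_16M, 0, 0, 0, false,
						  "demo", &demo_cma,
						  NUMA_NO_NODE);
	}

On success, *res_cma (here demo_cma) points at the new area, and pages can
later be taken from it with cma_alloc().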