Date: Fri, 28 Feb 2025 18:29:25 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Mime-Version: 1.0
References: <20250228182928.2645936-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Message-ID: <20250228182928.2645936-25-fvdl@google.com>
Subject: [PATCH v5 24/27] mm/cma: introduce interface for early reservations
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Content-Type: text/plain; charset="UTF-8"

It can be desirable to reserve memory in a CMA area before it is
activated, early in boot. Such reservations would effectively be
memblock allocations, but they can be returned to the CMA area later.
This functionality can be used to allow hugetlb bootmem allocations
from a hugetlb CMA area.

A new interface, cma_reserve_early(), is introduced. This allows for
pageblock-aligned reservations. These reservations are skipped during
the initial handoff of pages in a CMA area to the buddy allocator. The
caller is responsible for making sure that the page structures are set
up, and that the migrate type is set correctly, as with other memblock
allocations that stick around. If the CMA area fails to activate
(because it intersects with multiple zones), the reserved memory is not
given to the buddy allocator; the caller needs to take care of that.

Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/cma.c      | 83 ++++++++++++++++++++++++++++++++++++++++++++++-----
 mm/cma.h      |  8 +++++
 mm/internal.h | 16 ++++++++++
 mm/mm_init.c  |  9 ++++++
 4 files changed, 109 insertions(+), 7 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 5e1d169e24fa..09322b8284bd 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -144,9 +144,10 @@ bool cma_validate_zones(struct cma *cma)
 
 static void __init cma_activate_area(struct cma *cma)
 {
-	unsigned long pfn, base_pfn;
+	unsigned long pfn, end_pfn;
 	int allocrange, r;
 	struct cma_memrange *cmr;
+	unsigned long bitmap_count, count;
 
 	for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
 		cmr = &cma->ranges[allocrange];
@@ -161,8 +162,13 @@ static void __init cma_activate_area(struct cma *cma)
 
 	for (r = 0; r < cma->nranges; r++) {
 		cmr = &cma->ranges[r];
-		base_pfn = cmr->base_pfn;
-		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
+		if (cmr->early_pfn != cmr->base_pfn) {
+			count = cmr->early_pfn - cmr->base_pfn;
+			bitmap_count = cma_bitmap_pages_to_bits(cma, count);
+			bitmap_set(cmr->bitmap, 0, bitmap_count);
+		}
+
+		for (pfn = cmr->early_pfn; pfn < cmr->base_pfn + cmr->count;
 		     pfn += pageblock_nr_pages)
 			init_cma_reserved_pageblock(pfn_to_page(pfn));
 	}
@@ -173,6 +179,7 @@ static void __init cma_activate_area(struct cma *cma)
 	INIT_HLIST_HEAD(&cma->mem_head);
 	spin_lock_init(&cma->mem_head_lock);
 #endif
+	set_bit(CMA_ACTIVATED, &cma->flags);
 
 	return;
 
@@ -184,9 +191,8 @@ static void __init cma_activate_area(struct cma *cma)
 	if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) {
 		for (r = 0; r < allocrange; r++) {
 			cmr = &cma->ranges[r];
-			for (pfn = cmr->base_pfn;
-			     pfn < cmr->base_pfn + cmr->count;
-			     pfn++)
+			end_pfn = cmr->base_pfn + cmr->count;
+			for (pfn = cmr->early_pfn; pfn < end_pfn; pfn++)
 				free_reserved_page(pfn_to_page(pfn));
 		}
 	}
@@ -290,6 +296,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 		return ret;
 
 	cma->ranges[0].base_pfn = PFN_DOWN(base);
+	cma->ranges[0].early_pfn = PFN_DOWN(base);
 	cma->ranges[0].count = cma->count;
 	cma->nranges = 1;
 	cma->nid = NUMA_NO_NODE;
@@ -509,6 +516,7 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 			 nr, (u64)mlp->base, (u64)mlp->base + size);
 		cmrp = &cma->ranges[nr++];
 		cmrp->base_pfn = PHYS_PFN(mlp->base);
+		cmrp->early_pfn = cmrp->base_pfn;
 		cmrp->count = size >> PAGE_SHIFT;
 
 		sizeleft -= size;
@@ -540,7 +548,6 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 	pr_info("Reserved %lu MiB in %d range%s\n",
 		(unsigned long)total_size / SZ_1M, nr,
 		nr > 1 ? "s" : "");
-
 	return ret;
 }
 
@@ -1034,3 +1041,65 @@ bool cma_intersects(struct cma *cma, unsigned long start, unsigned long end)
 
 	return false;
 }
+
+/*
+ * Very basic function to reserve memory from a CMA area that has not
+ * yet been activated. This is expected to be called early, when the
+ * system is single-threaded, so there is no locking. The alignment
+ * checking is restrictive - only pageblock-aligned areas
+ * (CMA_MIN_ALIGNMENT_BYTES) may be reserved through this function.
+ * This keeps things simple, and is enough for the current use case.
+ *
+ * The CMA bitmaps have not yet been allocated, so just start
+ * reserving from the bottom up, using a PFN to keep track
+ * of what has been reserved. Unreserving is not possible.
+ *
+ * The caller is responsible for initializing the page structures
+ * in the area properly, since this just points to memblock-allocated
+ * memory. The caller should subsequently use init_cma_pageblock to
+ * set the migrate type and CMA stats for the pageblocks that were
+ * reserved.
+ *
+ * If the CMA area fails to activate later, memory obtained through
+ * this interface is not handed to the page allocator; that is
+ * the responsibility of the caller (e.g. like normal memblock-allocated
+ * memory).
+ */
+void __init *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	int r;
+	struct cma_memrange *cmr;
+	unsigned long available;
+	void *ret = NULL;
+
+	if (!cma || !cma->count)
+		return NULL;
+	/*
+	 * Can only be called early in init.
+	 */
+	if (test_bit(CMA_ACTIVATED, &cma->flags))
+		return NULL;
+
+	if (!IS_ALIGNED(size, CMA_MIN_ALIGNMENT_BYTES))
+		return NULL;
+
+	if (!IS_ALIGNED(size, (PAGE_SIZE << cma->order_per_bit)))
+		return NULL;
+
+	size >>= PAGE_SHIFT;
+
+	if (size > cma->available_count)
+		return NULL;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		available = cmr->count - (cmr->early_pfn - cmr->base_pfn);
+		if (size <= available) {
+			ret = phys_to_virt(PFN_PHYS(cmr->early_pfn));
+			cmr->early_pfn += size;
+			cma->available_count -= size;
+			return ret;
+		}
+	}
+
+	return ret;
+}
diff --git a/mm/cma.h b/mm/cma.h
index bddc84b3cd96..df7fc623b7a6 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -16,9 +16,16 @@ struct cma_kobject {
  * and the total amount of memory requested, while smaller than the total
  * amount of memory available, is large enough that it doesn't fit in a
  * single physical memory range because of memory holes.
+ *
+ * Fields:
+ * @base_pfn: physical address of range
+ * @early_pfn: first PFN not reserved through cma_reserve_early
+ * @count: size of range
+ * @bitmap: bitmap of allocated (1 << order_per_bit)-sized chunks.
 */
 struct cma_memrange {
 	unsigned long base_pfn;
+	unsigned long early_pfn;
 	unsigned long count;
 	unsigned long *bitmap;
 #ifdef CONFIG_CMA_DEBUGFS
@@ -58,6 +65,7 @@ enum cma_flags {
 	CMA_RESERVE_PAGES_ON_ERROR,
 	CMA_ZONES_VALID,
 	CMA_ZONES_INVALID,
+	CMA_ACTIVATED,
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
diff --git a/mm/internal.h b/mm/internal.h
index 63fda9bb9426..8318c8e6e589 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -848,6 +848,22 @@ void init_cma_reserved_pageblock(struct page *page);
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
+struct cma;
+
+#ifdef CONFIG_CMA
+void *cma_reserve_early(struct cma *cma, unsigned long size);
+void init_cma_pageblock(struct page *page);
+#else
+static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	return NULL;
+}
+static inline void init_cma_pageblock(struct page *page)
+{
+}
+#endif
+
+
 int find_suitable_fallback(struct free_area *area, unsigned int order,
 			   int migratetype, bool only_stealable, bool *can_steal);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f7d5b4fe1ae9..f31260fd393e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2263,6 +2263,15 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	adjust_managed_page_count(page, pageblock_nr_pages);
 	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
+/*
+ * Similar to above, but only set the migrate type and stats.
+ */
+void __init init_cma_pageblock(struct page *page)
+{
+	set_pageblock_migratetype(page, MIGRATE_CMA);
+	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
+}
 #endif
 
 void set_zone_contiguous(struct zone *zone)
-- 
2.48.1.711.g2feabab25a-goog
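
For illustration only, not part of the patch: a minimal sketch of how an
early-boot caller might use the new interface, following the rules in the
comment above cma_reserve_early(). The example_* names are hypothetical;
the actual consumer in this series is the hugetlb bootmem code.

/*
 * Hypothetical early-boot caller, illustrative only. It reserves a
 * pageblock-aligned chunk from a not-yet-activated CMA area, and later,
 * once the page structures covering the range have been initialized,
 * marks each pageblock with the CMA migrate type and stats.
 */
static void * __init example_reserve(struct cma *cma, unsigned long size)
{
	/*
	 * cma_reserve_early() rejects sizes that are not multiples of
	 * CMA_MIN_ALIGNMENT_BYTES (one pageblock), so round up first.
	 * It also requires alignment to the area's order_per_bit
	 * granularity.
	 */
	size = ALIGN(size, CMA_MIN_ALIGNMENT_BYTES);

	/* NULL if the area was already activated or has no room left. */
	return cma_reserve_early(cma, size);
}

static void __init example_mark_pageblocks(void *vaddr, unsigned long size)
{
	struct page *page = virt_to_page(vaddr);
	unsigned long nr_pages = size >> PAGE_SHIFT;
	unsigned long i;

	/* One call per pageblock, mirroring cma_activate_area(). */
	for (i = 0; i < nr_pages; i += pageblock_nr_pages)
		init_cma_pageblock(page + i);
}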