From: madvenka@linux.microsoft.com
To: gregkh@linuxfoundation.org, pbonzini@redhat.com, rppt@kernel.org,
	jgowans@amazon.com, graf@amazon.de, arnd@arndb.de,
	keescook@chromium.org, stanislav.kinsburskii@gmail.com,
	anthony.yznaga@oracle.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com,
	jamorris@linux.microsoft.com
Subject: [RFC PATCH v1 05/10] mm/prmem: Implement a buffer allocator for persistent memory
Date: Mon, 16 Oct 2023 18:32:10 -0500
Message-Id: <20231016233215.13090-6-madvenka@linux.microsoft.com>
In-Reply-To: <20231016233215.13090-1-madvenka@linux.microsoft.com>
References: <20231016233215.13090-1-madvenka@linux.microsoft.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

Implement functions that can allocate and free memory smaller than a
page size.
- prmem_alloc()
- prmem_free()

These functions look like kmalloc() and kfree(). However, the only GFP
flag that is processed is __GFP_ZERO to zero out the allocated memory.

To make the implementation simpler, create allocation caches for
different object sizes:

	8, 16, 32, 64, ..., PAGE_SIZE

For a given size, allocate from the appropriate cache. This idea has
been plagiarized from the kmem allocator.

To fill the cache of a specific size, allocate a page, break it up
into equal sized objects and add the objects to the cache.

This is just a very simple allocator. It does not attempt to do
sophisticated things like cache coloring, coalescing objects that
belong to the same page so the page can be freed, etc.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 include/linux/prmem.h          |  12 ++++
 kernel/prmem/prmem_allocator.c | 112 ++++++++++++++++++++++++++++++++-
 2 files changed, 123 insertions(+), 1 deletion(-)

diff --git a/include/linux/prmem.h b/include/linux/prmem.h
index 108683933c82..1cb4660cf35e 100644
--- a/include/linux/prmem.h
+++ b/include/linux/prmem.h
@@ -50,6 +50,8 @@ struct prmem_region {
 	struct gen_pool_chunk	*chunk;
 };
 
+#define PRMEM_MAX_CACHES	14
+
 /*
  * PRMEM metadata.
  *
@@ -60,6 +62,9 @@ struct prmem_region {
  * size	Size of initial memory allocated to prmem.
  *
  * regions	List of memory regions.
+ *
+ * caches	Caches for different object sizes. For allocations smaller than
+ *		PAGE_SIZE, these caches are used.
  */
 struct prmem {
 	unsigned long		checksum;
@@ -68,6 +73,9 @@ struct prmem {
 
 	/* Persistent Regions. */
 	struct list_head	regions;
+
+	/* Allocation caches. */
+	void			*caches[PRMEM_MAX_CACHES];
 };
 
 extern struct prmem *prmem;
@@ -87,6 +95,8 @@ int prmem_cmdline_size(void);
 /* Allocator API. */
 struct page *prmem_alloc_pages(unsigned int order, gfp_t gfp);
 void prmem_free_pages(struct page *pages, unsigned int order);
+void *prmem_alloc(size_t size, gfp_t gfp);
+void prmem_free(void *va, size_t size);
 
 /* Internal functions.
  */
 struct prmem_region *prmem_add_region(unsigned long pa, size_t size);
@@ -95,6 +105,8 @@ void *prmem_alloc_pool(struct prmem_region *region, size_t size, int align);
 void prmem_free_pool(struct prmem_region *region, void *va, size_t size);
 void *prmem_alloc_pages_locked(unsigned int order);
 void prmem_free_pages_locked(void *va, unsigned int order);
+void *prmem_alloc_locked(size_t size);
+void prmem_free_locked(void *va, size_t size);
 unsigned long prmem_checksum(void *start, size_t size);
 bool __init prmem_validate(void);
 void prmem_cmdline(char *cmdline);
diff --git a/kernel/prmem/prmem_allocator.c b/kernel/prmem/prmem_allocator.c
index 07a5a430630c..f12975bc6777 100644
--- a/kernel/prmem/prmem_allocator.c
+++ b/kernel/prmem/prmem_allocator.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Persistent-Across-Kexec memory feature (prmem) - Allocator.
+ * Persistent-Across-Kexec memory (prmem) - Allocator.
  *
  * Copyright (C) 2023 Microsoft Corporation
  * Author: Madhavan T. Venkataraman (madvenka@linux.microsoft.com)
@@ -72,3 +72,113 @@ void prmem_free_pages(struct page *pages, unsigned int order)
 	spin_unlock(&prmem_lock);
 }
 EXPORT_SYMBOL_GPL(prmem_free_pages);
+
+/* Buffer allocation functions. */
+
+#if PAGE_SIZE > 65536
+#error "Page size is too big"
+#endif
+
+static size_t prmem_cache_sizes[PRMEM_MAX_CACHES] = {
+	8, 16, 32, 64, 128, 256, 512,
+	1024, 2048, 4096, 8192, 16384, 32768, 65536,
+};
+
+static int prmem_cache_index(size_t size)
+{
+	int i;
+
+	for (i = 0; i < PRMEM_MAX_CACHES; i++) {
+		if (size <= prmem_cache_sizes[i])
+			return i;
+	}
+	BUG();
+}
+
+static void prmem_refill(void **cache, size_t size)
+{
+	void *va;
+	int i, n = PAGE_SIZE / size;
+
+	/* Allocate a page. */
+	va = prmem_alloc_pages_locked(0);
+	if (!va)
+		return;
+
+	/* Break up the page into pieces and put them in the cache.
+	 */
+	for (i = 0; i < n; i++, va += size) {
+		*((void **) va) = *cache;
+		*cache = va;
+	}
+}
+
+void *prmem_alloc_locked(size_t size)
+{
+	void *va;
+	int index;
+	void **cache;
+
+	index = prmem_cache_index(size);
+	size = prmem_cache_sizes[index];
+
+	cache = &prmem->caches[index];
+	if (!*cache) {
+		/* Refill the cache. */
+		prmem_refill(cache, size);
+	}
+
+	/* Allocate one from the cache. */
+	va = *cache;
+	if (va)
+		*cache = *((void **) va);
+	return va;
+}
+
+void *prmem_alloc(size_t size, gfp_t gfp)
+{
+	void *va;
+	bool zero = !!(gfp & __GFP_ZERO);
+
+	if (!prmem_inited || !size)
+		return NULL;
+
+	/* This function is only for sizes up to a PAGE_SIZE. */
+	if (size > PAGE_SIZE)
+		return NULL;
+
+	spin_lock(&prmem_lock);
+	va = prmem_alloc_locked(size);
+	spin_unlock(&prmem_lock);
+
+	if (va && zero)
+		memset(va, 0, size);
+	return va;
+}
+EXPORT_SYMBOL_GPL(prmem_alloc);
+
+void prmem_free_locked(void *va, size_t size)
+{
+	int index;
+	void **cache;
+
+	/* Free the object into its cache. */
+	index = prmem_cache_index(size);
+	cache = &prmem->caches[index];
+	*((void **) va) = *cache;
+	*cache = va;
+}
+
+void prmem_free(void *va, size_t size)
+{
+	if (!prmem_inited || !va || !size)
+		return;
+
+	/* This function is only for sizes up to a PAGE_SIZE. */
+	if (size > PAGE_SIZE)
+		return;
+
+	spin_lock(&prmem_lock);
+	prmem_free_locked(va, size);
+	spin_unlock(&prmem_lock);
+}
+EXPORT_SYMBOL_GPL(prmem_free);
-- 
2.25.1