From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <598c9762-83b3-4517-858c-8349d6dceec2@arm.com>
Date: Tue, 30 Jan 2024 14:48:53 +0530
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH RFC v3 10/35] mm: cma: Fast track allocating memory when
 the pages are free
Content-Language: en-US
To: Alexandru Elisei <alexandru.elisei@arm.com>, catalin.marinas@arm.com,
 will@kernel.org, oliver.upton@linux.dev, maz@kernel.org,
 james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com,
 arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com,
 peterz@infradead.org, juri.lelli@redhat.com,
 vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org,
 bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
 vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, vincenzo.frascino@arm.com,
 david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
 <20240125164256.4147-11-alexandru.elisei@arm.com>
From: Anshuman Khandual <anshuman.khandual@arm.com>
In-Reply-To: <20240125164256.4147-11-alexandru.elisei@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 1/25/24 22:12, Alexandru Elisei wrote:
> If the pages to be allocated are free, take them directly off the buddy
> allocator, instead of going through alloc_contig_range(), thus avoiding
> costly calls to lru_cache_disable().
>
> Only allocations of the same size as the CMA region order are considered,
> to avoid taking the zone spinlock for too long.
>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>

This patch improves the standard cma_alloc() as well as the previously
added allocator cma_alloc_range(), via a new helper,
cma_alloc_pages_fastpath().

Shouldn't the standard cma_alloc() improvement be posted as an independent
patch, separate from this series? Or is it somehow related to this series
in a way that I might be missing?

> ---
>
> Changes since rfc v2:
>
> * New patch. Reworked from the rfc v2 patch #26 ("arm64: mte: Fast track
>   reserving tag storage when the block is free") (David Hildenbrand).
>
>  include/linux/page-flags.h | 15 ++++++++++++--
>  mm/Kconfig                 |  5 +++++
>  mm/cma.c                   | 42 ++++++++++++++++++++++++++++++++++----
>  mm/memory-failure.c        |  8 ++++----
>  mm/page_alloc.c            | 23 ++++++++++++---------
>  5 files changed, 73 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 735cddc13d20..b7237bce7446 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -575,11 +575,22 @@ TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
>  #define MAGIC_HWPOISON	0x48575053U	/* HWPS */
>  extern void SetPageHWPoisonTakenOff(struct page *page);
>  extern void ClearPageHWPoisonTakenOff(struct page *page);
> -extern bool take_page_off_buddy(struct page *page);
> -extern bool put_page_back_buddy(struct page *page);
> +extern bool PageHWPoisonTakenOff(struct page *page);
>  #else
>  PAGEFLAG_FALSE(HWPoison, hwpoison)
> +TESTSCFLAG_FALSE(HWPoison, hwpoison)
>  #define __PG_HWPOISON 0
> +static inline void SetPageHWPoisonTakenOff(struct page *page) { }
> +static inline void ClearPageHWPoisonTakenOff(struct page *page) { }
> +static inline bool PageHWPoisonTakenOff(struct page *page)
> +{
> +	return false;
> +}
> +#endif
> +
> +#ifdef CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY
> +extern bool take_page_off_buddy(struct page *page, bool poison);
> +extern bool put_page_back_buddy(struct page *page, bool unpoison);
>  #endif
>
>  #if defined(CONFIG_PAGE_IDLE_FLAG) && defined(CONFIG_64BIT)
> diff --git a/mm/Kconfig b/mm/Kconfig
> index ffc3a2ba3a8c..341cf53898db 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -745,12 +745,16 @@ config DEFAULT_MMAP_MIN_ADDR
>  config ARCH_SUPPORTS_MEMORY_FAILURE
>  	bool
>  
> +config WANTS_TAKE_PAGE_OFF_BUDDY
> +	bool
> +
>  config MEMORY_FAILURE
>  	depends on MMU
>  	depends on ARCH_SUPPORTS_MEMORY_FAILURE
>  	bool "Enable recovery from hardware memory errors"
>  	select MEMORY_ISOLATION
>  	select RAS
> +	select WANTS_TAKE_PAGE_OFF_BUDDY
>  	help
>  	  Enables code to recover from some memory failures on systems
>  	  with MCA recovery. This allows a system to continue running
> @@ -891,6 +895,7 @@ config CMA
>  	depends on MMU
>  	select MIGRATION
>  	select MEMORY_ISOLATION
> +	select WANTS_TAKE_PAGE_OFF_BUDDY
>  	help
>  	  This enables the Contiguous Memory Allocator which allows other
>  	  subsystems to allocate big physically-contiguous blocks of memory.
> diff --git a/mm/cma.c b/mm/cma.c
> index 2881bab12b01..15663f95d77b 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -444,6 +444,34 @@ static void cma_debug_show_areas(struct cma *cma)
>  static inline void cma_debug_show_areas(struct cma *cma) { }
>  #endif
>  
> +/* Called with the cma mutex held. */
> +static int cma_alloc_pages_fastpath(struct cma *cma, unsigned long start,
> +				    unsigned long end)
> +{
> +	bool success = false;
> +	unsigned long i, j;
> +
> +	/* Avoid contention on the zone lock. */
> +	if (end - start != 1 << cma->order_per_bit)
> +		return -EINVAL;
> +
> +	for (i = start; i < end; i++) {
> +		if (!is_free_buddy_page(pfn_to_page(i)))
> +			break;
> +		success = take_page_off_buddy(pfn_to_page(i), false);
> +		if (!success)
> +			break;
> +	}
> +
> +	if (i == end)
> +		return 0;
> +
> +	for (j = start; j < i; j++)
> +		put_page_back_buddy(pfn_to_page(j), false);
> +
> +	return -EBUSY;
> +}
> +
>  /**
>   * cma_alloc_range() - allocate pages in a specific range
>   * @cma: Contiguous memory region for which the allocation is performed.
> @@ -493,7 +521,11 @@ int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, > > for (i = 0; i < tries; i++) { > mutex_lock(&cma_mutex); > - err = alloc_contig_range(start, start + count, MIGRATE_CMA, gfp); > + err = cma_alloc_pages_fastpath(cma, start, start + count); > + if (err) { > + err = alloc_contig_range(start, start + count, > + MIGRATE_CMA, gfp); > + } > mutex_unlock(&cma_mutex); > > if (err != -EBUSY) > @@ -529,7 +561,6 @@ int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, > return err; > } > > - > /** > * cma_alloc() - allocate pages from contiguous area > * @cma: Contiguous memory region for which the allocation is performed. > @@ -589,8 +620,11 @@ struct page *cma_alloc(struct cma *cma, unsigned long count, > > pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit); > mutex_lock(&cma_mutex); > - ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, > - GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)); > + ret = cma_alloc_pages_fastpath(cma, pfn, pfn + count); > + if (ret) { > + ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, > + GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)); > + } > mutex_unlock(&cma_mutex); > if (ret == 0) { > page = pfn_to_page(pfn); > diff --git a/mm/memory-failure.c b/mm/memory-failure.c > index 4f9b61f4a668..b87b533a9871 100644 > --- a/mm/memory-failure.c > +++ b/mm/memory-failure.c > @@ -157,7 +157,7 @@ static int __page_handle_poison(struct page *page) > zone_pcp_disable(page_zone(page)); > ret = dissolve_free_huge_page(page); > if (!ret) > - ret = take_page_off_buddy(page); > + ret = take_page_off_buddy(page, true); > zone_pcp_enable(page_zone(page)); > > return ret; > @@ -1353,7 +1353,7 @@ static int page_action(struct page_state *ps, struct page *p, > return action_result(pfn, ps->type, result); > } > > -static inline bool PageHWPoisonTakenOff(struct page *page) > +bool PageHWPoisonTakenOff(struct page *page) > { > return PageHWPoison(page) && page_private(page) == MAGIC_HWPOISON; > } > @@ -2247,7 +2247,7 @@ int memory_failure(unsigned long pfn, int flags) > res = get_hwpoison_page(p, flags); > if (!res) { > if (is_free_buddy_page(p)) { > - if (take_page_off_buddy(p)) { > + if (take_page_off_buddy(p, true)) { > page_ref_inc(p); > res = MF_RECOVERED; > } else { > @@ -2578,7 +2578,7 @@ int unpoison_memory(unsigned long pfn) > ret = folio_test_clear_hwpoison(folio) ? 0 : -EBUSY; > } else if (ghp < 0) { > if (ghp == -EHWPOISON) { > - ret = put_page_back_buddy(p) ? 0 : -EBUSY; > + ret = put_page_back_buddy(p, true) ? 0 : -EBUSY; > } else { > ret = ghp; > unpoison_pr_info("Unpoison: failed to grab page %#lx\n", > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index 0fa34bcfb1af..502ee3eb8583 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -6655,7 +6655,7 @@ bool is_free_buddy_page(struct page *page) > } > EXPORT_SYMBOL(is_free_buddy_page); > > -#ifdef CONFIG_MEMORY_FAILURE > +#ifdef CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY > /* > * Break down a higher-order page in sub-pages, and keep our target out of > * buddy allocator. > @@ -6687,9 +6687,9 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page, > } > > /* > - * Take a page that will be marked as poisoned off the buddy allocator. > + * Take a page off the buddy allocator, and optionally mark it as poisoned. 
>   */
> -bool take_page_off_buddy(struct page *page)
> +bool take_page_off_buddy(struct page *page, bool poison)
>  {
>  	struct zone *zone = page_zone(page);
>  	unsigned long pfn = page_to_pfn(page);
> @@ -6710,7 +6710,8 @@ bool take_page_off_buddy(struct page *page)
>  			del_page_from_free_list(page_head, zone, page_order);
>  			break_down_buddy_pages(zone, page_head, page, 0,
>  						page_order, migratetype);
> -			SetPageHWPoisonTakenOff(page);
> +			if (poison)
> +				SetPageHWPoisonTakenOff(page);
>  			if (!is_migrate_isolate(migratetype))
>  				__mod_zone_freepage_state(zone, -1, migratetype);
>  			ret = true;
> @@ -6724,9 +6725,10 @@ bool take_page_off_buddy(struct page *page)
>  }
>  
>  /*
> - * Cancel takeoff done by take_page_off_buddy().
> + * Cancel takeoff done by take_page_off_buddy(), and optionally unpoison the
> + * page.
>   */
> -bool put_page_back_buddy(struct page *page)
> +bool put_page_back_buddy(struct page *page, bool unpoison)
>  {
>  	struct zone *zone = page_zone(page);
>  	unsigned long pfn = page_to_pfn(page);
> @@ -6736,17 +6738,18 @@ bool put_page_back_buddy(struct page *page)
>  
>  	spin_lock_irqsave(&zone->lock, flags);
>  	if (put_page_testzero(page)) {
> -		ClearPageHWPoisonTakenOff(page);
> +		VM_WARN_ON_ONCE(PageHWPoisonTakenOff(page) && !unpoison);
> +		if (unpoison)
> +			ClearPageHWPoisonTakenOff(page);
>  		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
> -		if (TestClearPageHWPoison(page)) {
> +		if (!unpoison || (unpoison && TestClearPageHWPoison(page)))
>  			ret = true;
> -		}
>  	}
>  	spin_unlock_irqrestore(&zone->lock, flags);
>  
>  	return ret;
>  }
> -#endif
> +#endif /* CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY */
>  
>  #ifdef CONFIG_ZONE_DMA
>  bool has_managed_dma(void)
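
As an aside, the fastpath-then-fallback control flow added here is easy to
model outside the kernel: try the cheap per-page fast path first, roll back
on the first busy page, and only then fall back to the expensive
migration-based slow path. Below is a minimal, compilable userspace sketch
of that pattern; every name in it is an illustrative stand-in (page_free[]
plays the role of the buddy free lists, slowpath_alloc() stands in for
alloc_contig_range()), none of it is the kernel API.

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 8UL

/* Stand-in for the buddy free lists: page 2 is busy. */
static bool page_free[NPAGES] = {
	true, true, false, true, true, true, true, true,
};

/* Fast path: succeeds only if every page in [start, end) is free. */
static int fastpath_alloc(unsigned long start, unsigned long end)
{
	unsigned long i, j;

	for (i = start; i < end; i++) {
		if (!page_free[i])
			break;
		page_free[i] = false;	/* models take_page_off_buddy() */
	}

	if (i == end)
		return 0;

	/* Roll back pages already taken, as the patch does on failure. */
	for (j = start; j < i; j++)
		page_free[j] = true;	/* models put_page_back_buddy() */

	return -1;
}

/* Stand-in for the slow path (migration, lru_cache_disable(), ...). */
static int slowpath_alloc(unsigned long start, unsigned long end)
{
	printf("slow path for [%lu, %lu)\n", start, end);
	return 0;
}

int main(void)
{
	/* [4, 8) is fully free: the fast path wins. */
	if (fastpath_alloc(4, 8) == 0)
		printf("fast path succeeded for [4, 8)\n");

	/* [0, 4) contains a busy page: fall back, as the patch does. */
	if (fastpath_alloc(0, 4) != 0)
		slowpath_alloc(0, 4);

	return 0;
}

The real fast path additionally refuses ranges that are not exactly
1 << cma->order_per_bit pages long, so the zone spinlock is never held
for long.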