From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 12 Feb 2024 09:26:36 +0000
Subject: Re: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Catalin Marinas, Yin Fengwei, Michal Hocko, Will Deacon, "Aneesh Kumar K.V", Nick Piggin, Peter Zijlstra, Michael Ellerman, Christophe Leroy, "Naveen N. Rao", Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Arnd Bergmann, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org
In-Reply-To: <20240209221509.585251-10-david@redhat.com>
References: <20240209221509.585251-1-david@redhat.com> <20240209221509.585251-10-david@redhat.com>

On 09/02/2024 22:15, David Hildenbrand wrote:
> It's a pain that we have to handle cond_resched() in
> tlb_batch_pages_flush() manually and cannot simply handle it in
> release_pages() -- release_pages() can be called from atomic context.
> Well, in a perfect world we wouldn't have to make our code more
> complicated at all.
> 
> With page poisoning and init_on_free, we might now run into soft lockups
> when we free a lot of rather large folio fragments, because page freeing
> time then depends on the actual memory size we are freeing instead of on
> the number of folios that are involved.
> 
> In the absolute (unlikely) worst case, on arm64 with 64k pages we will be
> able to free up to 256 folio fragments that each span 512 MiB: zeroing
> out 128 GiB does sound like it might take a while. But instead of
> ignoring this unlikely case, let's just handle it.
> 
> So, let's teach tlb_batch_pages_flush() that there are some
> configurations where page freeing is horribly slow, and let's reschedule
> more frequently -- similar to what we did before we had large folio
> fragments in there. Note that we might end up freeing only a single folio
> fragment at a time that might exceed the old 512-page limit: but if we
> cannot even free a single MAX_ORDER page on a system without running into
> soft lockups, something else is already completely bogus.
> 
> In the future, we might want to detect if handling cond_resched() is
> required at all, and just not do any of that with full preemption
> enabled.
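
Just to sanity-check the worst-case numbers in the log against the limits
in the code below (my arithmetic, assuming each large folio fragment
consumes 2 encoded_page entries when ENCODED_PAGE_BIT_NR_PAGES_NEXT is
set):

    512 entries per free batch / 2 entries per fragment = 256 fragments
    max fragment on arm64/64K:   8192 pages * 64 KiB    = 512 MiB
    256 fragments * 512 MiB                             = 128 GiB

So the 128 GiB figure checks out for the old 512-entry limit.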
> 
> Signed-off-by: David Hildenbrand
> ---
>  mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 41 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index d175c0f1e2c8..2774044b5790 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  }
>  #endif
>  
> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
>  {
> -	struct mmu_gather_batch *batch;
> -
> -	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> -		struct encoded_page **pages = batch->encoded_pages;
> +	struct encoded_page **pages = batch->encoded_pages;
> +	unsigned int nr, nr_pages;
>  
> +	/*
> +	 * We might end up freeing a lot of pages. Reschedule on a regular
> +	 * basis to avoid soft lockups in configurations without full
> +	 * preemption enabled. The magic number of 512 folios seems to work.
> +	 */
> +	if (!page_poisoning_enabled_static() && !want_init_on_free()) {

Is the performance win really worth 2 separate implementations keyed off
this? It seems a bit fragile, in case any other freeing operations that
are proportional to size get added in future. Why not just always do the
conservative version?

>  		while (batch->nr) {
> -			/*
> -			 * limit free batch count when PAGE_SIZE > 4K
> -			 */
> -			unsigned int nr = min(512U, batch->nr);
> +			nr = min(512, batch->nr);

If any entries are for more than 1 page, nr_pages will also be encoded in
the batch, so effectively this could be limiting to 256 actual folios
(half of 512). Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT
and limiting accordingly?

nit: you're using the 512 magic number in 2 places now; perhaps make it a
macro? (One possible spelling at the end of this mail.)

>  
>  			/*
>  			 * Make sure we cover page + nr_pages, and don't leave
> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>  			cond_resched();
>  		}
>  	}
> +
> +	/*
> +	 * With page poisoning and init_on_free, the time it takes to free
> +	 * memory grows proportionally with the actual memory size. Therefore,
> +	 * limit based on the actual memory size and not the number of involved
> +	 * folios.
> +	 */
> +	while (batch->nr) {
> +		for (nr = 0, nr_pages = 0;
> +		     nr < batch->nr && nr_pages < 512; nr++) {
> +			if (unlikely(encoded_page_flags(pages[nr]) &
> +				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> +				nr_pages += encoded_nr_pages(pages[++nr]);
> +			else
> +				nr_pages++;
> +		}

I guess the worst case here is freeing (511 + 8192) pages * 64K = ~544M:
511 single pages accumulated before one full 8192-page (512 MiB) folio
tips nr_pages over the limit. That's up from the old limit of
512 pages * 64K = 32M, and 511 pages (~32M) bigger than the figure in
your commit log. Are you comfortable with this? I guess the only
alternative is to start splitting a batch, which would be really messy.
I agree your approach is preferable if 544M is acceptable.

> +
> +		free_pages_and_swap_cache(pages, nr);
> +		pages += nr;
> +		batch->nr -= nr;
> +
> +		cond_resched();
> +	}
> +}
> +
> +static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +{
> +	struct mmu_gather_batch *batch;
> +
> +	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
> +		__tlb_batch_free_encoded_pages(batch);
>  	tlb->active = &tlb->local;
>  }
> 
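
For the macro nit above, I was imagining something like the below
(untested sketch; MAX_NR_FOLIOS_PER_FREE is just a name I made up for
illustration):

    /*
     * Limit on the number of encoded_page entries handed to
     * free_pages_and_swap_cache() between cond_resched() calls; also used
     * as the page-count bound in the poisoning/init_on_free path.
     */
    #define MAX_NR_FOLIOS_PER_FREE	512

and then in __tlb_batch_free_encoded_pages():

    		nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr);
    ...
    		for (nr = 0, nr_pages = 0;
    		     nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE;
    		     nr++) {

Or two separate macros, if you think the folio-count and page-count
limits shouldn't be forced to stay identical.

Thanks,
Ryan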