Message-ID: <5445bc55-6bd2-46fd-8107-99eb31aee172@arm.com>
Date: Sat, 15 Feb 2025 12:08:40 +0530
Subject: Re: [RFC v2 0/9] khugepaged: mTHP support
From: Dev Jain <dev.jain@arm.com>
To: Nico Pache
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-mm@kvack.org, ryan.roberts@arm.com, anshuman.khandual@arm.com,
 catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
 apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org,
 baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu,
 haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com,
 wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com,
 surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com,
 zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com, david@redhat.com,
 aarcange@redhat.com, raquini@redhat.com, sunnanyong@huawei.com,
 usamaarif642@gmail.com, audra@redhat.com, akpm@linux-foundation.org,
 rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de
References: <20250211003028.213461-1-npache@redhat.com>
 <5a995dc9-fee7-442f-b439-c484d9de1750@arm.com>
 <4ba52062-1bd3-4d53-aa28-fcbbd4913801@arm.com>
 <71490f8c-f234-4032-bc2a-f6cffa491fcb@arm.com>
On 15/02/25 6:22 am, Nico Pache wrote:
> On Thu, Feb 13, 2025 at 7:02 PM Dev Jain wrote:
>>
>> On 14/02/25 1:09 am, Nico Pache wrote:
>>> On Thu, Feb 13, 2025 at 1:26 AM Dev Jain wrote:
>>>>
>>>> On 12/02/25 10:19 pm, Nico Pache wrote:
>>>>> On Tue, Feb 11, 2025 at 5:50 AM Dev Jain wrote:
>>>>>>
>>>>>> On 11/02/25 6:00 am, Nico Pache wrote:
>>>>>>> The following series provides khugepaged and madvise collapse with the
>>>>>>> capability to collapse regions to mTHPs.
>>>>>>>
>>>>>>> To achieve this we generalize the khugepaged functions to no longer
>>>>>>> depend on PMD_ORDER. Then, during the PMD scan, we keep track of chunks
>>>>>>> of pages (defined by MTHP_MIN_ORDER) that are utilized. This info is
>>>>>>> tracked using a bitmap. After the PMD scan is done, we do binary
>>>>>>> recursion on the bitmap to find the optimal mTHP sizes for the PMD
>>>>>>> range. The restriction on max_ptes_none is removed during the scan, to
>>>>>>> make sure we account for the whole PMD range. max_ptes_none will be
>>>>>>> scaled by the attempted collapse order to determine how full a THP must
>>>>>>> be to be eligible. If an mTHP collapse is attempted but contains
>>>>>>> swapped-out or shared pages, we don't perform the collapse.
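The bitmap-and-recursion scheme described in the cover letter can be sketched as a toy model. This is plain Python, not the kernel code; the chunk size, MIN_ORDER, and all helper names are my assumptions, and the kernel's eligibility checks (swap, sharing, locking) are far more involved:

```python
PMD_ORDER = 9                  # 512 PTEs per PMD with 4K pages
NR_PTES = 1 << PMD_ORDER
CHUNK_ORDER = 3                # assumed MTHP_MIN_ORDER chunk: 8 PTEs
CHUNK = 1 << CHUNK_ORDER
MIN_ORDER = 4                  # assumed smallest mTHP order attempted
MAX_PTES_NONE = 255            # <= 255 so mTHP collapse can trigger

def scaled_none(order):
    # max_ptes_none scaled down to the attempted collapse order
    return MAX_PTES_NONE >> (PMD_ORDER - order)

def build_bitmap(present):
    # One bit per chunk, set when the chunk passes the scaled
    # threshold (a chunk need not be fully utilized, per the v2 notes)
    bits = []
    for i in range(0, NR_PTES, CHUNK):
        none = CHUNK - sum(present[i:i + CHUNK])
        bits.append(none <= scaled_none(CHUNK_ORDER))
    return bits

def recurse(bits, lo, hi, out):
    # Binary recursion over the chunk bitmap: collapse the whole range
    # if enough chunks are utilized, otherwise split in half and retry.
    nptes = (hi - lo) * CHUNK
    order = nptes.bit_length() - 1
    none = (hi - lo - sum(bits[lo:hi])) * CHUNK  # pessimistic estimate
    if none <= scaled_none(order):
        out.append((lo * CHUNK, order))          # (PTE offset, order)
    elif order - 1 >= MIN_ORDER:
        mid = (lo + hi) // 2
        recurse(bits, lo, mid, out)
        recurse(bits, mid, hi, out)

# Example: first half of the PMD populated, second half empty; the
# recursion picks one order-8 (256-page) mTHP covering the first half.
present = [1] * (NR_PTES // 2) + [0] * (NR_PTES // 2)
out = []
recurse(build_bitmap(present), 0, NR_PTES // CHUNK, out)
print(out)   # [(0, 8)]
```

Note how the threshold choice matters in this model: with max_ptes_none above HPAGE_PMD_NR/2, a half-empty range already qualifies at the next order up, which is the "creep" behavior the cover letter warns about.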
>>>>>>>
>>>>>>> With the default max_ptes_none=511, the code should keep most of its
>>>>>>> original behavior. To exercise mTHP collapse we need to set
>>>>>>> max_ptes_none<=255. With max_ptes_none > HPAGE_PMD_NR/2 you will
>>>>>>> experience collapse "creep" and constantly promote mTHPs to the next
>>>>>>> available size.
>>>>>>>
>>>>>>> Patch 1: Some refactoring to combine madvise_collapse and khugepaged
>>>>>>> Patch 2: Refactor/rename hpage_collapse
>>>>>>> Patch 3-5: Generalize khugepaged functions for arbitrary orders
>>>>>>> Patch 6-9: The mTHP patches
>>>>>>>
>>>>>>> ---------
>>>>>>> Testing
>>>>>>> ---------
>>>>>>> - Built for x86_64, aarch64, ppc64le, and s390x
>>>>>>> - selftests mm
>>>>>>> - I created a test script that I used to push khugepaged to its limits
>>>>>>>   while monitoring a number of stats and tracepoints. The code is
>>>>>>>   available here [1] (run in legacy mode for these changes and set
>>>>>>>   mTHP sizes to inherit). The summary from my testing was that there
>>>>>>>   was no significant regression noticed through this test. In some
>>>>>>>   cases my changes had better collapse latencies, and were able to
>>>>>>>   scan more pages in the same amount of time/work, but for the most
>>>>>>>   part the results were consistent.
>>>>>>> - redis testing. I tested these changes along with my defer changes
>>>>>>>   (see followup post for more details).
>>>>>>> - some basic testing on 64k page size.
>>>>>>> - lots of general use. These changes have been running in my VM for
>>>>>>>   some time.
>>>>>>>
>>>>>>> Changes since V1 [2]:
>>>>>>> - Minor bug fixes discovered during review and testing
>>>>>>> - Removed dynamic allocations for bitmaps, and made them stack based
>>>>>>> - Adjusted bitmap offset from u8 to u16 to support 64k page size
>>>>>>> - Updated trace events to include collapsing order info
>>>>>>> - Scaled max_ptes_none by order rather than scaling to a 0-100 scale
>>>>>>> - No longer require a chunk to be fully utilized before setting the bit.
>>>>>>>   Use the same max_ptes_none scaling principle to achieve this.
>>>>>>> - Skip mTHP collapse that requires swapin or shared handling. This
>>>>>>>   helps prevent some of the "creep" that was discovered in v1.
>>>>>>>
>>>>>>> [1] - https://gitlab.com/npache/khugepaged_mthp_test
>>>>>>> [2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@redhat.com/
>>>>>>>
>>>>>>> Nico Pache (9):
>>>>>>>   introduce khugepaged_collapse_single_pmd to unify khugepaged and
>>>>>>>     madvise_collapse
>>>>>>>   khugepaged: rename hpage_collapse_* to khugepaged_*
>>>>>>>   khugepaged: generalize hugepage_vma_revalidate for mTHP support
>>>>>>>   khugepaged: generalize alloc_charge_folio for mTHP support
>>>>>>>   khugepaged: generalize __collapse_huge_page_* for mTHP support
>>>>>>>   khugepaged: introduce khugepaged_scan_bitmap for mTHP support
>>>>>>>   khugepaged: add mTHP support
>>>>>>>   khugepaged: improve tracepoints for mTHP orders
>>>>>>>   khugepaged: skip collapsing mTHP to smaller orders
>>>>>>>
>>>>>>>  include/linux/khugepaged.h         |   4 +
>>>>>>>  include/trace/events/huge_memory.h |  34 ++-
>>>>>>>  mm/khugepaged.c                    | 422 +++++++++++++++++++----------
>>>>>>>  3 files changed, 306 insertions(+), 154 deletions(-)
>>>>>>>
>>>>>>
>>>>>> Does this patchset suffer from the problem described here:
>>>>>> https://lore.kernel.org/all/8abd99d5-329f-4f8d-8680-c2d48d4963b6@arm.com/
>>>>>
>>>>> Hi Dev,
>>>>>
>>>>> Sorry, I meant to get back to you about that.
>>>>>
>>>>> I understand your concern, but like I've mentioned before, the scan
>>>>> with the read lock was done so we don't have to do the more expensive
>>>>> locking, and could still gain insight into the state. You are right
>>>>> that this info could become stale if the state changes dramatically,
>>>>> but the collapse_isolate function will verify it and not collapse.
>>>>
>>>> If the state changes dramatically, the _isolate function will verify it,
>>>> and fall back.
>>>> And this fallback happens after following this costly path:
>>>> retrieve a large folio from the buddy allocator -> swap in pages
>>>> from the disk -> mmap_write_lock() -> anon_vma_lock_write() ->
>>>> TLB flush on all CPUs -> fallback in _isolate().
>>>> If you do fail in _isolate(), doesn't it make sense to get the updated
>>>> state for the next fallback order immediately, because we have prior
>>>> information that we failed because of PTE state? What your algorithm
>>>> will do is *still* follow the costly path described above, and again
>>>> fail in _isolate(), instead of failing in hpage_collapse_scan_pmd() like
>>>> mine would.
>>>
>>> You do raise a valid point here; I can optimize my solution by
>>> detecting certain collapse failure types and jumping to the next scan.
>>> I'll add that to my solution, thanks!
>>>
>>> As for the disagreement around the bitmap, we'll leave that up to the
>>> community to decide since we have differing opinions/solutions.
>>>
>>>>
>>>> The verification of the PTE state by the _isolate() function is the "no
>>>> turning back" point of the algorithm. The verification by
>>>> hpage_collapse_scan_pmd() is the "let us see if proceeding is even worth
>>>> it, before we do costly operations" point of the algorithm.
>>>>
>>>>> From my testing I found this to rarely happen.
>>>>
>>>> Unfortunately, I am not very familiar with performance/load testing;
>>>> I am fairly new to kernel programming, so I am getting there.
>>>> But it really depends on the type of test you are running, what actually
>>>> runs on memory-intensive systems, etc. In fact, on loaded systems I
>>>> would expect the PTE state to change dramatically. But still, no opinion
>>>> here.
>>>
>>> Yeah, there are probably some cases where it happens more often,
>>> probably in cases of short-lived allocations, but khugepaged doesn't
>>> run that frequently, so those won't be that big of an issue.
>>>
>>> Our performance team is currently testing my implementation, so I
>>> should have more real-workload test results soon. The redis testing
>>> had some gains and didn't show any signs of obvious regressions.
>>>
>>> As for the testing, check out
>>> https://gitlab.com/npache/khugepaged_mthp_test/-/blob/master/record-khuge-performance.sh?ref_type=heads
>>> which does the tracing for my testing script. It can help you get
>>> started. There are 3 different traces being applied there: the
>>> bpftrace for collapse latencies, the perf record for the flamegraph
>>> (not actually that useful, but it may help visualize any weird/long
>>> paths that you may not have noticed), and the trace-cmd, which
>>> records the tracepoints of the scan and collapse functions, then
>>> processes the data using the awk script -- the output being the
>>> scan rate, the pages collapsed, and their result status (grouped by
>>> order).
>>>
>>> You can also look into https://github.com/gormanm/mmtests for
>>> testing/comparing kernels. I was running the
>>> config-memdb-redis-benchmark-medium workload.
>>
>> Thanks. I'll take a look.
>>
>>>
>>>>
>>>>> Also, khugepaged, my changes, and your changes are all victims of
>>>>> this. Once we drop the read lock (to either allocate the folio, or
>>>>> right before acquiring the write lock), the state can change. In your
>>>>> case, yes, you are gathering more up-to-date information, but is it
>>>>> really that important/worth it to retake locks and rescan for each
>>>>> instance if we are about to reverify with the write lock taken?
>>>>
>>>> You said "reverify": you are removing the verification, so this step
>>>> won't be reverification, it will be verification. We do not want to
>>>> verify *after* we have already done 95% of the latency-heavy stuff,
>>>> only to learn that we are going to fail.
>>>>
>>>> Algorithms in the kernel, in general, are of the following form: 1)
>>>> verify if a condition is true, resulting in taking a control path -> 2)
>>>> do a lot of stuff -> "no turning back" step, wherein before committing
>>>> (by taking locks, say), reverify that this is the control path we
>>>> should be in. You are eliminating step 1).
>>>>
>>>> Therefore, I will have to say that I disagree with your approach.
>>>>
>>>> On top of this, in the subjective analysis in [1], point number 7
>>>> (along with point number 1) remains. And point number 4 remains.
>>>
>>> For 1), your worst case of 1024 is not the worst case. There are 8
>>> possible orders in your implementation; if all are enabled, that is
>>> 4096 iterations in the worst case.
>>
>> Yes, that is exactly what I wrote in 1). I am still not convinced that
>> the overhead you produce + 512 iterations is going to beat 4096
>> iterations. Anyway, that is hand-waving and we should test this.
>>
>>> This becomes WAY worse on 64k page size: ~45,000 iterations vs 4096
>>> in my case.
>>
>> Sorry, I am missing something here; how does the number of iterations
>> change with page size? Am I not scanning the PTE table, which is
>> invariant to the page size?
>
> I got the calculation wrong the first time and it's actually worse.
> Let's hope I got it right this time.
> On an arm64 64k kernel:
> PMD size = 512M
> PTE = 64k
> PTEs per PMD = 8192

*facepalm* my bad, thanks. I got thrown off thinking HPAGE_PMD_NR won't
depend on page size, but #pte entries = PAGE_SIZE / sizeof(pte) =
PAGE_SIZE / 8. So it does depend. You are correct, the PTEs per PMD is
1 << 13.

> log2(8192) = 13; 13 - 2 = 11 (m)THP sizes including PMD (the first and
> second orders are skipped)
>
> Assuming I understand your algorithm correctly, in the worst case you
> are scanning the whole PMD for each order.
>
> So you scan 8192 PTEs 11 times. 8192 * 11 = 90112.

Yup.
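The arithmetic above can be checked mechanically. A throwaway sketch, assuming an 8-byte PTE and the thread's statement that the first two orders are not attempted:

```python
# Worst-case scan iterations on an arm64 64K-page kernel, where a PMD
# maps 512M and each page-table entry is 8 bytes.
PAGE_SIZE = 64 * 1024
PTE_SIZE = 8
ptes_per_pmd = PAGE_SIZE // PTE_SIZE        # 8192 PTEs per PMD
pmd_order = ptes_per_pmd.bit_length() - 1   # log2(8192) = 13
n_orders = pmd_order - 2                    # orders 0 and 1 skipped -> 11
worst_case = ptes_per_pmd * n_orders        # full PMD rescanned per order
print(ptes_per_pmd, n_orders, worst_case)   # 8192 11 90112
```

With 4K pages the same formula gives 512 PTEs and 7 fallback orders, which is where the smaller figures earlier in the thread come from.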
Now it seems that the bitmap overhead may just be worth it; for the
worst case the bitmap will give us an 11x saving... for the average
case it will give us 2x, but still, 8192 is a large number. I'll think
of ways to test this out.

Btw, I was made aware that an LWN article just got posted on our work!
https://lwn.net/Articles/1009039/

>
> Please let me know if I'm missing something here.
>>
>>>>
>>>> [1]
>>>> https://lore.kernel.org/all/23023f48-95c6-4a24-ac8b-aba4b1a441b4@arm.com/
>>>>
>>>>>
>>>>> So in my eyes, this is not a "problem"
>>>>
>>>> Looks like the kernel scheduled us for a high-priority debate, I hope
>>>> there's no deadlock :)
>>>>
>>>>>
>>>>> Cheers,
>>>>> -- Nico