From: Nico Pache <npache@redhat.com>
Date: Fri, 14 Feb 2025 17:52:55 -0700
References: <20250211003028.213461-1-npache@redhat.com>
 <5a995dc9-fee7-442f-b439-c484d9de1750@arm.com>
 <4ba52062-1bd3-4d53-aa28-fcbbd4913801@arm.com>
 <71490f8c-f234-4032-bc2a-f6cffa491fcb@arm.com>
In-Reply-To: <71490f8c-f234-4032-bc2a-f6cffa491fcb@arm.com>
Subject:
 Re: [RFC v2 0/9] khugepaged: mTHP support
To: Dev Jain
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-mm@kvack.org, ryan.roberts@arm.com, anshuman.khandual@arm.com,
 catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
 apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org,
 baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu,
 haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com,
 wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com,
 surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com,
 zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com, david@redhat.com,
 aarcange@redhat.com, raquini@redhat.com, sunnanyong@huawei.com,
 usamaarif642@gmail.com, audra@redhat.com, akpm@linux-foundation.org,
 rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de

On Thu, Feb 13, 2025 at 7:02 PM Dev Jain wrote:
>
>
>
> On 14/02/25 1:09 am, Nico Pache wrote:
> > On Thu, Feb 13, 2025 at 1:26 AM Dev Jain wrote:
> >>
> >>
> >>
> >> On 12/02/25 10:19 pm, Nico Pache wrote:
> >>> On Tue, Feb 11, 2025 at 5:50 AM Dev Jain wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 11/02/25
> >>>> 6:00 am, Nico Pache wrote:
> >>>>> The following series provides khugepaged and madvise collapse with the
> >>>>> capability to collapse regions to mTHPs.
> >>>>>
> >>>>> To achieve this we generalize the khugepaged functions to no longer depend
> >>>>> on PMD_ORDER. Then during the PMD scan, we keep track of chunks of pages
> >>>>> (defined by MTHP_MIN_ORDER) that are utilized. This info is tracked
> >>>>> using a bitmap. After the PMD scan is done, we do binary recursion on the
> >>>>> bitmap to find the optimal mTHP sizes for the PMD range. The restriction
> >>>>> on max_ptes_none is removed during the scan, to make sure we account for
> >>>>> the whole PMD range. max_ptes_none will be scaled by the attempted collapse
> >>>>> order to determine how full a THP must be to be eligible. If an mTHP collapse
> >>>>> is attempted, but contains swapped out, or shared pages, we don't perform the
> >>>>> collapse.
> >>>>>
> >>>>> With the default max_ptes_none=511, the code should keep most of its
> >>>>> original behavior. To exercise mTHP collapse we need to set max_ptes_none<=255.
> >>>>> With max_ptes_none > HPAGE_PMD_NR/2 you will experience collapse "creep" and
> >>>>> constantly promote mTHPs to the next available size.
> >>>>>
> >>>>> Patch 1:   Some refactoring to combine madvise_collapse and khugepaged
> >>>>> Patch 2:   Refactor/rename hpage_collapse
> >>>>> Patch 3-5: Generalize khugepaged functions for arbitrary orders
> >>>>> Patch 6-9: The mTHP patches
> >>>>>
> >>>>> ---------
> >>>>> Testing
> >>>>> ---------
> >>>>> - Built for x86_64, aarch64, ppc64le, and s390x
> >>>>> - selftests mm
> >>>>> - I created a test script that I used to push khugepaged to its limits while
> >>>>>   monitoring a number of stats and tracepoints.
> >>>>>   The code is available
> >>>>>   here[1] (Run in legacy mode for these changes and set mthp sizes to inherit)
> >>>>>   The summary from my testing was that there was no significant regression
> >>>>>   noticed through this test. In some cases my changes had better collapse
> >>>>>   latencies, and were able to scan more pages in the same amount of time/work,
> >>>>>   but for the most part the results were consistent.
> >>>>> - redis testing. I tested these changes along with my defer changes
> >>>>>   (see followup post for more details).
> >>>>> - some basic testing on 64k page size.
> >>>>> - lots of general use. These changes have been running in my VM for some time.
> >>>>>
> >>>>> Changes since V1 [2]:
> >>>>> - Minor bug fixes discovered during review and testing
> >>>>> - removed dynamic allocations for bitmaps, and made them stack based
> >>>>> - Adjusted bitmap offset from u8 to u16 to support 64k pagesize.
> >>>>> - Updated trace events to include collapsing order info.
> >>>>> - Scaled max_ptes_none by order rather than scaling to a 0-100 scale.
> >>>>> - No longer require a chunk to be fully utilized before setting the bit. Use
> >>>>>   the same max_ptes_none scaling principle to achieve this.
> >>>>> - Skip mTHP collapse that requires swapin or shared handling. This helps
> >>>>>   prevent some of the "creep" that was discovered in v1.
> >>>>>
> >>>>> [1] - https://gitlab.com/npache/khugepaged_mthp_test
> >>>>> [2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@redhat.com/
> >>>>>
> >>>>> Nico Pache (9):
> >>>>>   introduce khugepaged_collapse_single_pmd to unify khugepaged and
> >>>>>     madvise_collapse
> >>>>>   khugepaged: rename hpage_collapse_* to khugepaged_*
> >>>>>   khugepaged: generalize hugepage_vma_revalidate for mTHP support
> >>>>>   khugepaged: generalize alloc_charge_folio for mTHP support
> >>>>>   khugepaged: generalize __collapse_huge_page_* for mTHP support
> >>>>>   khugepaged: introduce khugepaged_scan_bitmap for mTHP support
> >>>>>   khugepaged: add mTHP support
> >>>>>   khugepaged: improve tracepoints for mTHP orders
> >>>>>   khugepaged: skip collapsing mTHP to smaller orders
> >>>>>
> >>>>>   include/linux/khugepaged.h         |   4 +
> >>>>>   include/trace/events/huge_memory.h |  34 ++-
> >>>>>   mm/khugepaged.c                    | 422 +++++++++++++++++++-----------
> >>>>>   3 files changed, 306 insertions(+), 154 deletions(-)
> >>>>>
> >>>>
> >>>> Does this patchset suffer from the problem described here:
> >>>> https://lore.kernel.org/all/8abd99d5-329f-4f8d-8680-c2d48d4963b6@arm.com/
> >>> Hi Dev,
> >>>
> >>> Sorry I meant to get back to you about that.
> >>>
> >>> I understand your concern, but like I've mentioned before, the scan
> >>> with the read lock was done so we don't have to do the more expensive
> >>> locking, and could still gain insight into the state. You are right
> >>> that this info could become stale if the state changes dramatically,
> >>> but the collapse_isolate function will verify it and not collapse.
> >>
> >> If the state changes dramatically, the _isolate function will verify it,
> >> and fall back. And this fallback happens after following this costly
> >> path: retrieve a large folio from the buddy allocator -> swapin pages
> >> from the disk -> mmap_write_lock() -> anon_vma_lock_write() -> TLB flush
> >> on all CPUs -> fallback in _isolate().
> >> If you do fail in _isolate(), doesn't it make sense to get the updated
> >> state for the next fallback order immediately, because we have prior
> >> information that we failed because of PTE state? What your algorithm
> >> will do is *still* follow the costly path described above, and again
> >> fail in _isolate(), instead of failing in hpage_collapse_scan_pmd() like
> >> mine would.
> >
> > You do raise a valid point here; I can optimize my solution by
> > detecting certain collapse failure types and jumping to the next scan.
> > I'll add that to my solution, thanks!
> >
> > As for the disagreement around the bitmap, we'll leave that up to the
> > community to decide since we have differing opinions/solutions.
> >
> >>
> >> The verification of the PTE state by the _isolate() function is the "no
> >> turning back" point of the algorithm. The verification by
> >> hpage_collapse_scan_pmd() is the "let us see if proceeding is even worth
> >> it, before we do costly operations" point of the algorithm.
> >>
> >>> From my testing I found this to rarely happen.
> >>
> >> Unfortunately, I am not very familiar with performance testing/load
> >> testing; I am fairly new to kernel programming, so I am getting there.
> >> But it really depends on the type of test you are running, what actually
> >> runs on memory-intensive systems, etc. In fact, on loaded systems I
> >> would expect the PTE state to change dramatically. But still, no opinion
> >> here.
> >
> > Yeah, there are probably some cases where it happens more often,
> > probably in cases of short-lived allocations, but khugepaged doesn't
> > run that frequently, so those won't be that big of an issue.
> >
> > Our performance team is currently testing my implementation, so I
> > should have more real-workload test results soon. The redis testing
> > had some gains and didn't show any signs of obvious regressions.
> >
> > As for the testing, check out
> > https://gitlab.com/npache/khugepaged_mthp_test/-/blob/master/record-khuge-performance.sh?ref_type=heads
> > this does the tracing for my testing script. It can help you get
> > started. There are 3 different traces being applied there: the
> > bpftrace for collapse latencies, the perf record for the flamegraph
> > (not actually that useful, but may be useful to visualize any
> > weird/long paths that you may not have noticed), and the trace-cmd
> > which records the tracepoint of the scan and the collapse functions,
> > then processes the data using the awk script -- the output being the
> > scan rate, the pages collapsed, and their result status (grouped by
> > order).
> >
> > You can also look into https://github.com/gormanm/mmtests for
> > testing/comparing kernels. I was running the
> > config-memdb-redis-benchmark-medium workload.
>
> Thanks. I'll take a look.
>
> >
> >>
> >>>
> >>> Also, khugepaged, my changes, and your changes are all victims of
> >>> this. Once we drop the read lock (to either allocate the folio, or
> >>> right before acquiring the write_lock), the state can change. In your
> >>> case, yes, you are gathering more up-to-date information, but is it
> >>> really that important/worth it to retake locks and rescan for each
> >>> instance if we are about to reverify with the write lock taken?
> >>
> >> You said "reverify": You are removing the verification, so this step
> >> won't be reverification, it will be verification. We do not want to
> >> verify *after* we have already done 95% of latency-heavy stuff, only to
> >> know that we are going to fail.
> >>
> >> Algorithms in the kernel, in general, are of the following form: 1)
> >> Verify if a condition is true, resulting in taking a control path -> 2)
> >> do a lot of stuff -> "no turning back" step, wherein before committing
> >> (by taking locks, say), reverify if this is the control path we should
> >> be in. You are eliminating step 1).
> >>
> >> Therefore, I will have to say that I disagree with your approach.
> >>
> >> On top of this, in the subjective analysis in [1], point number 7 (along
> >> with point number 1) remains. And point number 4 remains.
> >
> > For 1), your worst case of 1024 is not the worst case. There are 8
> > possible orders in your implementation; if all are enabled, that is
> > 4096 iterations in the worst case.
>
> Yes, that is exactly what I wrote in 1). I am still not convinced that
> the overhead you produce + 512 iterations is going to beat 4096
> iterations. Anyways, that is hand-waving and we should test this.
>
> > This becomes WAY worse on 64k page size: ~45,000 iterations vs 4096 in my case.
>
> Sorry, I am missing something here; how does the number of iterations
> change with page size? Am I not scanning the PTE table, which is
> invariant to the page size?

I got the calculation wrong the first time, and it's actually worse.
Let's hope I got it right this time. On an ARM64 64k kernel:

PMD size = 512M
PTE size = 64k
PTEs per PMD = 8192
log2(8192) = 13; 13 - 2 = 11 (m)THP sizes including PMD (the first and
second orders are skipped)

Assuming I understand your algorithm correctly, in the worst case you
are scanning the whole PMD for each order. So you scan 8192 PTEs 11
times. 8192 * 11 = 90112.

Please let me know if I'm missing something here.

>
> >>
> >> [1]
> >> https://lore.kernel.org/all/23023f48-95c6-4a24-ac8b-aba4b1a441b4@arm.com/
> >>
> >>>
> >>> So in my eyes, this is not a "problem"
> >>
> >> Looks like the kernel scheduled us for a high-priority debate, I hope
> >> there's no deadlock :)
> >>
> >>>
> >>> Cheers,
> >>> -- Nico
> >>>
> >>>
> >>>>
> >>>
> >>
> >
>