References: <20250428181218.85925-1-npache@redhat.com> <20250428181218.85925-7-npache@redhat.com> <5feb1d57-e069-4469-9751-af4fb067e858@linux.alibaba.com>
From: Nico Pache <npache@redhat.com>
Date: Sat, 7 Jun 2025 06:48:27 -0600
Subject: Re: [PATCH v5 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
To: Dev Jain
Cc: Baolin Wang,
 linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com
Content-Type: text/plain; charset="UTF-8"
On Fri, Jun 6, 2025 at 10:38 AM Dev Jain wrote:
>
>
> On 01/05/25 12:26 am, Nico Pache wrote:
> > On Wed, Apr 30, 2025 at 4:08 AM Baolin Wang wrote:
> >>
> >>
> >> On 2025/4/29 02:12, Nico Pache wrote:
> >>> khugepaged scans anon PMD ranges for potential collapse to a hugepage.
> >>> To add mTHP support we use this scan to instead record chunks of utilized
> >>> sections of the PMD.
> >>>
> >>> khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
> >>> that represents chunks of utilized regions. We can then determine what
> >>> mTHP size fits best and, in the following patch, we set this bitmap while
> >>> scanning the anon PMD.
> >>>
> >>> max_ptes_none is used as a scale to determine how "full" an order must
> >>> be before being considered for collapse.
> >>>
> >>> When attempting to collapse an order whose size is set to "always",
> >>> always collapse to that order in a greedy manner, without considering
> >>> the number of bits set.
> >>>
> >>> Signed-off-by: Nico Pache
> >>> ---
> >>>   include/linux/khugepaged.h |  4 ++
> >>>   mm/khugepaged.c            | 94 ++++++++++++++++++++++++++++++++++----
> >>>   2 files changed, 89 insertions(+), 9 deletions(-)
> >>>
> >>> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> >>> index 1f46046080f5..18fe6eb5051d 100644
> >>> --- a/include/linux/khugepaged.h
> >>> +++ b/include/linux/khugepaged.h
> >>> @@ -1,6 +1,10 @@
> >>>   /* SPDX-License-Identifier: GPL-2.0 */
> >>>   #ifndef _LINUX_KHUGEPAGED_H
> >>>   #define _LINUX_KHUGEPAGED_H
> >>> +#define KHUGEPAGED_MIN_MTHP_ORDER	2
> >> Still better to add some comments to explain explicitly why choose 2 as
> >> the MIN_MTHP_ORDER.
> > Ok, I'll add a note that explicitly states that the min order of anon mTHPs is 2.
> >>> +#define KHUGEPAGED_MIN_MTHP_NR	(1 << KHUGEPAGED_MIN_MTHP_ORDER)
> >>> +#define MAX_MTHP_BITMAP_SIZE	(1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> >>> +#define MTHP_BITMAP_SIZE	(1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
> >>>
> >>>   extern unsigned int khugepaged_max_ptes_none __read_mostly;
> >>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >>> index e21998a06253..6e67db86409a 100644
> >>> --- a/mm/khugepaged.c
> >>> +++ b/mm/khugepaged.c
> >>> @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> >>>
> >>>   static struct kmem_cache *mm_slot_cache __ro_after_init;
> >>>
> >>> +struct scan_bit_state {
> >>> +	u8 order;
> >>> +	u16 offset;
> >>> +};
> >>> +
> >>>   struct collapse_control {
> >>>   	bool is_khugepaged;
> >>>
> >>> @@ -102,6 +107,18 @@ struct collapse_control {
> >>>
> >>>   	/* nodemask for allocation fallback */
> >>>   	nodemask_t alloc_nmask;
> >>> +
> >>> +	/*
> >>> +	 * bitmap used to collapse mTHP sizes.
> >>> +	 * 1 bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
> >>> +	 */
> >>> +	DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> >>> +	DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> >>> +	struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> >>> +};
> >>> +
> >>> +struct collapse_control khugepaged_collapse_control = {
> >>> +	.is_khugepaged = true,
> >>>   };
> >>>
> >>>   /**
> >>> @@ -851,10 +868,6 @@ static void khugepaged_alloc_sleep(void)
> >>>   	remove_wait_queue(&khugepaged_wait, &wait);
> >>>   }
> >>>
> >>> -struct collapse_control khugepaged_collapse_control = {
> >>> -	.is_khugepaged = true,
> >>> -};
> >>> -
> >>>   static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
> >>>   {
> >>>   	int i;
> >>> @@ -1118,7 +1131,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> >>>
> >>>   static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >>>   				  int referenced, int unmapped,
> >>> -				  struct collapse_control *cc)
> >>> +				  struct collapse_control *cc, bool *mmap_locked,
> >>> +				  u8 order, u16 offset)
> >>>   {
> >>>   	LIST_HEAD(compound_pagelist);
> >>>   	pmd_t *pmd, _pmd;
> >>> @@ -1137,8 +1151,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >>>   	 * The allocation can take potentially a long time if it involves
> >>>   	 * sync compaction, and we do not need to hold the mmap_lock during
> >>>   	 * that. We will recheck the vma after taking it again in write mode.
> >>> +	 * If collapsing mTHPs we may have already released the read_lock.
> >>> */ > >>> - mmap_read_unlock(mm); > >>> + if (*mmap_locked) { > >>> + mmap_read_unlock(mm); > >>> + *mmap_locked =3D false; > >>> + } > >>> > >>> result =3D alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER)= ; > >>> if (result !=3D SCAN_SUCCEED) > >>> @@ -1273,12 +1291,72 @@ static int collapse_huge_page(struct mm_struc= t *mm, unsigned long address, > >>> out_up_write: > >>> mmap_write_unlock(mm); > >>> out_nolock: > >>> + *mmap_locked =3D false; > >>> if (folio) > >>> folio_put(folio); > >>> trace_mm_collapse_huge_page(mm, result =3D=3D SCAN_SUCCEED, re= sult); > >>> return result; > >>> } > >>> > >>> +// Recursive function to consume the bitmap > >> Nit: please use '/* Xxxx */' for comments in this patch. > >> > >>> +static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned lon= g address, > >>> + int referenced, int unmapped, struct collapse_c= ontrol *cc, > >>> + bool *mmap_locked, unsigned long enabled_orders= ) > >>> +{ > >>> + u8 order, next_order; > >>> + u16 offset, mid_offset; > >>> + int num_chunks; > >>> + int bits_set, threshold_bits; > >>> + int top =3D -1; > >>> + int collapsed =3D 0; > >>> + int ret; > >>> + struct scan_bit_state state; > >>> + bool is_pmd_only =3D (enabled_orders =3D=3D (1 << HPAGE_PMD_ORD= ER)); > >>> + > >>> + cc->mthp_bitmap_stack[++top] =3D (struct scan_bit_state) > >>> + { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 }; > >>> + > >>> + while (top >=3D 0) { > >>> + state =3D cc->mthp_bitmap_stack[top--]; > >>> + order =3D state.order + KHUGEPAGED_MIN_MTHP_ORDER; > >>> + offset =3D state.offset; > >>> + num_chunks =3D 1 << (state.order); > >>> + // Skip mTHP orders that are not enabled > >>> + if (!test_bit(order, &enabled_orders)) > >>> + goto next; > >>> + > >>> + // copy the relavant section to a new bitmap > >>> + bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitma= p, offset, > >>> + MTHP_BITMAP_SIZE); > >>> + > >>> + bits_set =3D bitmap_weight(cc->mthp_bitmap_temp, num_ch= unks); > >>> + threshold_bits 
=3D (HPAGE_PMD_NR - khugepaged_max_ptes_= none - 1) > >>> + >> (HPAGE_PMD_ORDER - state.order); > >>> + > >>> + //Check if the region is "almost full" based on the thr= eshold > >>> + if (bits_set > threshold_bits || is_pmd_only > >>> + || test_bit(order, &huge_anon_orders_always)) { > >> When testing this patch, I disabled the PMD-sized THP and enabled > >> 64K-sized mTHP, but it still attempts to collapse into a PMD-sized THP > >> (since bits_set > threshold_bits is ture). This doesn't seem reasonabl= e? > > We are still required to have PMD enabled for mTHP collapse to work. > > It's a limitation of the current khugepaged code (it currently only > > adds mm_slots when PMD is enabled). > > We've discussed this in the past and are looking for a proper way > > forward, but the solution becomes tricky. > > Not sure if this is still a problem, but does this patch solve > it? > > https://lore.kernel.org/all/20250211111326.14295-12-dev.jain@arm.com/ Hi Dev, Baolin sent out a patch to do something similar to what you did here based on my changes. I was going to keep the original behavior of activating khugepaged only if the PMD size is enabled, and make that change separately (outside this series), but I've gone ahead and applied/tested Baolin's patch. Sorry I had forgotten you already had a solution for this. Cheers, -- Nico > > > > > However I'm surprised that it still collapses due to the code below. > > I'll test this out later today. 
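For anyone following the threshold arithmetic quoted above, it is easier to see with concrete numbers. A minimal Python model (my sketch, not kernel code; it assumes 4K pages, so HPAGE_PMD_NR = 512 and HPAGE_PMD_ORDER = 9, and it ignores the is_pmd_only and huge_anon_orders_always shortcuts):

```python
# Model of the kernel expression:
#   threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
#                       >> (HPAGE_PMD_ORDER - state.order)
HPAGE_PMD_NR = 512      # assumption: 4K pages, PMD covers 512 PTEs
HPAGE_PMD_ORDER = 9

def threshold_bits(max_ptes_none, state_order):
    # state_order is the bitmap order, i.e. the mTHP order minus
    # KHUGEPAGED_MIN_MTHP_ORDER; each bitmap chunk covers 4 PTEs.
    return (HPAGE_PMD_NR - max_ptes_none - 1) >> (HPAGE_PMD_ORDER - state_order)

# Default max_ptes_none = 511: the threshold is 0 at every order, so a
# single utilized chunk is enough to attempt collapse at that order.
print(threshold_bits(511, 7))  # -> 0
# max_ptes_none = 0: a PMD-sized region (state.order = 7, 128 chunks)
# needs bits_set > 127, i.e. every chunk utilized.
print(threshold_bits(0, 7))    # -> 127
```

This also illustrates Baolin's report: with the default max_ptes_none, `bits_set > threshold_bits` holds at the PMD order whenever any chunk is set.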
> > +		if (!test_bit(order, &enabled_orders))
> > +			goto next;
> >>> +			ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> >>> +					mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
> >>> +			if (ret == SCAN_SUCCEED) {
> >>> +				collapsed += (1 << order);
> >>> +				continue;
> >>> +			}
> >>> +		}
> >>> +
> >>> +next:
> >>> +		if (state.order > 0) {
> >>> +			next_order = state.order - 1;
> >>> +			mid_offset = offset + (num_chunks / 2);
> >>> +			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> >>> +				{ next_order, mid_offset };
> >>> +			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> >>> +				{ next_order, offset };
> >>> +		}
> >>> +	}
> >>> +	return collapsed;
> >>> +}
> >>> +
> >>>   static int khugepaged_scan_pmd(struct mm_struct *mm,
> >>>   				   struct vm_area_struct *vma,
> >>>   				   unsigned long address, bool *mmap_locked,
> >>> @@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >>>   	pte_unmap_unlock(pte, ptl);
> >>>   	if (result == SCAN_SUCCEED) {
> >>>   		result = collapse_huge_page(mm, address, referenced,
> >>> -					    unmapped, cc);
> >>> -		/* collapse_huge_page will return with the mmap_lock released */
> >>> -		*mmap_locked = false;
> >>> +					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> >>>   	}
> >>>   out:
> >>>   	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
>
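To summarize the control flow of khugepaged_scan_bitmap() for readers of the thread: start from the whole PMD region, and whenever a region is not "full enough" (or its order is not enabled), split it in half and retry on each half. A user-space model of that traversal (my sketch under assumptions: 4K pages, every collapse_huge_page() call succeeds, and the is_pmd_only / huge_anon_orders_always shortcuts are omitted):

```python
MIN_MTHP_ORDER = 2                      # smallest anon mTHP order
PMD_ORDER = 9                           # assumption: 4K pages, 512 PTEs per PMD
CHUNKS = 1 << (PMD_ORDER - MIN_MTHP_ORDER)   # 128 bitmap chunks, 4 PTEs each

def scan_bitmap(bits, enabled_orders, max_ptes_none):
    """Iterative model of the recursive scan.

    bits[i] == 1 means chunk i (KHUGEPAGED_MIN_MTHP_NR pages) is utilized;
    enabled_orders is the set of enabled mTHP orders. Returns the number of
    pages "collapsed" (every attempted collapse is assumed to succeed).
    """
    collapsed = 0
    stack = [(PMD_ORDER - MIN_MTHP_ORDER, 0)]       # (state.order, offset)
    while stack:
        order, offset = stack.pop()
        num_chunks = 1 << order
        mthp_order = order + MIN_MTHP_ORDER
        if mthp_order in enabled_orders:
            bits_set = sum(bits[offset:offset + num_chunks])
            # same scaling as the kernel's threshold_bits expression
            threshold = ((1 << PMD_ORDER) - max_ptes_none - 1) \
                        >> (PMD_ORDER - order)
            if bits_set > threshold:
                collapsed += 1 << mthp_order        # collapse assumed to succeed
                continue                            # do not split a collapsed region
        if order > 0:                               # split region and try both halves
            stack.append((order - 1, offset + num_chunks // 2))
            stack.append((order - 1, offset))
    return collapsed
```

For example, with max_ptes_none = 0 and only order-9 (PMD) and order-8 enabled, a half-utilized PMD region collapses the utilized half as one order-8 mTHP and leaves the empty half alone.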