From: Usama Arif <usamaarif642@gmail.com>
Date: Mon, 6 Jan 2025 10:17:13 +0000
Subject: Re: [RFC PATCH 09/12] khugepaged: Introduce vma_collapse_anon_folio()
To: Dev Jain, akpm@linux-foundation.org, david@redhat.com, willy@infradead.org, kirill.shutemov@linux.intel.com, Johannes Weiner
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com, surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com, zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Hugh Dickins
Message-ID: <82b9efd1-f2a6-4452-b2ea-6c163e17cdf7@gmail.com>
In-Reply-To: <20241216165105.56185-10-dev.jain@arm.com>
References: <20241216165105.56185-1-dev.jain@arm.com> <20241216165105.56185-10-dev.jain@arm.com>

On 16/12/2024 16:51, Dev Jain wrote:
> In contrast to PMD-collapse, we do not need to operate on two levels of pagetable
> simultaneously. Therefore, downgrade the mmap lock from write to read mode. Still
> take the anon_vma lock in exclusive mode so as to not waste time in the rmap path,
> which is anyways going to fail since the PTEs are going to be changed. Under the PTL,
> copy page contents, clear the PTEs, remove folio pins, and (try to) unmap the
> old folios. Set the PTEs to the new folio using the set_ptes() API.
>
> Signed-off-by: Dev Jain
> ---
> Note: I have been trying hard to get rid of the locks in here: we still are
> taking the PTL around the page copying; dropping the PTL and taking it after
> the copying should lead to a deadlock, for example:
>
> khugepaged			madvise(MADV_COLD)
> folio_lock()			lock(ptl)
> lock(ptl)			folio_lock()
>
> We can create a locked folio list, altogether drop both the locks, take the PTL,
> do everything which __collapse_huge_page_isolate() does *except* the isolation and
> again try locking folios, but then it will reduce efficiency of khugepaged
> and almost looks like a forced solution :)
> Please note the following discussion if anyone is interested:
> https://lore.kernel.org/all/66bb7496-a445-4ad7-8e56-4f2863465c54@arm.com/
> (Apologies for not CCing the mailing list from the start)
>
>  mm/khugepaged.c | 108 ++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 87 insertions(+), 21 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 88beebef773e..8040b130e677 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -714,24 +714,28 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  					      struct vm_area_struct *vma,
>  					      unsigned long address,
>  					      spinlock_t *ptl,
> -					      struct list_head *compound_pagelist)
> +					      struct list_head *compound_pagelist, int order)
>  {
>  	struct folio *src, *tmp;
>  	pte_t *_pte;
>  	pte_t pteval;
>
> -	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> +	for (_pte = pte; _pte < pte + (1UL << order);
>  	     _pte++, address += PAGE_SIZE) {
>  		pteval = ptep_get(_pte);
>  		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>  			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>  			if (is_zero_pfn(pte_pfn(pteval))) {
> -				/*
> -				 * ptl mostly unnecessary.
> -				 */
> -				spin_lock(ptl);
> -				ptep_clear(vma->vm_mm, address, _pte);
> -				spin_unlock(ptl);
> +				if (order == HPAGE_PMD_ORDER) {
> +					/*
> +					 * ptl mostly unnecessary.
> +					 */
> +					spin_lock(ptl);
> +					ptep_clear(vma->vm_mm, address, _pte);
> +					spin_unlock(ptl);
> +				} else {
> +					ptep_clear(vma->vm_mm, address, _pte);
> +				}
>  				ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>  			}
>  		} else {
> @@ -740,15 +744,20 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  			src = page_folio(src_page);
>  			if (!folio_test_large(src))
>  				release_pte_folio(src);
> -			/*
> -			 * ptl mostly unnecessary, but preempt has to
> -			 * be disabled to update the per-cpu stats
> -			 * inside folio_remove_rmap_pte().
> -			 */
> -			spin_lock(ptl);
> -			ptep_clear(vma->vm_mm, address, _pte);
> -			folio_remove_rmap_pte(src, src_page, vma);
> -			spin_unlock(ptl);
> +			if (order == HPAGE_PMD_ORDER) {
> +				/*
> +				 * ptl mostly unnecessary, but preempt has to
> +				 * be disabled to update the per-cpu stats
> +				 * inside folio_remove_rmap_pte().
> +				 */
> +				spin_lock(ptl);
> +				ptep_clear(vma->vm_mm, address, _pte);
> +				folio_remove_rmap_pte(src, src_page, vma);
> +				spin_unlock(ptl);
> +			} else {
> +				ptep_clear(vma->vm_mm, address, _pte);
> +				folio_remove_rmap_pte(src, src_page, vma);
> +			}
>  			free_page_and_swap_cache(src_page);
>  		}
>  	}
> @@ -807,7 +816,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>  static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>  		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
>  		unsigned long address, spinlock_t *ptl,
> -		struct list_head *compound_pagelist)
> +		struct list_head *compound_pagelist, int order)
>  {
>  	unsigned int i;
>  	int result = SCAN_SUCCEED;
> @@ -815,7 +824,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>  	/*
>  	 * Copying pages' contents is subject to memory poison at any iteration.
>  	 */
> -	for (i = 0; i < HPAGE_PMD_NR; i++) {
> +	for (i = 0; i < (1 << order); i++) {
>  		pte_t pteval = ptep_get(pte + i);
>  		struct page *page = folio_page(folio, i);
>  		unsigned long src_addr = address + i * PAGE_SIZE;
> @@ -834,7 +843,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>
>  	if (likely(result == SCAN_SUCCEED))
>  		__collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
> -						    compound_pagelist);
> +						    compound_pagelist, order);
>  	else
>  		__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
>  						 compound_pagelist, order);
> @@ -1196,7 +1205,7 @@ static int vma_collapse_anon_folio_pmd(struct mm_struct *mm, unsigned long addre
>
>  	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
>  					   vma, address, pte_ptl,
> -					   &compound_pagelist);
> +					   &compound_pagelist, HPAGE_PMD_ORDER);
>  	pte_unmap(pte);
>  	if (unlikely(result != SCAN_SUCCEED))
>  		goto out_up_write;
> @@ -1228,6 +1237,61 @@ static int vma_collapse_anon_folio_pmd(struct mm_struct *mm, unsigned long addre
>  	return result;
>  }
>
> +/* Enter with mmap read lock */
> +static int vma_collapse_anon_folio(struct mm_struct *mm, unsigned long address,
> +		struct vm_area_struct *vma, struct collapse_control *cc, pmd_t *pmd,
> +		struct folio *folio, int order)
> +{
> +	int result;
> +	struct mmu_notifier_range range;
> +	spinlock_t *pte_ptl;
> +	LIST_HEAD(compound_pagelist);
> +	pte_t *pte;
> +	pte_t entry;
> +	int nr_pages = folio_nr_pages(folio);
> +
> +	anon_vma_lock_write(vma->anon_vma);
> +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> +				address + (PAGE_SIZE << order));
> +	mmu_notifier_invalidate_range_start(&range);
> +
> +	pte = pte_offset_map_lock(mm, pmd, address, &pte_ptl);
> +	if (pte)
> +		result = __collapse_huge_page_isolate(vma, address, pte, cc,
> +						      &compound_pagelist, order);
> +	else
> +		result = SCAN_PMD_NULL;
> +
> +	if (unlikely(result != SCAN_SUCCEED))
> +		goto out_up_read;
> +
> +	anon_vma_unlock_write(vma->anon_vma);
> +
> +	__folio_mark_uptodate(folio);
> +	entry = mk_pte(&folio->page, vma->vm_page_prot);
> +	entry = maybe_mkwrite(entry, vma);
> +
> +	result = __collapse_huge_page_copy(pte, folio, pmd, *pmd,
> +					   vma, address, pte_ptl,
> +					   &compound_pagelist, order);
> +	if (unlikely(result != SCAN_SUCCEED))
> +		goto out_up_read;
> +
> +	folio_ref_add(folio, nr_pages - 1);
> +	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> +	folio_add_lru_vma(folio, vma);
> +	deferred_split_folio(folio, false);

Hi Dev,

You are adding the lower order folios to the deferred split queue, but you haven't
changed the THP shrinker to take this into account.
At memory pressure you will be doing a lot of work checking the contents of all
mTHP pages, which will be wasted unless you change the shrinker; something like
the below (unbuilt, untested) might work:

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c89aed1510f1..f9586df40f67 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3788,7 +3788,7 @@ static bool thp_underused(struct folio *folio)
 		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
 		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
 			num_zero_pages++;
-			if (num_zero_pages > khugepaged_max_ptes_none) {
+			if (num_zero_pages > khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - folio_order(folio))) {
 				kunmap_local(kaddr);
 				return true;
 			}

The question is, do we want the shrinker to be run for lower order mTHPs? It can
consume a lot of CPU cycles and not be as useful as PMD order THPs. So instead of
the above, we could disable the THP shrinker for lower orders?

> +	set_ptes(mm, address, pte, entry, nr_pages);
> +	update_mmu_cache_range(NULL, vma, address, pte, nr_pages);
> +	pte_unmap_unlock(pte, pte_ptl);
> +	mmu_notifier_invalidate_range_end(&range);
> +	result = SCAN_SUCCEED;
> +
> +out_up_read:
> +	mmap_read_unlock(mm);
> +	return result;
> +}
> +
>  static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  		int referenced, int unmapped, int order,
>  		struct collapse_control *cc)
> @@ -1276,6 +1340,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>
>  	if (order == HPAGE_PMD_ORDER)
>  		result = vma_collapse_anon_folio_pmd(mm, address, vma, cc, pmd, folio);
> +	else
> +		result = vma_collapse_anon_folio(mm, address, vma, cc, pmd, folio, order);
>
>  	if (result == SCAN_SUCCEED)
>  		folio = NULL;
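
For what it's worth, here is a quick sanity check of how the suggested scaling would
behave. This is not part of the patch, just my own stand-alone arithmetic, assuming 4K
base pages (so HPAGE_PMD_ORDER == 9) and khugepaged_max_ptes_none left at its default
of 511:

#include <stdio.h>

/*
 * Stand-alone userspace sketch of the scaled "underused" threshold suggested
 * above. The values below are illustrative assumptions (4K base pages and the
 * default khugepaged_max_ptes_none of 511), not taken from the patch itself.
 */
#define HPAGE_PMD_ORDER		9
#define MAX_PTES_NONE_DEFAULT	511

static unsigned int scaled_max_none(unsigned int max_ptes_none, int folio_order)
{
	/* Same shift as in the diff above: scale the PMD-sized threshold down. */
	return max_ptes_none >> (HPAGE_PMD_ORDER - folio_order);
}

int main(void)
{
	/* Print the threshold for each mTHP order up to the PMD order. */
	for (int order = 2; order <= HPAGE_PMD_ORDER; order++)
		printf("order %d (%d pages): underused if > %u zero pages\n",
		       order, 1 << order,
		       scaled_max_none(MAX_PTES_NONE_DEFAULT, order));
	return 0;
}

Because the default 511 is 2^9 - 1, the scaled threshold works out to 2^order - 1,
so with default tuning a lower-order folio would only be treated as underused when
every one of its pages is zero-filled. That feeds into the question above of whether
running the shrinker on mTHPs is worth the CPU at all.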