From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20250211003028.213461-1-npache@redhat.com> <20250211003028.213461-6-npache@redhat.com> <8524c7c7-024f-4f17-9b89-ef9aedfca672@arm.com>
In-Reply-To: <8524c7c7-024f-4f17-9b89-ef9aedfca672@arm.com>
From: Nico Pache <npache@redhat.com>
Date: Wed, 19 Feb 2025 09:02:46 -0700
Subject: Re: [RFC v2 5/9] khugepaged: generalize __collapse_huge_page_* for mTHP support
To: Ryan Roberts
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org, anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com, surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com, zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com, willy@infradead.org, kirill.shutemov@linux.intel.com, david@redhat.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, sunnanyong@huawei.com, usamaarif642@gmail.com, audra@redhat.com, akpm@linux-foundation.org, rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de
Content-Type: text/plain; charset="UTF-8"
On Wed, Feb 19, 2025 at 8:39 AM Ryan Roberts wrote:
>
> On 11/02/2025 00:30, Nico Pache wrote:
> > generalize the order of the __collapse_huge_page_* functions
> > to support future mTHP collapse.
> >
> > mTHP collapse can suffer from inconsistent behavior, and memory waste
> > "creep".
> > disable swapin and shared support for mTHP collapse.
> >
> > No functional changes in this patch.
> >
> > Signed-off-by: Nico Pache
> > ---
> >  mm/khugepaged.c | 48 ++++++++++++++++++++++++++++--------------------
> >  1 file changed, 28 insertions(+), 20 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 0cfcdc11cabd..3776055bd477 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                                         unsigned long address,
> >                                         pte_t *pte,
> >                                         struct collapse_control *cc,
> > -                                       struct list_head *compound_pagelist)
> > +                                       struct list_head *compound_pagelist,
> > +                                       u8 order)
>
> nit: I think we are mostly standardised on order being int. Is there any reason
> to make it u8 here?

The reasoning was I didn't want to consume a lot of memory for the
mthp_bitmap_stack. Originally the order and offset were u8's, but I had
to convert the offset to u16 to fit the max offset on 64k kernels, so
64 * (8 + 16) bits = 192 bytes, as opposed to 1024 bytes if they were
ints. Not sure if using u8/u16 is frowned upon. Let me know if I need
to convert these back to int or if they can stay!
> >  {
> >       struct page *page = NULL;
> >       struct folio *folio = NULL;
> >       pte_t *_pte;
> >       int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
> >       bool writable = false;
> > +     int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
> >
> > -     for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > +     for (_pte = pte; _pte < pte + (1 << order);
> >            _pte++, address += PAGE_SIZE) {
> >               pte_t pteval = ptep_get(_pte);
> >               if (pte_none(pteval) || (pte_present(pteval) &&
> > @@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                       ++none_or_zero;
> >                       if (!userfaultfd_armed(vma) &&
> >                           (!cc->is_khugepaged ||
> > -                          none_or_zero <= khugepaged_max_ptes_none)) {
> > +                          none_or_zero <= scaled_none)) {
> >                               continue;
> >                       } else {
> >                               result = SCAN_EXCEED_NONE_PTE;
> > @@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >               /* See khugepaged_scan_pmd(). */
> >               if (folio_likely_mapped_shared(folio)) {
> >                       ++shared;
> > -                     if (cc->is_khugepaged &&
> > -                         shared > khugepaged_max_ptes_shared) {
> > +                     if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> > +                         shared > khugepaged_max_ptes_shared)) {
> >                               result = SCAN_EXCEED_SHARED_PTE;
> >                               count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
>
> Same comment about events; I think you will want to be careful to only count
> events for PMD-sized THP using count_vm_event() and introduce equivalent MTHP
> events to cover all sizes.

Makes sense, I'll work on adding the new counters for
THP_SCAN_EXCEED_(SWAP_PTE|NONE_PTE|SHARED_PTE). Thanks!
> >                               goto out;
> > @@ -711,14 +713,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> >                                               struct vm_area_struct *vma,
> >                                               unsigned long address,
> >                                               spinlock_t *ptl,
> > -                                             struct list_head *compound_pagelist)
> > +                                             struct list_head *compound_pagelist,
> > +                                             u8 order)
> >  {
> >       struct folio *src, *tmp;
> >       pte_t *_pte;
> >       pte_t pteval;
> >
> > -     for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > -          _pte++, address += PAGE_SIZE) {
> > +     for (_pte = pte; _pte < pte + (1 << order);
> > +             _pte++, address += PAGE_SIZE) {
>
> nit: you changed the indentation here.
>
> >               pteval = ptep_get(_pte);
> >               if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> >                       add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> > @@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >                                            pmd_t *pmd,
> >                                            pmd_t orig_pmd,
> >                                            struct vm_area_struct *vma,
> > -                                          struct list_head *compound_pagelist)
> > +                                          struct list_head *compound_pagelist,
> > +                                          u8 order)
> >  {
> >       spinlock_t *pmd_ptl;
> >
> > @@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >        * Release both raw and compound pages isolated
> >        * in __collapse_huge_page_isolate.
> >        */
> > -     release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
> > +     release_pte_pages(pte, pte + (1 << order), compound_pagelist);
> >  }
> >
> >  /*
> > @@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >  static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> >               pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
> >               unsigned long address, spinlock_t *ptl,
> > -             struct list_head *compound_pagelist)
> > +             struct list_head *compound_pagelist, u8 order)
> >  {
> >       unsigned int i;
> >       int result = SCAN_SUCCEED;
> > @@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio,
> >       /*
> >        * Copying pages' contents is subject to memory poison at any iteration.
> >        */
> > -     for (i = 0; i < HPAGE_PMD_NR; i++) {
> > +     for (i = 0; i < (1 << order); i++) {
> >               pte_t pteval = ptep_get(pte + i);
> >               struct page *page = folio_page(folio, i);
> >               unsigned long src_addr = address + i * PAGE_SIZE;
> > @@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio,
> >
> >       if (likely(result == SCAN_SUCCEED))
> >               __collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
> > -                                                 compound_pagelist);
> > +                                                 compound_pagelist, order);
> >       else
> >               __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
> > -                                              compound_pagelist);
> > +                                              compound_pagelist, order);
> >
> >       return result;
> >  }
> > @@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
> >  static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >                                      struct vm_area_struct *vma,
> >                                      unsigned long haddr, pmd_t *pmd,
> > -                                    int referenced)
> > +                                    int referenced, u8 order)
> >  {
> >       int swapped_in = 0;
> >       vm_fault_t ret = 0;
> > -     unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
> > +     unsigned long address, end = haddr + (PAGE_SIZE << order);
> >       int result;
> >       pte_t *pte = NULL;
> >       spinlock_t *ptl;
> > @@ -1035,6 +1039,11 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >               if (!is_swap_pte(vmf.orig_pte))
> >                       continue;
> >
> > +             if (order != HPAGE_PMD_ORDER) {
> > +                     result = SCAN_EXCEED_SWAP_PTE;
> > +                     goto out;
> > +             }
>
> A comment to explain the rationale for this divergent behaviour based on order
> would be helpful.
>
> > +
> >               vmf.pte = pte;
> >               vmf.ptl = ptl;
> >               ret = do_swap_page(&vmf);
> > @@ -1114,7 +1123,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       int result = SCAN_FAIL;
> >       struct vm_area_struct *vma;
> >       struct mmu_notifier_range range;
> > -
>
> nit: no need for this whitespace change?

Thanks!
I'll clean up the nits and add a comment to the swapin function to
describe skipping mTHP swapin.

> >       VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >
> >       /*
> > @@ -1149,7 +1157,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >                * that case. Continuing to collapse causes inconsistency.
> >                */
> >               result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > -                                                  referenced);
> > +                                                  referenced, HPAGE_PMD_ORDER);
> >               if (result != SCAN_SUCCEED)
> >                       goto out_nolock;
> >       }
> > @@ -1196,7 +1204,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> >       if (pte) {
> >               result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > -                                                   &compound_pagelist);
> > +                                                   &compound_pagelist, HPAGE_PMD_ORDER);
> >               spin_unlock(pte_ptl);
> >       } else {
> >               result = SCAN_PMD_NULL;
> > @@ -1226,7 +1234,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >
> >       result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> >                                          vma, address, pte_ptl,
> > -                                        &compound_pagelist);
> > +                                        &compound_pagelist, HPAGE_PMD_ORDER);
> >       pte_unmap(pte);
> >       if (unlikely(result != SCAN_SUCCEED))
> >               goto out_up_write;
>