From: Barry Song <21cnbao@gmail.com>
Date: Thu, 30 Nov 2023 13:35:00 +0800
Subject: Re: [PATCH v2 14/14] arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
To: Ryan Roberts
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com, anshuman.khandual@arm.com,
 ardb@kernel.org, catalin.marinas@arm.com, david@redhat.com, dvyukov@google.com,
 glider@google.com, james.morse@arm.com, jhubbard@nvidia.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, mark.rutland@arm.com, maz@kernel.org,
 oliver.upton@linux.dev, ryabinin.a.a@gmail.com, suzuki.poulose@arm.com,
 vincenzo.frascino@arm.com, wangkefeng.wang@huawei.com, will@kernel.org,
 willy@infradead.org, yuzenghui@huawei.com, yuzhao@google.com, ziy@nvidia.com
References: <20231115163018.1303287-15-ryan.roberts@arm.com>
 <20231128081742.39204-1-v-songbaohua@oppo.com>
 <207de995-6d48-41ea-8373-2f9caad9b9c3@arm.com>
 <34da1e06-74da-4e45-b0b5-9c93d64eb64e@arm.com>
In-Reply-To: <34da1e06-74da-4e45-b0b5-9c93d64eb64e@arm.com>
On Wed, Nov 29, 2023 at 8:43 PM Ryan Roberts wrote:
>
> On 28/11/2023 20:23, Barry Song wrote:
> > On Wed, Nov 29, 2023 at 12:49 AM Ryan Roberts wrote:
> >>
> >>
> >> On 28/11/2023 08:17, Barry Song wrote:
> >>>> +pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
> >>>> +					unsigned long addr, pte_t *ptep)
> >>>> +{
> >>>> +	/*
> >>>> +	 * When doing a full address space teardown, we can avoid unfolding
> >>>> +	 * the contiguous range, and therefore avoid the associated tlbi.
> >>>> +	 * Instead, just get and clear the pte. The caller is promising to
> >>>> +	 * call us for every pte, so every pte in the range will be cleared
> >>>> +	 * by the time the tlbi is issued.
> >>>> +	 *
> >>>> +	 * This approach is not perfect though, as for the duration between
> >>>> +	 * returning from the first call to ptep_get_and_clear_full() and
> >>>> +	 * making the final call, the contpte block is in an intermediate
> >>>> +	 * state, where some ptes are cleared and others are still set with
> >>>> +	 * the PTE_CONT bit. If any other APIs are called for the ptes in the
> >>>> +	 * contpte block during that time, we have to be very careful. The
> >>>> +	 * core code currently interleaves calls to ptep_get_and_clear_full()
> >>>> +	 * with ptep_get(), and so ptep_get() must be careful to ignore the
> >>>> +	 * cleared entries when accumulating the access and dirty bits - the
> >>>> +	 * same goes for ptep_get_lockless(). The only other calls we might
> >>>> +	 * reasonably expect are to set markers in the previously cleared
> >>>> +	 * ptes. (We shouldn't see valid entries being set until after the
> >>>> +	 * tlbi, at which point we are no longer in the intermediate state).
> >>>> +	 * Since markers are not valid, this is safe; set_ptes() will see the
> >>>> +	 * old, invalid entry and will not attempt to unfold. And the new pte
> >>>> +	 * is also invalid so it won't attempt to fold. We shouldn't see this
> >>>> +	 * for the 'full' case anyway.
> >>>> +	 *
> >>>> +	 * The last remaining issue is returning the access/dirty bits. That
> >>>> +	 * info could be present in any of the ptes in the contpte block.
> >>>> +	 * ptep_get() will gather those bits from across the contpte block. We
> >>>> +	 * don't bother doing that here, because we know that the information is
> >>>> +	 * used by the core-mm to mark the underlying folio as accessed/dirty.
> >>>> +	 * And since the same folio must be underpinning the whole block (that
> >>>> +	 * was a requirement for folding in the first place), that information
> >>>> +	 * will make it to the folio eventually once all the ptes have been
> >>>> +	 * cleared. This approach means we don't have to play games with
> >>>> +	 * accumulating and storing the bits. It does mean that any interleaved
> >>>> +	 * calls to ptep_get() may lack correct access/dirty information if we
> >>>> +	 * have already cleared the pte that happened to store it. The core code
> >>>> +	 * does not rely on this though.
> >>>
> >>> even without any other threads running and touching those PTEs, this won't
> >>> survive on some hardware. we expose inconsistent CONTPTEs to hardware, this
> >>> might result
> >>
> >> No, that's not the case; if you read the Arm ARM, the page table is only
> >> considered "misprogrammed" when *valid* entries within the same contpte block
> >> have different values for the contiguous bit. We are clearing the ptes to zero
> >> here, which is an *invalid* entry. So if the TLB entry somehow gets invalidated
> >> (either due to explicit tlbi as you point out below, or due to a concurrent TLB
> >> miss which selects our entry for removal to make space for the new incoming
> >> entry), and then it gets an access request for an address in our partially
> >> cleared contpte block, the address will either be:
> >>
> >> A) an address for a pte entry we have already cleared, so it's invalid and it
> >> will fault (and get serialized behind the PTL).
> >>
> >> or
> >>
> >> B) an address for a pte entry we haven't yet cleared, so it will reform a TLB
> >> entry for the contpte block. But that's ok because the memory still exists,
> >> because we haven't yet finished clearing the page table and have not yet
> >> issued the final tlbi.
> >>
> >>
> >>> in crashed firmware even in trustzone, strange & unknown faults to trustzone
> >>> we have seen on Qualcomm, but for MTK, it seems fine. when you do tlbi on a
> >>> part of PTEs with dropped CONT but still some other PTEs have CONT, we make
> >>> hardware totally confused.
> >>
> >> I suspect this is because in your case you are "misprogramming" the contpte
> >> block; there are *valid* pte entries within the block that disagree about the
> >> contiguous bit or about various other fields. In this case some HW TLB designs
> >> can do weird things. I suspect in your case, that's resulting in accessing bad
> >> memory space and causing an SError, which is trapped by EL3, and the FW is
> >> probably just panicking at that point.
> >
> > you are probably right. as we met the SError, we became very, very cautious.
> > so anytime we flush the tlb for a CONTPTE, we strictly do it by:
> > 1. set all 16 ptes to zero
> > 2. flush the whole 16 ptes
>
> But my point is that this sequence doesn't guarantee that the TLB doesn't read
> the page table half way through the SW clearing the 16 entries; a TLB entry can
> be ejected for other reasons than just issuing a TLBI. So in that case these 2
> flows can be equivalent. It's the fact that we are unsetting the valid bit when
> clearing each pte that guarantees this to be safe.
>
> >
> > in your case, it can be:
> > 1. set pte0 to zero
> > 2. flush pte0
> >
> > TBH, I have never tried this, but it might be safe according to your
> > description.
> >
> >>
> >>>
> >>> zap_pte_range() has a force_flush when tlbbatch is full:
> >>>
> >>>	if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
> >>>		force_flush = 1;
> >>>		addr += PAGE_SIZE;
> >>>		break;
> >>>	}
> >>>
> >>> this means you can expose partial tlbi/flush directly to hardware while some
> >>> other PTEs are still CONT.
> >>
> >> Yes, but that's also possible even if we have a tight loop that clears down
> >> the contpte block; there could still be another core that issues a tlbi while
> >> you're halfway through that loop, or the HW could happen to evict due to TLB
> >> pressure at any time. The point is, it's safe if you are clearing the pte to
> >> an *invalid* entry.
> >>
> >>>
> >>> on the other hand, contpte_ptep_get_and_clear_full() doesn't need to depend
> >>> on fullmm; as long as the zap range covers a large folio, we can flush tlbi
> >>> for those CONTPTEs all together in your contpte_ptep_get_and_clear_full()
> >>> rather than clearing one PTE at a time.
> >>>
> >>> Our approach in [1] is we do a flush for all CONTPTEs and go directly to the
> >>> end of the large folio:
> >>>
> >>> #ifdef CONFIG_CONT_PTE_HUGEPAGE
> >>>		if (pte_cont(ptent)) {
> >>>			unsigned long next = pte_cont_addr_end(addr, end);
> >>>
> >>>			if (next - addr != HPAGE_CONT_PTE_SIZE) {
> >>>				__split_huge_cont_pte(vma, pte, addr, false, NULL, ptl);
> >>>				/*
> >>>				 * After splitting cont-pte
> >>>				 * we need to process pte again.
> >>>				 */
> >>>				goto again_pte;
> >>>			} else {
> >>>				cont_pte_huge_ptep_get_and_clear(mm, addr, pte);
> >>>
> >>>				tlb_remove_cont_pte_tlb_entry(tlb, pte, addr);
> >>>				if (unlikely(!page))
> >>>					continue;
> >>>
> >>>				if (is_huge_zero_page(page)) {
> >>>					tlb_remove_page_size(tlb, page, HPAGE_CONT_PTE_SIZE);
> >>>					goto cont_next;
> >>>				}
> >>>
> >>>				rss[mm_counter(page)] -= HPAGE_CONT_PTE_NR;
> >>>				page_remove_rmap(page, true);
> >>>				if (unlikely(page_mapcount(page) < 0))
> >>>					print_bad_pte(vma, addr, ptent, page);
> >>>
> >>>				tlb_remove_page_size(tlb, page, HPAGE_CONT_PTE_SIZE);
> >>>			}
> >>> cont_next:
> >>>			/* "do while()" will do "pte++" and "addr + PAGE_SIZE" */
> >>>			pte += (next - PAGE_SIZE - (addr & PAGE_MASK)) / PAGE_SIZE;
> >>>			addr = next - PAGE_SIZE;
> >>>			continue;
> >>>		}
> >>> #endif
> >>>
> >>> this is our "full" counterpart, which clear_flushes CONT_PTES pages
> >>> directly, and it never requires tlb->fullmm at all.
> >>
> >> Yes, but you are benefitting from the fact that contpte is exposed to core-mm
> >> and it is special-casing them at this level. I'm trying to avoid that.
> >
> > I am thinking we can even do this while we don't expose CONTPTE.
> > if zap_pte_range meets a large folio and the zap range covers the whole
> > folio, we can flush all ptes in this folio and jump to the end of this
> > folio? i mean
> >
> > if (folio head && range_end > folio_end) {
> >	nr = folio_nr_pages(folio);
> >	full_flush_nr_ptes();
> >	pte += nr - 1;
> >	addr += (nr - 1) * basepage size;
> > }
>
> Just because you found a pte that maps a page from a large folio, that doesn't
> mean that all pages from the folio are mapped, and it doesn't mean they are
> mapped contiguously. We have to deal with partial munmap(), partial mremap()
> etc. We could split in these cases (and in future it might be sensible to
> try), but that can fail (due to GUP). So we still have to handle the corner
> case.
>
> But I can imagine doing a batched version of ptep_get_and_clear(), like I did
> for ptep_set_wrprotects(). And I think this would be an improvement.
>
> The reason I haven't done that so far is because ptep_get_and_clear() returns
> the pte value when it was cleared, and that's hard to do if batching due to
> the storage requirement. But perhaps you could just return the logical OR of
> the dirty and young bits across all ptes in the batch. The caller should be
> able to reconstitute the rest if it needs it?
>
> What do you think?

I really don't know why we care about the return value of ptep_get_and_clear(),
as zap_pte_range() doesn't ask for any return value at all. so why not totally
give up this kind of complex logical OR of dirty and young, as they are useless
in this case?

Is it possible for us to introduce a new api like the below?

bool clear_folio_ptes(folio, ptep)
{
	if (ptes are contiguously mapped) {
		clear all ptes all together; /* this also clears all CONTPTEs */
		return true;
	}
	return false;
}

in zap_pte_range():

if (large_folio(folio) && clear_folio_ptes(folio, ptep)) {
	addr += nr - 1;
	pte += nr - 1;
} else
	old path;

> >
> > zap_pte_range is the most frequent behaviour from the userspace libc heap,
> > as I explained before. libc can call madvise(DONTNEED) the most often. It is
> > crucial to performance.
> >
> > and this way can also help drop your full version by moving to full flushing
> > of whole large folios? and we don't need to depend on fullmm any more?
> >
> >>
> >> I don't think there is any correctness issue here. But there is a problem
> >> with fragility, as raised by Alistair. I have some ideas on potentially how
> >> to solve that. I'm going to try to work on it this afternoon and will post
> >> if I get some confidence that it is a real solution.
> >>
> >> Thanks,
> >> Ryan
> >>
> >>>
> >>> static inline pte_t __cont_pte_huge_ptep_get_and_clear_flush(struct mm_struct *mm,
> >>>						unsigned long addr,
> >>>						pte_t *ptep,
> >>>						bool flush)
> >>> {
> >>>	pte_t orig_pte = ptep_get(ptep);
> >>>
> >>>	CHP_BUG_ON(!pte_cont(orig_pte));
> >>>	CHP_BUG_ON(!IS_ALIGNED(addr, HPAGE_CONT_PTE_SIZE));
> >>>	CHP_BUG_ON(!IS_ALIGNED(pte_pfn(orig_pte), HPAGE_CONT_PTE_NR));
> >>>
> >>>	return get_clear_flush(mm, addr, ptep, PAGE_SIZE, CONT_PTES, flush);
> >>> }
> >>>
> >>> [1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/memory.c#L1539
> >>>
> >>>> +	 */
> >>>> +
> >>>> +	return __ptep_get_and_clear(mm, addr, ptep);
> >>>> +}
> >>>> +EXPORT_SYMBOL(contpte_ptep_get_and_clear_full);
> >>>> +
> >>>
> >

Thanks
Barry