From: Barry Song <21cnbao@gmail.com>
Date: Thu, 30 Nov 2023 13:57:45 +0800
Subject: Re: [PATCH v2 14/14] arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
To: Alistair Popple
Cc: Ryan Roberts, Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin,
 Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <87leafg768.fsf@nvdebian.thelocal>
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
 <20231115163018.1303287-15-ryan.roberts@arm.com>
 <87fs0xxd5g.fsf@nvdebian.thelocal>
 <3b4f6bff-6322-4394-9efb-9c3b9ef52010@arm.com>
 <87y1eovsn5.fsf@nvdebian.thelocal>
 <8fca9ed7-f916-4abe-8284-6e3c9fa33a8c@arm.com>
 <87wmu3pro8.fsf@nvdebian.thelocal>
 <26c78fee-4b7a-4d73-9f8b-2e25bbae20e8@arm.com>
 <87o7fepdun.fsf@nvdebian.thelocal>
 <87leafg768.fsf@nvdebian.thelocal>
On Thu, Nov 30, 2023 at 1:08 PM Alistair Popple wrote:
>
>
> Ryan Roberts writes:
>
> >>>> So if we do need to deal with racing HW, I'm pretty sure my v1 implementation is
> >>>> buggy because it iterated through the PTEs, getting and accumulating. Then
> >>>> iterated again, writing that final set of bits to all the PTEs. And the HW could
> >>>> have modified the bits during those loops. I think it would be possible to fix
> >>>> the race, but intuition says it would be expensive.
> >>>
> >>> So the issue as I understand it is subsequent iterations would see a
> >>> clean PTE after the first iteration returned a dirty PTE. In
> >>> ptep_get_and_clear_full() why couldn't you just copy the dirty/accessed
> >>> bit (if set) from the PTE being cleared to an adjacent PTE rather than
> >>> all the PTEs?
> >>
> >> The raciness I'm describing is the race between reading access/dirty from one
> >> pte and applying it to another. But yes I like your suggestion. if we do:
> >>
> >> pte = __ptep_get_and_clear_full(ptep)
> >>
> >> on the target pte, then we have grabbed access/dirty from it in a race-free
> >> manner. we can then loop from current pte up towards the top of the block until
> >> we find a valid entry (and I guess wrap at the top to make us robust against
> >> future callers clearing in an arbitrary order). Then atomically accumulate the
> >> access/dirty bits we have just saved into that new entry. I guess that's just a
> >> cmpxchg loop - there are already examples of how to do that correctly when
> >> racing the TLB.
> >>
> >> For most entries, we will just be copying up to the next pte. For the last pte,
> >> we would end up reading all ptes and determine we are the last one.
> >>
> >> What do you think?
> >
> > OK here is an attempt at something which solves the fragility. I think this is
> > now robust and will always return the correct access/dirty state from
> > ptep_get_and_clear_full() and ptep_get().
> >
> > But I'm not sure about performance; each call to ptep_get_and_clear_full() for
> > each pte in a contpte block will cause a ptep_get() to gather the access/dirty
> > bits from across the contpte block - which requires reading each pte in the
> > contpte block. So it's O(n^2) in that sense. I'll benchmark it and report back.
> >
> > Was this the type of thing you were thinking of, Alistair?
>
> Yes, that is along the lines of what I was thinking. However I have
> added a couple of comments inline.
>
> > --8<--
> > arch/arm64/include/asm/pgtable.h | 23 ++++++++-
> > arch/arm64/mm/contpte.c          | 81 ++++++++++++++++++++++++++++++++
> > arch/arm64/mm/fault.c            | 38 +++++++++------
> > 3 files changed, 125 insertions(+), 17 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 9bd2f57a9e11..6c295d277784 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -851,6 +851,7 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
> >       return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
> >  }
> >
> > +extern int __ptep_set_access_flags_notlbi(pte_t *ptep, pte_t entry);
> >  extern int __ptep_set_access_flags(struct vm_area_struct *vma,
> >                               unsigned long address, pte_t *ptep,
> >                               pte_t entry, int dirty);
> > @@ -1145,6 +1146,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
> >  extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
> >  extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
> >                               pte_t *ptep, pte_t pte, unsigned int nr);
> > +extern pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
> > +                             unsigned long addr, pte_t *ptep);
> >  extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> >                               unsigned long addr, pte_t *ptep);
> >  extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
> > @@ -1270,12 +1273,28 @@ static inline void pte_clear(struct mm_struct *mm,
> >       __pte_clear(mm, addr, ptep);
> >  }
> >
> > +#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
> > +static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
> > +                             unsigned long addr, pte_t *ptep, int full)
> > +{
> > +     pte_t orig_pte = __ptep_get(ptep);
> > +
> > +     if (!pte_valid_cont(orig_pte))
> > +             return __ptep_get_and_clear(mm, addr, ptep);
> > +
> > +     if (!full) {
> > +             contpte_try_unfold(mm, addr, ptep, orig_pte);
> > +             return __ptep_get_and_clear(mm, addr, ptep);
> > +     }
> > +
> > +     return contpte_ptep_get_and_clear_full(mm, addr, ptep);
> > +}
> > +
> >  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
> >  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >                               unsigned long addr, pte_t *ptep)
> >  {
> > -     contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
> > -     return __ptep_get_and_clear(mm, addr, ptep);
> > +     return ptep_get_and_clear_full(mm, addr, ptep, 0);
> >  }
> >
> >  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> > diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> > index 2a57df16bf58..99b211118d93 100644
> > --- a/arch/arm64/mm/contpte.c
> > +++ b/arch/arm64/mm/contpte.c
> > @@ -145,6 +145,14 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
> >       for (i = 0; i < CONT_PTES; i++, ptep++) {
> >               pte = __ptep_get(ptep);
> >
> > +             /*
> > +              * Deal with the partial contpte_ptep_get_and_clear_full() case,
> > +              * where some of the ptes in the range may be cleared but others
> > +              * are still to do. See contpte_ptep_get_and_clear_full().
> > +              */
> > +             if (!pte_valid(pte))
> > +                     continue;
> > +
> >               if (pte_dirty(pte))
> >                       orig_pte = pte_mkdirty(orig_pte);
> >
> > @@ -257,6 +265,79 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
> >  }
> >  EXPORT_SYMBOL(contpte_set_ptes);
> >
> > +pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
> > +                     unsigned long addr, pte_t *ptep)
> > +{
> > +     /*
> > +      * When doing a full address space teardown, we can avoid unfolding the
> > +      * contiguous range, and therefore avoid the associated tlbi. Instead,
> > +      * just get and clear the pte. The caller is promising to call us for
> > +      * every pte, so every pte in the range will be cleared by the time the
> > +      * final tlbi is issued.
> > +      *
> > +      * This approach requires some complex hoop jumping though, as for the
> > +      * duration between returning from the first call to
> > +      * ptep_get_and_clear_full() and making the final call, the contpte
> > +      * block is in an intermediate state, where some ptes are cleared and
> > +      * others are still set with the PTE_CONT bit. If any other APIs are
> > +      * called for the ptes in the contpte block during that time, we have to
> > +      * be very careful. The core code currently interleaves calls to
> > +      * ptep_get_and_clear_full() with ptep_get() and so ptep_get() must be
> > +      * careful to ignore the cleared entries when accumulating the access
> > +      * and dirty bits - the same goes for ptep_get_lockless(). The only
> > +      * other calls we might reasonably expect are to set markers in the
> > +      * previously cleared ptes. (We shouldn't see valid entries being set
> > +      * until after the tlbi, at which point we are no longer in the
> > +      * intermediate state). Since markers are not valid, this is safe;
> > +      * set_ptes() will see the old, invalid entry and will not attempt to
> > +      * unfold. And the new pte is also invalid so it won't attempt to fold.
> > +      * We shouldn't see pte markers being set for the 'full' case anyway
> > +      * since the address space is being torn down.
> > +      *
> > +      * The last remaining issue is returning the access/dirty bits. That
> > +      * info could be present in any of the ptes in the contpte block.
> > +      * ptep_get() will gather those bits from across the contpte block (for
> > +      * the remaining valid entries). So below, if the pte we are clearing
> > +      * has dirty or young set, we need to stash it into a pte that we are
> > +      * yet to clear. This allows future calls to return the correct state
> > +      * even when the info was stored in a different pte. Since the core-mm
> > +      * calls from low to high address, we prefer to stash in the last pte of
> > +      * the contpte block - this means we are not "dragging" the bits up
> > +      * through all ptes and increases the chances that we can exit early
> > +      * because a given pte will have neither dirty nor young set.
> > +      */
> > +
> > +     pte_t orig_pte = __ptep_get_and_clear(mm, addr, ptep);
> > +     bool dirty = pte_dirty(orig_pte);
> > +     bool young = pte_young(orig_pte);
> > +     pte_t *start;
> > +
> > +     if (!dirty && !young)
> > +             return contpte_ptep_get(ptep, orig_pte);
>
> I don't think we need to do this. If the PTE is !dirty && !young we can
> just return it. As you say we have to assume HW can set those flags at
> any time anyway so it doesn't get us much. This means in the common case
> we should only run through the loop setting the dirty/young flags once
> which should allay the performance concerns.
>
> However I am now wondering if we're doing the wrong thing trying to hide
> this down in the arch layer anyway. Perhaps it would be better to deal
> with this in the core-mm code after all.
>
> So how about having ptep_get_and_clear_full() clearing the PTEs for the
> entire cont block? We know by definition all PTEs should be pointing to

I truly believe we should clear all PTEs for the entire folio block.
However, if the existing api ptep_get_and_clear_full() is always handling a
single PTE, we might keep its behaviour as is. On the other hand, clearing
the whole block isn't only required in the fullmm case; it is also a
requirement for the normal zap_pte_range() cases coming from
madvise(MADV_DONTNEED) etc. I do think we need a folio-level variant: as we
now support pte-level large folios, we need a new api to handle a folio's
PTEs entirely, since we always need to drop the whole folio rather than one
pte at a time when they are compound.

> the same folio anyway, and it seems at least zap_pte_range() would cope
> with this just fine because subsequent iterations would just see
> pte_none() and continue the loop. I haven't checked the other call sites
> though, but in principle I don't see why we couldn't define
> ptep_get_and_clear_full() as being something that clears all PTEs
> mapping a given folio (although it might need renaming).
>
> This does assume you don't need to partially unmap a page in
> zap_pte_range (ie. end >= folio), but we're already making that
> assumption.
> >
> > +
> > +     start = contpte_align_down(ptep);
> > +     ptep = start + CONT_PTES - 1;
> > +
> > +     for (; ptep >= start; ptep--) {
> > +             pte_t pte = __ptep_get(ptep);
> > +
> > +             if (!pte_valid(pte))
> > +                     continue;
> > +
> > +             if (dirty)
> > +                     pte = pte_mkdirty(pte);
> > +
> > +             if (young)
> > +                     pte = pte_mkyoung(pte);
> > +
> > +             __ptep_set_access_flags_notlbi(ptep, pte);
> > +             return contpte_ptep_get(ptep, orig_pte);
> > +     }
> > +
> > +     return orig_pte;
> > +}
> > +EXPORT_SYMBOL(contpte_ptep_get_and_clear_full);
> > +
> >  int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> >                                       unsigned long addr, pte_t *ptep)
> >  {
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index d63f3a0a7251..b22216a8153c 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -199,19 +199,7 @@ static void show_pte(unsigned long addr)
> >       pr_cont("\n");
> >  }
> >
> > -/*
> > - * This function sets the access flags (dirty, accessed), as well as write
> > - * permission, and only to a more permissive setting.
> > - *
> > - * It needs to cope with hardware update of the accessed/dirty state by other
> > - * agents in the system and can safely skip the __sync_icache_dcache() call as,
> > - * like __set_ptes(), the PTE is never changed from no-exec to exec here.
> > - *
> > - * Returns whether or not the PTE actually changed.
> > - */
> > -int __ptep_set_access_flags(struct vm_area_struct *vma,
> > -                         unsigned long address, pte_t *ptep,
> > -                         pte_t entry, int dirty)
> > +int __ptep_set_access_flags_notlbi(pte_t *ptep, pte_t entry)
> >  {
> >       pteval_t old_pteval, pteval;
> >       pte_t pte = __ptep_get(ptep);
> > @@ -238,10 +226,30 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
> >               pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
> >       } while (pteval != old_pteval);
> >
> > +     return 1;
> > +}
> > +
> > +/*
> > + * This function sets the access flags (dirty, accessed), as well as write
> > + * permission, and only to a more permissive setting.
> > + *
> > + * It needs to cope with hardware update of the accessed/dirty state by other
> > + * agents in the system and can safely skip the __sync_icache_dcache() call as,
> > + * like __set_ptes(), the PTE is never changed from no-exec to exec here.
> > + *
> > + * Returns whether or not the PTE actually changed.
> > + */
> > +int __ptep_set_access_flags(struct vm_area_struct *vma,
> > +                         unsigned long address, pte_t *ptep,
> > +                         pte_t entry, int dirty)
> > +{
> > +     int changed = __ptep_set_access_flags_notlbi(ptep, entry);
> > +
> >       /* Invalidate a stale read-only entry */
> > -     if (dirty)
> > +     if (changed && dirty)
> >               flush_tlb_page(vma, address);
> > -     return 1;
> > +
> > +     return changed;
> >  }
> >
> >  static bool is_el1_instruction_abort(unsigned long esr)
> > --8<--
>

Thanks
Barry