From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 12 Dec 2023 12:02:19 +0000
Subject: Re: [PATCH v3 15/15] arm64/mm: Implement clear_ptes() to optimize exit()
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland, David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>, Yang Shi, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <58152ed9-5c8d-484f-8e11-23ee5879bc1e@arm.com>
In-Reply-To: <878r65a2m2.fsf@nvdebian.thelocal>
References: <20231204105440.61448-1-ryan.roberts@arm.com> <20231204105440.61448-16-ryan.roberts@arm.com> <878r65a2m2.fsf@nvdebian.thelocal>
On 08/12/2023 01:45, Alistair Popple wrote:
>
> Ryan Roberts writes:
>
>> With the core-mm changes in place to batch-clear ptes during
>> zap_pte_range(), we can take advantage of this in arm64 to greatly
>> reduce the number of tlbis we have to
>> issue, and recover the lost exit performance incurred when adding
>> support for transparent contiguous ptes.
>>
>> If we are clearing a whole contpte range, we can elide first unfolding
>> that range and save the tlbis. We just clear the whole range.
>>
>> The following shows the results of running a kernel compilation
>> workload and measuring the cost of arm64_sys_exit_group() (which at
>> ~1.5% is a very small part of the overall workload).
>>
>> Benchmarks were run on Ampere Altra in 2 configs: single numa node and
>> 2 numa nodes (tlbis are more expensive in the 2 node config).
>>
>> - baseline: v6.7-rc1 + anonfolio-v7
>> - no-opt: contpte series without any attempt to optimize exit()
>> - simple-ptep_get_clear_full: simple optimization to exploit full=1.
>>   ptep_get_clear_full() does not fully conform to its intended semantic
>> - robust-ptep_get_clear_full: similar to previous but
>>   ptep_get_clear_full() fully conforms to its intended semantic
>> - clear_ptes: optimization implemented by this patch
>>
>> | config                     | numa=1 | numa=2 |
>> |----------------------------|--------|--------|
>> | baseline                   |     0% |     0% |
>> | no-opt                     |   190% |   768% |
>> | simple-ptep_get_clear_full |     8% |    29% |
>> | robust-ptep_get_clear_full |    21% |    19% |
>> | clear_ptes                 |    13% |     9% |
>>
>> In all cases, the cost of arm64_sys_exit_group() increases; this is
>> anticipated because there is more work to do to tear down the page
>> tables. But clear_ptes() gives the smallest increase overall.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  arch/arm64/include/asm/pgtable.h | 32 ++++++++++++++++++++++++
>>  arch/arm64/mm/contpte.c          | 42 ++++++++++++++++++++++++++++++++
>>  2 files changed, 74 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 9bd2f57a9e11..ff6b3cc9e819 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -1145,6 +1145,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
>>  extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
>>  extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  				pte_t *ptep, pte_t pte, unsigned int nr);
>> +extern pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr,
>> +				pte_t *ptep, unsigned int nr);
>>  extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep);
>>  extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>> @@ -1270,6 +1272,36 @@ static inline void pte_clear(struct mm_struct *mm,
>>  	__pte_clear(mm, addr, ptep);
>>  }
>>
>> +#define clear_ptes clear_ptes
>> +static inline pte_t clear_ptes(struct mm_struct *mm,
>> +			unsigned long addr, pte_t *ptep, int full,
>> +			unsigned int nr)
>> +{
>> +	pte_t pte;
>> +
>> +	if (!contpte_is_enabled(mm)) {
>
> I think it would be better to call the generic definition of
> clear_ptes() here. Obviously that won't exist if clear_ptes is defined
> here, but you could call it __clear_ptes() and #define clear_ptes
> __clear_ptes when the arch specific helper isn't defined.

My thinking was that we wouldn't bother to expose clear_ptes() when
CONFIG_ARM64_CONTPTE is disabled, and just fall back to the core-mm
generic one. But I think your proposal is probably cleaner and more
consistent with everything else. So I'll do this for the next version.
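For clarity, the scheme being agreed on above can be sketched in a few lines of user-space C. All names and signatures below are illustrative stand-ins for the real kernel declarations, not the actual API:

```c
#include <assert.h>	/* for the usage check below */

/* Core code always provides the generic batch-clear helper under the
 * double-underscore name. (Here it just returns its argument so the
 * sketch has something observable.) */
static unsigned int __clear_ptes(unsigned int nr)
{
	/* generic fallback: would clear nr entries one at a time */
	return nr;
}

/*
 * An arch with an optimized version would do, in its pgtable.h:
 *     #define clear_ptes clear_ptes
 *     static inline pte_t clear_ptes(...) { ... }
 * Core code then aliases in the generic helper only when the arch
 * did not define its own:
 */
#ifndef clear_ptes
#define clear_ptes __clear_ptes
#endif
```

Callers always use the plain `clear_ptes()` name; whether they get the arch-specific or the generic implementation is decided entirely by whether the arch header defined the macro.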
>
>> +		unsigned int i;
>> +		pte_t tail;
>> +
>> +		pte = __ptep_get_and_clear(mm, addr, ptep);
>> +		for (i = 1; i < nr; i++) {
>> +			addr += PAGE_SIZE;
>> +			ptep++;
>> +			tail = __ptep_get_and_clear(mm, addr, ptep);
>> +			if (pte_dirty(tail))
>> +				pte = pte_mkdirty(pte);
>> +			if (pte_young(tail))
>> +				pte = pte_mkyoung(pte);
>> +		}
>> +	} else if (nr == 1) {
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +		pte = __ptep_get_and_clear(mm, addr, ptep);
>> +	} else
>> +		pte = contpte_clear_ptes(mm, addr, ptep, nr);
>> +
>> +	return pte;
>> +}
>> +
>>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>  				unsigned long addr, pte_t *ptep)
>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>> index 2a57df16bf58..34b43bde3fcd 100644
>> --- a/arch/arm64/mm/contpte.c
>> +++ b/arch/arm64/mm/contpte.c
>> @@ -257,6 +257,48 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  }
>>  EXPORT_SYMBOL(contpte_set_ptes);
>>
>> +pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>> +			 unsigned int nr)
>> +{
>> +	/*
>> +	 * If we cover a partial contpte block at the beginning or end of the
>> +	 * batch, unfold if currently folded. This makes it safe to clear some
>> +	 * of the entries while keeping others. contpte blocks in the middle of
>> +	 * the range, which are fully covered, don't need to be unfolded
>> +	 * because we will clear the full block.
>> +	 */
>> +
>> +	unsigned int i;
>> +	pte_t pte;
>> +	pte_t tail;
>> +
>> +	if (ptep != contpte_align_down(ptep) || nr < CONT_PTES)
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +
>> +	if (ptep + nr != contpte_align_down(ptep + nr))
>> +		contpte_try_unfold(mm, addr + PAGE_SIZE * (nr - 1),
>> +				   ptep + nr - 1,
>> +				   __ptep_get(ptep + nr - 1));
>> +
>> +	pte = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +	for (i = 1; i < nr; i++) {
>> +		addr += PAGE_SIZE;
>> +		ptep++;
>> +
>> +		tail = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +		if (pte_dirty(tail))
>> +			pte = pte_mkdirty(pte);
>> +
>> +		if (pte_young(tail))
>> +			pte = pte_mkyoung(pte);
>> +	}
>> +
>> +	return pte;
>> +}
>> +EXPORT_SYMBOL(contpte_clear_ptes);
>> +
>>  int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep)
>>  {
>
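As a sanity check on the unfold and dirty/young-accumulation rules in the quoted patch, the logic can be modelled in user-space C. This is not kernel code: CONT_PTES is 16 for arm64 with 4K pages, but the pte representation, bit positions, and helper names below are mocked for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define CONT_PTES  16		/* arm64, 4K pages */
#define PTE_DIRTY  (1u << 0)	/* mock bit positions */
#define PTE_YOUNG  (1u << 1)

/* Mirror of "ptep != contpte_align_down(ptep) || nr < CONT_PTES":
 * the batch starts mid-block, or is shorter than a block, so the
 * leading contpte block must be unfolded before clearing. */
static int needs_lead_unfold(unsigned int idx, unsigned int nr)
{
	return (idx % CONT_PTES) != 0 || nr < CONT_PTES;
}

/* Mirror of "ptep + nr != contpte_align_down(ptep + nr)": the batch
 * ends mid-block, so the trailing block must be unfolded too. Note
 * that a whole, aligned block (idx % 16 == 0, nr == 16) needs neither
 * unfold, which is where the tlbis are saved. */
static int needs_tail_unfold(unsigned int idx, unsigned int nr)
{
	return ((idx + nr) % CONT_PTES) != 0;
}

/* Clear ptes[idx..idx+nr) and return the first entry with the dirty
 * and young bits of every cleared entry folded in, as the patch's
 * loop over __ptep_get_and_clear() does. */
static uint32_t fold_and_clear(uint32_t *ptes, unsigned int idx,
			       unsigned int nr)
{
	uint32_t pte = ptes[idx];
	unsigned int i;

	ptes[idx] = 0;
	for (i = 1; i < nr; i++) {
		uint32_t tail = ptes[idx + i];

		ptes[idx + i] = 0;
		pte |= tail & (PTE_DIRTY | PTE_YOUNG);
	}
	return pte;
}
```

The model makes the invariant easy to see: only partial blocks at the edges of the batch ever pay for an unfold, and the returned pte is the first entry with accumulated access/dirty state, so callers lose no information when a batch collapses to one value.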