From: Barry Song <21cnbao@gmail.com>
Date: Tue, 28 Nov 2023 13:49:13 +0800
Subject: Re: [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
To: Ryan Roberts
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com, anshuman.khandual@arm.com, ardb@kernel.org, catalin.marinas@arm.com, david@redhat.com, dvyukov@google.com, glider@google.com, james.morse@arm.com, jhubbard@nvidia.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, ryabinin.a.a@gmail.com, suzuki.poulose@arm.com, vincenzo.frascino@arm.com, wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org, yuzenghui@huawei.com, yuzhao@google.com, ziy@nvidia.com
In-Reply-To: <234021ba-73c2-474a-82f9-91e1604d5bb5@arm.com>
References: <20231115163018.1303287-1-ryan.roberts@arm.com> <20231127031813.5576-1-v-songbaohua@oppo.com> <234021ba-73c2-474a-82f9-91e1604d5bb5@arm.com>
On Mon, Nov 27, 2023 at 5:15 PM Ryan Roberts wrote:
>
> On 27/11/2023 03:18, Barry Song wrote:
> >> Ryan Roberts (14):
> >>   mm: Batch-copy PTE ranges during fork()
> >>   arm64/mm: set_pte(): New layer to manage contig bit
> >>   arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
> >>   arm64/mm: pte_clear(): New layer to manage contig bit
> >>   arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
> >>   arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
> >>   arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
> >>   arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
> >>   arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
> >>   arm64/mm: ptep_get(): New layer to manage contig bit
> >>   arm64/mm: Split __flush_tlb_range() to elide trailing DSB
> >>   arm64/mm: Wire up PTE_CONT for user mappings
> >>   arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
> >>   arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
> >
> > Hi Ryan,
> > Not quite sure if I missed something; are we splitting/unfolding CONTPTES
> > in the below cases?
>
> The general idea is that the core-mm sets the individual ptes (one at a time
> if it likes with set_pte_at(), or in a block with set_ptes()), modifies their
> permissions (ptep_set_wrprotect(), ptep_set_access_flags()) and clears them
> (ptep_clear(), etc); this is exactly the same interface as before.
>
> BUT, the arm64 implementation of those interfaces will now detect when a set
> of adjacent PTEs (a contpte block - so 16 naturally aligned entries when
> using 4K base pages) are all appropriate for having the CONT_PTE bit set; in
> this case the block is "folded". And it will detect when the first PTE in the
> block changes such that the CONT_PTE bit must now be unset ("unfolded"). One
> of the requirements for folding a contpte block is that all the pages must
> belong to the *same* folio (that means it is safe to only track access/dirty
> for the contpte block as a whole rather than for each individual pte).
>
> (There are a couple of optimizations that make the reality slightly more
> complicated than what I've just explained, but you get the idea.)
>
> On that basis, I believe all the specific cases you describe below are all
> covered and safe - please let me know if you think there is a hole here!
>
> >
> > 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
>
> The page will first be unmapped (e.g. ptep_clear() or ptep_get_and_clear(),
> or whatever). The implementation of that will cause an unfold, and the
> CONT_PTE bit is removed from the whole contpte block. If there is then a
> subsequent set_pte_at() to set a swap entry, the implementation will see that
> it is not appropriate to re-fold, so the range will remain unfolded.
>
> >
> > 2. vma split in a large folio due to various reasons such as mprotect,
> > munmap, mlock etc.
>
> I'm not sure if PTEs are explicitly unmapped/remapped when splitting a VMA? I
> suspect not, so if the VMA is split in the middle of a currently folded
> contpte block, it will remain folded. But this is safe and continues to work
> correctly. The VMA arrangement is not important; it is just important that a
> single folio is mapped contiguously across the whole block.
>
> >
> > 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
> > rather than as a whole.
>
> Yes, as per 1; the arm64 implementation will notice when the first entry is
> cleared and unfold the contpte block.
>
> >
> > In hardware, we need to make sure CONTPTEs follow the rule - always 16
> > contiguous physical addresses with CONT_PTE set. If one of them runs away
> > from the 16-pte group and the PTEs become inconsistent, some terrible
> > errors/faults can happen in HW. For example:
>
> Yes, the implementation obeys all these rules; see contpte_try_fold() and
> contpte_try_unfold(). The fold/unfold operation is only done when all
> requirements are met, and we perform it in a manner that is conformant to the
> architecture requirements (see contpte_fold() - being renamed to
> contpte_convert() in the next version).
Hi Ryan,

Sorry for too many comments; I remembered another case.

4. mremap: a CONTPTE range might be remapped to another address which might
not be aligned with 16*basepage. Thus, in move_ptes(), we are copying
CONTPTEs from src to dst:

static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
		unsigned long old_addr, unsigned long old_end,
		struct vm_area_struct *new_vma, pmd_t *new_pmd,
		unsigned long new_addr, bool need_rmap_locks)
{
	struct mm_struct *mm = vma->vm_mm;
	pte_t *old_pte, *new_pte, pte;
	...
	/*
	 * We don't have to worry about the ordering of src and dst
	 * pte locks because exclusive mmap_lock prevents deadlock.
	 */
	old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
	if (!old_pte) {
		err = -EAGAIN;
		goto out;
	}
	new_pte = pte_offset_map_nolock(mm, new_pmd, new_addr, &new_ptl);
	if (!new_pte) {
		pte_unmap_unlock(old_pte, old_ptl);
		err = -EAGAIN;
		goto out;
	}
	if (new_ptl != old_ptl)
		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
	flush_tlb_batched_pending(vma->vm_mm);
	arch_enter_lazy_mmu_mode();

	for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
				   new_pte++, new_addr += PAGE_SIZE) {
		if (pte_none(ptep_get(old_pte)))
			continue;

		pte = ptep_get_and_clear(mm, old_addr, old_pte);
	....
}

This has two possibilities:
1. new_pte is aligned with CONT_PTES; we can still keep CONTPTE.
2. new_pte is not aligned with CONT_PTES; we should drop CONTPTE while
copying.

Does your code also handle this properly?

>
> Thanks for the review!
>
> Thanks,
> Ryan
>
> >
> > case 0:
> > addr0 PTE - has no CONTPTE
> > addr0+4kb PTE - has CONTPTE
> > ....
> > addr0+60kb PTE - has CONTPTE
> >
> > case 1:
> > addr0 PTE - has no CONTPTE
> > addr0+4kb PTE - has CONTPTE
> > ....
> > addr0+60kb PTE - has swap
> >
> > Inconsistent 16 PTEs will lead to a crash even in the firmware, based on
> > our observation.
> >
> > Thanks
> > Barry

Thanks
Barry