From: Barry Song <21cnbao@gmail.com>
Date: Wed, 29 Nov 2023 08:37:47 +1300
Subject: Re: [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
To: Ryan Roberts
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com, anshuman.khandual@arm.com,
    ardb@kernel.org, catalin.marinas@arm.com, david@redhat.com,
    dvyukov@google.com, glider@google.com, james.morse@arm.com,
    jhubbard@nvidia.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, mark.rutland@arm.com,
    maz@kernel.org, oliver.upton@linux.dev, ryabinin.a.a@gmail.com,
    suzuki.poulose@arm.com, vincenzo.frascino@arm.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yuzenghui@huawei.com, yuzhao@google.com, ziy@nvidia.com
In-Reply-To: <3e61d181-5e8d-4103-8dee-e18e493bc125@arm.com>
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
    <20231127031813.5576-1-v-songbaohua@oppo.com>
    <234021ba-73c2-474a-82f9-91e1604d5bb5@arm.com>
    <3e61d181-5e8d-4103-8dee-e18e493bc125@arm.com>
On Wed, Nov 29, 2023 at 1:08 AM Ryan Roberts wrote:
>
> On 28/11/2023 05:49, Barry Song wrote:
> > On Mon, Nov 27, 2023 at 5:15 PM Ryan Roberts wrote:
> >>
> >> On 27/11/2023 03:18, Barry Song wrote:
> >>>> Ryan Roberts (14):
> >>>>   mm: Batch-copy PTE ranges during fork()
> >>>>   arm64/mm: set_pte(): New layer to manage contig bit
> >>>>   arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
> >>>>   arm64/mm: pte_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get(): New layer to manage contig bit
> >>>>   arm64/mm: Split __flush_tlb_range() to elide trailing DSB
> >>>>   arm64/mm: Wire up PTE_CONT for user mappings
> >>>>   arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
> >>>>   arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
> >>>
> >>> Hi Ryan,
> >>> Not quite sure if I missed something, but are we splitting/unfolding
> >>> CONTPTEs in the cases below?
> >>
> >> The general idea is that the core-mm sets the individual ptes (one at a
> >> time if it likes with set_pte_at(), or in a block with set_ptes()),
> >> modifies their permissions (ptep_set_wrprotect(),
> >> ptep_set_access_flags()) and clears them (ptep_clear(), etc.); this is
> >> exactly the same interface as before.
> >>
> >> BUT, the arm64 implementation of those interfaces will now detect when a
> >> set of adjacent PTEs (a contpte block - so 16 naturally aligned entries
> >> when using 4K base pages) are all appropriate for having the CONT_PTE
> >> bit set; in this case the block is "folded". And it will detect when the
> >> first PTE in the block changes such that the CONT_PTE bit must now be
> >> unset ("unfolded").
> >> One of the requirements for folding a contpte block is that all the
> >> pages must belong to the *same* folio (that means it's safe to only
> >> track access/dirty for the contpte block as a whole rather than for
> >> each individual pte).
> >>
> >> (There are a couple of optimizations that make the reality slightly
> >> more complicated than what I've just explained, but you get the idea.)
> >>
> >> On that basis, I believe all the specific cases you describe below are
> >> covered and safe - please let me know if you think there is a hole here!
> >>
> >>>
> >>> 1. madvise(MADV_DONTNEED) on a part of the basepages of a CONTPTE
> >>> large folio
> >>
> >> The page will first be unmapped (e.g. ptep_clear() or
> >> ptep_get_and_clear(), or whatever). The implementation of that will
> >> cause an unfold and the CONT_PTE bit is removed from the whole contpte
> >> block. If there is then a subsequent set_pte_at() to set a swap entry,
> >> the implementation will see that it is not appropriate to re-fold, so
> >> the range will remain unfolded.
> >>
> >>>
> >>> 2. vma split in a large folio due to various reasons such as mprotect,
> >>> munmap, mlock etc.
> >>
> >> I'm not sure if PTEs are explicitly unmapped/remapped when splitting a
> >> VMA? I suspect not, so if the VMA is split in the middle of a currently
> >> folded contpte block, it will remain folded. But this is safe and
> >> continues to work correctly. The VMA arrangement is not important; it
> >> is just important that a single folio is mapped contiguously across the
> >> whole block.
> >>
> >>>
> >>> 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
> >>> rather than as a whole.
> >>
> >> Yes, as per 1; the arm64 implementation will notice when the first
> >> entry is cleared and unfold the contpte block.
> >>
> >>>
> >>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
> >>> contiguous physical addresses with CONT_PTE set. If one of them runs
> >>> away from the group of 16 ptes and the PTEs become inconsistent, some
> >>> terrible errors/faults can happen in HW, for example.
> >>
> >> Yes, the implementation obeys all these rules; see contpte_try_fold()
> >> and contpte_try_unfold(). The fold/unfold operation is only done when
> >> all requirements are met, and we perform it in a manner that is
> >> conformant to the architecture requirements (see contpte_fold() - being
> >> renamed to contpte_convert() in the next version).
> >
> > Hi Ryan,
> >
> > sorry for so many comments; I remembered another case:
> >
> > 4. mremap
> >
> > A CONTPTE block might be remapped to another address which might not be
> > aligned with 16*basepage. Thus, in move_ptes(), we are copying CONTPTEs
> > from src to dst:
> >
> > static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> >                 unsigned long old_addr, unsigned long old_end,
> >                 struct vm_area_struct *new_vma, pmd_t *new_pmd,
> >                 unsigned long new_addr, bool need_rmap_locks)
> > {
> >         struct mm_struct *mm = vma->vm_mm;
> >         pte_t *old_pte, *new_pte, pte;
> >         ...
> >
> >         /*
> >          * We don't have to worry about the ordering of src and dst
> >          * pte locks because exclusive mmap_lock prevents deadlock.
> >          */
> >         old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
> >         if (!old_pte) {
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         new_pte = pte_offset_map_nolock(mm, new_pmd, new_addr, &new_ptl);
> >         if (!new_pte) {
> >                 pte_unmap_unlock(old_pte, old_ptl);
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         if (new_ptl != old_ptl)
> >                 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> >         flush_tlb_batched_pending(vma->vm_mm);
> >         arch_enter_lazy_mmu_mode();
> >
> >         for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
> >                                    new_pte++, new_addr += PAGE_SIZE) {
> >                 if (pte_none(ptep_get(old_pte)))
> >                         continue;
> >
> >                 pte = ptep_get_and_clear(mm, old_addr, old_pte);
> >                 ....
> >         }
> >
> > This has two possibilities:
> > 1. new_pte is aligned with CONT_PTES, and we can still keep CONT_PTE;
> > 2. new_pte is not aligned with CONT_PTES, and we should drop CONT_PTE
> >    while copying.
> >
> > Does your code also handle this properly?
>
> Yes; same mechanism - the arm64 arch code does the CONT_PTE bit management
> and folds/unfolds as necessary.
>
> Admittedly this may be non-optimal because we are iterating a single PTE
> at a time. When we clear the first pte of a contpte block in the source,
> the block will be unfolded. When we set the last pte of the contpte block
> in the dest, the block will be folded. If we had a batching mechanism, we
> could just clear the whole source contpte block in one hit (no need to
> unfold first) and we could just set the dest contpte block in one hit (no
> need to fold at the end).
>
> I haven't personally seen this as a hotspot though; I don't know if you
> have any data to the contrary? I've followed this type of batching
> technique for the fork case (patch 13). We could do a similar thing in
> theory, but it's a bit more

In my previous testing, I didn't see mremap very often, so no worries.
As long as it is bug-free, it is fine to me, though an mremap microbench
will definitely lose :-)

> complex because of the ptep_get_and_clear() return value; you would need
> to return all the ptes for the cleared range, or somehow collapse the
> actual info that the caller requires (presumably access/dirty info).
>

Thanks
Barry