From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <9930c86a-c0c8-4112-9122-0e4faca475f5@arm.com>
Date: Thu, 21 Mar 2024 15:24:33 +0000
MIME-Version: 1.0
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
Content-Language: en-GB
To: Lance Yang
Cc: Barry Song <21cnbao@gmail.com>, Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com> <20240311150058.1122862-7-ryan.roberts@arm.com> <7ba06704-2090-4eb2-9534-c4d467cc085a@arm.com> <269375a4-78a3-4c22-8e6e-570368a2c053@arm.com>
From: Ryan Roberts <ryan.roberts@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21/03/2024 14:55, Lance Yang wrote:
> On Thu, Mar 21, 2024 at 9:38 PM Ryan Roberts wrote:
>>
>>>>>>>>>> -        VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>>>>> -
>>>>>>>>>> -        if (!pageout && pte_young(ptent)) {
>>>>>>>>>> -            ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>>>>>> -                            tlb->fullmm);
>>>>>>>>>> -            ptent = pte_mkold(ptent);
>>>>>>>>>> -            set_pte_at(mm, addr, pte, ptent);
>>>>>>>>>> -            tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>>>> +        if (!pageout) {
>>>>>>>>>> +            for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
>>>>>>>>>> +                if (ptep_test_and_clear_young(vma, addr, pte))
>>>>>>>>>> +                    tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>
>>>>>>> IIRC, some of the architecture(ex, PPC) don't update TLB with set_pte_at and
>>>>>>> tlb_remove_tlb_entry. So, didn't we consider remapping the PTE with old after
>>>>>>> pte clearing?
>>>>>>
>>>>>> Sorry Lance, I don't understand this question, can you rephrase? Are you saying
>>>>>> there is a good reason to do the original clear-mkold-set for some arches?
>>>>>
>>>>> IIRC, some of the architecture(ex, PPC) don't update TLB with
>>>>> ptep_test_and_clear_young()
>>>>> and tlb_remove_tlb_entry().
>>
>> Afraid I'm still struggling with this comment. Do you mean to say that powerpc
>> invalidates the TLB entry as part of the call to ptep_test_and_clear_young()? So
>> tlb_remove_tlb_entry() would be redundant here, and likely cause performance
>> degradation on that architecture?
>
> I just thought that using ptep_test_and_clear_young() instead of
> ptep_get_and_clear_full() + pte_mkold() might not be correct.
> However, it's most likely that I was mistaken :(

OK, I'm pretty confident that my usage is correct.

>
> I also have a question. Why aren't we using ptep_test_and_clear_young() in
> madvise_cold_or_pageout_pte_range(), but instead
> ptep_get_and_clear_full() + pte_mkold() as we did previously.
>
> /*
>  * Some of architecture(ex, PPC) don't update TLB
>  * with set_pte_at and tlb_remove_tlb_entry so for
>  * the portability, remap the pte with old|clean
>  * after pte clearing.
>  */

Ahh, I see; this is a comment from madvise_free_pte_range(). I don't quite
understand that comment. I suspect it might be out of date, or saying that
doing set_pte_at(pte_mkold(ptep_get(ptent))) is not correct because it is not
atomic and the HW could set the dirty bit between the get and the set. Doing
the atomic ptep_get_and_clear_full() means you go via a pte_none() state, so
if the TLB is racing it will see the entry isn't valid and fault.

Note that madvise_free_pte_range() is trying to clear both the access and
dirty bits, whereas madvise_cold_or_pageout_pte_range() is only trying to
clear the access bit. There is a special helper to clear the access bit
atomically - ptep_test_and_clear_young() - but there is no helper to clear
the access *and* dirty bit, I don't believe. There is ptep_set_access_flags(),
but that sets flags to a "more permissive setting" (i.e. allows setting the
flags, not clearing them). Perhaps this constraint can be relaxed given we
will follow up with an explicit TLBI - it would require auditing all the
implementations.
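
To make that concrete, a rough sketch of the two patterns being compared
(using the mm/addr/pte/ptent/tlb variables from the surrounding madvise loop;
this is an illustration, not the exact mm/madvise.c code):

	/* Non-atomic read-modify-write: HW can set the dirty bit in the
	 * window between the get and the set, and that update is lost. */
	ptent = ptep_get(pte);
	ptent = pte_mkold(ptent);
	set_pte_at(mm, addr, pte, ptent);

	/* Atomic variant: the entry passes through pte_none(), so a racing
	 * HW walker sees an invalid entry and faults instead of updating it. */
	ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
	ptent = pte_mkold(ptent);
	set_pte_at(mm, addr, pte, ptent);

The first form is (perhaps) what that comment is warning about; the second is
essentially the pattern madvise_free_pte_range() uses, plus pte_mkclean() for
the dirty bit.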

>
> According to this comment from madvise_free_pte_range. IIUC, we need to
> call ptep_get_and_clear_full() to clear the PTE, and then remap the
> PTE with old|clean.
>
> Thanks,
> Lance
>
>>
>> IMHO, ptep_test_and_clear_young() really shouldn't be invalidating the TLB
>> entry, that's what ptep_clear_flush_young() is for.
>>
>> But I do see that for some cases of the 32-bit ppc, there appears to be a flush:
>>
>> #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
>> static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
>>                                               unsigned long addr, pte_t *ptep)
>> {
>>         unsigned long old;
>>         old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
>>         if (old & _PAGE_HASHPTE)
>>                 flush_hash_entry(mm, ptep, addr);    <<<<<<<<
>>
>>         return (old & _PAGE_ACCESSED) != 0;
>> }
>> #define ptep_test_and_clear_young(__vma, __addr, __ptep) \
>>         __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
>>
>> Is that what you are describing? Does anyone know why flush_hash_entry() is
>> called? I'd say that's a bug in ppc and not a reason not to use
>> ptep_test_and_clear_young() in the common code!
>>
>> Thanks,
>> Ryan
>>
>>
>>>>
>>>> Err, I assumed tlb_remove_tlb_entry() meant "invalidate the TLB entry for this
>>>> address please" - albeit it's deferred and batched. I'll look into this.
>>>>
>>>>>
>>>>> In my new patch[1], I use refresh_full_ptes() and
>>>>> tlb_remove_tlb_entries() to batch-update the
>>>>> access and dirty bits.
>>>>
>>>> I want to avoid the per-pte clear-modify-set approach, because this doesn't
>>>> perform well on arm64 when using contpte mappings; it will cause the contpte
>>>> mapping to be unfolded by the first clear that touches the contpte block, then
>>>> refolded by the last set to touch the block. That's expensive.
>>>> ptep_test_and_clear_young() doesn't suffer that problem.
>>>
>>> Thanks for explaining. I got it.
>>>
>>> I think that other architectures will benefit from the per-pte clear-modify-set
>>> approach. IMO, refresh_full_ptes() can be overridden by arm64.
>>>
>>> Thanks,
>>> Lance
>>>>
>>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/20240316102952.39233-1-ioworker0@gmail.com
>>>>>
>>>>> Thanks,
>>>>> Lance
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Lance
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>>> +            }
>>>>>>>>>
>>>>>>>>> This looks so smart. if it is not pageout, we have increased pte
>>>>>>>>> and addr here; so nr is 0 and we don't need to increase again in
>>>>>>>>> for (; addr < end; pte += nr, addr += nr * PAGE_SIZE)
>>>>>>>>>
>>>>>>>>> otherwise, nr won't be 0. so we will increase addr and
>>>>>>>>> pte by nr.
>>>>>>>>
>>>>>>>> Indeed. I'm hoping that Lance is able to follow a similar pattern for
>>>>>>>> madvise_free_pte_range().
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>          }
>>>>>>>>>>
>>>>>>>>>>      /*
>>>>>>>>>> --
>>>>>>>>>> 2.25.1
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Overall, LGTM,
>>>>>>>>>
>>>>>>>>> Reviewed-by: Barry Song
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
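
For reference, the outer/inner loop shape Barry describes above is roughly the
following (a simplified sketch of the control flow under discussion, not the
exact patch; the "nr = 1" line and the elided folio handling are assumptions):

	for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
		nr = 1;	/* assumed: raised to the folio's pte count for large folios */
		...
		if (!pageout) {
			/* The inner loop advances pte/addr itself and leaves nr == 0, */
			for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
				if (ptep_test_and_clear_young(vma, addr, pte))
					tlb_remove_tlb_entry(tlb, pte, addr);
			}
			/* ...so the outer increment then adds nothing extra. */
		}
		/* In the pageout case nr stays non-zero, and the outer loop
		 * steps past the whole batch in one go. */
	}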