From: Ryan Roberts <ryan.roberts@arm.com>
Date: Wed, 13 Mar 2024 11:08:24 +0000
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
To: Barry Song <21cnbao@gmail.com>
Cc: Lance Yang, Andrew Morton, David Hildenbrand, Matthew Wilcox,
 Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
 Chris Li, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <20240311150058.1122862-7-ryan.roberts@arm.com>
 <00a3ba1d-98e1-409b-ae6e-7fbcbdcd74d5@arm.com>

On 13/03/2024 10:37, Barry Song wrote:
> On Wed, Mar 13, 2024 at 10:36 PM Ryan Roberts wrote:
>>
>> On 13/03/2024 09:16, Barry Song wrote:
>>> On Wed, Mar 13, 2024 at 10:03 PM Ryan Roberts wrote:
>>>>
>>>> On 13/03/2024 07:19, Barry Song wrote:
>>>>> On Tue, Mar 12, 2024 at 4:01 AM Ryan Roberts wrote:
>>>>>>
>>>>>> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
>>>>>> folio that is fully and contiguously mapped in the pageout/cold vm
>>>>>> range. This change means that large folios will be maintained all the
>>>>>> way to swap storage. This both improves performance during swap-out, by
>>>>>> eliding the cost of splitting the folio, and sets us up nicely for
>>>>>> maintaining the large folio when it is swapped back in (to be covered in
>>>>>> a separate series).
>>>>>>
>>>>>> Folios that are not fully mapped in the target range are still split,
>>>>>> but note that behavior is changed so that if the split fails for any
>>>>>> reason (folio locked, shared, etc) we now leave it as is and move to the
>>>>>> next pte in the range and continue work on the subsequent folios.
>>>>>> Previously any failure of this sort would cause the entire operation to
>>>>>> give up and no folios mapped at higher addresses were paged out or made
>>>>>> cold. Given large folios are becoming more common, this old behavior
>>>>>> would likely have led to wasted opportunities.
>>>>>>
>>>>>> While we are at it, change the code that clears young from the ptes to
>>>>>> use ptep_test_and_clear_young(), which is more efficient than
>>>>>> get_and_clear/modify/set, especially for contpte mappings on arm64,
>>>>>> where the old approach would require unfolding/refolding and the new
>>>>>> approach can be done in place.
>>>>>>
>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>
>>>>> This looks so much better than our initial RFC.
>>>>> Thank you for your excellent work!
>>>>
>>>> Thanks - it's a team effort - I had your PoC and David's previous
>>>> batching work to use as a template.
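For illustration, here is the shape of the rework described in the commit
message, condensed from the hunks quoted below into a minimal sketch (not
the literal patch; identifiers are as in mm/madvise.c, and locking/error
handling is elided):

    for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
            nr = 1;
            ptent = ptep_get(pte);
            ...
            if (folio_test_large(folio)) {
                    /* count how many ptes in the range map this folio */
                    nr = folio_pte_batch(folio, addr, pte, ptent,
                                         max_nr, fpb_flags, NULL);
                    if (nr < folio_nr_pages(folio)) {
                            /* partially mapped: try to split; on failure,
                             * leave it in place and move to the next pte */
                    }
            }

            if (!pageout) {
                    /* clear young in place; no get/modify/set round trip */
                    for (; nr != 0; nr--, pte++, addr += PAGE_SIZE)
                            if (ptep_test_and_clear_young(vma, addr, pte))
                                    tlb_remove_tlb_entry(tlb, pte, addr);
            }
    }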
>>>>
>>>>>
>>>>>> ---
>>>>>>  mm/madvise.c | 89 ++++++++++++++++++++++++++++++----------------------
>>>>>>  1 file changed, 51 insertions(+), 38 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>>>> index 547dcd1f7a39..56c7ba7bd558 100644
>>>>>> --- a/mm/madvise.c
>>>>>> +++ b/mm/madvise.c
>>>>>> @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>         LIST_HEAD(folio_list);
>>>>>>         bool pageout_anon_only_filter;
>>>>>>         unsigned int batch_count = 0;
>>>>>> +       int nr;
>>>>>>
>>>>>>         if (fatal_signal_pending(current))
>>>>>>                 return -EINTR;
>>>>>> @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>                 return 0;
>>>>>>         flush_tlb_batched_pending(mm);
>>>>>>         arch_enter_lazy_mmu_mode();
>>>>>> -       for (; addr < end; pte++, addr += PAGE_SIZE) {
>>>>>> +       for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
>>>>>> +               nr = 1;
>>>>>>                 ptent = ptep_get(pte);
>>>>>>
>>>>>>                 if (++batch_count == SWAP_CLUSTER_MAX) {
>>>>>> @@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>                         continue;
>>>>>>
>>>>>>                 /*
>>>>>> -                * Creating a THP page is expensive so split it only if we
>>>>>> -                * are sure it's worth. Split it if we are only owner.
>>>>>> +                * If we encounter a large folio, only split it if it is not
>>>>>> +                * fully mapped within the range we are operating on. Otherwise
>>>>>> +                * leave it as is so that it can be swapped out whole. If we
>>>>>> +                * fail to split a folio, leave it in place and advance to the
>>>>>> +                * next pte in the range.
>>>>>>                  */
>>>>>>                 if (folio_test_large(folio)) {
>>>>>> -                       int err;
>>>>>> -
>>>>>> -                       if (folio_estimated_sharers(folio) > 1)
>>>>>> -                               break;
>>>>>> -                       if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>> -                               break;
>>>>>> -                       if (!folio_trylock(folio))
>>>>>> -                               break;
>>>>>> -                       folio_get(folio);
>>>>>> -                       arch_leave_lazy_mmu_mode();
>>>>>> -                       pte_unmap_unlock(start_pte, ptl);
>>>>>> -                       start_pte = NULL;
>>>>>> -                       err = split_folio(folio);
>>>>>> -                       folio_unlock(folio);
>>>>>> -                       folio_put(folio);
>>>>>> -                       if (err)
>>>>>> -                               break;
>>>>>> -                       start_pte = pte =
>>>>>> -                               pte_offset_map_lock(mm, pmd, addr, &ptl);
>>>>>> -                       if (!start_pte)
>>>>>> -                               break;
>>>>>> -                       arch_enter_lazy_mmu_mode();
>>>>>> -                       pte--;
>>>>>> -                       addr -= PAGE_SIZE;
>>>>>> -                       continue;
>>>>>> +                       const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>>>>>> +                                               FPB_IGNORE_SOFT_DIRTY;
>>>>>> +                       int max_nr = (end - addr) / PAGE_SIZE;
>>>>>> +
>>>>>> +                       nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>>>>>> +                                            fpb_flags, NULL);
>>>>>
>>>>> I wonder if we have a quick way to avoid folio_pte_batch() if users
>>>>> are doing madvise() on a portion of a large folio.
>>>>
>>>> Good idea. Something like this?:
>>>>
>>>>     if (pte_pfn(pte) == folio_pfn(folio))
>>>
>>> what about
>>>
>>>     if (pte_pfn(pte) == folio_pfn(folio) && max_nr >= nr_pages)
>>>
>>> just to account for cases where the user's end address falls within
>>> the middle of a large folio?
>>
>> yes, even better. I'll add this for the next version.
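For reference, a minimal sketch of the check agreed above, as it might
look in the next version (hypothetical placement at the top of the
folio_test_large() block; note that pte_pfn() takes the pte value,
ptent, rather than the pte pointer):

    int nr_pages = folio_nr_pages(folio);

    /*
     * Only batch-scan the ptes if the first pte maps the first page
     * of the folio and the rest of the folio could fit within the
     * remainder of the range; otherwise this cannot be a full
     * mapping, so leave nr == 1 and fall through to the split path.
     */
    if (pte_pfn(ptent) == folio_pfn(folio) && max_nr >= nr_pages)
            nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
                                 fpb_flags, NULL);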
>>>
>>> BTW, another minor issue is here:
>>>
>>>     if (++batch_count == SWAP_CLUSTER_MAX) {
>>>             batch_count = 0;
>>>             if (need_resched()) {
>>>                     arch_leave_lazy_mmu_mode();
>>>                     pte_unmap_unlock(start_pte, ptl);
>>>                     cond_resched();
>>>                     goto restart;
>>>             }
>>>     }
>>>
>>> We now increase batch_count by 1 for a whole batch of nr ptes, so we
>>> hold the PTL longer than in the small-folio case, where we used to
>>> increase it by 1 for each pte. Does it matter?
>>
>> I thought about that, but the vast majority of the work is per-folio,
>> not per-pte. So I concluded it would be best to continue to increment
>> per-folio.
>
> Okay. The original patch, commit b2f557a21bc8 ("mm/madvise: add
> cond_resched() in madvise_cold_or_pageout_pte_range()"), primarily
> addressed a real-time wake-up latency issue. MADV_PAGEOUT and MADV_COLD
> are much less critical than other scenarios where operations like
> do_anon_page or do_swap_page genuinely need the PTL to make progress.
> Therefore, an approach that releases the PTL relatively aggressively
> seems to neither harm MADV_PAGEOUT/COLD nor disadvantage others.
>
> We are slightly increasing the duration for which the PTL is held,
> because an iteration of folio_pte_batch() can take longer than the
> small-folio case, which does not require it.

If we can't scan all the PTEs in a page table without dropping the PTL
intermittently, we have bigger problems. This all works perfectly fine in
all the other PTE iterators; see zap_pte_range() for example.

> However, compared to operations like folio_isolate_lru() and
> folio_deactivate(), this increase seems negligible. Recently, we have
> actually removed ptep_test_and_clear_young() for MADV_PAGEOUT, which
> should also benefit real-time scenarios. Nonetheless, there is a small
> risk with large folios, such as 1 MiB mTHP, where we may need to loop
> 256 times in folio_pte_batch().

As I understand it, RT and THP are mutually exclusive. RT can't handle the
extra latencies THPs can cause in the allocation path, etc. So I don't
think you will see a problem here.

> I would vote for increasing batch_count by 'nr', or maybe by
> max(log2(nr), 1), rather than by 1, for two reasons:
>
> 1. We are not making MADV_PAGEOUT/COLD worse; in fact, we are improving
> them by reducing the time taken to put the same number of pages onto
> the reclaim list.
>
> 2. MADV_PAGEOUT/COLD scenarios are not urgent compared to others that
> genuinely require the PTL to make progress. Moreover, the majority of
> the time spent in PAGEOUT is actually reclaim_pages().

I understand your logic. But I'd rather optimize for fewer lock
acquisitions for the !RT+THP case, since RT+THP is not supported.
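For reference, a sketch of the accounting Barry proposes (hypothetical;
'nr' is the batch size returned by folio_pte_batch(), the damped variant
uses the kernel's ilog2() helper, and '==' becomes '>=' because adding
nr at a time can overshoot SWAP_CLUSTER_MAX):

    batch_count += nr;      /* or: batch_count += max(ilog2(nr), 1); */
    if (batch_count >= SWAP_CLUSTER_MAX) {
            batch_count = 0;
            if (need_resched()) {
                    arch_leave_lazy_mmu_mode();
                    pte_unmap_unlock(start_pte, ptl);
                    cond_resched();
                    goto restart;
            }
    }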
>
>>>
>>>>         nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>>>>                              fpb_flags, NULL);
>>>>
>>>> If we are not mapping the first page of the folio, then it can't be a
>>>> full mapping, so no need to call folio_pte_batch(). Just split it.
>>>>
>>>>>
>>>>>> +
>>>>>> +                       if (nr < folio_nr_pages(folio)) {
>>>>>> +                               int err;
>>>>>> +
>>>>>> +                               if (folio_estimated_sharers(folio) > 1)
>>>>>> +                                       continue;
>>>>>> +                               if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>> +                                       continue;
>>>>>> +                               if (!folio_trylock(folio))
>>>>>> +                                       continue;
>>>>>> +                               folio_get(folio);
>>>>>> +                               arch_leave_lazy_mmu_mode();
>>>>>> +                               pte_unmap_unlock(start_pte, ptl);
>>>>>> +                               start_pte = NULL;
>>>>>> +                               err = split_folio(folio);
>>>>>> +                               folio_unlock(folio);
>>>>>> +                               folio_put(folio);
>>>>>> +                               if (err)
>>>>>> +                                       continue;
>>>>>> +                               start_pte = pte =
>>>>>> +                                       pte_offset_map_lock(mm, pmd, addr, &ptl);
>>>>>> +                               if (!start_pte)
>>>>>> +                                       break;
>>>>>> +                               arch_enter_lazy_mmu_mode();
>>>>>> +                               nr = 0;
>>>>>> +                               continue;
>>>>>> +                       }
>>>>>>                 }
>>>>>>
>>>>>>                 /*
>>>>>>                  * Do not interfere with other mappings of this folio and
>>>>>> -                * non-LRU folio.
>>>>>> +                * non-LRU folio. If we have a large folio at this point, we
>>>>>> +                * know it is fully mapped so if its mapcount is the same as its
>>>>>> +                * number of pages, it must be exclusive.
>>>>>>                  */
>>>>>> -               if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
>>>>>> +               if (!folio_test_lru(folio) ||
>>>>>> +                   folio_mapcount(folio) != folio_nr_pages(folio))
>>>>>>                         continue;
>>>>>
>>>>> This looks so perfect and is exactly what I wanted to achieve.
>>>>>
>>>>>>
>>>>>>                 if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>>                         continue;
>>>>>>
>>>>>> -               VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>> -
>>>>>> -               if (!pageout && pte_young(ptent)) {
>>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>> -                                                       tlb->fullmm);
>>>>>> -                       ptent = pte_mkold(ptent);
>>>>>> -                       set_pte_at(mm, addr, pte, ptent);
>>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>> +               if (!pageout) {
>>>>>> +                       for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
>>>>>> +                               if (ptep_test_and_clear_young(vma, addr, pte))
>>>>>> +                                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>> +                       }
>>>>>
>>>>> This looks so smart. If it is not pageout, we have already advanced
>>>>> pte and addr here, so nr is 0 and we don't need to advance them
>>>>> again in
>>>>>
>>>>>     for (; addr < end; pte += nr, addr += nr * PAGE_SIZE)
>>>>>
>>>>> Otherwise, nr won't be 0, so we will advance addr and pte by nr.
>>>>
>>>> Indeed. I'm hoping that Lance is able to follow a similar pattern for
>>>> madvise_free_pte_range().
>>>>
>>>>>
>>>>>>                 }
>>>>>>
>>>>>>                 /*
>>>>>> --
>>>>>> 2.25.1
>>>>>>
>>>>>
>>>>> Overall, LGTM,
>>>>>
>>>>> Reviewed-by: Barry Song <21cnbao@gmail.com>
>>>>
>>>> Thanks!
>>>>
>
> Thanks
> Barry