From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 2 Apr 2024 14:10:09 +0100
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li, Lance Yang,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
Message-ID: <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>
References: <20240327144537.4165578-1-ryan.roberts@arm.com>
 <20240327144537.4165578-6-ryan.roberts@arm.com>
On 28/03/2024 08:18, Barry Song wrote:
> On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts wrote:
>>
>> Now that swap supports storing all mTHP sizes, avoid splitting large
>> folios before swap-out. This benefits performance of the swap-out path
>> by eliding split_folio_to_list(), which is expensive, and also sets us
>> up for swapping in large folios in a future series.
>>
>> If the folio is partially mapped, we continue to split it, since we
>> want to avoid the extra IO overhead and storage of writing out pages
>> unnecessarily.
>>
>> Reviewed-by: David Hildenbrand
>> Reviewed-by: Barry Song
>> Signed-off-by: Ryan Roberts
>> ---
>>  mm/vmscan.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 00adaf1cb2c3..293120fe54f3 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>  			if (!can_split_folio(folio, NULL))
>>  				goto activate_locked;
>>  			/*
>> -			 * Split folios without a PMD map right
>> -			 * away. Chances are some or all of the
>> -			 * tail pages can be freed without IO.
>> +			 * Split partially mapped folios right
>> +			 * away. We can free the unmapped pages
>> +			 * without IO.
>>  			 */
>> -			if (!folio_entire_mapcount(folio) &&
>> +			if (data_race(!list_empty(
>> +			    &folio->_deferred_list)) &&
>>  			    split_folio_to_list(folio,
>>  						folio_list))
>>  				goto activate_locked;
>
> Hi Ryan,
>
> Sorry for bringing up another minor issue at this late stage.

No problem - I'd rather take a bit longer and get it right, rather than rush
it and get it wrong!
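As an aside, to illustrate why a non-empty _deferred_list works as a proxy
for "partially mapped" in the hunk above - this is a sketch only, the helper
name is made up and doesn't exist in the tree:

static inline bool folio_maybe_partially_mapped(struct folio *folio)
{
	/*
	 * When some mappings of a large folio are removed, the rmap
	 * code queues the folio on the per-node deferred split list
	 * via deferred_split_folio(), so list membership implies some
	 * subpages are unmapped and could be freed without IO once
	 * the folio is split.
	 *
	 * data_race(): the list linkage can change under us; a stale
	 * answer only affects the split heuristic, not correctness.
	 */
	return data_race(!list_empty(&folio->_deferred_list));
}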
>
> During the debugging of the thp counter patch v2, I noticed a discrepancy
> between THP_SWPOUT_FALLBACK and THP_SWPOUT.
>
> Should we make adjustments to the counter?

Yes, agreed; we want to be consistent here with all the other existing THP
counters: they only refer to PMD-sized THP. I'll make the change for the next
version. I guess we will eventually want equivalent counters for per-size
mTHP, using the framework you are adding.

>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 293120fe54f3..d7856603f689 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1241,8 +1241,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  						folio_list))
>  				goto activate_locked;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -			count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> -			count_vm_event(THP_SWPOUT_FALLBACK);
> +			if (folio_test_pmd_mappable(folio)) {
> +				count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> +				count_vm_event(THP_SWPOUT_FALLBACK);
> +			}
>  #endif
>  			if (!add_to_swap(folio))
>  				goto activate_locked_split;
>
> Because THP_SWPOUT is only counted for PMD-mappable folios:
>
> static inline void count_swpout_vm_event(struct folio *folio)
> {
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 	if (unlikely(folio_test_pmd_mappable(folio))) {
> 		count_memcg_folio_events(folio, THP_SWPOUT, 1);
> 		count_vm_event(THP_SWPOUT);
> 	}
> #endif
> 	count_vm_events(PSWPOUT, folio_nr_pages(folio));
> }
>
> I can provide per-order counters for this in my THP counter patch.
>
>> --
>> 2.25.1
>>
>
> Thanks
> Barry
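Following Barry's closing point, a rough sketch of how a per-order fallback
count could slot into the same hunk - count_mthp_stat() and
MTHP_STAT_SWPOUT_FALLBACK are assumed names for the in-flight mTHP counter
framework and may differ in its final form:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
			/* Keep the legacy counter PMD-only, matching THP_SWPOUT. */
			if (folio_test_pmd_mappable(folio)) {
				count_memcg_folio_events(folio,
						THP_SWPOUT_FALLBACK, 1);
				count_vm_event(THP_SWPOUT_FALLBACK);
			}
			/*
			 * Assumed per-order helper: record the fallback
			 * against the folio's order so non-PMD-sized (mTHP)
			 * fallbacks remain visible too.
			 */
			count_mthp_stat(folio_order(folio),
					MTHP_STAT_SWPOUT_FALLBACK);
#endif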