From: Barry Song <21cnbao@gmail.com>
Date: Fri, 5 Apr 2024 17:06:03 +1300
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
 Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li,
 Lance Yang, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
In-Reply-To: <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>

On Wed, Apr 3, 2024 at 2:10 AM Ryan Roberts wrote:
>
> On 28/03/2024 08:18, Barry Song wrote:
> > On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts wrote:
> >>
> >> Now that swap supports storing all mTHP sizes, avoid splitting large
> >> folios before swap-out. This benefits performance of the swap-out path
> >> by eliding split_folio_to_list(), which is expensive, and also sets us
> >> up for swapping in large folios in a future series.
> >>
> >> If the folio is partially mapped, we continue to split it, since we
> >> want to avoid the extra IO overhead and storage of writing out pages
> >> unnecessarily.
> >>
> >> Reviewed-by: David Hildenbrand
> >> Reviewed-by: Barry Song
> >> Signed-off-by: Ryan Roberts
> >> ---
> >>  mm/vmscan.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 00adaf1cb2c3..293120fe54f3 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >>                                         if (!can_split_folio(folio, NULL))
> >>                                                 goto activate_locked;
> >>                                         /*
> >> -                                        * Split folios without a PMD map right
> >> -                                        * away. Chances are some or all of the
> >> -                                        * tail pages can be freed without IO.
> >> +                                        * Split partially mapped folios right
> >> +                                        * away. We can free the unmapped pages
> >> +                                        * without IO.
> >>                                          */
> >> -                                       if (!folio_entire_mapcount(folio) &&
> >> +                                       if (data_race(!list_empty(
> >> +                                               &folio->_deferred_list)) &&
> >>                                             split_folio_to_list(folio,
> >>                                                                 folio_list))
> >>                                                 goto activate_locked;
> >
> > Hi Ryan,
> >
> > Sorry for bringing up another minor issue at this late stage.
>
> No problem - I'd rather take a bit longer and get it right, rather than
> rush it and get it wrong!
>
> >
> > While debugging the thp counter patch v2, I noticed a discrepancy
> > between THP_SWPOUT_FALLBACK and THP_SWPOUT.
> >
> > Should we make adjustments to the counter?
>
> Yes, agreed; we want to be consistent here with all the other existing
> THP counters; they only refer to PMD-sized THP. I'll make the change for
> the next version.
>
> I guess we will eventually want equivalent counters for per-size mTHP
> using the framework you are adding.

Hi Ryan,

Today, I created counters for per-order SWPOUT and SWPOUT_FALLBACK. I'd
appreciate any suggestions you might have before I submit this as patch
2/2 of my mTHP counters series.
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cc13fa14aa32..762a6d8759b9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -267,6 +267,8 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 enum thp_stat_item {
 	THP_STAT_ANON_ALLOC,
 	THP_STAT_ANON_ALLOC_FALLBACK,
+	THP_STAT_ANON_SWPOUT,
+	THP_STAT_ANON_SWPOUT_FALLBACK,
 	__THP_STAT_COUNT
 };

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e704b4408181..7f2b5d2852cc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -554,10 +554,14 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)

 THP_STATE_ATTR(anon_alloc, THP_STAT_ANON_ALLOC);
 THP_STATE_ATTR(anon_alloc_fallback, THP_STAT_ANON_ALLOC_FALLBACK);
+THP_STATE_ATTR(anon_swpout, THP_STAT_ANON_SWPOUT);
+THP_STATE_ATTR(anon_swpout_fallback, THP_STAT_ANON_SWPOUT_FALLBACK);

 static struct attribute *stats_attrs[] = {
 	&anon_alloc_attr.attr,
 	&anon_alloc_fallback_attr.attr,
+	&anon_swpout_attr.attr,
+	&anon_swpout_fallback_attr.attr,
 	NULL,
 };

diff --git a/mm/page_io.c b/mm/page_io.c
index a9a7c236aecc..be4f822b39f8 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -212,13 +212,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)

 static inline void count_swpout_vm_event(struct folio *folio)
 {
+	long nr_pages = folio_nr_pages(folio);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (unlikely(folio_test_pmd_mappable(folio))) {
 		count_memcg_folio_events(folio, THP_SWPOUT, 1);
 		count_vm_event(THP_SWPOUT);
 	}
+	if (nr_pages > 0 && nr_pages <= HPAGE_PMD_NR)
+		count_thp_state(folio_order(folio), THP_STAT_ANON_SWPOUT);
 #endif
-	count_vm_events(PSWPOUT, folio_nr_pages(folio));
+	count_vm_events(PSWPOUT, nr_pages);
 }

 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ffc4553c8615..b7c5fbd830b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1247,6 +1247,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 							count_vm_event(
 								THP_SWPOUT_FALLBACK);
 						}
+						if (nr_pages > 0 && nr_pages <= HPAGE_PMD_NR)
+							count_thp_state(folio_order(folio),
+								THP_STAT_ANON_SWPOUT_FALLBACK);
+
 #endif
 						if (!add_to_swap(folio))
 							goto activate_locked_split;

Thanks,
Barry
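---

For context, a minimal userspace sketch of reading the proposed counters
back, assuming they surface under the per-size transparent_hugepage sysfs
directories. The hugepages-64kB paths below are hypothetical, inferred
only from the anon_swpout / anon_swpout_fallback attribute names; the
draft does not show where stats_attrs is actually registered.

/*
 * Hypothetical sketch, not part of the patch: read the proposed
 * per-size swap-out counters back from sysfs. The paths are an
 * assumption based on the attribute names above.
 */
#include <stdio.h>

int main(void)
{
	static const char * const files[] = {
		/* assumed per-size stats directory, e.g. for 64kB mTHP */
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_swpout",
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_swpout_fallback",
	};

	for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
		unsigned long long val;
		FILE *f = fopen(files[i], "r");

		if (!f) {
			perror(files[i]);
			continue;
		}
		/* each attribute is expected to hold a single counter value */
		if (fscanf(f, "%llu", &val) == 1)
			printf("%s: %llu\n", files[i], val);
		fclose(f);
	}
	return 0;
}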