From mboxrd@z Thu Jan  1 00:00:00 1970
From: Lance Yang <ioworker0@gmail.com>
Date: Tue, 2 Apr 2024 21:22:15 +0800
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Barry Song <21cnbao@gmail.com>, Andrew Morton, David Hildenbrand,
 Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko,
 Kefeng Wang, Chris Li, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Barry Song
References: <20240327144537.4165578-1-ryan.roberts@arm.com>
 <20240327144537.4165578-6-ryan.roberts@arm.com>
 <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>
In-Reply-To: <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>
Content-Type: text/plain; charset="UTF-8"

Baolin Wang's patch[1] has avoided confusion with PMD-mapped THP related statistics.
So, these three counters (THP_SPLIT_PAGE, THP_SPLIT_PAGE_FAILED, and
THP_DEFERRED_SPLIT_PAGE) no longer include mTHP.

[1] https://lore.kernel.org/linux-mm/a5341defeef27c9ac7b85c97f030f93e4368bbc1.1711694852.git.baolin.wang@linux.alibaba.com/

On Tue, Apr 2, 2024 at 9:10 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 28/03/2024 08:18, Barry Song wrote:
> > On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts wrote:
> >>
> >> Now that swap supports storing all mTHP sizes, avoid splitting large
> >> folios before swap-out. This benefits performance of the swap-out path
> >> by eliding split_folio_to_list(), which is expensive, and also sets us
> >> up for swapping in large folios in a future series.
> >>
> >> If the folio is partially mapped, we continue to split it since we want
> >> to avoid the extra IO overhead and storage of writing out pages
> >> unnecessarily.
> >>
> >> Reviewed-by: David Hildenbrand
> >> Reviewed-by: Barry Song
> >> Signed-off-by: Ryan Roberts
> >> ---
> >>  mm/vmscan.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 00adaf1cb2c3..293120fe54f3 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >>                                         if (!can_split_folio(folio, NULL))
> >>                                                 goto activate_locked;
> >>                                         /*
> >> -                                        * Split folios without a PMD map right
> >> -                                        * away. Chances are some or all of the
> >> -                                        * tail pages can be freed without IO.
> >> +                                        * Split partially mapped folios right
> >> +                                        * away. We can free the unmapped pages
> >> +                                        * without IO.
> >>                                          */
> >> -                                       if (!folio_entire_mapcount(folio) &&
> >> +                                       if (data_race(!list_empty(
> >> +                                               &folio->_deferred_list)) &&
> >>                                             split_folio_to_list(folio,
> >>                                                                 folio_list))
> >>                                                 goto activate_locked;
> >
> > Hi Ryan,
> >
> > Sorry for bringing up another minor issue at this late stage.
>
> No problem - I'd rather take a bit longer and get it right, rather than rush it
> and get it wrong!
>
> >
> > During the debugging of thp counter patch v2, I noticed the discrepancy between
> > THP_SWPOUT_FALLBACK and THP_SWPOUT.
> >
> > Should we make adjustments to the counter?
>
> Yes, agreed; we want to be consistent here with all the other existing THP
> counters; they only refer to PMD-sized THP. I'll make the change for the next
> version.
>
> I guess we will eventually want equivalent counters for per-size mTHP using the
> framework you are adding.
>
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 293120fe54f3..d7856603f689 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1241,8 +1241,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >                                                                 folio_list))
> >                                                 goto activate_locked;
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > -                                       count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> > -                                       count_vm_event(THP_SWPOUT_FALLBACK);
> > +                                       if (folio_test_pmd_mappable(folio)) {
> > +                                               count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> > +                                               count_vm_event(THP_SWPOUT_FALLBACK);
> > +                                       }
> > #endif
> >                                         if (!add_to_swap(folio))
> >                                                 goto activate_locked_split;
> >
> >
> > Because THP_SWPOUT is only for pmd:
> >
> > static inline void count_swpout_vm_event(struct folio *folio)
> > {
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >         if (unlikely(folio_test_pmd_mappable(folio))) {
> >                 count_memcg_folio_events(folio, THP_SWPOUT, 1);
> >                 count_vm_event(THP_SWPOUT);
> >         }
> > #endif
> >         count_vm_events(PSWPOUT, folio_nr_pages(folio));
> > }
> >
> > I can provide per-order counters for this in my THP counter patch.
> >
> >> --
> >> 2.25.1
> >>
> >
> > Thanks
> > Barry
>