From: Barry Song <21cnbao@gmail.com>
Date: Wed, 14 Aug 2024 23:23:40 +1200
Subject: Re: [PATCH v3 4/6] mm: Introduce a pageflag for partially mapped folios
To: Usama Arif <usamaarif642@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
    yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com,
    rppt@kernel.org, willy@infradead.org, cerasuolodomenico@gmail.com,
    corbet@lwn.net, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    kernel-team@meta.com
In-Reply-To: <59725862-f4fc-456c-bafb-cbd302777881@gmail.com>

On Wed, Aug 14, 2024 at 11:20 PM Usama Arif <usamaarif642@gmail.com> wrote:
>
>
> On 14/08/2024 12:10, Barry Song wrote:
> > On Wed, Aug 14, 2024 at 12:03 AM Usama Arif <usamaarif642@gmail.com> wrote:
> >>
> >> Currently folio->_deferred_list is used to keep track of
> >> partially_mapped folios that are going to be split under memory
> >> pressure. In the next patch, all THPs that are faulted in and collapsed
> >> by khugepaged are also going to be tracked using _deferred_list.
> >>
> >> This patch introduces a pageflag to be able to distinguish between
> >> partially mapped folios and others in the deferred_list at split time
> >> in deferred_split_scan. It's needed because __folio_remove_rmap
> >> decrements _mapcount, _large_mapcount and _entire_mapcount, hence it
> >> won't be possible to distinguish between partially mapped folios and
> >> others in deferred_split_scan.
> >>
> >> Even though it introduces an extra flag to track whether the folio is
> >> partially mapped, there is no functional change intended with this
> >> patch and the flag is not useful in this patch itself; it will become
> >> useful in the next patch when _deferred_list has non-partially-mapped
> >> folios.
> >>
> >> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> >> ---
> >>  include/linux/huge_mm.h    |  4 ++--
> >>  include/linux/page-flags.h |  3 +++
> >>  mm/huge_memory.c           | 21 +++++++++++++--------
> >>  mm/hugetlb.c               |  1 +
> >>  mm/internal.h              |  4 +++-
> >>  mm/memcontrol.c            |  3 ++-
> >>  mm/migrate.c               |  3 ++-
> >>  mm/page_alloc.c            |  5 +++--
> >>  mm/rmap.c                  |  3 ++-
> >>  mm/vmscan.c                |  3 ++-
> >>  10 files changed, 33 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >> index 4c32058cacfe..969f11f360d2 100644
> >> --- a/include/linux/huge_mm.h
> >> +++ b/include/linux/huge_mm.h
> >> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
> >>  {
> >>         return split_huge_page_to_list_to_order(page, NULL, 0);
> >>  }
> >> -void deferred_split_folio(struct folio *folio);
> >> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
> >>
> >>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> >>                        unsigned long address, bool freeze, struct folio *folio);
> >> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
> >>  {
> >>         return 0;
> >>  }
> >> -static inline void deferred_split_folio(struct folio *folio) {}
> >> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
> >>  #define split_huge_pmd(__vma, __pmd, __address)        \
> >>         do { } while (0)
> >>
> >> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >> index a0a29bd092f8..cecc1bad7910 100644
> >> --- a/include/linux/page-flags.h
> >> +++ b/include/linux/page-flags.h
> >> @@ -182,6 +182,7 @@ enum pageflags {
> >>         /* At least one page in this folio has the hwpoison flag set */
> >>         PG_has_hwpoisoned = PG_active,
> >>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
> >> +       PG_partially_mapped, /* was identified to be partially mapped */
> >>  };
> >>
> >>  #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
> >> @@ -861,8 +862,10 @@ static inline void ClearPageCompound(struct page *page)
> >>         ClearPageHead(page);
> >>  }
> >>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> >> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >>  #else
> >>  FOLIO_FLAG_FALSE(large_rmappable)
> >> +FOLIO_FLAG_FALSE(partially_mapped)
> >>  #endif
> >>
> >>  #define PG_head_mask  ((1UL << PG_head))
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 6df0e9f4f56c..c024ab0f745c 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3397,6 +3397,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> >>                  * page_deferred_list.
> >>                  */
> >>                 list_del_init(&folio->_deferred_list);
> >> +               folio_clear_partially_mapped(folio);
> >>         }
> >>         spin_unlock(&ds_queue->split_queue_lock);
> >>         if (mapping) {
> >> @@ -3453,11 +3454,12 @@ void __folio_undo_large_rmappable(struct folio *folio)
> >>         if (!list_empty(&folio->_deferred_list)) {
> >>                 ds_queue->split_queue_len--;
> >>                 list_del_init(&folio->_deferred_list);
> >> +               folio_clear_partially_mapped(folio);
> >>         }
> >>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> >>  }
> >>
> >> -void deferred_split_folio(struct folio *folio)
> >> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
> >>  {
> >>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> >>  #ifdef CONFIG_MEMCG
> >> @@ -3485,14 +3487,17 @@ void deferred_split_folio(struct folio *folio)
> >>         if (folio_test_swapcache(folio))
> >>                 return;
> >>
> >> -       if (!list_empty(&folio->_deferred_list))
> >> -               return;
> >> -
> >>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> >> +       if (partially_mapped)
> >> +               folio_set_partially_mapped(folio);
> >> +       else
> >> +               folio_clear_partially_mapped(folio);
> >>         if (list_empty(&folio->_deferred_list)) {
> >> -               if (folio_test_pmd_mappable(folio))
> >> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> >> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> >> +               if (partially_mapped) {
> >> +                       if (folio_test_pmd_mappable(folio))
> >> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> >> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> >
> > This code completely broke MTHP_STAT_SPLIT_DEFERRED for PMD_ORDER. It
> > added the folio to the deferred_list as entirely mapped
> > (partially_mapped == false). However, when partially_mapped becomes
> > true, there is no opportunity to add it again, as it is already on the
> > list. Are you consistently seeing the counter for PMD_ORDER as 0?
> >
>
> Ah I see it, this should fix it?
>
> -void deferred_split_folio(struct folio *folio)
> +/* partially_mapped=false won't clear PG_partially_mapped folio flag */
> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>  {
>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>  #ifdef CONFIG_MEMCG
> @@ -3485,14 +3488,14 @@ void deferred_split_folio(struct folio *folio)
>         if (folio_test_swapcache(folio))
>                 return;
>
> -       if (!list_empty(&folio->_deferred_list))
> -               return;
> -
>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> -       if (list_empty(&folio->_deferred_list)) {
> +       if (partially_mapped) {
> +               folio_set_partially_mapped(folio);
>                 if (folio_test_pmd_mappable(folio))
>                         count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>                 count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> +       }
> +       if (list_empty(&folio->_deferred_list)) {
>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>                 ds_queue->split_queue_len++;
>  #ifdef CONFIG_MEMCG
>

Not enough: deferred_split_folio(folio, true) won't be called at all once
the folio is already on the deferred list, because of the guard in
__folio_remove_rmap():

        if (partially_mapped && folio_test_anon(folio) &&
            list_empty(&folio->_deferred_list))
                deferred_split_folio(folio, true);

so you will still see 0.
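To make the failure sequence concrete, here is a reduced userspace sketch
(a toy model of my own, not the kernel code; the toy_* names, booleans and
counter below merely stand in for the real folio, deferred list and mTHP
stats machinery):

/* toy_deferred_split.c - models only the call sequence, no real folios */
#include <stdbool.h>
#include <stdio.h>

static bool on_deferred_list;      /* stands in for !list_empty(&folio->_deferred_list) */
static bool partially_mapped_flag; /* stands in for PG_partially_mapped */
static int  split_deferred_stat;   /* stands in for MTHP_STAT_SPLIT_DEFERRED */

/* toy version of the proposed deferred_split_folio() */
static void toy_deferred_split_folio(bool partially_mapped)
{
        if (partially_mapped) {
                partially_mapped_flag = true;
                split_deferred_stat++;   /* counted only on this path */
        }
        if (!on_deferred_list)
                on_deferred_list = true; /* list_add_tail(...) */
}

/* toy version of the guard in __folio_remove_rmap() */
static void toy_folio_remove_rmap(bool partially_mapped)
{
        if (partially_mapped && !on_deferred_list)
                toy_deferred_split_folio(true);
}

int main(void)
{
        /* next patch: THP faulted in, queued as entirely mapped */
        toy_deferred_split_folio(false);
        /* later partial unmap: guard sees a non-empty list and bails */
        toy_folio_remove_rmap(true);
        printf("split_deferred_stat = %d, PG_partially_mapped = %d\n",
               split_deferred_stat, partially_mapped_flag); /* prints 0, 0 */
        return 0;
}

The only path that bumps the counter sits behind partially_mapped == true
inside deferred_split_folio(), yet that call can never happen once the
folio is already on the list, so the PMD-order counter never moves and the
flag is never set either.

Thanks
Barry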