From: Barry Song
Date: Mon, 19 Aug 2024 20:29:36 +1200
Subject: Re: [PATCH v4 4/6] mm: Introduce a pageflag for partially mapped folios
To: Usama Arif
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
 yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com, rppt@kernel.org,
 willy@infradead.org, cerasuolodomenico@gmail.com, ryncsn@gmail.com,
 corbet@lwn.net, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 kernel-team@meta.com
In-Reply-To: <20240819023145.2415299-5-usamaarif642@gmail.com>
References: <20240819023145.2415299-1-usamaarif642@gmail.com>
 <20240819023145.2415299-5-usamaarif642@gmail.com>

Hi Usama,

I feel it is much better now, thanks!

On Mon, Aug 19, 2024 at 2:31 PM Usama Arif wrote:
>
> Currently folio->_deferred_list is used to keep track of
> partially_mapped folios that are going to be split under memory
> pressure. In the next patch, all THPs that are faulted in and collapsed
> by khugepaged are also going to be tracked using _deferred_list.
>
> This patch introduces a pageflag to be able to distinguish between
> partially mapped folios and others in the deferred_list at split time in
> deferred_split_scan. It's needed because __folio_remove_rmap decrements
> _mapcount, _large_mapcount and _entire_mapcount, so it otherwise wouldn't
> be possible to distinguish between partially mapped folios and others in
> deferred_split_scan.
>
> Even though it introduces an extra flag to track whether the folio is
> partially mapped, there is no functional change intended with this
> patch. The flag is not useful in this patch itself; it will become
> useful in the next patch, when _deferred_list also holds folios that
> are not partially mapped.
>
> Signed-off-by: Usama Arif
> ---
>  include/linux/huge_mm.h    |  4 ++--
>  include/linux/page-flags.h | 11 +++++++++++
>  mm/huge_memory.c           | 23 ++++++++++++++++-------
>  mm/internal.h              |  4 +++-
>  mm/memcontrol.c            |  3 ++-
>  mm/migrate.c               |  3 ++-
>  mm/page_alloc.c            |  5 +++--
>  mm/rmap.c                  |  5 +++--
>  mm/vmscan.c                |  3 ++-
>  9 files changed, 44 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 4c32058cacfe..969f11f360d2 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
>  {
>         return split_huge_page_to_list_to_order(page, NULL, 0);
>  }
> -void deferred_split_folio(struct folio *folio);
> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
>
>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>                 unsigned long address, bool freeze, struct folio *folio);
> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
>  {
>         return 0;
>  }
> -static inline void deferred_split_folio(struct folio *folio) {}
> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
>  #define split_huge_pmd(__vma, __pmd, __address) \
>         do { } while (0)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index a0a29bd092f8..c3bb0e0da581 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -182,6 +182,7 @@ enum pageflags {
>         /* At least one page in this folio has the hwpoison flag set */
>         PG_has_hwpoisoned = PG_active,
>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
> +       PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */
>  };
>
>  #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
> @@ -861,8 +862,18 @@ static inline void ClearPageCompound(struct page *page)
>         ClearPageHead(page);
>  }
>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> +FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> +/*
> + * PG_partially_mapped is protected by deferred_split split_queue_lock,
> + * so its safe to use non-atomic set/clear.
> + */
> +__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> +__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>  #else
>  FOLIO_FLAG_FALSE(large_rmappable)
> +FOLIO_TEST_FLAG_FALSE(partially_mapped)
> +__FOLIO_SET_FLAG_NOOP(partially_mapped)
> +__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
>  #endif
>
>  #define PG_head_mask ((1UL << PG_head))
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2d77b5d2291e..70ee49dfeaad 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3398,6 +3398,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>                          * page_deferred_list.
>                          */
>                         list_del_init(&folio->_deferred_list);
> +                       __folio_clear_partially_mapped(folio);
>                 }
>                 spin_unlock(&ds_queue->split_queue_lock);
>                 if (mapping) {
> @@ -3454,11 +3455,13 @@ void __folio_undo_large_rmappable(struct folio *folio)
>         if (!list_empty(&folio->_deferred_list)) {
>                 ds_queue->split_queue_len--;
>                 list_del_init(&folio->_deferred_list);
> +               __folio_clear_partially_mapped(folio);

Is it possible to make things clearer by writing

        if (folio_test_partially_mapped(folio))
                __folio_clear_partially_mapped(folio);

instead? While clearing unconditionally isn't necessarily wrong, the
condition improves the readability of the code and makes the accounting
in my mTHP counters series clearer. It also avoids an unnecessary write,
which helps reduce SMP cache-line traffic. (A small sketch of this
pattern follows further below.)

>         }
>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>  }
>
> -void deferred_split_folio(struct folio *folio)
> +/* partially_mapped=false won't clear PG_partially_mapped folio flag */
> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>  {
>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>  #ifdef CONFIG_MEMCG
> @@ -3486,14 +3489,19 @@ void deferred_split_folio(struct folio *folio)
>         if (folio_test_swapcache(folio))
>                 return;
>
> -       if (!list_empty(&folio->_deferred_list))
> -               return;
>
>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> +       if (partially_mapped) {
> +               if (!folio_test_partially_mapped(folio)) {
> +                       __folio_set_partially_mapped(folio);
> +                       if (folio_test_pmd_mappable(folio))
> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> +               }
> +       } else {
> +               /* partially mapped folios cannot become non-partially mapped */
> +               VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
> +       }
>         if (list_empty(&folio->_deferred_list)) {
> -               if (folio_test_pmd_mappable(folio))
> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>                 ds_queue->split_queue_len++;
>  #ifdef CONFIG_MEMCG
> @@ -3542,6 +3550,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>                 } else {
>                         /* We lost race with folio_put() */
>                         list_del_init(&folio->_deferred_list);
> +                       __folio_clear_partially_mapped(folio);

Same comment as above: should this clear also be conditional? And do we
also need the if (test) before the clear in
split_huge_page_to_list_to_order()?

>                         ds_queue->split_queue_len--;
>                 }
>                 if (!--sc->nr_to_scan)
> diff --git a/mm/internal.h b/mm/internal.h
> index 52f7fc4e8ac3..27cbb5365841 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -662,8 +662,10 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
>         atomic_set(&folio->_entire_mapcount, -1);
>         atomic_set(&folio->_nr_pages_mapped, 0);
>         atomic_set(&folio->_pincount, 0);
> -       if (order > 1)
> +       if (order > 1) {
>                 INIT_LIST_HEAD(&folio->_deferred_list);
> +               __folio_clear_partially_mapped(folio);

If partially_mapped is true for a new folio, does it mean we already
have a bug somewhere? How is it possible for a new folio to be
partially mapped?
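Returning to the conditional-clear suggestion earlier in this mail, here is a
minimal sketch of what __folio_undo_large_rmappable() could look like with the
check added. This is only an illustration reconstructed from the hunk quoted
above, not the actual patch; the ds_queue lookup and the lock/unlock lines are
filled in from context and may not match the surrounding code exactly.

	void __folio_undo_large_rmappable(struct folio *folio)
	{
		struct deferred_split *ds_queue = get_deferred_split_queue(folio);
		unsigned long flags;

		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
		if (!list_empty(&folio->_deferred_list)) {
			ds_queue->split_queue_len--;
			list_del_init(&folio->_deferred_list);
			/* only dirty the flag word when the flag is actually set */
			if (folio_test_partially_mapped(folio))
				__folio_clear_partially_mapped(folio);
		}
		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	}

The point of the inner test is simply to skip the store (and the resulting
cache-line dirtying) when the flag is already clear, which is the common case.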
> +       }
>  }
>
>  static inline void prep_compound_tail(struct page *head, int tail_idx)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e1ffd2950393..0fd95daecf9a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4669,7 +4669,8 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
>         VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>         VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
>                         !folio_test_hugetlb(folio) &&
> -                       !list_empty(&folio->_deferred_list), folio);
> +                       !list_empty(&folio->_deferred_list) &&
> +                       folio_test_partially_mapped(folio), folio);
>
>         /*
>          * Nobody should be changing or seriously looking at
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2d2e65d69427..ef4a732f22b1 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1735,7 +1735,8 @@ static int migrate_pages_batch(struct list_head *from,
>                          * use _deferred_list.
>                          */
>                         if (nr_pages > 2 &&
> -                           !list_empty(&folio->_deferred_list)) {
> +                           !list_empty(&folio->_deferred_list) &&
> +                           folio_test_partially_mapped(folio)) {
>                                 if (!try_split_folio(folio, split_folios, mode)) {
>                                         nr_failed++;
>                                         stats->nr_thp_failed += is_thp;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 408ef3d25cf5..a145c550dd2a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -957,8 +957,9 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
>                 break;
>         case 2:
>                 /* the second tail page: deferred_list overlaps ->mapping */
> -               if (unlikely(!list_empty(&folio->_deferred_list))) {
> -                       bad_page(page, "on deferred list");
> +               if (unlikely(!list_empty(&folio->_deferred_list) &&
> +                            folio_test_partially_mapped(folio))) {
> +                       bad_page(page, "partially mapped folio on deferred list");
>                         goto out;
>                 }
>                 break;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a6b9cd0b2b18..4c330635aa4e 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1578,8 +1578,9 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>          * Check partially_mapped first to ensure it is a large folio.
>          */
>         if (partially_mapped && folio_test_anon(folio) &&
> -           list_empty(&folio->_deferred_list))
> -               deferred_split_folio(folio);
> +           !folio_test_partially_mapped(folio))
> +               deferred_split_folio(folio, true);
> +
>         __folio_mod_stat(folio, -nr, -nr_pmdmapped);
>
>         /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 25e43bb3b574..25f4e8403f41 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1233,7 +1233,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>                                          * Split partially mapped folios right away.
>                                          * We can free the unmapped pages without IO.
>                                          */
> -                                       if (data_race(!list_empty(&folio->_deferred_list)) &&
> +                                       if (data_race(!list_empty(&folio->_deferred_list) &&
> +                                           folio_test_partially_mapped(folio)) &&
>                                             split_folio_to_list(folio, folio_list))
>                                                 goto activate_locked;
>                                 }
> --
> 2.43.5
>

Thanks
Barry