From: Barry Song <21cnbao@gmail.com>
Date: Fri, 26 Apr 2024 17:46:02 +0800
Subject: Re: [PATCH v4] mm/rmap: do not add fully unmapped large folio to deferred split list
To: David Hildenbrand
Cc: Zi Yan, Andrew Morton, linux-mm@kvack.org, "Matthew Wilcox (Oracle)",
 Yang Shi, Ryan Roberts, Lance Yang, linux-kernel@vger.kernel.org
References: <20240425211136.486184-1-zi.yan@sent.com>

On Fri, Apr 26, 2024 at 4:19 PM David Hildenbrand wrote:
>
> On 25.04.24 23:11, Zi Yan wrote:
> > From: Zi Yan
> >
> > In __folio_remove_rmap(), a large folio is added to the deferred split
> > list if any page in the folio loses its final mapping.
> > But it is possible that the folio is fully unmapped, making the
> > addition to the deferred split list unnecessary.
> >
> > For PMD-mapped THPs, that was not really an issue, because removing
> > the last PMD mapping in the absence of PTE mappings would not have
> > added the folio to the deferred split queue.
> >
> > However, for PTE-mapped THPs, which are now more prominent due to
> > mTHP, they are always added to the deferred split queue. One side
> > effect is that the THP_DEFERRED_SPLIT_PAGE stat for a PTE-mapped
> > folio can be unintentionally increased, making it look like there are
> > many partially mapped folios -- although the whole folio is fully
> > unmapped stepwise.
> >
> > Core-mm now tries batch-unmapping consecutive PTEs of PTE-mapped THPs
> > where possible, starting from commit b06dc281aa99 ("mm/rmap: introduce
> > folio_remove_rmap_[pte|ptes|pmd]()"). When that happens, a whole
> > PTE-mapped folio is unmapped in one go and can avoid being added to
> > the deferred split list, reducing the THP_DEFERRED_SPLIT_PAGE noise.
> > But there will still be noise when we cannot batch-unmap a complete
> > PTE-mapped folio in one go -- or where this type of batching is not
> > implemented yet, e.g., migration.
> >
> > To avoid the unnecessary addition, folio->_nr_pages_mapped is checked
> > to tell whether the whole folio is unmapped. If the folio is already
> > on the deferred split list, it will be skipped, too.
> >
> > Note: commit 98046944a159 ("mm: huge_memory: add the missing
> > folio_test_pmd_mappable() for THP split statistics") tried to exclude
> > mTHP deferred split stats from THP_DEFERRED_SPLIT_PAGE, but it does
> > not fix the above issue. A fully unmapped PTE-mapped order-9 THP was
> > still added to the deferred split list and counted as
> > THP_DEFERRED_SPLIT_PAGE, since nr is 512 (non-zero), level is
> > RMAP_LEVEL_PTE, and inside deferred_split_folio() the order-9 folio
> > is folio_test_pmd_mappable().
> >
> > Signed-off-by: Zi Yan
> > Reviewed-by: Yang Shi
> > ---
> >  mm/rmap.c | 8 +++++---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index a7913a454028..220ad8a83589 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1553,9 +1553,11 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> >  		 * page of the folio is unmapped and at least one page
> >  		 * is still mapped.
> >  		 */
> > -		if (folio_test_large(folio) && folio_test_anon(folio))
> > -			if (level == RMAP_LEVEL_PTE || nr < nr_pmdmapped)
> > -				deferred_split_folio(folio);
> > +		if (folio_test_large(folio) && folio_test_anon(folio) &&
> > +		    list_empty(&folio->_deferred_list) &&
> > +		    ((level == RMAP_LEVEL_PTE && atomic_read(mapped)) ||
> > +		     (level == RMAP_LEVEL_PMD && nr < nr_pmdmapped)))
> > +			deferred_split_folio(folio);
> >  	}
> >
> >  	/*
> >
> > base-commit: 66313c66dd90e8711a8b63fc047ddfc69c53636a
>
> Reviewed-by: David Hildenbrand
>
> But maybe we can really improve the code:
>
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2608c40dffade..e310b6c4221d7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1495,6 +1495,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  {
>  	atomic_t *mapped = &folio->_nr_pages_mapped;
>  	int last, nr = 0, nr_pmdmapped = 0;
> +	bool partially_mapped = false;
>  	enum node_stat_item idx;
>
>  	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
> @@ -1515,6 +1516,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  			nr++;
>  		}
>  	} while (page++, --nr_pages > 0);
> +
> +	partially_mapped = nr && atomic_read(mapped);

nice!

>  		break;
>  	case RMAP_LEVEL_PMD:
>  		atomic_dec(&folio->_large_mapcount);
> @@ -1532,6 +1535,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  			nr = 0;
>  		}
>  	}
> +	partially_mapped = nr < nr_pmdmapped;
>  	break;
>  }
>
> @@ -1553,9 +1557,9 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  	 * page of the folio is unmapped and at least one page
>  	 * is still mapped.
>  	 */
> -	if (folio_test_large(folio) && folio_test_anon(folio))
> -		if (level == RMAP_LEVEL_PTE || nr < nr_pmdmapped)
> -			deferred_split_folio(folio);
> +	if (folio_test_large(folio) && folio_test_anon(folio) &&
> +	    list_empty(&folio->_deferred_list) && partially_mapped)
> +		deferred_split_folio(folio);
>  }
>
>  /*
>
> The compiler should be smart enough to optimize it all -- most likely ;)
>
> --
> Cheers,
>
> David / dhildenb
>
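
[Editor's note] For readers following the thread outside the kernel
tree: the whole discussion boils down to "did this rmap removal leave
the folio partially mapped?". Below is a minimal userspace sketch of
that decision, assuming every page of the folio was PTE-mapped exactly
once and ignoring the ENTIRELY_MAPPED bookkeeping that the real
folio->_nr_pages_mapped counter also carries; FOLIO_NR_PAGES and
remove_rmap_ptes() are invented names for illustration, not kernel API.

/*
 * Toy model of the partially_mapped decision -- NOT kernel code.
 * nr counts pages whose final mapping was removed in this call;
 * nr_pages_mapped stands in for folio->_nr_pages_mapped.
 */
#include <stdbool.h>
#include <stdio.h>

#define FOLIO_NR_PAGES	512	/* order-9 PTE-mapped THP */

static int nr_pages_mapped = FOLIO_NR_PAGES;

/* Remove nr_pages PTE mappings; return true if the folio would be
 * added to the deferred split list (the RMAP_LEVEL_PTE case above). */
static bool remove_rmap_ptes(int nr_pages)
{
	int nr = 0;

	for (int i = 0; i < nr_pages; i++) {
		/* each removal here drops a page's final mapping */
		nr_pages_mapped--;
		nr++;
	}
	/* the patch's check: nr && atomic_read(mapped) */
	return nr && nr_pages_mapped;
}

int main(void)
{
	/* batch-unmap the whole folio in one go: never looks partial */
	printf("whole folio in one go: %d\n",
	       remove_rmap_ptes(FOLIO_NR_PAGES));

	/* stepwise unmap: the intermediate step still looks partial,
	 * which is the remaining THP_DEFERRED_SPLIT_PAGE noise */
	nr_pages_mapped = FOLIO_NR_PAGES;
	printf("first 256 pages:       %d\n", remove_rmap_ptes(256));
	printf("last 256 pages:        %d\n", remove_rmap_ptes(256));
	return 0;
}

This prints 0, 1, 0: a full batch-unmap never flags the folio, while a
stepwise teardown still flags it halfway through, matching the
migration example in the commit message.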