From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Barry Song, Vishal Moola
Subject: Re: [PATCH v1] mm: convert folio_estimated_sharers() to
 folio_likely_mapped_shared()
Date: Thu, 29 Feb 2024 11:45:04 +0000
Message-ID: <58c4726d-73ec-40e5-8f1d-e00c37532af9@arm.com>
In-Reply-To: <20240227201548.857831-1-david@redhat.com>
References: <20240227201548.857831-1-david@redhat.com>

On 27/02/2024 20:15, David Hildenbrand wrote:
> Callers of folio_estimated_sharers() only care about "mapped shared vs.
> mapped exclusively", not the exact estimate of sharers. Let's consolidate
> and unify the condition users are checking. While at it clarify the
> semantics and extend the discussion on the fuzziness.
> 
> Use the "likely mapped shared" terminology to better express what the
> (adjusted) function actually checks.
> 
> Whether a partially-mappable folio is more likely to not be partially
> mapped than partially mapped is debatable. In the future, we might be able
> to improve our estimate for partially-mappable folios, though.
> 
> Note that we will now consistently detect "mapped shared" only if the
> first subpage is actually mapped multiple times. When the first subpage
> is not mapped, we will consistently detect it as "mapped exclusively".
> This change should currently only affect the usage in
> madvise_free_pte_range() and queue_folios_pte_range() for large folios: if
> the first page was already unmapped, we would have skipped the folio.
> 
> Cc: Barry Song
> Cc: Vishal Moola (Oracle)
> Cc: Ryan Roberts
> Signed-off-by: David Hildenbrand

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
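
A note for other readers on fuzziness case (a) in the new kerneldoc
below: it is easy to demonstrate from userspace. A minimal sketch (my
own illustration, not part of the patch; it assumes x86-64 with 2 MiB
PMD-sized THPs, that the range really gets backed by a single large
folio, and it omits error handling):

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t sz = 2 * 1024 * 1024;	/* one PMD-sized folio */
	char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	madvise(p, sz, MADV_HUGEPAGE);	/* ask for THP backing */
	memset(p, 1, sz);		/* populate the whole range */

	if (fork() == 0) {
		munmap(p, sz / 2);	/* child keeps the 2nd half only */
		pause();
	}

	/*
	 * Parent: the folio's first subpage is now mapped by one MM,
	 * while the second half is mapped by two. Since only the first
	 * subpage's mapcount is consulted, the folio is reported as
	 * "mapped exclusively" - the false negative of case (a).
	 */
	return 0;
}
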
> ---
>  include/linux/mm.h | 46 ++++++++++++++++++++++++++++++++++++----------
>  mm/huge_memory.c   |  2 +-
>  mm/madvise.c       |  6 +++---
>  mm/memory.c        |  2 +-
>  mm/mempolicy.c     | 14 ++++++--------
>  mm/migrate.c       |  8 ++++----
>  6 files changed, 51 insertions(+), 27 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 6f4825d829656..795c89632265f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2147,21 +2147,47 @@ static inline size_t folio_size(struct folio *folio)
>  }
>  
>  /**
> - * folio_estimated_sharers - Estimate the number of sharers of a folio.
> + * folio_likely_mapped_shared - Estimate if the folio is mapped into the page
> + *				 tables of more than one MM
>   * @folio: The folio.
>   *
> - * folio_estimated_sharers() aims to serve as a function to efficiently
> - * estimate the number of processes sharing a folio. This is done by
> - * looking at the precise mapcount of the first subpage in the folio, and
> - * assuming the other subpages are the same. This may not be true for large
> - * folios. If you want exact mapcounts for exact calculations, look at
> - * page_mapcount() or folio_total_mapcount().
> + * This function checks if the folio is currently mapped into more than one
> + * MM ("mapped shared"), or if the folio is only mapped into a single MM
> + * ("mapped exclusively").
>   *
> - * Return: The estimated number of processes sharing a folio.
> + * As precise information is not easily available for all folios, this function
> + * estimates the number of MMs ("sharers") that are currently mapping a folio
> + * using the number of times the first page of the folio is currently mapped
> + * into page tables.
> + *
> + * For small anonymous folios (except KSM folios) and anonymous hugetlb folios,
> + * the return value will be exactly correct, because they can only be mapped
> + * at most once into an MM, and they cannot be partially mapped.
> + *
> + * For other folios, the result can be fuzzy:
> + * (a) For partially-mappable large folios (THP), the return value can wrongly
> + *     indicate "mapped exclusively" (false negative) when the folio is
> + *     only partially mapped into at least one MM.
> + * (b) For pagecache folios (including hugetlb), the return value can wrongly
> + *     indicate "mapped shared" (false positive) when two VMAs in the same MM
> + *     cover the same file range.
> + * (c) For (small) KSM folios, the return value can wrongly indicate "mapped
> + *     shared" (false positive), when the folio is mapped multiple times into
> + *     the same MM.
> + *
> + * Further, this function only considers current page table mappings that
> + * are tracked using the folio mapcount(s). It does not consider:
> + * (1) If the folio might get mapped in the (near) future (e.g., swapcache,
> + *     pagecache, temporary unmapping for migration).
> + * (2) If the folio is mapped differently (VM_PFNMAP).
> + * (3) If hugetlb page table sharing applies. Callers might want to check
> + *     hugetlb_pmd_shared().
> + *
> + * Return: Whether the folio is estimated to be mapped into more than one MM.
>   */
> -static inline int folio_estimated_sharers(struct folio *folio)
> +static inline bool folio_likely_mapped_shared(struct folio *folio)
>  {
> -	return page_mapcount(folio_page(folio, 0));
> +	return page_mapcount(folio_page(folio, 0)) > 1;
>  }
>  
>  #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 50d146eb248ff..4d10904fef70c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1829,7 +1829,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  	 * If other processes are mapping this folio, we couldn't discard
>  	 * the folio unless they all do MADV_FREE so let's skip the folio.
>  	 */
> -	if (folio_estimated_sharers(folio) != 1)
> +	if (folio_likely_mapped_shared(folio))
>  		goto out;
>  
>  	if (!folio_trylock(folio))
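
For anyone comparing the conversions below against the old code, the
behavioural note from the commit message boils down to this (my summary,
derived from the hunks, for the sites that previously tested != 1):

mapcount of first subpage  | old: estimated_sharers() != 1 | new: likely_mapped_shared()
---------------------------+-------------------------------+----------------------------
0 (first subpage unmapped) | true  -> treat as shared      | false -> treat as exclusive
1                          | false -> treat as exclusive   | false -> treat as exclusive
2 or more                  | true  -> treat as shared      | true  -> treat as shared

So the only divergence is the mapcount == 0 row, which, per the commit
message, is currently only reachable for large folios in
madvise_free_pte_range() and queue_folios_pte_range(). The "> 1" caller
in madvise_cold_or_pageout_pte_range() already behaved like the new
check.
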
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 44a498c94158c..32a534d200219 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -366,7 +366,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		folio = pfn_folio(pmd_pfn(orig_pmd));
>  
>  		/* Do not interfere with other mappings of this folio */
> -		if (folio_estimated_sharers(folio) != 1)
> +		if (folio_likely_mapped_shared(folio))
>  			goto huge_unlock;
>  
>  		if (pageout_anon_only_filter && !folio_test_anon(folio))
> @@ -453,7 +453,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		if (folio_test_large(folio)) {
>  			int err;
>  
> -			if (folio_estimated_sharers(folio) > 1)
> +			if (folio_likely_mapped_shared(folio))
>  				break;
>  			if (pageout_anon_only_filter && !folio_test_anon(folio))
>  				break;
> @@ -677,7 +677,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  		if (folio_test_large(folio)) {
>  			int err;
>  
> -			if (folio_estimated_sharers(folio) != 1)
> +			if (folio_likely_mapped_shared(folio))
>  				break;
>  			if (!folio_trylock(folio))
>  				break;
> diff --git a/mm/memory.c b/mm/memory.c
> index 1c45b6a42a1b9..8394a9843ca06 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5173,7 +5173,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	 * Flag if the folio is shared between multiple address spaces. This
>  	 * is later used when determining whether to group tasks together
>  	 */
> -	if (folio_estimated_sharers(folio) > 1 && (vma->vm_flags & VM_SHARED))
> +	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
>  		flags |= TNF_SHARED;
>  
>  	nid = folio_nid(folio);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index f60b4c99f1302..0b92fde395182 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -642,12 +642,11 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
>  	 * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
>  	 * Choosing not to migrate a shared folio is not counted as a failure.
>  	 *
> -	 * To check if the folio is shared, ideally we want to make sure
> -	 * every page is mapped to the same process. Doing that is very
> -	 * expensive, so check the estimated sharers of the folio instead.
> +	 * See folio_likely_mapped_shared() on possible imprecision when we
> +	 * cannot easily detect if a folio is shared.
>  	 */
>  	if ((flags & MPOL_MF_MOVE_ALL) ||
> -	    (folio_estimated_sharers(folio) == 1 && !hugetlb_pmd_shared(pte)))
> +	    (!folio_likely_mapped_shared(folio) && !hugetlb_pmd_shared(pte)))
>  		if (!isolate_hugetlb(folio, qp->pagelist))
>  			qp->nr_failed++;
>  unlock:
> @@ -1032,11 +1031,10 @@ static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
>  	 * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
>  	 * Choosing not to migrate a shared folio is not counted as a failure.
>  	 *
> -	 * To check if the folio is shared, ideally we want to make sure
> -	 * every page is mapped to the same process. Doing that is very
> -	 * expensive, so check the estimated sharers of the folio instead.
> +	 * See folio_likely_mapped_shared() on possible imprecision when we
> +	 * cannot easily detect if a folio is shared.
>  	 */
> -	if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
> +	if ((flags & MPOL_MF_MOVE_ALL) || !folio_likely_mapped_shared(folio)) {
>  		if (folio_isolate_lru(folio)) {
>  			list_add_tail(&folio->lru, foliolist);
>  			node_stat_mod_folio(folio,
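
On the hugetlb side it is worth highlighting point (3) of the new
kerneldoc: PMD sharing is not reflected in the folio mapcount(s), which
is why queue_folios_hugetlb() above must keep both tests. As a sketch of
the pattern (hypothetical helper name, not part of the patch):

/*
 * Sketch only: a hugetlb folio may be treated as exclusively mapped
 * only if both the mapcount-based estimate and the shared-page-table
 * check agree; hugetlb PMD sharing is invisible to the mapcount.
 */
static bool hugetlb_folio_maybe_exclusive(struct folio *folio, pte_t *pte)
{
	return !folio_likely_mapped_shared(folio) &&
	       !hugetlb_pmd_shared(pte);
}
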
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 73a052a382f13..35d376969f8b9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2568,11 +2568,11 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
>  	/*
>  	 * Don't migrate file folios that are mapped in multiple processes
>  	 * with execute permissions as they are probably shared libraries.
> -	 * To check if the folio is shared, ideally we want to make sure
> -	 * every page is mapped to the same process. Doing that is very
> -	 * expensive, so check the estimated mapcount of the folio instead.
> +	 *
> +	 * See folio_likely_mapped_shared() on possible imprecision when we
> +	 * cannot easily detect if a folio is shared.
>  	 */
> -	if (folio_estimated_sharers(folio) != 1 && folio_is_file_lru(folio) &&
> +	if (folio_likely_mapped_shared(folio) && folio_is_file_lru(folio) &&
>  	    (vma->vm_flags & VM_EXEC))
>  		goto out;
> 
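
One last observation on the migrate_misplaced_folio() hunk: fuzziness
case (b) means the shared-library heuristic can also fire for a single
process that maps the same executable file range twice. A userspace
sketch (illustration only; assumes 4 KiB pages, and the file path is
just an arbitrary example):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/usr/bin/true", O_RDONLY);

	/* Map the same file page twice within a single MM. */
	volatile char *a = mmap(NULL, 4096, PROT_READ | PROT_EXEC,
				MAP_PRIVATE, fd, 0);
	volatile char *b = mmap(NULL, 4096, PROT_READ | PROT_EXEC,
				MAP_PRIVATE, fd, 0);
	char x = a[0] + b[0];	/* fault both mappings in */

	/*
	 * The pagecache folio backing file offset 0 now has a
	 * first-subpage mapcount of 2, so folio_likely_mapped_shared()
	 * reports "mapped shared" although only one MM maps it - the
	 * false positive of case (b).
	 */
	(void)x;
	return 0;
}

That is no behavioural change, though: the old "!= 1" test was equally
fooled by this case.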