From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Hugh Dickins <hughd@google.com>,
Yin Fengwei <fengwei.yin@intel.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <muchun.song@linux.dev>, Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH RFC 07/39] mm/rmap: convert folio_add_file_rmap_range() into folio_add_file_rmap_[pte|ptes|pmd]()
Date: Tue, 5 Dec 2023 12:04:53 +0000
Message-ID: <3e748c18-f489-4ec4-ae71-5a5b18a4b161@arm.com>
In-Reply-To: <20231204142146.91437-8-david@redhat.com>
On 04/12/2023 14:21, David Hildenbrand wrote:
> Let's get rid of the compound parameter and instead define implicitly
> which mappings we're adding. That is more future proof, easier to read
> and harder to mess up.
>
> Use an enum to express the granularity internally. Make the compiler
> always special-case on the granularity by using __always_inline.
>
> Add plenty of sanity checks with CONFIG_DEBUG_VM. Replace the
> folio_test_pmd_mappable() check by a config check in the caller and
> sanity checks. Convert the single user of folio_add_file_rmap_range().
>
> This function design can later easily be extended to PUDs and to batch
> PMDs. Note that for now we don't support anything bigger than
> PMD-sized folios (as we cleanly separated hugetlb handling). Sanity checks
Is that definitely true? Don't we support PUD-mapping file-backed DAX memory?
> will catch if that ever changes.
>
> Next up is removing page_remove_rmap() along with its "compound"
> parameter and similarly converting all other rmap functions.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/rmap.h | 47 +++++++++++++++++++++++++++--
> mm/memory.c | 2 +-
> mm/rmap.c | 72 ++++++++++++++++++++++++++++----------------
> 3 files changed, 92 insertions(+), 29 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 77e336f86c72d..a4a30c361ac50 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -186,6 +186,45 @@ typedef int __bitwise rmap_t;
> */
> #define RMAP_COMPOUND ((__force rmap_t)BIT(1))
>
> +/*
> + * Internally, we're using an enum to specify the granularity. Usually,
> + * we make the compiler create specialized variants for the different
> + * granularity.
> + */
> +enum rmap_mode {
> + RMAP_MODE_PTE = 0,
> + RMAP_MODE_PMD,
> +};
> +
> +static inline void __folio_rmap_sanity_checks(struct folio *folio,
> + struct page *page, unsigned int nr_pages, enum rmap_mode mode)
> +{
> + /* hugetlb folios are handled separately. */
> + VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
> + VM_WARN_ON_FOLIO(folio_test_large(folio) &&
> + !folio_test_large_rmappable(folio), folio);
> +
> + VM_WARN_ON_ONCE(!nr_pages || nr_pages > folio_nr_pages(folio));
nit: I don't think you technically need the second half of this - it's covered
by the test below...
> + VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
> + VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
...this one.
> +
> + switch (mode) {
> + case RMAP_MODE_PTE:
> + break;
> + case RMAP_MODE_PMD:
> + /*
> + * We don't support folios larger than a single PMD yet. So
> + * when RMAP_MODE_PMD is set, we assume that we are creating
> + * a single "entire" mapping of the folio.
> + */
> + VM_WARN_ON_FOLIO(folio_nr_pages(folio) != HPAGE_PMD_NR, folio);
> + VM_WARN_ON_FOLIO(nr_pages != HPAGE_PMD_NR, folio);
> + break;
> + default:
> + VM_WARN_ON_ONCE(true);
> + }
> +}
> +
> /*
> * rmap interfaces called when adding or removing pte of page
> */
> @@ -198,8 +237,12 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address);
> void page_add_file_rmap(struct page *, struct vm_area_struct *,
> bool compound);
> -void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
> - struct vm_area_struct *, bool compound);
> +void folio_add_file_rmap_ptes(struct folio *, struct page *, unsigned int nr,
> + struct vm_area_struct *);
> +#define folio_add_file_rmap_pte(folio, page, vma) \
> + folio_add_file_rmap_ptes(folio, page, 1, vma)
> +void folio_add_file_rmap_pmd(struct folio *, struct page *,
> + struct vm_area_struct *);
> void page_remove_rmap(struct page *, struct vm_area_struct *,
> bool compound);
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 1f18ed4a54971..15325587cff01 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4414,7 +4414,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
> folio_add_lru_vma(folio, vma);
> } else {
> add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
> - folio_add_file_rmap_range(folio, page, nr, vma, false);
> + folio_add_file_rmap_ptes(folio, page, nr, vma);
> }
> set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a735ecca47a81..1614d98062948 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1334,31 +1334,19 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> SetPageAnonExclusive(&folio->page);
> }
>
> -/**
> - * folio_add_file_rmap_range - add pte mapping to page range of a folio
> - * @folio: The folio to add the mapping to
> - * @page: The first page to add
> - * @nr_pages: The number of pages which will be mapped
> - * @vma: the vm area in which the mapping is added
> - * @compound: charge the page as compound or small page
> - *
> - * The page range of folio is defined by [first_page, first_page + nr_pages)
> - *
> - * The caller needs to hold the pte lock.
> - */
> -void folio_add_file_rmap_range(struct folio *folio, struct page *page,
> - unsigned int nr_pages, struct vm_area_struct *vma,
> - bool compound)
> +static __always_inline void __folio_add_file_rmap(struct folio *folio,
> + struct page *page, unsigned int nr_pages,
> + struct vm_area_struct *vma, enum rmap_mode mode)
> {
> atomic_t *mapped = &folio->_nr_pages_mapped;
> unsigned int nr_pmdmapped = 0, first;
> int nr = 0;
>
> - VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
> - VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
> + VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
> + __folio_rmap_sanity_checks(folio, page, nr_pages, mode);
>
> /* Is page being mapped by PTE? Is this its first map to be added? */
> - if (likely(!compound)) {
> + if (likely(mode == RMAP_MODE_PTE)) {
> do {
> first = atomic_inc_and_test(&page->_mapcount);
> if (first && folio_test_large(folio)) {
> @@ -1369,9 +1357,7 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
> if (first)
> nr++;
> } while (page++, --nr_pages > 0);
> - } else if (folio_test_pmd_mappable(folio)) {
> - /* That test is redundant: it's for safety or to optimize out */
> -
> + } else if (mode == RMAP_MODE_PMD) {
> first = atomic_inc_and_test(&folio->_entire_mapcount);
> if (first) {
> nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
> @@ -1399,6 +1385,43 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
> mlock_vma_folio(folio, vma);
> }
>
> +/**
> + * folio_add_file_rmap_ptes - add PTE mappings to a page range of a folio
> + * @folio: The folio to add the mappings to
> + * @page: The first page to add
> + * @nr_pages: The number of pages that will be mapped using PTEs
> + * @vma: The vm area in which the mappings are added
> + *
> + * The page range of the folio is defined by [page, page + nr_pages)
> + *
> + * The caller needs to hold the page table lock.
> + */
> +void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
> + unsigned int nr_pages, struct vm_area_struct *vma)
> +{
> + __folio_add_file_rmap(folio, page, nr_pages, vma, RMAP_MODE_PTE);
> +}
> +
> +/**
> + * folio_add_file_rmap_pmd - add a PMD mapping to a page range of a folio
> + * @folio: The folio to add the mapping to
> + * @page: The first page to add
> + * @vma: The vm area in which the mapping is added
> + *
> + * The page range of the folio is defined by [page, page + HPAGE_PMD_NR)
> + *
> + * The caller needs to hold the page table lock.
> + */
> +void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
> + struct vm_area_struct *vma)
> +{
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + __folio_add_file_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_MODE_PMD);
> +#else
> + WARN_ON_ONCE(true);
> +#endif
> +}
> +
> /**
> * page_add_file_rmap - add pte mapping to a file page
> * @page: the page to add the mapping to
> @@ -1411,16 +1434,13 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
> bool compound)
> {
> struct folio *folio = page_folio(page);
> - unsigned int nr_pages;
>
> VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
>
> if (likely(!compound))
> - nr_pages = 1;
> + folio_add_file_rmap_pte(folio, page, vma);
> else
> - nr_pages = folio_nr_pages(folio);
> -
> - folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
> + folio_add_file_rmap_pmd(folio, page, vma);
> }
>
> /**