From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Hugh Dickins <hughd@google.com>,
Yin Fengwei <fengwei.yin@intel.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <muchun.song@linux.dev>, Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v1 08/39] mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()
Date: Mon, 18 Dec 2023 15:56:23 +0000
Message-ID: <5ea33aec-86f8-489c-937b-77a53fca20ce@arm.com>
In-Reply-To: <20231211155652.131054-9-david@redhat.com>

On 11/12/2023 15:56, David Hildenbrand wrote:
> Let's convert insert_page_into_pte_locked() and do_set_pmd(). While at it,
> perform some folio conversion.
>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> mm/memory.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6a5540ba3c65..70754fd65788 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1859,12 +1859,14 @@ static int validate_page_before_insert(struct page *page)
> static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
> unsigned long addr, struct page *page, pgprot_t prot)
> {
> + struct folio *folio = page_folio(page);
> +
> if (!pte_none(ptep_get(pte)))
> return -EBUSY;
> /* Ok, finally just insert the thing.. */
> - get_page(page);
> + folio_get(folio);
> inc_mm_counter(vma->vm_mm, mm_counter_file(page));
> - page_add_file_rmap(page, vma, false);
> + folio_add_file_rmap_pte(folio, page, vma);
> set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
> return 0;
> }
> @@ -4409,6 +4411,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
>
> vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> {
> + struct folio *folio = page_folio(page);
> struct vm_area_struct *vma = vmf->vma;
> bool write = vmf->flags & FAULT_FLAG_WRITE;
> unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> @@ -4418,8 +4421,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
> return ret;
>
> - page = compound_head(page);
> - if (compound_order(page) != HPAGE_PMD_ORDER)
> + if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
> return ret;
>
> /*
> @@ -4428,7 +4430,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> * check. This kind of THP just can be PTE mapped. Access to
> * the corrupted subpage should trigger SIGBUS as expected.
> */
> - if (unlikely(PageHasHWPoisoned(page)))
> + if (unlikely(folio_test_has_hwpoisoned(folio)))
> return ret;
>
> /*
> @@ -4452,7 +4454,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>
> add_mm_counter(vma->vm_mm, mm_counter_file(page), HPAGE_PMD_NR);
> - page_add_file_rmap(page, vma, true);
> + folio_add_file_rmap_pmd(folio, page, vma);
>
> /*
> * deposit and withdraw with pmd lock held