From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
<akpm@linux-foundation.org>, <hughd@google.com>
Cc: <willy@infradead.org>, <david@redhat.com>, <ying.huang@intel.com>,
<21cnbao@gmail.com>, <ryan.roberts@arm.com>,
<shy828301@gmail.com>, <ziy@nvidia.com>, <ioworker0@gmail.com>,
<da.gomez@samsung.com>, <p.raghav@samsung.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v5 1/6] mm: memory: extend finish_fault() to support large folio
Date: Wed, 12 Jun 2024 21:40:19 +0800
Message-ID: <bab26abd-9364-4a6b-9bed-5bcf2cb46952@huawei.com>
In-Reply-To: <3a190892355989d42f59cf9f2f98b94694b0d24d.1718090413.git.baolin.wang@linux.alibaba.com>
On 2024/6/11 18:11, Baolin Wang wrote:
> Add large folio mapping establishment support for finish_fault() as a
> preparation, to support multi-size THP allocation of anonymous shmem pages
> in the following patches.
>
> Keep the same behavior (per-page fault) for non-anon shmem to avoid inflating
> the RSS unintentionally, and we can discuss what size of mapping to build
> when extending mTHP to control non-anon shmem in the future.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/memory.c | 57 +++++++++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 47 insertions(+), 10 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index eef4e482c0c2..72775ee99ff3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4831,9 +4831,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> {
> struct vm_area_struct *vma = vmf->vma;
> struct page *page;
> + struct folio *folio;
> vm_fault_t ret;
> bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
> !(vma->vm_flags & VM_SHARED);
> + int type, nr_pages;
> + unsigned long addr = vmf->address;
>
> /* Did we COW the page? */
> if (is_cow)
> @@ -4864,24 +4867,58 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> return VM_FAULT_OOM;
> }
>
> + folio = page_folio(page);
> + nr_pages = folio_nr_pages(folio);
> +
> + /*
> + * Using per-page fault to maintain the uffd semantics, and same
> + * approach also applies to non-anonymous-shmem faults to avoid
> + * inflating the RSS of the process.
> + */
> + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
> + nr_pages = 1;
> + } else if (nr_pages > 1) {
> + pgoff_t idx = folio_page_idx(folio, page);
> + /* The page offset of vmf->address within the VMA. */
> + pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
> +
Nit: this could just be vma->vm_pgoff, since we already have the local 'vma'.
> + /*
> + * Fallback to per-page fault in case the folio size in page
> + * cache beyond the VMA limits.
> + */
> + if (unlikely(vma_off < idx ||
> + vma_off + (nr_pages - idx) > vma_pages(vma))) {
> + nr_pages = 1;
> + } else {
> + /* Now we can set mappings for the whole large folio. */
> + addr = vmf->address - idx * PAGE_SIZE;
Nit: this could be "addr -= idx * PAGE_SIZE;" since addr is already initialized from vmf->address above.
> + page = &folio->page;
> + }
> + }
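Just to double-check my reading of the bounds check with concrete numbers
(please correct me if I got it wrong): with a 16-page folio where the
faulting page is idx = 3, and a fault vma_off = 5 pages into a 20-page VMA,
we get vma_off >= idx and vma_off + (16 - 3) = 18 <= 20, so the whole folio
fits and is mapped starting 2 pages into the VMA. If instead vma_off were
2 < idx, the folio head would land before vm_start and we correctly fall
back to the per-page fault.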
> +
> vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> - vmf->address, &vmf->ptl);
> + addr, &vmf->ptl);
Nit: this fits on one line now, so the line wrap is no longer needed.
> if (!vmf->pte)
> return VM_FAULT_NOPAGE;
>
> /* Re-check under ptl */
> - if (likely(!vmf_pte_changed(vmf))) {
> - struct folio *folio = page_folio(page);
> - int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
> -
> - set_pte_range(vmf, folio, page, 1, vmf->address);
> - add_mm_counter(vma->vm_mm, type, 1);
> - ret = 0;
> - } else {
> - update_mmu_tlb(vma, vmf->address, vmf->pte);
> + if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
> + update_mmu_tlb(vma, addr, vmf->pte);
> + ret = VM_FAULT_NOPAGE;
> + goto unlock;
> + } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> + update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> ret = VM_FAULT_NOPAGE;
> + goto unlock;
> }
We could add a vmf_pte_range_changed() helper to cover both cases, but that can be done as a separate patch.
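Something like the below (completely untested, the helper name and exact
split between the two cases are just my assumption) is what I had in mind:

static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
{
	/* The single-page case can still compare against orig_pte. */
	if (nr_pages == 1)
		return vmf_pte_changed(vmf);

	/* For a large folio we only care that the whole range is still none. */
	return !pte_range_none(vmf->pte, nr_pages);
}

Then the re-check under the ptl could collapse into a single branch, e.g.:

	if (unlikely(vmf_pte_range_changed(vmf, nr_pages))) {
		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
		ret = VM_FAULT_NOPAGE;
		goto unlock;
	}

But as said, that can be a follow-up on top.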
Only some very small nits above, up to you.
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> + folio_ref_add(folio, nr_pages - 1);
> + set_pte_range(vmf, folio, page, nr_pages, addr);
> + type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
> + add_mm_counter(vma->vm_mm, type, nr_pages);
> + ret = 0;
> +
> +unlock:
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> return ret;
> }