* [PATCH v6 0/6] userfaultfd: convert userfaultfd functions to use folios
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
This patch series converts several userfaultfd functions to use folios.
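Each conversion follows the same basic shape. As a minimal sketch
(illustrative only, condensed from the patches rather than lifted
verbatim from any one of them):

	/* Before: page-based allocation and release. */
	struct page *page;

	page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
	...
	put_page(page);

	/* After: the folio equivalents; order 0 allocates a single page. */
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
				dst_addr, false);
	...
	folio_put(folio);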
Change log:
v5->v6:
- Remove VM_BUG_ON_FOLIO from mfill_atomic_pte_copy() per Mike Kravetz.
(patch 1)
- Keep flush_dcache_page() in copy_folio_from_user() to not change the
behavior of the function, suggested by Vishal Moola. (patch 3)
- Rename copy_user_folio() to copy_user_large_folio(), suggested by Mike
Kravetz. (patch 5)
- Add RB from Mike Kravetz. (patch 1-4,6)
v4->v5:
- Update commit description and change page_kaddr to kaddr, suggested by
Matthew Wilcox. (patch 1,6)
- Remove pages_per_huge_page from copy_user_folio(), suggested by
Matthew Wilcox. (patch 5)
- Add RB from Sidhartha Kumar. (patch 1,3,4)
v3->v4:
- Rebase onto mm-unstable per Andrew Morton. Update commit description
because some function names are changed. (patch 1,4,6)
v2->v3:
- Split patch 2 into three patches, suggested by Mike Kravetz. (patch
2-4)
- Add a new patch to convert copy_user_huge_page to copy_user_folio,
suggested by Mike Kravetz. (patch 5)
- Fix two uninitialized bugs, thanks to Dan Carpenter. (patch 6)
- Do some indenting cleanups.
v1->v2:
Modified patch 2, suggested by Matthew Wilcox:
- Rename copy_large_folio_from_user() to copy_folio_from_user().
- Delete the inner_folio.
- kmap() and kmap_atomic() are converted to kmap_local_page(). Use
pagefault_disable() to ensure that a deadlock will not occur (see the
sketch after this change log).
- flush_dcache_folio() is placed outside the loop.
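The kmap_local_page()/pagefault_disable() pairing mentioned above looks
roughly like this in the single-page case (a sketch, assuming mmap_lock
is held and the caller falls back to copying outside the lock on
failure):

	kaddr = kmap_local_folio(folio, 0);
	pagefault_disable();
	/*
	 * With faults disabled, copy_from_user() cannot re-enter the
	 * fault path; it returns the number of bytes left uncopied.
	 */
	ret = copy_from_user(kaddr, (const void __user *)src_addr, PAGE_SIZE);
	pagefault_enable();
	kunmap_local(kaddr);
	if (ret) {
		/* fall back to copying outside mmap_lock */
	}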
ZhangPeng (6):
userfaultfd: convert mfill_atomic_pte_copy() to use a folio
userfaultfd: use kmap_local_page() in copy_huge_page_from_user()
userfaultfd: convert copy_huge_page_from_user() to
copy_folio_from_user()
userfaultfd: convert mfill_atomic_hugetlb() to use a folio
mm: convert copy_user_huge_page() to copy_user_large_folio()
userfaultfd: convert mfill_atomic() to use a folio
include/linux/hugetlb.h | 4 +-
include/linux/mm.h | 14 +++----
include/linux/shmem_fs.h | 4 +-
mm/hugetlb.c | 40 +++++++++---------
mm/memory.c | 61 +++++++++++++---------------
mm/shmem.c | 16 ++++----
mm/userfaultfd.c | 88 ++++++++++++++++++++--------------------
7 files changed, 109 insertions(+), 118 deletions(-)
--
2.25.1
* [PATCH v6 1/6] userfaultfd: convert mfill_atomic_pte_copy() to use a folio
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
Call vma_alloc_folio() directly instead of alloc_page_vma(), and rename
page_kaddr to kaddr, in mfill_atomic_pte_copy(). This removes several
calls to compound_head().
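For context, the page helpers being replaced each hide a
compound_head() lookup, while their folio counterparts do not; a rough
illustration (not code from this patch):

	/* Page API: each helper first resolves compound_head(page). */
	__SetPageUptodate(page);
	put_page(page);

	/* Folio API: a folio can never be a tail page, so no lookup. */
	__folio_mark_uptodate(folio);
	folio_put(folio);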
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/userfaultfd.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7f1b5f8b712c..313bc683c2b6 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -135,17 +135,18 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
uffd_flags_t flags,
struct page **pagep)
{
- void *page_kaddr;
+ void *kaddr;
int ret;
- struct page *page;
+ struct folio *folio;
if (!*pagep) {
ret = -ENOMEM;
- page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
- if (!page)
+ folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
+ dst_addr, false);
+ if (!folio)
goto out;
- page_kaddr = kmap_local_page(page);
+ kaddr = kmap_local_folio(folio, 0);
/*
* The read mmap_lock is held here. Despite the
* mmap_lock being read recursive a deadlock is still
@@ -162,45 +163,44 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
* and retry the copy outside the mmap_lock.
*/
pagefault_disable();
- ret = copy_from_user(page_kaddr,
- (const void __user *) src_addr,
+ ret = copy_from_user(kaddr, (const void __user *) src_addr,
PAGE_SIZE);
pagefault_enable();
- kunmap_local(page_kaddr);
+ kunmap_local(kaddr);
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
ret = -ENOENT;
- *pagep = page;
+ *pagep = &folio->page;
/* don't free the page */
goto out;
}
- flush_dcache_page(page);
+ flush_dcache_folio(folio);
} else {
- page = *pagep;
+ folio = page_folio(*pagep);
*pagep = NULL;
}
/*
- * The memory barrier inside __SetPageUptodate makes sure that
+ * The memory barrier inside __folio_mark_uptodate makes sure that
* preceding stores to the page contents become visible before
* the set_pte_at() write.
*/
- __SetPageUptodate(page);
+ __folio_mark_uptodate(folio);
ret = -ENOMEM;
- if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
+ if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
goto out_release;
ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
- page, true, flags);
+ &folio->page, true, flags);
if (ret)
goto out_release;
out:
return ret;
out_release:
- put_page(page);
+ folio_put(folio);
goto out;
}
--
2.25.1
* [PATCH v6 2/6] userfaultfd: use kmap_local_page() in copy_huge_page_from_user()
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
kmap() and kmap_atomic() are being deprecated in favor of
kmap_local_page(), which is appropriate for any thread-local context [1].
Replace kmap() and kmap_atomic() with kmap_local_page() in
copy_huge_page_from_user(). When allow_pagefault is false, disable page
faults to prevent a potential deadlock [2].
[1] https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/
[2] https://lkml.kernel.org/r/20221025220136.2366143-1-ira.weiny@intel.com
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
mm/memory.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 387226d6094d..808f354bce65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5880,16 +5880,14 @@ long copy_huge_page_from_user(struct page *dst_page,
for (i = 0; i < pages_per_huge_page; i++) {
subpage = nth_page(dst_page, i);
- if (allow_pagefault)
- page_kaddr = kmap(subpage);
- else
- page_kaddr = kmap_atomic(subpage);
+ page_kaddr = kmap_local_page(subpage);
+ if (!allow_pagefault)
+ pagefault_disable();
rc = copy_from_user(page_kaddr,
usr_src + i * PAGE_SIZE, PAGE_SIZE);
- if (allow_pagefault)
- kunmap(subpage);
- else
- kunmap_atomic(page_kaddr);
+ if (!allow_pagefault)
+ pagefault_enable();
+ kunmap_local(page_kaddr);
ret_val -= (PAGE_SIZE - rc);
if (rc)
--
2.25.1
* [PATCH v6 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
Replace copy_huge_page_from_user() with copy_folio_from_user().
copy_folio_from_user() does the same as copy_huge_page_from_user(), but
takes in a folio instead of a page. Rename page_kaddr to kaddr in
copy_folio_from_user(), which also allows some indentation cleanup.
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
include/linux/mm.h | 7 +++----
mm/hugetlb.c | 5 ++---
mm/memory.c | 23 +++++++++++------------
mm/userfaultfd.c | 6 ++----
4 files changed, 18 insertions(+), 23 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 243bfba378c5..a978413b40a4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3698,10 +3698,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
unsigned long addr_hint,
struct vm_area_struct *vma,
unsigned int pages_per_huge_page);
-extern long copy_huge_page_from_user(struct page *dst_page,
- const void __user *usr_src,
- unsigned int pages_per_huge_page,
- bool allow_pagefault);
+long copy_folio_from_user(struct folio *dst_folio,
+ const void __user *usr_src,
+ bool allow_pagefault);
/**
* vma_is_special_huge - Are transhuge page-table entries considered special?
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7e4a80769c9e..aade1b513474 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
goto out;
}
- ret = copy_huge_page_from_user(&folio->page,
- (const void __user *) src_addr,
- pages_per_huge_page(h), false);
+ ret = copy_folio_from_user(folio, (const void __user *) src_addr,
+ false);
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
diff --git a/mm/memory.c b/mm/memory.c
index 808f354bce65..021cab989703 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5868,26 +5868,25 @@ void copy_user_huge_page(struct page *dst, struct page *src,
process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
}
-long copy_huge_page_from_user(struct page *dst_page,
- const void __user *usr_src,
- unsigned int pages_per_huge_page,
- bool allow_pagefault)
+long copy_folio_from_user(struct folio *dst_folio,
+ const void __user *usr_src,
+ bool allow_pagefault)
{
- void *page_kaddr;
+ void *kaddr;
unsigned long i, rc = 0;
- unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
+ unsigned int nr_pages = folio_nr_pages(dst_folio);
+ unsigned long ret_val = nr_pages * PAGE_SIZE;
struct page *subpage;
- for (i = 0; i < pages_per_huge_page; i++) {
- subpage = nth_page(dst_page, i);
- page_kaddr = kmap_local_page(subpage);
+ for (i = 0; i < nr_pages; i++) {
+ subpage = folio_page(dst_folio, i);
+ kaddr = kmap_local_page(subpage);
if (!allow_pagefault)
pagefault_disable();
- rc = copy_from_user(page_kaddr,
- usr_src + i * PAGE_SIZE, PAGE_SIZE);
+ rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
if (!allow_pagefault)
pagefault_enable();
- kunmap_local(page_kaddr);
+ kunmap_local(kaddr);
ret_val -= (PAGE_SIZE - rc);
if (rc)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 313bc683c2b6..1e7dba6c4c5f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -421,10 +421,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
mmap_read_unlock(dst_mm);
BUG_ON(!page);
- err = copy_huge_page_from_user(page,
- (const void __user *)src_addr,
- vma_hpagesize / PAGE_SIZE,
- true);
+ err = copy_folio_from_user(page_folio(page),
+ (const void __user *)src_addr, true);
if (unlikely(err)) {
err = -EFAULT;
goto out;
--
2.25.1
* [PATCH v6 4/6] userfaultfd: convert mfill_atomic_hugetlb() to use a folio
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
Convert hugetlb_mfill_atomic_pte() to take in a folio pointer instead of
a page pointer. Convert mfill_atomic_hugetlb() to use a folio.
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 26 +++++++++++++-------------
mm/userfaultfd.c | 16 ++++++++--------
3 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2a758bcd6719..28703fe22386 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -163,7 +163,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep);
+ struct folio **foliop);
#endif /* CONFIG_USERFAULTFD */
bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
struct vm_area_struct *vma,
@@ -397,7 +397,7 @@ static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep)
+ struct folio **foliop)
{
BUG();
return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aade1b513474..c88f856ec2e2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6178,7 +6178,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep)
+ struct folio **foliop)
{
struct mm_struct *dst_mm = dst_vma->vm_mm;
bool is_continue = uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE);
@@ -6201,8 +6201,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
if (IS_ERR(folio))
goto out;
folio_in_pagecache = true;
- } else if (!*pagep) {
- /* If a page already exists, then it's UFFDIO_COPY for
+ } else if (!*foliop) {
+ /* If a folio already exists, then it's UFFDIO_COPY for
* a non-missing case. Return -EEXIST.
*/
if (vm_shared &&
@@ -6237,33 +6237,33 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
ret = -ENOMEM;
goto out;
}
- *pagep = &folio->page;
- /* Set the outparam pagep and return to the caller to
+ *foliop = folio;
+ /* Set the outparam foliop and return to the caller to
* copy the contents outside the lock. Don't free the
- * page.
+ * folio.
*/
goto out;
}
} else {
if (vm_shared &&
hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {
- put_page(*pagep);
+ folio_put(*foliop);
ret = -EEXIST;
- *pagep = NULL;
+ *foliop = NULL;
goto out;
}
folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
if (IS_ERR(folio)) {
- put_page(*pagep);
+ folio_put(*foliop);
ret = -ENOMEM;
- *pagep = NULL;
+ *foliop = NULL;
goto out;
}
- copy_user_huge_page(&folio->page, *pagep, dst_addr, dst_vma,
+ copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
pages_per_huge_page(h));
- put_page(*pagep);
- *pagep = NULL;
+ folio_put(*foliop);
+ *foliop = NULL;
}
/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 1e7dba6c4c5f..2f263afb823d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -321,7 +321,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
pte_t *dst_pte;
unsigned long src_addr, dst_addr;
long copied;
- struct page *page;
+ struct folio *folio;
unsigned long vma_hpagesize;
pgoff_t idx;
u32 hash;
@@ -341,7 +341,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
src_addr = src_start;
dst_addr = dst_start;
copied = 0;
- page = NULL;
+ folio = NULL;
vma_hpagesize = vma_kernel_pagesize(dst_vma);
/*
@@ -410,7 +410,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
}
err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma, dst_addr,
- src_addr, flags, &page);
+ src_addr, flags, &folio);
hugetlb_vma_unlock_read(dst_vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
@@ -419,9 +419,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
if (unlikely(err == -ENOENT)) {
mmap_read_unlock(dst_mm);
- BUG_ON(!page);
+ BUG_ON(!folio);
- err = copy_folio_from_user(page_folio(page),
+ err = copy_folio_from_user(folio,
(const void __user *)src_addr, true);
if (unlikely(err)) {
err = -EFAULT;
@@ -432,7 +432,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
dst_vma = NULL;
goto retry;
} else
- BUG_ON(page);
+ BUG_ON(folio);
if (!err) {
dst_addr += vma_hpagesize;
@@ -449,8 +449,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
out_unlock:
mmap_read_unlock(dst_mm);
out:
- if (page)
- put_page(page);
+ if (folio)
+ folio_put(folio);
BUG_ON(copied < 0);
BUG_ON(err > 0);
BUG_ON(!copied && !err);
--
2.25.1
* [PATCH v6 5/6] mm: convert copy_user_huge_page() to copy_user_large_folio()
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
Replace copy_user_huge_page() with copy_user_large_folio().
copy_user_large_folio() does the same as copy_user_huge_page(), but
takes in folios instead of pages. Remove pages_per_huge_page from
copy_user_large_folio(), because we can get that from
folio_nr_pages(dst).
Convert copy_user_gigantic_page() to take in folios.
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
include/linux/mm.h | 7 +++----
mm/hugetlb.c | 11 +++++------
mm/memory.c | 28 ++++++++++++++--------------
3 files changed, 22 insertions(+), 24 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a978413b40a4..c8f05c3e1acb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3694,10 +3694,9 @@ extern const struct attribute_group memory_failure_attr_group;
extern void clear_huge_page(struct page *page,
unsigned long addr_hint,
unsigned int pages_per_huge_page);
-extern void copy_user_huge_page(struct page *dst, struct page *src,
- unsigned long addr_hint,
- struct vm_area_struct *vma,
- unsigned int pages_per_huge_page);
+void copy_user_large_folio(struct folio *dst, struct folio *src,
+ unsigned long addr_hint,
+ struct vm_area_struct *vma);
long copy_folio_from_user(struct folio *dst_folio,
const void __user *usr_src,
bool allow_pagefault);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c88f856ec2e2..f16b25b1a6b9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5097,8 +5097,9 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
ret = PTR_ERR(new_folio);
break;
}
- copy_user_huge_page(&new_folio->page, ptepage, addr, dst_vma,
- npages);
+ copy_user_large_folio(new_folio,
+ page_folio(ptepage),
+ addr, dst_vma);
put_page(ptepage);
/* Install the new hugetlb folio if src pte stable */
@@ -5616,8 +5617,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
goto out_release_all;
}
- copy_user_huge_page(&new_folio->page, old_page, address, vma,
- pages_per_huge_page(h));
+ copy_user_large_folio(new_folio, page_folio(old_page), address, vma);
__folio_mark_uptodate(new_folio);
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
@@ -6260,8 +6260,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
*foliop = NULL;
goto out;
}
- copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
- pages_per_huge_page(h));
+ copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
folio_put(*foliop);
*foliop = NULL;
}
diff --git a/mm/memory.c b/mm/memory.c
index 021cab989703..f315c2198098 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5815,21 +5815,21 @@ void clear_huge_page(struct page *page,
process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
}
-static void copy_user_gigantic_page(struct page *dst, struct page *src,
- unsigned long addr,
- struct vm_area_struct *vma,
- unsigned int pages_per_huge_page)
+static void copy_user_gigantic_page(struct folio *dst, struct folio *src,
+ unsigned long addr,
+ struct vm_area_struct *vma,
+ unsigned int pages_per_huge_page)
{
int i;
- struct page *dst_base = dst;
- struct page *src_base = src;
+ struct page *dst_page;
+ struct page *src_page;
for (i = 0; i < pages_per_huge_page; i++) {
- dst = nth_page(dst_base, i);
- src = nth_page(src_base, i);
+ dst_page = folio_page(dst, i);
+ src_page = folio_page(src, i);
cond_resched();
- copy_user_highpage(dst, src, addr + i*PAGE_SIZE, vma);
+ copy_user_highpage(dst_page, src_page, addr + i*PAGE_SIZE, vma);
}
}
@@ -5847,15 +5847,15 @@ static void copy_subpage(unsigned long addr, int idx, void *arg)
addr, copy_arg->vma);
}
-void copy_user_huge_page(struct page *dst, struct page *src,
- unsigned long addr_hint, struct vm_area_struct *vma,
- unsigned int pages_per_huge_page)
+void copy_user_large_folio(struct folio *dst, struct folio *src,
+ unsigned long addr_hint, struct vm_area_struct *vma)
{
+ unsigned int pages_per_huge_page = folio_nr_pages(dst);
unsigned long addr = addr_hint &
~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
struct copy_subpage_arg arg = {
- .dst = dst,
- .src = src,
+ .dst = &dst->page,
+ .src = &src->page,
.vma = vma,
};
--
2.25.1
* [PATCH v6 6/6] userfaultfd: convert mfill_atomic() to use a folio
From: Peng Zhang @ 2023-04-10 13:39 UTC
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz,
sidhartha.kumar, vishal.moola
Cc: muchun.song, wangkefeng.wang, sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
Convert mfill_atomic_pte_copy(), shmem_mfill_atomic_pte() and
mfill_atomic_pte() to take in a folio pointer.
Convert mfill_atomic() to use a folio. Convert page_kaddr to kaddr in
mfill_atomic().
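For reference, the -ENOENT fallback protocol that the folio now threads
through (a condensed sketch of the hunks below, not a verbatim excerpt):

	err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
			       src_addr, flags, &folio);
	if (unlikely(err == -ENOENT)) {
		/* Fault in the source outside mmap_lock, then retry. */
		mmap_read_unlock(dst_mm);
		kaddr = kmap_local_folio(folio, 0);
		err = copy_from_user(kaddr, (const void __user *)src_addr,
				     PAGE_SIZE);
		kunmap_local(kaddr);
		if (unlikely(err)) {
			err = -EFAULT;
			goto out;
		}
		flush_dcache_folio(folio);
		goto retry;
	}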
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
include/linux/shmem_fs.h | 4 ++--
mm/shmem.c | 16 ++++++++--------
mm/userfaultfd.c | 40 ++++++++++++++++++++--------------------
3 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 3bb8d21edbb3..9e151ba45068 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -158,10 +158,10 @@ extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep);
+ struct folio **foliop);
#else /* !CONFIG_SHMEM */
#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
- src_addr, flags, pagep) ({ BUG(); 0; })
+ src_addr, flags, foliop) ({ BUG(); 0; })
#endif /* CONFIG_SHMEM */
#endif /* CONFIG_USERFAULTFD */
diff --git a/mm/shmem.c b/mm/shmem.c
index 6c08f5a75d3a..9218c955f482 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2548,7 +2548,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep)
+ struct folio **foliop)
{
struct inode *inode = file_inode(dst_vma->vm_file);
struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2566,14 +2566,14 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
* and now we find ourselves with -ENOMEM. Release the page, to
* avoid a BUG_ON in our caller.
*/
- if (unlikely(*pagep)) {
- put_page(*pagep);
- *pagep = NULL;
+ if (unlikely(*foliop)) {
+ folio_put(*foliop);
+ *foliop = NULL;
}
return -ENOMEM;
}
- if (!*pagep) {
+ if (!*foliop) {
ret = -ENOMEM;
folio = shmem_alloc_folio(gfp, info, pgoff);
if (!folio)
@@ -2605,7 +2605,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
- *pagep = &folio->page;
+ *foliop = folio;
ret = -ENOENT;
/* don't free the page */
goto out_unacct_blocks;
@@ -2616,9 +2616,9 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
clear_user_highpage(&folio->page, dst_addr);
}
} else {
- folio = page_folio(*pagep);
+ folio = *foliop;
VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
- *pagep = NULL;
+ *foliop = NULL;
}
VM_BUG_ON(folio_test_locked(folio));
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 2f263afb823d..11cfd82c6726 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -133,13 +133,13 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep)
+ struct folio **foliop)
{
void *kaddr;
int ret;
struct folio *folio;
- if (!*pagep) {
+ if (!*foliop) {
ret = -ENOMEM;
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
dst_addr, false);
@@ -171,15 +171,15 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
ret = -ENOENT;
- *pagep = &folio->page;
+ *foliop = folio;
/* don't free the page */
goto out;
}
flush_dcache_folio(folio);
} else {
- folio = page_folio(*pagep);
- *pagep = NULL;
+ folio = *foliop;
+ *foliop = NULL;
}
/*
@@ -470,7 +470,7 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
unsigned long dst_addr,
unsigned long src_addr,
uffd_flags_t flags,
- struct page **pagep)
+ struct folio **foliop)
{
ssize_t err;
@@ -493,14 +493,14 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY))
err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
dst_addr, src_addr,
- flags, pagep);
+ flags, foliop);
else
err = mfill_atomic_pte_zeropage(dst_pmd,
dst_vma, dst_addr);
} else {
err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
dst_addr, src_addr,
- flags, pagep);
+ flags, foliop);
}
return err;
@@ -518,7 +518,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
pmd_t *dst_pmd;
unsigned long src_addr, dst_addr;
long copied;
- struct page *page;
+ struct folio *folio;
/*
* Sanitize the command parameters:
@@ -533,7 +533,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
src_addr = src_start;
dst_addr = dst_start;
copied = 0;
- page = NULL;
+ folio = NULL;
retry:
mmap_read_lock(dst_mm);
@@ -629,28 +629,28 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
BUG_ON(pmd_trans_huge(*dst_pmd));
err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
- src_addr, flags, &page);
+ src_addr, flags, &folio);
cond_resched();
if (unlikely(err == -ENOENT)) {
- void *page_kaddr;
+ void *kaddr;
mmap_read_unlock(dst_mm);
- BUG_ON(!page);
+ BUG_ON(!folio);
- page_kaddr = kmap_local_page(page);
- err = copy_from_user(page_kaddr,
+ kaddr = kmap_local_folio(folio, 0);
+ err = copy_from_user(kaddr,
(const void __user *) src_addr,
PAGE_SIZE);
- kunmap_local(page_kaddr);
+ kunmap_local(kaddr);
if (unlikely(err)) {
err = -EFAULT;
goto out;
}
- flush_dcache_page(page);
+ flush_dcache_folio(folio);
goto retry;
} else
- BUG_ON(page);
+ BUG_ON(folio);
if (!err) {
dst_addr += PAGE_SIZE;
@@ -667,8 +667,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
out_unlock:
mmap_read_unlock(dst_mm);
out:
- if (page)
- put_page(page);
+ if (folio)
+ folio_put(folio);
BUG_ON(copied < 0);
BUG_ON(err > 0);
BUG_ON(!copied && !err);
--
2.25.1