From: Mike Kravetz <mike.kravetz@oracle.com>
To: Hillf Danton <hillf.zj@alibaba-inc.com>,
'Andrea Arcangeli' <aarcange@redhat.com>,
'Andrew Morton' <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org,
"'Dr. David Alan Gilbert'" <dgilbert@redhat.com>,
'Shaohua Li' <shli@fb.com>,
'Pavel Emelyanov' <xemul@parallels.com>,
'Mike Rapoport' <rppt@linux.vnet.ibm.com>
Subject: Re: [PATCH 15/33] userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY
Date: Thu, 3 Nov 2016 12:14:15 -0700
Message-ID: <31d06dc7-ea2d-4ca3-821a-f14ea69de3e9@oracle.com>
In-Reply-To: <c9c59023-35ee-1012-1da7-13c3aa89ba61@oracle.com>
On 11/03/2016 10:33 AM, Mike Kravetz wrote:
> On 11/03/2016 03:15 AM, Hillf Danton wrote:
>> [out of topic] Cc list is edited to quiet mail agent warning:
>> -"Dr. David Alan Gilbert"@v2.random; " <dgilbert@redhat.com>
>> +"Dr. David Alan Gilbert" <dgilbert@redhat.com>
>> -Pavel Emelyanov <xemul@parallels.com>"@v2.random
>> +Pavel Emelyanov <xemul@parallels.com>
>> -Michael Rapoport <RAPOPORT@il.ibm.com>
>> +Mike Rapoport <rppt@linux.vnet.ibm.com>
>>
>>
>> On Thursday, November 03, 2016 3:34 AM Andrea Arcangeli wrote:
>>> +
>>> +#ifdef CONFIG_HUGETLB_PAGE
>>> +/*
>>> + * __mcopy_atomic processing for HUGETLB vmas. Note that this routine is
>>> + * called with mmap_sem held, it will release mmap_sem before returning.
>>> + */
>>> +static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>> + struct vm_area_struct *dst_vma,
>>> + unsigned long dst_start,
>>> + unsigned long src_start,
>>> + unsigned long len,
>>> + bool zeropage)
>>> +{
>>> + ssize_t err;
>>> + pte_t *dst_pte;
>>> + unsigned long src_addr, dst_addr;
>>> + long copied;
>>> + struct page *page;
>>> + struct hstate *h;
>>> + unsigned long vma_hpagesize;
>>> + pgoff_t idx;
>>> + u32 hash;
>>> + struct address_space *mapping;
>>> +
>>> + /*
>>> + * There is no default zero huge page for all huge page sizes as
>>> + * supported by hugetlb. A PMD_SIZE huge pages may exist as used
>>> + * by THP. Since we can not reliably insert a zero page, this
>>> + * feature is not supported.
>>> + */
>>> + if (zeropage)
>>> + return -EINVAL;
>>
>> Release mmap_sem before return?
>>
>>> +
>>> + src_addr = src_start;
>>> + dst_addr = dst_start;
>>> + copied = 0;
>>> + page = NULL;
>>> + vma_hpagesize = vma_kernel_pagesize(dst_vma);
>>> +
>>> +retry:
>>> + /*
>>> + * On routine entry dst_vma is set. If we had to drop mmap_sem and
>>> + * retry, dst_vma will be set to NULL and we must lookup again.
>>> + */
>>> + err = -EINVAL;
>>> + if (!dst_vma) {
>>> + dst_vma = find_vma(dst_mm, dst_start);
>>
>> In case of retry, s/dst_start/dst_addr/?
>> And check if we find a valid vma?
>>
>>> @@ -182,6 +355,13 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
>>> goto out_unlock;
>>>
>>> /*
>>> + * If this is a HUGETLB vma, pass off to appropriate routine
>>> + */
>>> + if (dst_vma->vm_flags & VM_HUGETLB)
>>> + return __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
>>> + src_start, len, false);
>>
>> Use is_vm_hugetlb_page()?
>>
>>
>
> Thanks Hillf, all valid points. I will create another version of
> this patch.
Below is an updated patch addressing Hillf's comments. Tested with error
injection code to hit the retry path.
Andrea, let me know if you prefer a delta from the original patch.
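For reference, here is a minimal userspace sketch of the operation the patch
below implements, assuming a hugetlbfs-backed mapping that has already been
registered with UFFDIO_REGISTER_MODE_MISSING. The helper name and error
handling are illustrative only, not part of the patch:

#include <errno.h>
#include <linux/userfaultfd.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>

/*
 * Resolve a missing huge page in a registered hugetlbfs range by copying
 * from a buffer in our own address space.  Per the patch, dst and len must
 * be multiples of the VMA's huge page size, and UFFDIO_ZEROPAGE is not
 * available for hugetlb ranges (it fails with EINVAL).
 */
static int uffd_copy_huge(int uffd, void *dst, const void *src, size_t len)
{
	struct uffdio_copy copy;

	copy.dst = (uint64_t)(uintptr_t)dst;	/* huge page aligned */
	copy.src = (uint64_t)(uintptr_t)src;	/* ordinary source memory */
	copy.len = len;				/* multiple of huge page size */
	copy.mode = 0;
	copy.copy = 0;

	if (ioctl(uffd, UFFDIO_COPY, &copy) == -1) {
		/* copy.copy holds bytes copied or a negative errno */
		return copy.copy < 0 ? (int)copy.copy : -errno;
	}

	return 0;
}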
From: Mike Kravetz <mike.kravetz@oracle.com>
userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY
__mcopy_atomic_hugetlb performs the UFFDIO_COPY operation for huge
pages. It is based on the existing __mcopy_atomic routine for normal
pages. Unlike normal pages, there is no huge page support for the
UFFDIO_ZEROPAGE operation.
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/userfaultfd.c | 186 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 186 insertions(+)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9c2ed70..e01d013 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -14,6 +14,8 @@
#include <linux/swapops.h>
#include <linux/userfaultfd_k.h>
#include <linux/mmu_notifier.h>
+#include <linux/hugetlb.h>
+#include <linux/pagemap.h>
#include <asm/tlbflush.h>
#include "internal.h"
@@ -139,6 +141,183 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
return pmd;
}
+
+#ifdef CONFIG_HUGETLB_PAGE
+/*
+ * __mcopy_atomic processing for HUGETLB vmas. Note that this routine is
+ * called with mmap_sem held, it will release mmap_sem before returning.
+ */
+static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
+ struct vm_area_struct *dst_vma,
+ unsigned long dst_start,
+ unsigned long src_start,
+ unsigned long len,
+ bool zeropage)
+{
+ ssize_t err;
+ pte_t *dst_pte;
+ unsigned long src_addr, dst_addr;
+ long copied;
+ struct page *page;
+ struct hstate *h;
+ unsigned long vma_hpagesize;
+ pgoff_t idx;
+ u32 hash;
+ struct address_space *mapping;
+
+ /*
+ * There is no default zero huge page for all huge page sizes as
+ * supported by hugetlb. A PMD_SIZE huge zero page may exist, as used
+ * by THP. Since we can not reliably insert a zero page, this
+ * feature is not supported.
+ */
+ if (zeropage) {
+ up_read(&dst_mm->mmap_sem);
+ return -EINVAL;
+ }
+
+ src_addr = src_start;
+ dst_addr = dst_start;
+ copied = 0;
+ page = NULL;
+ vma_hpagesize = vma_kernel_pagesize(dst_vma);
+
+retry:
+ /*
+ * On routine entry dst_vma is set. If we had to drop mmap_sem and
+ * retry, dst_vma will be set to NULL and we must lookup again.
+ */
+ err = -EINVAL;
+ if (!dst_vma) {
+ /* lookup dst_addr as we may have copied some pages */
+ dst_vma = find_vma(dst_mm, dst_addr);
+ if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
+ goto out_unlock;
+
+ vma_hpagesize = vma_kernel_pagesize(dst_vma);
+
+ /*
+ * Make sure the vma is not shared, and that the remaining dst
+ * range is both valid and fully within a single existing vma.
+ */
+ if (dst_vma->vm_flags & VM_SHARED)
+ goto out_unlock;
+ if (dst_addr < dst_vma->vm_start ||
+ dst_addr + len - copied > dst_vma->vm_end)
+ goto out_unlock;
+ }
+
+ /*
+ * Validate alignment based on huge page size
+ */
+ if (dst_addr & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
+ goto out_unlock;
+
+ /*
+ * Only allow __mcopy_atomic_hugetlb on userfaultfd registered ranges.
+ */
+ if (!dst_vma->vm_userfaultfd_ctx.ctx)
+ goto out_unlock;
+
+ /*
+ * Ensure the dst_vma has an anon_vma.
+ */
+ err = -ENOMEM;
+ if (unlikely(anon_vma_prepare(dst_vma)))
+ goto out_unlock;
+
+ h = hstate_vma(dst_vma);
+
+ while (src_addr < src_start + len) {
+ pte_t dst_pteval;
+
+ BUG_ON(dst_addr >= dst_start + len);
+ dst_addr &= huge_page_mask(h);
+
+ /*
+ * Serialize via hugetlb_fault_mutex
+ */
+ idx = linear_page_index(dst_vma, dst_addr);
+ mapping = dst_vma->vm_file->f_mapping;
+ hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping,
+ idx, dst_addr);
+ mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
+ err = -ENOMEM;
+ dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
+ if (!dst_pte) {
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ goto out_unlock;
+ }
+
+ err = -EEXIST;
+ dst_pteval = huge_ptep_get(dst_pte);
+ if (!huge_pte_none(dst_pteval)) {
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ goto out_unlock;
+ }
+
+ err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
+ dst_addr, src_addr, &page);
+
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+
+ cond_resched();
+
+ if (unlikely(err == -EFAULT)) {
+ up_read(&dst_mm->mmap_sem);
+ BUG_ON(!page);
+
+ err = copy_huge_page_from_user(page,
+ (const void __user *)src_addr,
+ pages_per_huge_page(h));
+ if (unlikely(err)) {
+ err = -EFAULT;
+ goto out;
+ }
+ down_read(&dst_mm->mmap_sem);
+
+ dst_vma = NULL;
+ goto retry;
+ } else
+ BUG_ON(page);
+
+ if (!err) {
+ dst_addr += vma_hpagesize;
+ src_addr += vma_hpagesize;
+ copied += vma_hpagesize;
+
+ if (fatal_signal_pending(current))
+ err = -EINTR;
+ }
+ if (err)
+ break;
+ }
+
+out_unlock:
+ up_read(&dst_mm->mmap_sem);
+out:
+ if (page)
+ put_page(page);
+ BUG_ON(copied < 0);
+ BUG_ON(err > 0);
+ BUG_ON(!copied && !err);
+ return copied ? copied : err;
+}
+#else /* !CONFIG_HUGETLB_PAGE */
+static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
+ struct vm_area_struct *dst_vma,
+ unsigned long dst_start,
+ unsigned long src_start,
+ unsigned long len,
+ bool zeropage)
+{
+ up_read(&dst_mm->mmap_sem); /* HUGETLB not configured */
+ BUG();
+ return -EINVAL;
+}
+#endif /* CONFIG_HUGETLB_PAGE */
+
static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
unsigned long dst_start,
unsigned long src_start,
@@ -182,6 +361,13 @@ retry:
goto out_unlock;
/*
+ * If this is a HUGETLB vma, pass it off to the appropriate routine.
+ */
+ if (is_vm_hugetlb_page(dst_vma))
+ return __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
+ src_start, len, false);
+
+ /*
* Be strict and only allow __mcopy_atomic on userfaultfd
* registered ranges to prevent userland errors going
* unnoticed. As far as the VM consistency is concerned, it
--
2.7.4