From: Mike Kravetz <mike.kravetz@oracle.com>
To: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>,
	'Andrew Morton' <akpm@linux-foundation.org>,
	linux-mm@kvack.org,
	"'Dr. David Alan Gilbert'" <dgilbert@redhat.com>,
	'Shaohua Li' <shli@fb.com>,
	'Pavel Emelyanov' <xemul@parallels.com>,
	'Mike Rapoport' <rppt@linux.vnet.ibm.com>
Subject: Re: [PATCH 15/33] userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY
Date: Fri, 4 Nov 2016 13:34:08 -0700
Message-ID: <5c4fdd58-1743-0cc0-9f71-bc1d019aaf7e@oracle.com>
In-Reply-To: <20161104193626.GU4611@redhat.com>

On 11/04/2016 12:36 PM, Andrea Arcangeli wrote:
> On Thu, Nov 03, 2016 at 12:14:15PM -0700, Mike Kravetz wrote:
>> +		/* lookup dst_addr as we may have copied some pages */
>> +		dst_vma = find_vma(dst_mm, dst_addr);
> 
> I put back dst_start here.
> 
>> +		if (dst_addr < dst_vma->vm_start ||
>> +		    dst_addr + len - (copied * vma_hpagesize) > dst_vma->vm_end)
>> +			goto out_unlock;
> 
> Actually this introduces a bug: copied * vma_hpagesize in the new
> patch is wrong; copied is already in byte units. I rolled this one
> back anyway because of the dst_start change commented on above.
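
Right, copied is accumulated in bytes in this function, so multiplying it by
vma_hpagesize double-scales the remaining length.  Just to spell out what I
think the check would have needed to be if we kept the per-iteration lookup
(only a sketch, not proposing it given the dst_start revert):

	/* copied is in bytes, so the remaining range is simply len - copied */
	if (dst_addr < dst_vma->vm_start ||
	    dst_addr + (len - copied) > dst_vma->vm_end)
		goto out_unlock;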
> 
>> +	/*
>> +	 * Validate alignment based on huge page size
>> +	 */
>> +	if (dst_addr & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
>> +		goto out_unlock;
> 
> If the vma changes under us we may as well fail. So I moved the
> alignment checks on dst_start/len before the retry loop, and I added a
> further WARN_ON check inside the loop on dst_addr/len-copied just in
> case, but that cannot trigger because we abort if vma_hpagesize
> changed (hence the WARN_ON).
> 
> If we need to relax this later and handle a change of vma_hpagesize,
> it will be a backwards compatible change. I don't think it's needed,
> and this is the stricter behavior.
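
That ordering works for me.  Condensed, my understanding of the new flow
(matching the diff below) is:

	/* before the retry loop: dst_start/len must be hugepage aligned */
	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
		goto out_unlock;
retry:
	...
	/* cannot trigger unless vma_hpagesize changed, which is already rejected */
	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
		    (len - copied) & (vma_hpagesize - 1)))
		goto out_unlock;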
> 
>> +	while (src_addr < src_start + len) {
>> +		pte_t dst_pteval;
>> +
>> +		BUG_ON(dst_addr >= dst_start + len);
>> +		dst_addr &= huge_page_mask(h);
> 
> The additional mask is superfluous here; it was already enforced by
> the alignment checks, so I turned it into a bugcheck.
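
Agreed.  Once dst_start is verified aligned up front, dst_addr only ever
holds aligned values (assuming the loop keeps advancing it by vma_hpagesize,
as in the rest of the patch), so the mask was a no-op:

	dst_addr = dst_start;			/* aligned by the up-front check */
	while (src_addr < src_start + len) {
		...
		dst_addr += vma_hpagesize;	/* alignment is preserved */
	}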

Thanks,

I had made similar hugetlb changes and was testing them.  I'll run the
hugetlb tests against the full diff below.

-- 
Mike Kravetz

> 
> This is the current status; I'm sending a full diff against the
> previous submission for review of the latest updates. I think it's
> easier to review incrementally.
> 
> Please test it; I updated the aa.git tree userfault branch in sync
> with this.
> 
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index 063ccc7..8a0ee3ba 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -628,11 +628,11 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
>  	}
>  }
>  
> -void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx vm_ctx,
> +void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *vm_ctx,
>  				 unsigned long from, unsigned long to,
>  				 unsigned long len)
>  {
> -	struct userfaultfd_ctx *ctx = vm_ctx.ctx;
> +	struct userfaultfd_ctx *ctx = vm_ctx->ctx;
>  	struct userfaultfd_wait_queue ewq;
>  
>  	if (!ctx)
> @@ -657,6 +657,7 @@ void madvise_userfault_dontneed(struct vm_area_struct *vma,
>  				struct vm_area_struct **prev,
>  				unsigned long start, unsigned long end)
>  {
> +	struct mm_struct *mm = vma->vm_mm;
>  	struct userfaultfd_ctx *ctx;
>  	struct userfaultfd_wait_queue ewq;
>  
> @@ -665,8 +666,9 @@ void madvise_userfault_dontneed(struct vm_area_struct *vma,
>  		return;
>  
>  	userfaultfd_ctx_get(ctx);
> +	up_read(&mm->mmap_sem);
> +
>  	*prev = NULL; /* We wait for ACK w/o the mmap semaphore */
> -	up_read(&vma->vm_mm->mmap_sem);
>  
>  	msg_init(&ewq.msg);
>  
> @@ -676,7 +678,7 @@ void madvise_userfault_dontneed(struct vm_area_struct *vma,
>  
>  	userfaultfd_event_wait_completion(ctx, &ewq);
>  
> -	down_read(&vma->vm_mm->mmap_sem);
> +	down_read(&mm->mmap_sem);
>  }
>  
>  static int userfaultfd_release(struct inode *inode, struct file *file)
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index 5caf97f..01a4e98 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -77,7 +77,7 @@ extern void dup_userfaultfd_complete(struct list_head *);
>  
>  extern void mremap_userfaultfd_prep(struct vm_area_struct *,
>  				    struct vm_userfaultfd_ctx *);
> -extern void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx,
> +extern void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *,
>  					unsigned long from, unsigned long to,
>  					unsigned long len);
>  
> @@ -143,7 +143,7 @@ static inline void mremap_userfaultfd_prep(struct vm_area_struct *vma,
>  {
>  }
>  
> -static inline void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx ctx,
> +static inline void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *ctx,
>  					       unsigned long from,
>  					       unsigned long to,
>  					       unsigned long len)
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 450e811..cef4967 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -592,6 +592,6 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
>  	up_write(&current->mm->mmap_sem);
>  	if (locked && new_len > old_len)
>  		mm_populate(new_addr + old_len, new_len - old_len);
> -	mremap_userfaultfd_complete(uf, addr, new_addr, old_len);
> +	mremap_userfaultfd_complete(&uf, addr, new_addr, old_len);
>  	return ret;
>  }
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 578622e..5d3e8bf 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1609,7 +1609,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  			if (fault_type) {
>  				*fault_type |= VM_FAULT_MAJOR;
>  				count_vm_event(PGMAJFAULT);
> -				mem_cgroup_count_vm_event(vma->vm_mm,
> +				mem_cgroup_count_vm_event(charge_mm,
>  							  PGMAJFAULT);
>  			}
>  			/* Here we actually start the io */
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index d47b743..e8d7a89 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -172,8 +172,10 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	 * by THP.  Since we can not reliably insert a zero page, this
>  	 * feature is not supported.
>  	 */
> -	if (zeropage)
> +	if (zeropage) {
> +		up_read(&dst_mm->mmap_sem);
>  		return -EINVAL;
> +	}
>  
>  	src_addr = src_start;
>  	dst_addr = dst_start;
> @@ -181,6 +183,12 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	page = NULL;
>  	vma_hpagesize = vma_kernel_pagesize(dst_vma);
>  
> +	/*
> +	 * Validate alignment based on huge page size
> +	 */
> +	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
> +		goto out_unlock;
> +
>  retry:
>  	/*
>  	 * On routine entry dst_vma is set.  If we had to drop mmap_sem and
> @@ -189,11 +197,15 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	err = -EINVAL;
>  	if (!dst_vma) {
>  		dst_vma = find_vma(dst_mm, dst_start);
> -		vma_hpagesize = vma_kernel_pagesize(dst_vma);
> +		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
> +			goto out_unlock;
> +
> +		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
> +			goto out_unlock;
>  
>  		/*
> -		 * Make sure the vma is not shared, that the dst range is
> -		 * both valid and fully within a single existing vma.
> +		 * Make sure the vma is not shared, that the remaining dst
> +		 * range is both valid and fully within a single existing vma.
>  		 */
>  		if (dst_vma->vm_flags & VM_SHARED)
>  			goto out_unlock;
> @@ -202,10 +214,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  			goto out_unlock;
>  	}
>  
> -	/*
> -	 * Validate alignment based on huge page size
> -	 */
> -	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
> +	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
> +		    (len - copied) & (vma_hpagesize - 1)))
>  		goto out_unlock;
>  
>  	/*
> @@ -227,7 +237,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  		pte_t dst_pteval;
>  
>  		BUG_ON(dst_addr >= dst_start + len);
> -		dst_addr &= huge_page_mask(h);
> +		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>  
>  		/*
>  		 * Serialize via hugetlb_fault_mutex
> @@ -300,17 +310,13 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	return copied ? copied : err;
>  }
>  #else /* !CONFIG_HUGETLB_PAGE */
> -static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> -					      struct vm_area_struct *dst_vma,
> -					      unsigned long dst_start,
> -					      unsigned long src_start,
> -					      unsigned long len,
> -					      bool zeropage)
> -{
> -	up_read(&dst_mm->mmap_sem);	/* HUGETLB not configured */
> -	BUG();
> -	return -EINVAL;
> -}
> +/* fail at build time if gcc attempts to use this */
> +extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> +				      struct vm_area_struct *dst_vma,
> +				      unsigned long dst_start,
> +				      unsigned long src_start,
> +				      unsigned long len,
> +				      bool zeropage);
>  #endif /* CONFIG_HUGETLB_PAGE */
>  
>  static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
> @@ -360,9 +366,9 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
>  	/*
>  	 * If this is a HUGETLB vma, pass off to appropriate routine
>  	 */
> -	if (dst_vma->vm_flags & VM_HUGETLB)
> +	if (is_vm_hugetlb_page(dst_vma))
>  		return  __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
> -						src_start, len, false);
> +						src_start, len, zeropage);
>  
>  	/*
>  	 * Be strict and only allow __mcopy_atomic on userfaultfd
> @@ -431,8 +437,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
>  				err = mfill_zeropage_pte(dst_mm, dst_pmd,
>  							 dst_vma, dst_addr);
>  		} else {
> -			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
> -						     dst_addr, src_addr, &page);
> +			err = -EINVAL; /* if zeropage is true return -EINVAL */
> +			if (likely(!zeropage))
> +				err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
> +							     dst_vma, dst_addr,
> +							     src_addr, &page);
>  		}
>  
>  		cond_resched();
> diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
> index fed2119..5a840a6 100644
> --- a/tools/testing/selftests/vm/userfaultfd.c
> +++ b/tools/testing/selftests/vm/userfaultfd.c
> @@ -625,6 +625,86 @@ static int faulting_process(void)
>  	return 0;
>  }
>  
> +static int uffdio_zeropage(int ufd, unsigned long offset)
> +{
> +	struct uffdio_zeropage uffdio_zeropage;
> +	int ret;
> +	unsigned long has_zeropage = EXPECTED_IOCTLS & (1 << _UFFDIO_ZEROPAGE);
> +
> +	if (offset >= nr_pages * page_size)
> +		fprintf(stderr, "unexpected offset %lu\n",
> +			offset), exit(1);
> +	uffdio_zeropage.range.start = (unsigned long) area_dst + offset;
> +	uffdio_zeropage.range.len = page_size;
> +	uffdio_zeropage.mode = 0;
> +	ret = ioctl(ufd, UFFDIO_ZEROPAGE, &uffdio_zeropage);
> +	if (ret) {
> +		/* real retval in ufdio_zeropage.zeropage */
> +		if (has_zeropage) {
> +			if (uffdio_zeropage.zeropage == -EEXIST)
> +				fprintf(stderr, "UFFDIO_ZEROPAGE -EEXIST\n"),
> +					exit(1);
> +			else
> +				fprintf(stderr, "UFFDIO_ZEROPAGE error %Ld\n",
> +					uffdio_zeropage.zeropage), exit(1);
> +		} else {
> +			if (uffdio_zeropage.zeropage != -EINVAL)
> +				fprintf(stderr,
> +					"UFFDIO_ZEROPAGE not -EINVAL %Ld\n",
> +					uffdio_zeropage.zeropage), exit(1);
> +		}
> +	} else if (has_zeropage) {
> +		if (uffdio_zeropage.zeropage != page_size) {
> +			fprintf(stderr, "UFFDIO_ZEROPAGE unexpected %Ld\n",
> +				uffdio_zeropage.zeropage), exit(1);
> +		} else
> +			return 1;
> +	} else {
> +		fprintf(stderr,
> +			"UFFDIO_ZEROPAGE succeeded %Ld\n",
> +			uffdio_zeropage.zeropage), exit(1);
> +	}
> +
> +	return 0;
> +}
> +
> +/* exercise UFFDIO_ZEROPAGE */
> +static int userfaultfd_zeropage_test(void)
> +{
> +	struct uffdio_register uffdio_register;
> +	unsigned long expected_ioctls;
> +
> +	printf("testing UFFDIO_ZEROPAGE: ");
> +	fflush(stdout);
> +
> +	if (release_pages(area_dst))
> +		return 1;
> +
> +	if (userfaultfd_open(0) < 0)
> +		return 1;
> +	uffdio_register.range.start = (unsigned long) area_dst;
> +	uffdio_register.range.len = nr_pages * page_size;
> +	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
> +	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register))
> +		fprintf(stderr, "register failure\n"), exit(1);
> +
> +	expected_ioctls = EXPECTED_IOCTLS;
> +	if ((uffdio_register.ioctls & expected_ioctls) !=
> +	    expected_ioctls)
> +		fprintf(stderr,
> +			"unexpected missing ioctl for anon memory\n"),
> +			exit(1);
> +
> +	if (uffdio_zeropage(uffd, 0)) {
> +		if (my_bcmp(area_dst, zeropage, page_size))
> +			fprintf(stderr, "zeropage is not zero\n"), exit(1);
> +	}
> +
> +	close(uffd);
> +	printf("done.\n");
> +	return 0;
> +}
> +
>  static int userfaultfd_events_test(void)
>  {
>  	struct uffdio_register uffdio_register;
> @@ -679,6 +759,7 @@ static int userfaultfd_events_test(void)
>  	if (pthread_join(uffd_mon, (void **)&userfaults))
>  		return 1;
>  
> +	close(uffd);
>  	printf("userfaults: %ld\n", userfaults);
>  
>  	return userfaults != nr_pages;
> @@ -852,7 +933,7 @@ static int userfaultfd_stress(void)
>  		return err;
>  
>  	close(uffd);
> -	return userfaultfd_events_test();
> +	return userfaultfd_zeropage_test() || userfaultfd_events_test();
>  }
>  
>  #ifndef HUGETLB_TEST
> 


