linux-mm.kvack.org archive mirror
From: "zhangpeng (AS)" <zhangpeng362@huawei.com>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<akpm@linux-foundation.org>, <willy@infradead.org>,
	<mike.kravetz@oracle.com>, <sidhartha.kumar@oracle.com>,
	<muchun.song@linux.dev>, <wangkefeng.wang@huawei.com>,
	<sunnanyong@huawei.com>
Subject: Re: [PATCH v5 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
Date: Sat, 8 Apr 2023 12:43:28 +0800	[thread overview]
Message-ID: <a874c84b-4a83-c12f-e064-eab6a792c1e6@huawei.com> (raw)
In-Reply-To: <CAOzc2pzr7VJRdsx1ud_ceBhbu2XP7Ay72jFETtN8eOt5yR7S=Q@mail.gmail.com>

On 2023/4/7 10:28, Vishal Moola wrote:

> On Fri, Mar 31, 2023 at 2:41 AM Peng Zhang <zhangpeng362@huawei.com> wrote:
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> Replace copy_huge_page_from_user() with copy_folio_from_user().
>> copy_folio_from_user() does the same as copy_huge_page_from_user(), but
>> takes in a folio instead of a page. Convert page_kaddr to kaddr in
>> copy_folio_from_user() to do indenting cleanup.
>>
>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> ---
>>   include/linux/mm.h |  7 +++----
>>   mm/hugetlb.c       |  5 ++---
>>   mm/memory.c        | 26 ++++++++++++--------------
>>   mm/userfaultfd.c   |  6 ++----
>>   4 files changed, 19 insertions(+), 25 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index e249208f8fbe..cf4d773ca506 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -3682,10 +3682,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
>>                                  unsigned long addr_hint,
>>                                  struct vm_area_struct *vma,
>>                                  unsigned int pages_per_huge_page);
>> -extern long copy_huge_page_from_user(struct page *dst_page,
>> -                               const void __user *usr_src,
>> -                               unsigned int pages_per_huge_page,
>> -                               bool allow_pagefault);
>> +long copy_folio_from_user(struct folio *dst_folio,
>> +                          const void __user *usr_src,
>> +                          bool allow_pagefault);
>>
>>   /**
>>    * vma_is_special_huge - Are transhuge page-table entries considered special?
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 7e4a80769c9e..aade1b513474 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>>                          goto out;
>>                  }
>>
>> -               ret = copy_huge_page_from_user(&folio->page,
>> -                                               (const void __user *) src_addr,
>> -                                               pages_per_huge_page(h), false);
>> +               ret = copy_folio_from_user(folio, (const void __user *) src_addr,
>> +                                          false);
>>
>>                  /* fallback to copy_from_user outside mmap_lock */
>>                  if (unlikely(ret)) {
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 808f354bce65..4976422b6979 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5868,35 +5868,33 @@ void copy_user_huge_page(struct page *dst, struct page *src,
>>          process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
>>   }
>>
>> -long copy_huge_page_from_user(struct page *dst_page,
>> -                               const void __user *usr_src,
>> -                               unsigned int pages_per_huge_page,
>> -                               bool allow_pagefault)
>> +long copy_folio_from_user(struct folio *dst_folio,
>> +                          const void __user *usr_src,
>> +                          bool allow_pagefault)
>>   {
>> -       void *page_kaddr;
>> +       void *kaddr;
>>          unsigned long i, rc = 0;
>> -       unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
>> +       unsigned int nr_pages = folio_nr_pages(dst_folio);
>> +       unsigned long ret_val = nr_pages * PAGE_SIZE;
>>          struct page *subpage;
>>
>> -       for (i = 0; i < pages_per_huge_page; i++) {
>> -               subpage = nth_page(dst_page, i);
>> -               page_kaddr = kmap_local_page(subpage);
>> +       for (i = 0; i < nr_pages; i++) {
>> +               subpage = folio_page(dst_folio, i);
>> +               kaddr = kmap_local_page(subpage);
>>                  if (!allow_pagefault)
>>                          pagefault_disable();
>> -               rc = copy_from_user(page_kaddr,
>> -                               usr_src + i * PAGE_SIZE, PAGE_SIZE);
>> +               rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
>>                  if (!allow_pagefault)
>>                          pagefault_enable();
>> -               kunmap_local(page_kaddr);
>> +               kunmap_local(kaddr);
>>
>>                  ret_val -= (PAGE_SIZE - rc);
>>                  if (rc)
>>                          break;
>>
>> -               flush_dcache_page(subpage);
>> -
>>                  cond_resched();
>>          }
>> +       flush_dcache_folio(dst_folio);
>>          return ret_val;
>>   }
> Moving the flush_dcache_page() outside the loop to be
> flush_dcache_folio() changes the behavior of the function.
>
> Initially, if it fails to copy the entire page, the function breaks out
> of the loop and returns the number of unwritten bytes without
> flushing the page from the cache. Now if it fails, it will still flush
> out the page it failed on, as well as any later pages it may not
> have gotten to yet.

Agreed. If it fails, could we just not flush the folio?
Like this:
long copy_folio_from_user(...)
{
	...
	for (i = 0; i < nr_pages; i++) {
		...
		rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
		...
		ret_val -= (PAGE_SIZE - rc);
		if (rc)
-                       break;
+                       return ret_val;
		cond_resched();
	}
	flush_dcache_folio(dst_folio);
	return ret_val;
}
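For illustration only, here is a userspace sketch of the control flow being proposed above. The kernel-specific pieces (kmap_local_page(), pagefault_disable(), copy_from_user(), flush_dcache_folio()) are replaced with stubs and a flag, so the names copy_chunk(), copy_folio_sim(), and the fail_at_page knob are all hypothetical; the point is just the return-value and "no flush on partial copy" semantics:

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096UL

/* Stub standing in for copy_from_user(): returns the number of bytes
 * it failed to copy; fail_at_page < 0 means every copy succeeds. */
static unsigned long copy_chunk(char *dst, const char *src,
				unsigned long i, long fail_at_page)
{
	if (fail_at_page >= 0 && (long)i == fail_at_page)
		return PAGE_SIZE / 2;	/* pretend half the page faulted */
	memcpy(dst, src, PAGE_SIZE);
	return 0;
}

/* Mirrors the proposed copy_folio_from_user() flow: on a partial copy,
 * return the remaining byte count immediately, so the final flush step
 * (flush_dcache_folio() in the real code) is never reached. */
static long copy_folio_sim(char *dst, const char *src,
			   unsigned long nr_pages, long fail_at_page,
			   int *flushed)
{
	unsigned long ret_val = nr_pages * PAGE_SIZE;
	unsigned long i, rc;

	*flushed = 0;
	for (i = 0; i < nr_pages; i++) {
		rc = copy_chunk(dst + i * PAGE_SIZE, src + i * PAGE_SIZE,
				i, fail_at_page);
		ret_val -= (PAGE_SIZE - rc);
		if (rc)
			return ret_val;	/* early return: no flush on failure */
	}
	*flushed = 1;			/* flush happens only on full success */
	return ret_val;
}
```

On a clean run over a 4-page "folio" the function returns 0 and the flush step runs; a partial copy on page 1 returns the bytes still outstanding (the half page that failed plus the two pages never attempted) and skips the flush, which is the behavior difference under discussion.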

Thanks for your review.

Best Regards,
Peng




Thread overview: 21+ messages
2023-03-31  9:39 [PATCH v5 0/6] userfaultfd: convert userfaultfd functions to use folios Peng Zhang
2023-03-31  9:39 ` [PATCH v5 1/6] userfaultfd: convert mfill_atomic_pte_copy() to use a folio Peng Zhang
2023-04-06 21:31   ` Mike Kravetz
2023-04-08  4:42     ` zhangpeng (AS)
2023-03-31  9:39 ` [PATCH v5 2/6] userfaultfd: use kmap_local_page() in copy_huge_page_from_user() Peng Zhang
2023-04-06 21:32   ` Mike Kravetz
2023-03-31  9:39 ` [PATCH v5 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user() Peng Zhang
2023-04-06 22:22   ` Mike Kravetz
2023-04-07  2:28   ` Vishal Moola
2023-04-08  4:43     ` zhangpeng (AS) [this message]
2023-04-10 21:26       ` Mike Kravetz
2023-04-11  1:30         ` Yin, Fengwei
2023-04-11  3:40     ` Matthew Wilcox
2023-04-18 22:21       ` Andrew Morton
2023-03-31  9:39 ` [PATCH v5 4/6] userfaultfd: convert mfill_atomic_hugetlb() to use a folio Peng Zhang
2023-04-06 22:48   ` Mike Kravetz
2023-03-31  9:39 ` [PATCH v5 5/6] mm: convert copy_user_huge_page() to copy_user_folio() Peng Zhang
2023-04-06 23:55   ` Mike Kravetz
2023-04-08  4:42     ` zhangpeng (AS)
2023-03-31  9:39 ` [PATCH v5 6/6] userfaultfd: convert mfill_atomic() to use a folio Peng Zhang
2023-04-07  0:07   ` Mike Kravetz
