From: Jason Gunthorpe <jgg@nvidia.com>
To: <linux-kernel@vger.kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@lst.de>, Hugh Dickins <hughd@google.com>,
Jan Kara <jack@suse.cz>, Jann Horn <jannh@google.com>,
John Hubbard <jhubbard@nvidia.com>,
Kirill Shutemov <kirill@shutemov.name>,
Kirill Tkhai <ktkhai@virtuozzo.com>,
Linux-MM <linux-mm@kvack.org>, Michal Hocko <mhocko@suse.com>,
Oleg Nesterov <oleg@redhat.com>, Peter Xu <peterx@redhat.com>
Subject: [PATCH 1/2] mm: reorganize internal_get_user_pages_fast()
Date: Fri, 23 Oct 2020 21:19:19 -0300
Message-ID: <1-v1-281e425c752f+2df-gup_fork_jgg@nvidia.com>
In-Reply-To: <0-v1-281e425c752f+2df-gup_fork_jgg@nvidia.com>
The next patch in this series makes the lockless flow a little more
complex, so move the entire block into a new function and remove a level
of indentation. Tidy up a bit of cruft:
- addr is always the same as start, so use start
- Use the modern check_add_overflow() for computing end = start + len
- nr_pinned << PAGE_SHIFT needs an unsigned long cast, like nr_pages
- The handling of ret and nr_pinned can be streamlined a bit
No functional change.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
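A minimal userspace sketch of the two commit message points about
check_add_overflow() and the shift cast, for anyone not familiar with
them. This is not kernel code and not part of the patch: PAGE_SHIFT is
hardcoded, a 64-bit unsigned long is assumed, and __builtin_add_overflow()
stands in for check_add_overflow(), which is built on the same compiler
builtin.

#include <stdio.h>

#define PAGE_SHIFT 12	/* demo assumption: 4 KiB pages */

int main(void)
{
	/* A start close enough to the top of the address space that
	 * start + len wraps around. */
	unsigned long start = ~0UL - (4UL << PAGE_SHIFT);
	unsigned long len = 16UL << PAGE_SHIFT;
	unsigned long end;

	/* check_add_overflow(start, len, &end) behaves like this builtin:
	 * it stores the (wrapped) sum in end and returns true on overflow. */
	if (__builtin_add_overflow(start, len, &end))
		printf("start + len overflows -> bail out early\n");

	/* Why (unsigned long)nr_pinned << PAGE_SHIFT needs the cast:
	 * without it the shift is done in 32 bits and the high bits are
	 * lost before the value is widened to unsigned long. */
	unsigned int nr_pinned = 1U << 20;	/* pretend 2^20 pages were pinned */
	unsigned long truncated = nr_pinned << PAGE_SHIFT;
	unsigned long correct = (unsigned long)nr_pinned << PAGE_SHIFT;

	printf("without cast: %#lx, with cast: %#lx\n", truncated, correct);
	return 0;
}

Built with a plain "cc demo.c" on a 64-bit box, this prints 0 for the
uncast shift and 0x100000000 for the cast one.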
mm/gup.c | 88 +++++++++++++++++++++++++++++---------------------------
1 file changed, 46 insertions(+), 42 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 102877ed77a4b4..ecbe1639ea2af7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2671,13 +2671,42 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
return ret;
}
+static unsigned int lockless_pages_from_mm(unsigned long addr,
+ unsigned long end,
+ unsigned int gup_flags,
+ struct page **pages)
+{
+ unsigned long flags;
+ int nr_pinned = 0;
+
+ if (!IS_ENABLED(CONFIG_HAVE_FAST_GUP) ||
+ !gup_fast_permitted(addr, end))
+ return 0;
+
+ /*
+ * Disable interrupts. The nested form is used, in order to allow full,
+ * general purpose use of this routine.
+ *
+ * With interrupts disabled, we block page table pages from being freed
+ * from under us. See struct mmu_table_batch comments in
+ * include/asm-generic/tlb.h for more details.
+ *
+ * We do not adopt an rcu_read_lock(.) here as we also want to block
+ * IPIs that come from THPs splitting.
+ */
+ local_irq_save(flags);
+ gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
+ local_irq_restore(flags);
+ return nr_pinned;
+}
+
static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
unsigned int gup_flags,
struct page **pages)
{
- unsigned long addr, len, end;
- unsigned long flags;
- int nr_pinned = 0, ret = 0;
+ unsigned long len, end;
+ unsigned int nr_pinned;
+ int ret;
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
FOLL_FORCE | FOLL_PIN | FOLL_GET |
@@ -2691,53 +2720,28 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
might_lock_read(&current->mm->mmap_lock);
start = untagged_addr(start) & PAGE_MASK;
- addr = start;
len = (unsigned long) nr_pages << PAGE_SHIFT;
- end = start + len;
-
- if (end <= start)
+ if (check_add_overflow(start, len, &end))
return 0;
if (unlikely(!access_ok((void __user *)start, len)))
return -EFAULT;
- /*
- * Disable interrupts. The nested form is used, in order to allow
- * full, general purpose use of this routine.
- *
- * With interrupts disabled, we block page table pages from being
- * freed from under us. See struct mmu_table_batch comments in
- * include/asm-generic/tlb.h for more details.
- *
- * We do not adopt an rcu_read_lock(.) here as we also want to
- * block IPIs that come from THPs splitting.
- */
- if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && gup_fast_permitted(start, end)) {
- unsigned long fast_flags = gup_flags;
-
- local_irq_save(flags);
- gup_pgd_range(addr, end, fast_flags, pages, &nr_pinned);
- local_irq_restore(flags);
- ret = nr_pinned;
- }
-
- if (nr_pinned < nr_pages && !(gup_flags & FOLL_FAST_ONLY)) {
- /* Try to get the remaining pages with get_user_pages */
- start += nr_pinned << PAGE_SHIFT;
- pages += nr_pinned;
-
- ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned,
- gup_flags, pages);
+ nr_pinned = lockless_pages_from_mm(start, end, gup_flags, pages);
+ if (nr_pinned == nr_pages || gup_flags & FOLL_FAST_ONLY)
+ return nr_pinned;
+ /* Try to get the remaining pages with get_user_pages */
+ start += (unsigned long)nr_pinned << PAGE_SHIFT;
+ pages += nr_pinned;
+ ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned, gup_flags,
+ pages);
+ if (ret < 0) {
/* Have to be a bit careful with return values */
- if (nr_pinned > 0) {
- if (ret < 0)
- ret = nr_pinned;
- else
- ret += nr_pinned;
- }
+ if (nr_pinned)
+ return nr_pinned;
+ return ret;
}
-
- return ret;
+ return ret + nr_pinned;
}
/**
* get_user_pages_fast_only() - pin user pages in memory
--
2.28.0
Thread overview: 20+ messages
2020-10-24 0:19 [PATCH 0/2] Add a 'seqcount' between gup_fast and copy_page_range Jason Gunthorpe
2020-10-24 0:19 ` Jason Gunthorpe [this message]
2020-10-24 4:44 ` [PATCH 1/2] mm: reorganize internal_get_user_pages_fast() John Hubbard
2020-10-26 23:59 ` Jason Gunthorpe
2020-10-27 9:33 ` Jan Kara
2020-10-27 9:55 ` Christoph Hellwig
2020-10-28 6:00 ` John Hubbard
2020-10-27 13:15 ` Jason Gunthorpe
2020-10-28 6:00 ` John Hubbard
2020-10-28 6:05 ` John Hubbard
2020-10-24 0:19 ` [PATCH 2/2] mm: prevent gup_fast from racing with COW during fork Jason Gunthorpe
2020-10-24 5:19 ` John Hubbard
2020-10-24 5:31 ` John Hubbard
2020-10-26 23:49 ` Jason Gunthorpe
2020-10-27 0:14 ` Linus Torvalds
2020-10-27 11:32 ` Jason Gunthorpe
2020-10-27 0:35 ` John Hubbard
2020-10-27 7:32 ` John Hubbard
2020-11-02 3:25 ` [mm] e498078ae9: will-it-scale.per_thread_ops -1.4% regression kernel test robot
2020-10-24 5:14 ` [PATCH 0/2] Add a 'seqcount' between gup_fast and copy_page_range John Hubbard