From: John Hubbard <jhubbard@nvidia.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
"Al Viro" <viro@zeniv.linux.org.uk>,
"Christoph Hellwig" <hch@infradead.org>,
"Dan Williams" <dan.j.williams@intel.com>,
"Dave Chinner" <david@fromorbit.com>,
"Ira Weiny" <ira.weiny@intel.com>, "Jan Kara" <jack@suse.cz>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
"Jonathan Corbet" <corbet@lwn.net>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Michal Hocko" <mhocko@suse.com>,
"Mike Kravetz" <mike.kravetz@oracle.com>,
"Shuah Khan" <shuah@kernel.org>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Matthew Wilcox" <willy@infradead.org>,
linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH v3 07/12] mm/gup: track FOLL_PIN pages
Date: Mon, 3 Feb 2020 13:01:14 -0800
Message-ID: <44f9e71f-dc65-fb37-dd6d-228270170aad@nvidia.com>
In-Reply-To: <20200203134024.htczuqghduajb3yx@box>
On 2/3/20 5:40 AM, Kirill A. Shutemov wrote:
> On Fri, Jan 31, 2020 at 07:40:24PM -0800, John Hubbard wrote:
>> @@ -4405,7 +4392,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>> same_page:
>> if (pages) {
>> pages[i] = mem_map_offset(page, pfn_offset);
>> - get_page(pages[i]);
>> + if (!try_grab_page(pages[i], flags)) {
>> + spin_unlock(ptl);
>> + remainder = 0;
>> + err = -ENOMEM;
>> + WARN_ON_ONCE(1);
>
> The WARN_ON_ONCE deserves a comment. And I guess you can put it into the
> 'if' condition.
OK, I've changed it to the following, which I *think* is an accurate comment, but
I'm still a bit new to huge pages:
if (pages) {
pages[i] = mem_map_offset(page, pfn_offset);
/*
* try_grab_page() should always succeed here, because:
* a) we hold the ptl lock, and b) we've just checked
* that the huge page is present in the page tables. If
* the huge page is present, then the tail pages must
* also be present. The ptl prevents the head page and
* tail pages from being rearranged in any way. So this
* page must be available at this point, unless the page
* refcount overflowed:
*/
if (WARN_ON_ONCE(!try_grab_page(pages[i], flags))) {
spin_unlock(ptl);
remainder = 0;
err = -ENOMEM;
break;
}
}
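(For reference, here is a rough, paraphrased sketch of try_grab_page() as
this series defines it, simplified rather than quoted verbatim from the
patch. It shows why an overflowed refcount is the only way the call can
fail once we know the page is present:)

	/* Paraphrased, simplified sketch of try_grab_page(): */
	bool __must_check try_grab_page(struct page *page, unsigned int flags)
	{
		/* FOLL_GET and FOLL_PIN are mutually exclusive: */
		WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) ==
			     (FOLL_GET | FOLL_PIN));

		if (flags & FOLL_GET) {
			/* try_get_page() fails only on refcount overflow: */
			return try_get_page(page);
		} else if (flags & FOLL_PIN) {
			page = compound_head(page);

			/* Again, only a saturated refcount can fail here: */
			if (WARN_ON_ONCE(page_ref_count(page) <= 0))
				return false;

			/* FOLL_PIN pins are tracked via a large refcount bias: */
			page_ref_add(page, GUP_PIN_COUNTING_BIAS);
		}

		return true;
	}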
>
>> + break;
>> + }
>> }
>>
>> if (vmas)
>> @@ -4965,6 +4958,12 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>> struct page *page = NULL;
>> spinlock_t *ptl;
>> pte_t pte;
>> +
>> + /* FOLL_GET and FOLL_PIN are mutually exclusive. */
>> + if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
>> + (FOLL_PIN | FOLL_GET)))
>> + return NULL;
>> +
>> retry:
>> ptl = pmd_lockptr(mm, pmd);
>> spin_lock(ptl);
>> @@ -4977,8 +4976,11 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>> pte = huge_ptep_get((pte_t *)pmd);
>> if (pte_present(pte)) {
>> page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
>> - if (flags & FOLL_GET)
>> - get_page(page);
>> + if (unlikely(!try_grab_page(page, flags))) {
>> + WARN_ON_ONCE(1);
>
> Ditto.
OK, I've added a comment similar to the one above. Now it looks like this:
if (pte_present(pte)) {
page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
/*
* try_grab_page() should always succeed here, because: a) we
* hold the pmd (ptl) lock, and b) we've just checked that the
* huge pmd (head) page is present in the page tables. The ptl
* prevents the head page and tail pages from being rearranged
* in any way. So this page must be available at this point,
* unless the page refcount overflowed:
*/
if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
page = NULL;
goto out;
}
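Also, regarding the FOLL_PIN | FOLL_GET assertion above: callers are not
expected to pass FOLL_PIN themselves. They reach it via the pin_user_pages*()
APIs and release with unpin_user_page*(). A hypothetical caller (addr and
NPAGES here are made up for illustration) would look roughly like:

	struct page *pages[NPAGES];	/* NPAGES: made-up array size */
	int nr;

	/* pin_user_pages_fast() sets FOLL_PIN internally: */
	nr = pin_user_pages_fast(addr, NPAGES, FOLL_WRITE, pages);
	if (nr > 0) {
		/* ... hardware may DMA to/from these pages ... */

		/* Pinned pages are released via unpin, not put_page(): */
		unpin_user_pages(pages, nr);
	}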
thanks,
--
John Hubbard
NVIDIA
>
>> + page = NULL;
>> + goto out;
>> + }
>> } else {
>> if (is_hugetlb_entry_migration(pte)) {
>> spin_unlock(ptl);
>