From: Ralph Campbell <rcampbell@nvidia.com>
To: <linux-mm@kvack.org>, <nouveau@lists.freedesktop.org>,
<linux-kselftest@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Cc: Jerome Glisse <jglisse@redhat.com>,
John Hubbard <jhubbard@nvidia.com>,
Alistair Popple <apopple@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Jason Gunthorpe <jgg@nvidia.com>,
Bharata B Rao <bharata@linux.ibm.com>,
Ben Skeggs <bskeggs@redhat.com>, Shuah Khan <shuah@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
Date: Wed, 2 Sep 2020 09:58:24 -0700
Message-ID: <20200902165830.5367-2-rcampbell@nvidia.com>
In-Reply-To: <20200902165830.5367-1-rcampbell@nvidia.com>
A migrating transparent huge page has to already be unmapped. Otherwise,
the page could be modified while it is being copied to a new page and
data could be lost. The function __split_huge_pmd() checks for a PMD
migration entry before calling __split_huge_pmd_locked(), leading one to
think that __split_huge_pmd_locked() can handle splitting a migrating PMD.
However, the code unconditionally increments page->_mapcount and adjusts
the memory control group accounting as if the page were still mapped.

Also, if the PMD entry is a migration PMD entry, the call to
is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).

Fix these problems by checking for a PMD migration entry.
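To illustrate the PFN point above, here is a minimal sketch (not part of
the patch; thp_pmd_pfn() is a hypothetical helper) of how the PFN must be
decoded once the entry may be a migration entry. The decoding calls are
the existing ones from include/linux/swapops.h:

	/*
	 * Minimal sketch only, not part of this patch; thp_pmd_pfn() is
	 * a hypothetical helper. A PMD migration entry is not present,
	 * so pmd_pfn() would misread the swap-style encoding as a
	 * hardware PFN field. Decode it via swapops.h instead.
	 */
	static unsigned long thp_pmd_pfn(pmd_t pmd)
	{
		if (is_pmd_migration_entry(pmd))
			return migration_entry_to_pfn(pmd_to_swp_entry(pmd));
		return pmd_pfn(pmd);
	}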
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
mm/huge_memory.c | 42 +++++++++++++++++++++++-------------------
1 file changed, 23 insertions(+), 19 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a468a4acb0a..606d712d9505 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2023,7 +2023,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
+	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
@@ -2117,30 +2117,34 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		atomic_inc(&page[i]._mapcount);
-		pte_unmap(pte);
-	}
-
-	/*
-	 * Set PG_double_map before dropping compound_mapcount to avoid
-	 * false-negative page_mapped().
-	 */
-	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		if (!pmd_migration)
 			atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
 	}
 
-	lock_page_memcg(page);
-	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
-		/* Last compound_mapcount is gone. */
-		__dec_lruvec_page_state(page, NR_ANON_THPS);
-		if (TestClearPageDoubleMap(page)) {
-			/* No need in mapcount reference anymore */
+	if (!pmd_migration) {
+		/*
+		 * Set PG_double_map before dropping compound_mapcount to avoid
+		 * false-negative page_mapped().
+		 */
+		if (compound_mapcount(page) > 1 &&
+		    !TestSetPageDoubleMap(page)) {
 			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_dec(&page[i]._mapcount);
+				atomic_inc(&page[i]._mapcount);
+		}
+
+		lock_page_memcg(page);
+		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+			/* Last compound_mapcount is gone. */
+			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			if (TestClearPageDoubleMap(page)) {
+				/* No need in mapcount reference anymore */
+				for (i = 0; i < HPAGE_PMD_NR; i++)
+					atomic_dec(&page[i]._mapcount);
+			}
 		}
+		unlock_page_memcg(page);
 	}
-	unlock_page_memcg(page);
 
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
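For context, a condensed paraphrase (not verbatim, and not part of the
patch) of the dispatch in __split_huge_pmd() in mm/huge_memory.c: a PMD
migration entry deliberately falls through to __split_huge_pmd_locked(),
which is why the locked helper must tolerate a non-present entry:

	/*
	 * Condensed paraphrase of __split_huge_pmd(); not verbatim.
	 * Present huge PMDs, devmap PMDs, and PMD migration entries all
	 * reach __split_huge_pmd_locked().
	 */
	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
	    is_pmd_migration_entry(*pmd))
		__split_huge_pmd_locked(vma, pmd, range.start, freeze);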
--
2.20.1
Thread overview:
2020-09-02 16:58 [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_* Ralph Campbell
2020-09-02 16:58 ` Ralph Campbell [this message]
2020-09-02 20:29 ` [PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD Yang Shi
2020-09-02 21:47 ` Zi Yan
2020-09-02 22:02 ` Ralph Campbell
2020-09-03 12:54 ` Kirill A. Shutemov
2020-09-02 16:58 ` [PATCH v2 2/7] mm/migrate: move migrate_vma_collect_skip() Ralph Campbell
2020-09-02 16:58 ` [PATCH v2 3/7] mm: support THP migration to device private memory Ralph Campbell
2020-09-02 16:58 ` [PATCH v2 4/7] mm/thp: add prep_transhuge_device_private_page() Ralph Campbell
2020-09-02 16:58 ` [PATCH v2 5/7] mm/thp: add THP allocation helper Ralph Campbell
2020-09-02 16:58 ` [PATCH v2 6/7] mm/hmm/test: add self tests for THP migration Ralph Campbell
2020-09-02 16:58 ` [PATCH v2 7/7] nouveau: support THP migration to private memory Ralph Campbell