From: Andrea Arcangeli <aarcange@redhat.com>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: Michael Rapoport <RAPOPORT@il.ibm.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Pavel Emelyanov <xemul@parallels.com>,
Hillf Danton <hillf.zj@alibaba-inc.com>
Subject: [PATCH 1/1] userfaultfd: shmem: avoid a lockup resulting from corrupted page->flags
Date: Mon, 16 Jan 2017 19:04:08 +0100
Message-ID: <20170116180408.12184-2-aarcange@redhat.com>
In-Reply-To: <20170116180408.12184-1-aarcange@redhat.com>
Use the non-atomic version of __SetPageUptodate only while the page is
still private and not yet visible to lookup operations. Using the
non-atomic version after the page is already visible to lookups is
unsafe, because concurrent lock_page operations could be modifying
page->flags while it runs.
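To make the race concrete: __SetPageUptodate() does a plain
load/modify/store of page->flags (__set_bit), while lock_page() updates
PG_locked in the same word atomically. Once the page is reachable
through the mapping, a non-atomic store can overwrite a concurrently set
bit, leaving page->flags corrupted; a corrupted PG_locked state is what
shows up as the __lock_page() hang in the trace below. The following
userspace sketch (not kernel code; the flag bits, threads and the whole
program are hypothetical stand-ins) only illustrates that lost-update
hazard, and whether the bad interleaving fires on a given run is timing
dependent. Build with gcc -pthread.

/*
 * Minimal userspace sketch, not kernel code: hypothetical stand-ins
 * for page->flags, __SetPageUptodate() and lock_page(), used only to
 * show how a non-atomic read-modify-write can lose a concurrent
 * atomic bit update in the same word.
 */
#include <pthread.h>
#include <stdio.h>

#define PG_LOCKED	(1UL << 0)	/* stands in for PG_locked   */
#define PG_UPTODATE	(1UL << 1)	/* stands in for PG_uptodate */

static unsigned long flags;		/* stands in for page->flags */

static void *locker(void *arg)
{
	(void)arg;
	/* lock_page()-style update: atomic OR of the lock bit */
	__atomic_fetch_or(&flags, PG_LOCKED, __ATOMIC_SEQ_CST);
	return NULL;
}

static void *uptodate_setter(void *arg)
{
	(void)arg;
	/*
	 * __SetPageUptodate()-style update: plain load/modify/store.
	 * If the locker runs between the load and the store, its
	 * PG_LOCKED bit is silently overwritten.
	 */
	unsigned long old = flags;	/* load */
	flags = old | PG_UPTODATE;	/* store, may clobber PG_LOCKED */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, locker, NULL);
	pthread_create(&b, NULL, uptodate_setter, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* the bad interleaving is timing dependent; it may take many runs */
	printf("flags = %#lx (PG_LOCKED %s)\n", flags,
	       (flags & PG_LOCKED) ? "preserved" : "lost");
	return 0;
}

The fix is simply to set the uptodate bit while the page is still
exclusively owned by shmem_mcopy_atomic_pte(), before it is added to
the page cache and becomes reachable by other CPUs, which is what the
hunks below do.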
This solves a lockup in find_lock_entry with the userfaultfd_shmem
selftest.
userfaultfd_shm D14296 691 1 0x00000004
Call Trace:
? __schedule+0x311/0xb60
schedule+0x3d/0x90
schedule_timeout+0x228/0x420
? mark_held_locks+0x71/0x90
? ktime_get+0x134/0x170
? kvm_clock_read+0x25/0x30
? kvm_clock_get_cycles+0x9/0x10
? ktime_get+0xd6/0x170
? __delayacct_blkio_start+0x1f/0x30
io_schedule_timeout+0xa4/0x110
? trace_hardirqs_on+0xd/0x10
__lock_page+0x12d/0x170
? add_to_page_cache_lru+0xe0/0xe0
find_lock_entry+0xa4/0x190
shmem_getpage_gfp+0xb9/0xc30
? alloc_set_pte+0x56e/0x610
? radix_tree_next_chunk+0xf6/0x2d0
shmem_fault+0x70/0x1c0
? filemap_map_pages+0x3bd/0x530
__do_fault+0x21/0x150
handle_mm_fault+0xec9/0x1490
__do_page_fault+0x20d/0x520
trace_do_page_fault+0x61/0x270
do_async_page_fault+0x19/0x80
async_page_fault+0x25/0x30
Reported-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/shmem.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b1ecd07..873b847 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2247,6 +2247,7 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));
__SetPageLocked(page);
__SetPageSwapBacked(page);
+ __SetPageUptodate(page);
ret = mem_cgroup_try_charge(page, dst_mm, gfp, &memcg, false);
if (ret)
@@ -2271,8 +2272,6 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
if (!pte_none(*dst_pte))
goto out_release_uncharge_unlock;
- __SetPageUptodate(page);
-
lru_cache_add_anon(page);
spin_lock(&info->lock);
--