From: Rajesh Venkatasubramanian <vrajesh@umich.edu>
To: akpm@osdl.org
Cc: linux-kernel@vger.kernel.org, Linux-MM@kvack.org
Subject: [PATCH] mremap NULL pointer dereference fix
Date: Mon, 16 Feb 2004 23:41:17 -0500 (EST)
Message-ID: <Pine.SOL.4.44.0402162331580.20215-100000@blue.engin.umich.edu>
This patch fixes a NULL pointer dereference bug in mremap. In
move_one_page we need to re-check src after allocating the dst page
table, because alloc_one_pte_map can drop page_table_lock to perform
the allocation, and somebody else can invalidate src while the lock
is dropped.
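In code terms, the window looks roughly like this (a sketch
reconstructed from the hunk below plus its surrounding context, not
verbatim source; the comments mark the race):

	dst = alloc_one_pte_map(mm, new_addr);
		/* ^ may spin_unlock(&mm->page_table_lock) to allocate
		 *   a page table for new_addr, then re-take the lock */
	if (src == NULL)	/* src was unmapped before the alloc */
		src = get_one_pte_map_nested(mm, old_addr);
		/* ^ this re-lookup can now legitimately return NULL:
		 *   while the lock was dropped, somebody may have freed
		 *   the ptes of old_addr (e.g. a racing truncate) */
	error = copy_one_pte(vma, old_addr, src, dst, &pte_chain);
		/* ^ dereferences src unconditionally -> oops if NULL */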
On my old quad Pentium II 200MHz box with 256MB RAM, running
2.6.3-rc3-mm1 with preemption enabled, I could hit the NULL pointer
dereference with the program at the following URL:
http://www-personal.engin.umich.edu/~vrajesh/linux/mremap-nullptr/
A full trace of the bug can be found at the above URL; a partial call
trace is below.
kernel: PREEMPT SMP
kernel: EIP is at copy_one_pte+0x12/0xa0
kernel: [<c01558a3>] move_one_page+0xa3/0x110
kernel: [<c0155947>] move_page_tables+0x37/0x80
kernel: [<c0155a1a>] move_vma+0x8a/0x5e0
kernel: [<c015620c>] do_mremap+0x29c/0x3d0
kernel: [<c015638d>] sys_mremap+0x4d/0x6d
kernel: [<c03d5ee7>] syscall_call+0x7/0xb
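The actual reproducer lives at the URL above and is not reproduced
here. For flavor, a hypothetical stress program of the same general
shape might look like the sketch below. The file name, sizes, and
fixed target address are invented, and there is no guarantee this
exact pattern oopses any given kernel; it merely races pte
invalidation (via truncate) against mremap's page table moves.

/* build: gcc -O2 -o mremap-stress mremap-stress.c -lpthread */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE	4096UL
#define LEN	(64 * PAGE)

static int fd;
static sigjmp_buf env;

/* Touching pages beyond EOF of a shared file mapping raises SIGBUS
 * when the other thread has just shrunk the file; abandon the
 * current pass and try again. */
static void on_sigbus(int sig)
{
	(void)sig;
	siglongjmp(env, 1);
}

/* Keep zapping and restoring the file pages so that pte invalidation
 * races with mremap's page table moves. */
static void *truncater(void *arg)
{
	(void)arg;
	for (;;) {
		ftruncate(fd, PAGE);
		ftruncate(fd, LEN);
	}
	return NULL;
}

int main(void)
{
	pthread_t thr;
	unsigned long i;
	char *map, *moved;

	signal(SIGBUS, on_sigbus);
	fd = open("/tmp/mremap-stress", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0 || ftruncate(fd, LEN) < 0) {
		perror("setup");
		return 1;
	}
	pthread_create(&thr, NULL, truncater, NULL);

	for (;;) {
		map = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			   MAP_SHARED, fd, 0);
		if (map == MAP_FAILED)
			continue;
		if (sigsetjmp(env, 1) == 0)
			for (i = 0; i < LEN; i += PAGE)
				map[i] = 1;	/* populate the ptes */
		/* force the kernel to move the page table entries */
		moved = mremap(map, LEN, LEN,
			       MREMAP_MAYMOVE | MREMAP_FIXED,
			       (void *)0x60000000UL);
		munmap(moved != MAP_FAILED ? moved : map, LEN);
	}
}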
Please apply.
mm/mremap.c | 26 ++++++++++++++++++++------
1 files changed, 20 insertions(+), 6 deletions(-)
diff -puN mm/mremap.c~nullptr mm/mremap.c
--- mmlinux-2.6/mm/mremap.c~nullptr 2004-02-16 17:24:00.000000000 -0500
+++ mmlinux-2.6-jaya/mm/mremap.c 2004-02-16 17:24:00.000000000 -0500
@@ -135,17 +135,31 @@ move_one_page(struct vm_area_struct *vma
 		dst = alloc_one_pte_map(mm, new_addr);
 		if (src == NULL)
 			src = get_one_pte_map_nested(mm, old_addr);
+		/*
+		 * Since alloc_one_pte_map can drop and re-acquire
+		 * page_table_lock, we should re-check the src entry...
+		 */
+		if (src == NULL) {
+			pte_unmap(dst);
+			goto flush_out;
+		}
 		error = copy_one_pte(vma, old_addr, src, dst, &pte_chain);
 		pte_unmap_nested(src);
 		pte_unmap(dst);
-	} else
-		/*
-		 * Why do we need this flush ? If there is no pte for
-		 * old_addr, then there must not be a pte for it as well.
-		 */
-		flush_tlb_page(vma, old_addr);
+		goto unlock_out;
+	}
+
+flush_out:
+	/*
+	 * Why do we need this flush ? If there is no pte for
+	 * old_addr, then there must not be a pte for it as well.
+	 */
+	flush_tlb_page(vma, old_addr);
+
+unlock_out:
 	spin_unlock(&mm->page_table_lock);
 	pte_chain_free(pte_chain);
+
 out:
 	return error;
 }
_
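For readability, here is how the control flow of move_one_page reads
with this patch applied (a skeleton stitched together from the hunk
above; the first two lines are assumed surrounding context):

	src = get_one_pte_map_nested(mm, old_addr);
	if (src) {
		dst = alloc_one_pte_map(mm, new_addr);	/* may drop lock */
		if (src == NULL)
			src = get_one_pte_map_nested(mm, old_addr);
		if (src == NULL) {	/* old_addr ptes went away */
			pte_unmap(dst);
			goto flush_out;
		}
		error = copy_one_pte(vma, old_addr, src, dst, &pte_chain);
		pte_unmap_nested(src);
		pte_unmap(dst);
		goto unlock_out;	/* copied fine; no flush needed */
	}

flush_out:
	flush_tlb_page(vma, old_addr);

unlock_out:
	spin_unlock(&mm->page_table_lock);
	pte_chain_free(pte_chain);
out:
	return error;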
Thread overview: 9+ messages
2004-02-17 4:41 Rajesh Venkatasubramanian [this message]
2004-02-17 5:31 ` Andrew Morton
2004-02-17 5:38 ` Linus Torvalds
2004-02-17 5:49 ` Linus Torvalds
2004-02-17 6:00 ` Andrew Morton
2004-02-17 6:06 ` Linus Torvalds
2004-02-17 13:23 ` Rajesh Venkatasubramanian
2004-02-17 21:33 ` Rajesh Venkatasubramanian
2004-02-19 14:29 ` [PATCH] orphaned ptes -- mremap vs. truncate race Rajesh Venkatasubramanian