From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail190.messagelabs.com (mail190.messagelabs.com
	[216.82.249.51]) by kanga.kvack.org (Postfix) with SMTP id 7D3B76B01EF
	for ; Wed, 7 Apr 2010 22:56:24 -0400 (EDT)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Subject: [PATCH 39 of 67] don't leave orphaned swap cache after ksm merging
Message-Id: <2aff4b2014a8175867ff.1270691482@v2.random>
In-Reply-To:
References:
Date: Thu, 08 Apr 2010 03:51:22 +0200
From: Andrea Arcangeli
Sender: owner-linux-mm@kvack.org
To: linux-mm@kvack.org, Andrew Morton
Cc: Marcelo Tosatti, Adam Litke, Avi Kivity, Izik Eidus, Hugh Dickins,
	Nick Piggin, Rik van Riel, Mel Gorman, Dave Hansen,
	Benjamin Herrenschmidt, Ingo Molnar, Mike Travis, KAMEZAWA Hiroyuki,
	Christoph Lameter, Chris Wright, bpicco@redhat.com,
	KOSAKI Motohiro, Balbir Singh, Arnd Bergmann,
	"Michael S. Tsirkin", Peter Zijlstra, Johannes Weiner,
	Daisuke Nishimura, Chris Mason
List-ID:

From: Andrea Arcangeli

When swapcache is replaced by a ksm page, don't leave orphaned swap cache
behind: free the old page with free_page_and_swap_cache() instead of
put_page(), so its swap cache entry is dropped along with it.

Signed-off-by: Andrea Arcangeli
Reviewed-by: Rik van Riel
---

diff --git a/mm/ksm.c b/mm/ksm.c
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -817,7 +817,7 @@ static int replace_page(struct vm_area_s
 	set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
 
 	page_remove_rmap(page);
-	put_page(page);
+	free_page_and_swap_cache(page);
 
 	pte_unmap_unlock(ptep, ptl);
 	err = 0;
@@ -863,7 +863,18 @@ static int try_to_merge_one_page(struct
 	 * ptes are necessarily already write-protected. But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page, &orig_pte) == 0) {
+	err = write_protect_page(vma, page, &orig_pte);
+
+	/*
+	 * After this mapping is wrprotected we don't need further
+	 * checks for PageSwapCache vs page_count, so unlock_page(page)
+	 * and rely only on the pte_same() check run under PT lock
+	 * to ensure the pte didn't change since we wrprotected it
+	 * under PG_lock.
+	 */
+	unlock_page(page);
+
+	if (!err) {
 		if (!kpage) {
 			/*
 			 * While we hold page lock, upgrade page from
@@ -872,22 +883,22 @@ static int try_to_merge_one_page(struct
 			 */
 			set_page_stable_node(page, NULL);
 			mark_page_accessed(page);
-			err = 0;
 		} else if (pages_identical(page, kpage))
 			err = replace_page(vma, page, kpage, orig_pte);
-	}
+	} else
+		err = -EFAULT;
 
 	if ((vma->vm_flags & VM_LOCKED) && kpage && !err) {
+		lock_page(page);	/* for LRU manipulation */
 		munlock_vma_page(page);
+		unlock_page(page);
 		if (!PageMlocked(kpage)) {
-			unlock_page(page);
 			lock_page(kpage);
 			mlock_vma_page(kpage);
-			page = kpage;		/* for final unlock */
+			unlock_page(kpage);
 		}
 	}
 
-	unlock_page(page);
 out:
 	return err;
 }

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org