From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH v2 1/9] mm/rmap: recompute pgoff for huge page
Date: Wed, 4 Dec 2013 09:12:12 +0900
Message-Id: <1386115940-21425-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1386115940-21425-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1386115940-21425-1-git-send-email-iamjoonsoo.kim@lge.com>
To: Andrew Morton
Cc: Mel Gorman, Hugh Dickins, Rik van Riel, Ingo Molnar, Naoya Horiguchi, Hillf Danton, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joonsoo Kim

We have to recompute pgoff if the given page is a huge page, since a
result based on HPAGE_SIZE is not appropriate for scanning the vma
interval tree, as shown by commit 36e4f20af833 ("hugetlb: do not use
vma_hugecache_offset() for vma_prio_tree_foreach") and commit 369a713e
("rmap: recompute pgoff for unmapping huge page").

To handle both cases, a normal page-cache page and a hugetlb page, in
the same way, we can use compound_order(). It returns 0 for a
non-compound page and the proper order for a compound page.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/rmap.c b/mm/rmap.c
index 55c8b8d..20c1a0d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1508,7 +1508,7 @@ static int try_to_unmap_anon(struct page *page, enum ttu_flags flags)
 static int try_to_unmap_file(struct page *page, enum ttu_flags flags)
 {
 	struct address_space *mapping = page->mapping;
-	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	pgoff_t pgoff = page->index << compound_order(page);
 	struct vm_area_struct *vma;
 	int ret = SWAP_AGAIN;
 	unsigned long cursor;
@@ -1516,9 +1516,6 @@ static int try_to_unmap_file(struct page *page, enum ttu_flags flags)
 	unsigned long max_nl_size = 0;
 	unsigned int mapcount;
 
-	if (PageHuge(page))
-		pgoff = page->index << compound_order(page);
-
 	mutex_lock(&mapping->i_mmap_mutex);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
 		unsigned long address = vma_address(page, vma);
@@ -1708,7 +1705,7 @@ static int rmap_walk_file(struct page *page, int (*rmap_one)(struct page *,
 		struct vm_area_struct *, unsigned long, void *), void *arg)
 {
 	struct address_space *mapping = page->mapping;
-	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	pgoff_t pgoff = page->index << compound_order(page);
 	struct vm_area_struct *vma;
 	int ret = SWAP_AGAIN;

-- 
1.7.9.5
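
To illustrate why the single compound_order()-based computation covers
both cases, here is a minimal userspace sketch (not kernel code;
struct fake_page and its fields are invented stand-ins for struct page
internals). It mimics how page->index << compound_order(page) yields a
pgoff in base-page units for an order-0 page and for an order-9 huge
page (2MB, assuming 4KB base pages) alike:

#include <stdio.h>

/* Simplified stand-in for struct page: index is the offset into the
 * file in units of the page's own size. */
struct fake_page {
	unsigned long index;
	unsigned int order;	/* 0 for a base page, 9 for a 2MB huge page */
};

/* Mimics the kernel's compound_order(): returns 0 for a non-compound
 * page and the allocation order for a compound page. */
static unsigned int compound_order(const struct fake_page *page)
{
	return page->order;
}

int main(void)
{
	struct fake_page base = { .index = 3, .order = 0 };
	struct fake_page huge = { .index = 3, .order = 9 };

	/* Both results come out in base-page units, which is what the
	 * vma interval tree lookup expects. */
	printf("base page pgoff: %lu\n", base.index << compound_order(&base));	/* 3 */
	printf("huge page pgoff: %lu\n", huge.index << compound_order(&huge));	/* 3 << 9 = 1536 */
	return 0;
}

This is also why the old PageHuge() special case in try_to_unmap_file()
can simply be dropped: for a non-compound page the shift is zero, so the
unified expression degenerates to the plain page->index value.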