We have to recompute pgoff if the given page is huge, since a value computed
in HPAGE_SIZE units is inappropriate for scanning the vma interval tree,
which is keyed in base-page units, as shown by commit 36e4f20af833
("hugetlb: do not use vma_hugecache_offset() for vma_prio_tree_foreach").


Signed-off-by: Hillf Danton <dhillf@gmail.com>
---

--- a/mm/rmap.c Mon Mar  4 20:00:00 2013
+++ b/mm/rmap.c Mon Mar  4 20:02:16 2013
@@ -1513,6 +1513,9 @@ static int try_to_unmap_file(struct page
 	unsigned long max_nl_size = 0;
 	unsigned int mapcount;
 
+	if (PageHuge(page))
+		pgoff = page->index << compound_order(page);
+
 	mutex_lock(&mapping->i_mmap_mutex);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
 		unsigned long address = vma_address(page, vma);
--