From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv10 08/36] khugepaged: ignore pmd tables with THP mapped with ptes
Date: Thu, 3 Sep 2015 18:12:54 +0300
Message-Id: <1441293202-137314-9-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1441293202-137314-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1441293202-137314-1-git-send-email-kirill.shutemov@linux.intel.com>
To: Andrew Morton, Andrea Arcangeli, Hugh Dickins
Cc: Dave Hansen, Mel Gorman, Rik van Riel, Vlastimil Babka,
	Christoph Lameter, Naoya Horiguchi, Steve Capper,
	"Aneesh Kumar K.V", Johannes Weiner, Michal Hocko,
	Jerome Marchand, Sasha Levin, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Kirill A. Shutemov"

Prepare khugepaged to see compound pages mapped with pte. For now we
won't collapse a pmd table containing such a pte.

khugepaged is subject to future rework wrt the new refcounting.

Signed-off-by: Kirill A. Shutemov
Tested-by: Sasha Levin
Tested-by: Aneesh Kumar K.V
Acked-by: Jerome Marchand
Acked-by: Vlastimil Babka
---
 mm/huge_memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 304c7d2825eb..ba7e9d96097d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2728,6 +2728,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		page = vm_normal_page(vma, _address, pteval);
 		if (unlikely(!page))
 			goto out_unmap;
+
+		/* TODO: teach khugepaged to collapse THP mapped with pte */
+		if (PageCompound(page))
+			goto out_unmap;
+
 		/*
 		 * Record which node the original page is from and save this
 		 * information to khugepaged_node_load[].
@@ -2738,7 +2743,6 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		if (khugepaged_scan_abort(node))
 			goto out_unmap;
 		khugepaged_node_load[node]++;
-		VM_BUG_ON_PAGE(PageCompound(page), page);
 		if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
 			goto out_unmap;
 		/*
-- 
2.5.0
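
In terms of behaviour, the patch makes khugepaged's pte-scan loop give up on
a whole pmd table as soon as it finds a compound (THP) page mapped at pte
granularity, where the old code would have tripped
VM_BUG_ON_PAGE(PageCompound(page), page). Below is a minimal, stand-alone C
sketch of just that control flow; toy_page, scan_pmd_range() and the
"at least half mapped" threshold are hypothetical illustrations, not kernel
code or khugepaged's real collapse policy.

/*
 * Toy model of the decision khugepaged_scan_pmd() makes after this patch.
 * All names (toy_page, scan_pmd_range, ...) are made up for illustration;
 * this is not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define PTES_PER_PMD 512

struct toy_page {
	bool present;	/* pte maps a normal page */
	bool compound;	/* page is part of a compound (THP) page */
	bool lru;	/* page sits on the LRU, i.e. a collapse candidate */
};

/*
 * Return true if the pte table backing this pmd range looks worth
 * collapsing into a huge page, false if we should leave it alone.
 */
static bool scan_pmd_range(const struct toy_page ptes[PTES_PER_PMD])
{
	int mapped = 0;

	for (int i = 0; i < PTES_PER_PMD; i++) {
		if (!ptes[i].present)
			continue;

		/*
		 * Mirrors the new check: a compound page mapped with ptes
		 * means this table already contains (part of) a THP, so
		 * skip the whole pmd for now instead of treating it as a
		 * bug.
		 */
		if (ptes[i].compound)
			return false;

		/* Mirrors the !PageLRU()/PageLocked()/!PageAnon() bail-outs. */
		if (!ptes[i].lru)
			return false;

		mapped++;
	}

	/* Made-up threshold: require at least half of the range mapped. */
	return mapped >= PTES_PER_PMD / 2;
}

int main(void)
{
	struct toy_page ptes[PTES_PER_PMD] = { 0 };

	for (int i = 0; i < PTES_PER_PMD; i++) {
		ptes[i].present = true;
		ptes[i].lru = true;
	}
	printf("plain range:    %s\n",
	       scan_pmd_range(ptes) ? "collapse" : "skip");

	ptes[7].compound = true;	/* one pte now maps a THP subpage */
	printf("compound range: %s\n",
	       scan_pmd_range(ptes) ? "collapse" : "skip");

	return 0;
}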