From: "Huang, Ying"
Subject: [PATCH -v2 09/10] mm, THP, swap: Support to split THP in swap cache
Date: Thu, 1 Sep 2016 08:17:02 -0700
Message-Id: <1472743023-4116-10-git-send-email-ying.huang@intel.com>
In-Reply-To: <1472743023-4116-1-git-send-email-ying.huang@intel.com>
References: <1472743023-4116-1-git-send-email-ying.huang@intel.com>
To: Andrew Morton
Cc: tim.c.chen@intel.com, dave.hansen@intel.com, andi.kleen@intel.com,
    aaron.lu@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Huang Ying, Andrea Arcangeli, "Kirill A. Shutemov", Ebru Akagunduz

From: Huang Ying

This patch enhances split_huge_page_to_list() to work properly for a
THP (Transparent Huge Page) that is in the swap cache during swap out.

This is used to delay splitting a THP during swap out: to swap out a
THP, we allocate a swap cluster, add the THP into the swap cache, and
only then split the THP.  The page lock is held during the whole
process, so in code paths other than swap out, if a THP needs to be
split, PageSwapCache(THP) will always be false.

Cc: Andrea Arcangeli
Cc: Kirill A. Shutemov
Cc: Ebru Akagunduz
Signed-off-by: "Huang, Ying"
---
 mm/huge_memory.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7082fe3..06b310e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1777,7 +1777,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * atomic_set() here would be safe on all archs (and not only on x86),
 	 * it's safer to use atomic_inc()/atomic_add().
 	 */
-	if (PageAnon(head)) {
+	if (PageAnon(head) && !PageSwapCache(head)) {
 		page_ref_inc(page_tail);
 	} else {
 		/* Additional pin to radix tree */
@@ -1788,6 +1788,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	page_tail->flags |= (head->flags &
 			     ((1L << PG_referenced) |
 			      (1L << PG_swapbacked) |
+			      (1L << PG_swapcache) |
 			      (1L << PG_mlocked) |
 			      (1L << PG_uptodate) |
 			      (1L << PG_active) |
@@ -1850,7 +1851,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	ClearPageCompound(head);
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
-		page_ref_inc(head);
+		/* Additional pin to radix tree of swap cache */
+		if (PageSwapCache(head))
+			page_ref_add(head, 2);
+		else
+			page_ref_inc(head);
 	} else {
 		/* Additional pin to radix tree */
 		page_ref_add(head, 2);
@@ -1962,10 +1967,12 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
 /* Racy check whether the huge page can be split */
 bool can_split_huge_page(struct page *page)
 {
-	int extra_pins = 0;
+	int extra_pins;
 
 	/* Additional pins from radix tree */
-	if (!PageAnon(page))
+	if (PageAnon(page))
+		extra_pins = PageSwapCache(page) ? HPAGE_PMD_NR : 0;
+	else
 		extra_pins = HPAGE_PMD_NR;
 	return total_mapcount(page) == page_count(page) - extra_pins - 1;
 }
@@ -2018,7 +2025,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			ret = -EBUSY;
 			goto out;
 		}
-		extra_pins = 0;
+		extra_pins = PageSwapCache(head) ? HPAGE_PMD_NR : 0;
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
-- 
2.8.1
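
For readers following the refcount arithmetic, the sketch below models the
extra_pins accounting that can_split_huge_page() performs after this change,
written as ordinary user-space C.  It is a minimal illustration, not kernel
code: struct thp_state, extra_pins() and can_split() are hypothetical
stand-ins for struct page and its helpers, and HPAGE_PMD_NR is assumed to be
512 (a 2MB PMD with 4KB base pages, as on x86-64).

/*
 * Illustrative sketch only, not kernel code.  Models the extra_pins
 * accounting used by the patched can_split_huge_page(): an anonymous THP
 * in the swap cache carries one radix-tree pin per subpage, like a
 * page-cache THP; an anonymous THP outside the swap cache carries none.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_NR 512	/* assumed: 2MB PMD / 4KB base pages */

struct thp_state {
	bool anon;		/* stands in for PageAnon()      */
	bool swapcache;		/* stands in for PageSwapCache() */
	int page_count;		/* stands in for page_count()    */
	int total_mapcount;	/* stands in for total_mapcount()*/
};

static int extra_pins(const struct thp_state *p)
{
	if (p->anon)
		return p->swapcache ? HPAGE_PMD_NR : 0;
	return HPAGE_PMD_NR;	/* page cache THP */
}

/*
 * Racy split check, mirroring can_split_huge_page(): every reference must
 * be accounted for by the mappings, the radix-tree pins, and the caller's
 * own pin, otherwise some other user still holds the page.
 */
static bool can_split(const struct thp_state *p)
{
	return p->total_mapcount == p->page_count - extra_pins(p) - 1;
}

int main(void)
{
	/*
	 * Anonymous THP in the swap cache, mapped once, pinned by the
	 * caller plus HPAGE_PMD_NR swap-cache references: splittable.
	 */
	struct thp_state p = {
		.anon = true, .swapcache = true,
		.total_mapcount = 1,
		.page_count = 1 + 1 + HPAGE_PMD_NR,
	};
	printf("can split: %d\n", can_split(&p));
	return 0;
}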