From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-pf0-f200.google.com (mail-pf0-f200.google.com [209.85.192.200])
	by kanga.kvack.org (Postfix) with ESMTP id B58596B0253
	for ; Thu, 21 Sep 2017 16:49:38 -0400 (EDT)
Received: by mail-pf0-f200.google.com with SMTP id f84so11904128pfj.0
	for ; Thu, 21 Sep 2017 13:49:38 -0700 (PDT)
Received: from mail.kernel.org (mail.kernel.org. [198.145.29.99])
	by mx.google.com with ESMTPS id g11si1673900pgf.438.2017.09.21.13.49.37
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 21 Sep 2017 13:49:37 -0700 (PDT)
From: Shaohua Li
Subject: [PATCH 1/2] mm: avoid marking swap cached page as lazyfree
Date: Thu, 21 Sep 2017 13:27:10 -0700
Message-Id:
In-Reply-To:
References:
Sender: owner-linux-mm@kvack.org
List-ID:
To: linux-mm@kvack.org
Cc: Artem Savkov, Kernel-team@fb.com, Shaohua Li, stable@vger.kernel.org,
	Johannes Weiner, Michal Hocko, Hillf Danton, Minchan Kim,
	Hugh Dickins, Rik van Riel, Mel Gorman, Andrew Morton

From: Shaohua Li

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked). There is no lock to prevent the page from being
added to the swap cache between these two steps by page reclaim. If the
page is added to the swap cache, marking it lazyfree will confuse the
page fault handler when the page is reclaimed and later refaulted.
Reported-and-tested-by: Artem Savkov
Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Signed-off-by: Shaohua Li
Cc: stable@vger.kernel.org
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Hillf Danton
Cc: Minchan Kim
Cc: Hugh Dickins
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Andrew Morton
---
 mm/swap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 9295ae9..a77d68f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);

 		del_page_from_lru_list(page, lruvec,
@@ -665,7 +665,7 @@ void deactivate_file_page(struct page *page)
 void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);

 		get_page(page);
-- 
2.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org