Subject: Re: [linux-next PATCH 2/2] mm: khugepaged: don't have to put being freed page back to lru
From: Yang Shi <yang.shi@linux.alibaba.com>
To: kirill.shutemov@linux.intel.com, hughd@google.com, aarcange@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1588200982-69492-1-git-send-email-yang.shi@linux.alibaba.com>
 <1588200982-69492-2-git-send-email-yang.shi@linux.alibaba.com>
Message-ID: <1d7c1fdd-3589-da46-716f-7767eecb87a4@linux.alibaba.com>
Date: Wed, 29 Apr 2020 17:47:34 -0700

On 4/29/20 5:41 PM, Yang Shi wrote:
>
>
> On 4/29/20 3:56 PM, Yang Shi wrote:
>> When khugepaged has successfully isolated and copied data from a base
>> page to the collapsed THP, the base page is about to be freed. So
>> putting the page back on the LRU is not very productive: the page may
>> be isolated by vmscan again, but it cannot actually be reclaimed since
>> try_to_unmap() can no longer unmap it.
>>
>> Actually khugepaged is the last user of this page, so it can be freed
>> directly. So clear the active and unevictable flags, unlock the page,
>> and drop the refcount taken at isolation instead of calling
>> putback_lru_page().
>
> Please disregard the patch. I just remembered Kirill added support for
> collapsing shared pages. If the pages are shared then they have to be
> put back on the LRU since they may still be mapped by other processes.
> So we need to check the mapcount if we would like to skip the LRU.
>
> And I spotted another issue. release_pte_page() calls
> mod_node_page_state() unconditionally, which was fine before. But, due
> to the support for collapsing shared pages, we need to check whether
> the last mapcount is gone or not.

Hmm... this is false. I mixed up NR_ISOLATED_ANON and NR_ANON_MAPPED.

> Andrew, would you please remove this patch from the -mm tree? I will
> send one or two rectified patches. Sorry for the inconvenience.
>
>>
>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>>  mm/khugepaged.c | 15 +++++++++++++--
>>  1 file changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 0c8d30b..c131a90 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -559,6 +559,17 @@ void __khugepaged_exit(struct mm_struct *mm)
>>  static void release_pte_page(struct page *page)
>>  {
>>  	mod_node_page_state(page_pgdat(page),
>> +		NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
>> +	ClearPageActive(page);
>> +	ClearPageUnevictable(page);
>> +	unlock_page(page);
>> +	/* Drop refcount from isolate */
>> +	put_page(page);
>> +}
>> +
>> +static void release_pte_page_to_lru(struct page *page)
>> +{
>> +	mod_node_page_state(page_pgdat(page),
>>  			NR_ISOLATED_ANON + page_is_file_lru(page),
>>  			-compound_nr(page));
>>  	unlock_page(page);
>> @@ -576,12 +587,12 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
>>  		page = pte_page(pteval);
>>  		if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval)) &&
>>  				!PageCompound(page))
>> -			release_pte_page(page);
>> +			release_pte_page_to_lru(page);
>>  	}
>>
>>  	list_for_each_entry_safe(page, tmp, compound_pagelist, lru) {
>>  		list_del(&page->lru);
>> -		release_pte_page(page);
>> +		release_pte_page_to_lru(page);
>>  	}
>>  }
>
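For what it's worth, a rough sketch of the mapcount check mentioned above
could look something like the below. This is illustrative only, not the
rectified patch; in particular, whether the right condition is
page_mapcount(page) > 1 or > 0 depends on whether the PTEs have already
been cleared (success path) or restored (failure path) at this point, so
the threshold here is an assumption:

static void release_pte_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page),
			NR_ISOLATED_ANON + page_is_file_lru(page),
			-compound_nr(page));

	if (page_mapcount(page) > 1) {
		/*
		 * Shared page, still mapped by other processes: vmscan
		 * may still reclaim it later, so put it back on the LRU.
		 */
		unlock_page(page);
		putback_lru_page(page);
	} else {
		/*
		 * khugepaged holds the last reference, so free the page
		 * directly instead of bouncing it through the LRU.
		 */
		ClearPageActive(page);
		ClearPageUnevictable(page);
		unlock_page(page);
		/* Drop refcount from isolate */
		put_page(page);
	}
}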