From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] mm: khugepaged: fix potential page state corruption
From: Yang Shi <yang.shi@linux.alibaba.com>
To: "Kirill A. Shutemov"
Cc: kirill.shutemov@linux.intel.com, hughd@google.com, aarcange@redhat.com,
 akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1584573582-116702-1-git-send-email-yang.shi@linux.alibaba.com>
 <20200319001258.creziw6ffw4jvwl3@box>
 <2cdc734c-c222-4b9d-9114-1762b29dafb4@linux.alibaba.com>
In-Reply-To: <2cdc734c-c222-4b9d-9114-1762b29dafb4@linux.alibaba.com>
Date: Wed, 18 Mar 2020 22:39:21 -0700
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On 3/18/20 5:55 PM, Yang Shi wrote:
>
>
> On 3/18/20 5:12 PM, Kirill A.
Shutemov wrote:
>> On Thu, Mar 19, 2020 at 07:19:42AM +0800, Yang Shi wrote:
>>> When khugepaged collapses anonymous pages, the base pages would be freed
>>> via pagevec or free_page_and_swap_cache().  But, the anonymous page may
>>> be added back to the LRU, then it might result in the below race:
>>>
>>>     CPU A                          CPU B
>>> khugepaged:
>>>   unlock page
>>>   putback_lru_page
>>>     add to lru
>>>                                page reclaim:
>>>                                  isolate this page
>>>                                  try_to_unmap
>>>   page_remove_rmap <-- corrupt _mapcount
>>>
>>> It looks like nothing would prevent the pages from being isolated by the
>>> reclaimer.
>> Hm. Why should it?
>>
>> try_to_unmap() doesn't exclude parallel page unmapping. _mapcount is
>> protected by ptl. And this particular _mapcount pin is unreachable for
>> reclaim as it's not part of the usual page table tree. Basically
>> try_to_unmap() will never succeed until we give up the _mapcount on
>> the khugepaged side.
>
> I don't quite get it. What does "not part of the usual page table tree"
> mean?
>
> How about if try_to_unmap() acquires the ptl before khugepaged does?
>
>>
>> I don't see the issue right away.
>>
>>> The other problem is the page's active or unevictable flag might
>>> still be set when freeing the page via free_page_and_swap_cache().
>> So what?
>
> The flags may leak to the page free path, then the kernel may complain
> if DEBUG_VM is set.
>
>>
>>> The putback_lru_page() would not clear those two flags if the pages are
>>> released via pagevec, it sounds like nothing prevents isolating active
>>> or unevictable pages.

Sorry, this is a typo. If the page is freed via pagevec, the active and
unevictable flags do get cleared by page_off_lru() before freeing. But if
the page is freed by free_page_and_swap_cache(), these two flags are not
cleared. That said, it seems this path is hit rarely; the pages are freed
by pagevec in most cases.

>> Again, why should it? vmscan is equipped to deal with this.
>
> I don't mean vmscan, I mean khugepaged may isolate active and
> unevictable pages since it just simply walks the page table.
>
>>
>>> However I didn't really run into these problems, just in theory by
>>> visual inspection.
>>>
>>> And, it also seems unnecessary to add the pages back to the LRU again
>>> since they are about to be freed when reaching this point.  So, clear
>>> the active and unevictable flags, unlock and drop the refcount from
>>> isolate instead of calling putback_lru_page(), as the page cache
>>> collapse does.
>> Hm? But we do call putback_lru_page() on the way out. I do not follow.
>
> It just calls putback_lru_page() on the error path, not the success
> path. Putting pages back to the LRU on the error path definitely makes
> sense. Here it is the success path.
>
>>
>>> Cc: Kirill A.
Shutemov
>>> Cc: Hugh Dickins
>>> Cc: Andrea Arcangeli
>>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>>> ---
>>>  mm/khugepaged.c | 10 +++++++++-
>>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index b679908..f42fa4e 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -673,7 +673,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>>  			src_page = pte_page(pteval);
>>>  			copy_user_highpage(page, src_page, address, vma);
>>>  			VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
>>> -			release_pte_page(src_page);
>>>  			/*
>>>  			 * ptl mostly unnecessary, but preempt has to
>>>  			 * be disabled to update the per-cpu stats
>>> @@ -687,6 +686,15 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>>  			pte_clear(vma->vm_mm, address, _pte);
>>>  			page_remove_rmap(src_page, false);
>>>  			spin_unlock(ptl);
>>> +
>>> +			dec_node_page_state(src_page,
>>> +					NR_ISOLATED_ANON + page_is_file_cache(src_page));
>>> +			ClearPageActive(src_page);
>>> +			ClearPageUnevictable(src_page);
>>> +			unlock_page(src_page);
>>> +			/* Drop refcount from isolate */
>>> +			put_page(src_page);
>>> +
>>>  			free_page_and_swap_cache(src_page);
>>>  		}
>>>  	}
>>> --
>>> 1.8.3.1
>>>
>>>
>