From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2 5/5] mm/migrate.c: fix potential deadlock in NUMA
 balancing shared exec THP case
To: Yang Shi
Cc: Andrew Morton, Jerome Glisse, Rafael Aquini, David Hildenbrand,
 Alistair Popple, Linux Kernel Mailing List, Linux MM
References: <20210323135405.65059-1-linmiaohe@huawei.com>
 <20210323135405.65059-6-linmiaohe@huawei.com>
From: Miaohe Lin
Date: Wed, 24 Mar 2021 10:14:59 +0800

On 2021/3/24 9:16, Yang Shi wrote:
> On Tue, Mar 23, 2021 at 10:17 AM Yang Shi wrote:
>>
>> On Tue, Mar 23, 2021 at 6:55 AM Miaohe Lin wrote:
>>>
>>> Since commit c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA
>>> balancing"), NUMA balancing skips shared exec transhuge pages. But this
>>> enhancement is not suitable for the transhuge case: page_mapcount()
>>> is already required to be 1 there, because no migration pte dance is
>>> done for a THP. Worse, a shared exec transhuge page now leaves
>>> migrate_misplaced_transhuge_page() with the pmd entry untouched and
>>> the page still locked. The NUMA page fault is therefore triggered
>>> again, and a deadlock occurs when we start waiting for the page lock
>>> held by ourselves.
>>
>> Thanks for catching this. Looking at the code again, I think the other
>> important reason for removing this is that
>> migrate_misplaced_transhuge_page() actually can't see a shared exec
>> file THP at all, since page_lock_anon_vma_read() is called beforehand
>> and, if the page is not an anonymous page, the fault handler just
>> restores the PMD without migrating anything.
>>
>> Pages in a privately mapped file VMA may be anonymous pages due to
>> COW, but those can't be THPs, so they won't trigger a THP NUMA fault
>> at all. I think this is why no bug was reported. I overlooked this in
>> the first place.
>>
>> Your fix is correct; please add the above justification to your
>> commit log.
>
> BTW, I think you can just undo or revert commit c77c5cbafe54 ("mm:
> migrate: skip shared exec THP for NUMA balancing").
>

Yep, we can revert this commit. I thought it also handled the shared
exec base page case. I will do that and add the above justification to
the commit log. (A small standalone sketch of the deadlock flow is
appended at the end of this mail.) Many thanks!

> Thanks,
> Yang
>
>>
>> Reviewed-by: Yang Shi
>>
>>>
>>> Fixes: c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA balancing")
>>> Signed-off-by: Miaohe Lin
>>> ---
>>>  mm/migrate.c | 4 ----
>>>  1 file changed, 4 deletions(-)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index 5357a8527ca2..68bfa1625898 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -2192,9 +2192,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
>>>  	int page_lru = page_is_file_lru(page);
>>>  	unsigned long start = address & HPAGE_PMD_MASK;
>>>
>>> -	if (is_shared_exec_page(vma, page))
>>> -		goto out;
>>> -
>>>  	new_page = alloc_pages_node(node,
>>>  		(GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
>>>  		HPAGE_PMD_ORDER);
>>> @@ -2306,7 +2303,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
>>>
>>>  out_unlock:
>>>  	unlock_page(page);
>>> -out:
>>>  	put_page(page);
>>>  	return 0;
>>>  }
>>> --
>>> 2.19.1
>>>
> .
>
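
For reference, a minimal userspace sketch of the deadlock flow described
above. This is illustration only, not kernel code: buggy_migrate(),
page_locked and pmd_protnone are my own stand-ins for
migrate_misplaced_transhuge_page(), PG_locked and the NUMA-protected pmd,
and the loop stands in for the repeated THP NUMA fault.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for PG_locked and the NUMA-protected (prot_none) pmd. */
static bool page_locked;
static bool pmd_protnone = true;

/*
 * Models the buggy early-bailout path: for a "shared exec" THP it
 * returns without unlocking the page and without touching the pmd,
 * which is exactly the state the fix above removes.
 */
static int buggy_migrate(bool shared_exec)
{
	if (shared_exec)
		return 0;		/* page left locked, pmd left prot_none */
	page_locked = false;		/* normal paths unlock before returning */
	pmd_protnone = false;
	return 1;
}

int main(void)
{
	for (int fault = 1; fault <= 2; fault++) {
		printf("THP NUMA fault #%d\n", fault);
		if (page_locked) {
			/*
			 * The kernel would wait for the page lock here, but
			 * nothing will ever release it: the deadlock.
			 */
			printf("  page lock was leaked by the previous fault"
			       " -> would wait forever\n");
			return 1;
		}
		page_locked = true;		/* trylock succeeds */
		buggy_migrate(true);		/* shared exec THP case */
		if (pmd_protnone)
			printf("  pmd still prot_none, next access faults again\n");
	}
	return 0;
}

Building and running it prints the first fault leaking the lock and
leaving the pmd prot_none, then reports where the second fault would
block forever (instead of actually hanging).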