From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Thu, 3 Aug 2023 15:08:10 +0800
Message-ID: <2f6c2ddb-b1a7-7152-bb7c-a5dcaf61ce36@huawei.com>
Subject: Re: [PATCH 2/4] mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()
To: Matthew Wilcox, Hugh Dickins, Mel Gorman
Cc: Andrew Morton, Huang Ying, David Hildenbrand
References: <20230802095346.87449-1-wangkefeng.wang@huawei.com>
 <20230802095346.87449-3-wangkefeng.wang@huawei.com>

On 2023/8/2 20:30, Matthew Wilcox wrote:
> On Wed, Aug 02, 2023 at 05:53:44PM +0800, Kefeng Wang wrote:
>> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>> +static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
>>  {
>> -	int nr_pages = thp_nr_pages(page);
>> -	int order = compound_order(page);
>> +	int nr_pages = folio_nr_pages(folio);
>> +	int order = folio_order(folio);
>>
>> -	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>> +	VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);
>
> I don't know why we have this assertion.  I would be inclined to delete
> it as part of generalising the migration code to handle arbitrary sizes
> of folio, rather than assert that we only support PMD size folios.

Ok, will drop it.

>
>>  	/* Do not migrate THP mapped by multiple processes */
>> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> +	if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
>>  		return 0;
>
> I don't know if this is the right logic.  We're willing to move folios
> mapped by multiple processes, as long as they're smaller than PMD size,
> but once they get to PMD size they're magical and can't be moved?

It seems that this logic was introduced by commit 04fa5d6a6547 ("mm:
migrate: check page_count of THP before migrating") and refactored by
340ef3902cf2 ("mm: numa: cleanup flow of transhuge page migration"):

  "Hugh Dickins pointed out that migrate_misplaced_transhuge_page() does
   not check page_count before migrating like base page migration and
   khugepage.  He could not see why this was safe and he is right."
For now, migrate_misplaced_transhuge_page() is gone and the base/THP
page migration paths are unified.  Both old and new kernels keep a
check in migrate_misplaced_page(): "Don't migrate file pages that are
mapped in multiple processes with execute permissions as they are
probably shared libraries."

We could drop the above check in numamigrate_isolate_page(), but
according to 04fa5d6a6547, maybe we should instead disable migration of
pages shared by multiple processes during NUMA balancing for both base
and THP pages.
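For reference, the check whose comment is quoted above sits in
migrate_misplaced_page() and reads roughly like this (paraphrased from
memory, so it may not match any particular tree exactly):

	/*
	 * Don't migrate file pages that are mapped in multiple processes
	 * with execute permissions as they are probably shared libraries.
	 */
	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
	    (vma->vm_flags & VM_EXEC))
		goto out;

And a rough, untested sketch of the second option, applying the sharers
test to every folio size so NUMA balancing skips any folio mapped by
more than one process.  This is only an illustration, not part of the
posted series: the nearly-full-node fallback and the caller's reference
handling are omitted, and folio_estimated_sharers() is simply the
helper already used in the patch above.

static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
{
	int nr_pages = folio_nr_pages(folio);

	/* Do not migrate folios mapped by multiple processes, whatever
	 * their order. */
	if (folio_estimated_sharers(folio) > 1)
		return 0;

	/* Avoid migrating to a node that is nearly full. */
	if (!migrate_balanced_pgdat(pgdat, nr_pages))
		return 0;

	if (!folio_isolate_lru(folio))
		return 0;

	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
			    nr_pages);

	/* Isolation succeeded; the caller can go ahead and migrate. */
	return 1;
}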