From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shivank Garg <shivankg@amd.com>
Date: Thu, 9 Jan 2025 17:17:50 +0530
Subject: Re: [RFC PATCH 3/5] mm/migrate: add migrate_folios_batch_move to batch the folio move operations
To: Zi Yan, linux-mm@kvack.org
Cc: David Rientjes, Aneesh Kumar, David Hildenbrand, John Hubbard,
 Kirill Shutemov, Matthew Wilcox, Mel Gorman, "Rao, Bharata Bhasker",
 Rik van Riel, RaghavendraKT, Wei Xu, Suyeon Lee, Lei Chen,
 "Shukla, Santosh", "Grimm, Jon", sj@kernel.org, shy828301@gmail.com,
 Liam Howlett, Gregory Price, "Huang, Ying"
Message-ID: <97ed042a-fe70-46cf-80f1-59e7add66860@amd.com>
In-Reply-To: <20250103172419.4148674-4-ziy@nvidia.com>
References: <20250103172419.4148674-1-ziy@nvidia.com> <20250103172419.4148674-4-ziy@nvidia.com>
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0
On 1/3/2025 10:54 PM, Zi Yan wrote:
> This is a preparatory patch that enables batch copying for folios
> undergoing migration. By enabling batch copying the folio content, we can
> efficiently utilize the capabilities of DMA hardware or multi-threaded
> folio copy. It also adds MIGRATE_NO_COPY back to migrate_mode, so that
> folio copy will be skipped during metadata copy process and performed
> in a batch later.
>
> Currently, the folio move operation is performed individually for each
> folio in sequential manner:
> for_each_folio() {
>	Copy folio metadata like flags and mappings
>	Copy the folio content from src to dst
>	Update page tables with dst folio
> }
>
> With this patch, we transition to a batch processing approach as shown
> below:
> for_each_folio() {
>	Copy folio metadata like flags and mappings
> }
> Batch copy all src folios to dst
> for_each_folio() {
>	Update page tables with dst folios
> }
>
> dst->private is used to store page states and possible anon_vma value,
> thus needs to be cleared during metadata copy process. To avoid additional
> memory allocation to store the data during batch copy process, src->private
> is used to store the data after metadata copy process, since src is no
> longer used.
>
> Originally-by: Shivank Garg
> Signed-off-by: Zi Yan
> ---

Hi Zi,

Please retain my Signed-off-by for future postings of the batch page
migration patchset.

I think we can separate out the MIGRATE_NO_COPY support into a separate
patch.

Thanks,
Shivank

>  include/linux/migrate_mode.h |   2 +
>  mm/migrate.c                 | 207 +++++++++++++++++++++++++++++++++--
>  2 files changed, 201 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
> index 265c4328b36a..9af6c949a057 100644
> --- a/include/linux/migrate_mode.h
> +++ b/include/linux/migrate_mode.h
> @@ -7,11 +7,13 @@
>   * on most operations but not ->writepage as the potential stall time
>   * is too significant
>   * MIGRATE_SYNC will block when migrating pages
> + * MIGRATE_NO_COPY will not copy page content
>   */
>  enum migrate_mode {
>  	MIGRATE_ASYNC,
>  	MIGRATE_SYNC_LIGHT,
>  	MIGRATE_SYNC,
> +	MIGRATE_NO_COPY,
>  };
>  
>  enum migrate_reason {
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a83508f94c57..95c4cc4a7823 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,6 +51,7 @@
>  
>  #include "internal.h"
>  
> +
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  {
>  	struct folio *folio = folio_get_nontail_page(page);
> @@ -752,14 +753,19 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
>  			   enum migrate_mode mode)
>  {
>  	int rc, expected_count = folio_expected_refs(mapping, src);
> +	unsigned long dst_private = (unsigned long)dst->private;
>  
>  	/* Check whether src does not have extra refs before we do more work */
>  	if (folio_ref_count(src) != expected_count)
>  		return -EAGAIN;
>  
> -	rc = folio_mc_copy(dst, src);
> -	if (unlikely(rc))
> -		return rc;
> +	if (mode == MIGRATE_NO_COPY)
> +		dst->private = NULL;
> +	else {
> +		rc = folio_mc_copy(dst, src);
> +		if (unlikely(rc))
> +			return rc;
> +	}
>  
>  	rc = __folio_migrate_mapping(mapping, dst, src, expected_count);
>  	if (rc != MIGRATEPAGE_SUCCESS)
> @@ -769,6 +775,10 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
>  		folio_attach_private(dst, folio_detach_private(src));
>  
>  	folio_migrate_flags(dst, src);
> +
> +	if (mode == MIGRATE_NO_COPY)
> +		src->private = (void *)dst_private;
> +
>  	return MIGRATEPAGE_SUCCESS;
>  }
>  
> @@ -1042,7 +1052,7 @@ static int _move_to_new_folio_prep(struct folio *dst, struct folio *src,
>  					   mode);
>  		else
>  			rc = fallback_migrate_folio(mapping, dst, src, mode);
> -	} else {
> +	} else if (mode != MIGRATE_NO_COPY) {
>  		const struct movable_operations *mops;
>  
>  		/*
> @@ -1060,7 +1070,8 @@ static int _move_to_new_folio_prep(struct folio *dst, struct folio *src,
>  		rc = mops->migrate_page(&dst->page, &src->page, mode);
>  		WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
>  				!folio_test_isolated(src));
> -	}
> +	} else
> +		rc = -EAGAIN;
>  out:
>  	return rc;
>  }
> @@ -1138,7 +1149,7 @@ static void __migrate_folio_record(struct folio *dst,
>  	dst->private = (void *)anon_vma + old_page_state;
>  }
>  
> -static void __migrate_folio_extract(struct folio *dst,
> +static void __migrate_folio_read(struct folio *dst,
>  				    int *old_page_state,
>  				    struct anon_vma **anon_vmap)
>  {
> @@ -1146,6 +1157,13 @@ static void __migrate_folio_extract(struct folio *dst,
>  
>  	*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
>  	*old_page_state = private & PAGE_OLD_STATES;
> +}
> +
> +static void __migrate_folio_extract(struct folio *dst,
> +				    int *old_page_state,
> +				    struct anon_vma **anon_vmap)
> +{
> +	__migrate_folio_read(dst, old_page_state, anon_vmap);
>  	dst->private = NULL;
>  }
>  
> @@ -1771,6 +1789,174 @@ static void migrate_folios_move(struct list_head *src_folios,
>  	}
>  }
>  
> +static void migrate_folios_batch_move(struct list_head *src_folios,
> +		struct list_head *dst_folios,
> +		free_folio_t put_new_folio, unsigned long private,
> +		enum migrate_mode mode, int reason,
> +		struct list_head *ret_folios,
> +		struct migrate_pages_stats *stats,
> +		int *retry, int *thp_retry, int *nr_failed,
> +		int *nr_retry_pages)
> +{
> +	struct folio *folio, *folio2, *dst, *dst2;
> +	int rc, nr_pages = 0, nr_mig_folios = 0;
> +	int old_page_state = 0;
> +	struct anon_vma *anon_vma = NULL;
> +	bool is_lru;
> +	int is_thp = 0;
> +	LIST_HEAD(err_src);
> +	LIST_HEAD(err_dst);
> +
> +	if (mode != MIGRATE_ASYNC) {
> +		*retry += 1;
> +		return;
> +	}
> +
> +	/*
> +	 * Iterate over the list of locked src/dst folios to copy the metadata
> +	 */
> +	dst = list_first_entry(dst_folios, struct folio, lru);
> +	dst2 = list_next_entry(dst, lru);
> +	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
> +		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
> +		nr_pages = folio_nr_pages(folio);
> +		is_lru = !__folio_test_movable(folio);
> +
> +		/*
> +		 * dst->private is not cleared here. It is cleared and moved to
> +		 * src->private in __migrate_folio().
> +		 */
> +		__migrate_folio_read(dst, &old_page_state, &anon_vma);
> +
> +		/*
> +		 * Use MIGRATE_NO_COPY mode in migrate_folio family functions
> +		 * to copy the flags, mapping and some other ancillary information.
> +		 * This does everything except the page copy. The actual page copy
> +		 * is handled later in a batch manner.
> +		 */
> +		rc = _move_to_new_folio_prep(dst, folio, MIGRATE_NO_COPY);
> +
> +		/*
> +		 * -EAGAIN: Move src/dst folios to tmp lists for retry
> +		 * Other Errno: Put src folio on ret_folios list, remove the dst folio
> +		 * Success: Copy the folio bytes, restoring working pte, unlock and
> +		 *	    decrement refcounter
> +		 */
> +		if (rc == -EAGAIN) {
> +			*retry += 1;
> +			*thp_retry += is_thp;
> +			*nr_retry_pages += nr_pages;
> +
> +			list_move_tail(&folio->lru, &err_src);
> +			list_move_tail(&dst->lru, &err_dst);
> +			__migrate_folio_record(dst, old_page_state, anon_vma);
> +		} else if (rc != MIGRATEPAGE_SUCCESS) {
> +			*nr_failed += 1;
> +			stats->nr_thp_failed += is_thp;
> +			stats->nr_failed_pages += nr_pages;
> +
> +			list_del(&dst->lru);
> +			migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
> +					anon_vma, true, ret_folios);
> +			migrate_folio_undo_dst(dst, true, put_new_folio, private);
> +		} else /* MIGRATEPAGE_SUCCESS */
> +			nr_mig_folios++;
> +
> +		dst = dst2;
> +		dst2 = list_next_entry(dst, lru);
> +	}
> +
> +	/* Exit if folio list for batch migration is empty */
> +	if (!nr_mig_folios)
> +		goto out;
> +
> +	/* Batch copy the folios */
> +	{
> +		dst = list_first_entry(dst_folios, struct folio, lru);
> +		dst2 = list_next_entry(dst, lru);
> +		list_for_each_entry_safe(folio, folio2, src_folios, lru) {
> +			is_thp = folio_test_large(folio) &&
> +				 folio_test_pmd_mappable(folio);
> +			nr_pages = folio_nr_pages(folio);
> +			rc = folio_mc_copy(dst, folio);
> +
> +			if (rc) {
> +				int old_page_state = 0;
> +				struct anon_vma *anon_vma = NULL;
> +
> +				/*
> +				 * dst->private is moved to src->private in
> +				 * __migrate_folio(), so page state and anon_vma
> +				 * values can be extracted from (src) folio.
> +				 */
> +				__migrate_folio_extract(folio, &old_page_state,
> +						&anon_vma);
> +				migrate_folio_undo_src(folio,
> +						old_page_state & PAGE_WAS_MAPPED,
> +						anon_vma, true, ret_folios);
> +				list_del(&dst->lru);
> +				migrate_folio_undo_dst(dst, true, put_new_folio,
> +						private);
> +			}
> +
> +			switch (rc) {
> +			case MIGRATEPAGE_SUCCESS:
> +				stats->nr_succeeded += nr_pages;
> +				stats->nr_thp_succeeded += is_thp;
> +				break;
> +			default:
> +				*nr_failed += 1;
> +				stats->nr_thp_failed += is_thp;
> +				stats->nr_failed_pages += nr_pages;
> +				break;
> +			}
> +
> +			dst = dst2;
> +			dst2 = list_next_entry(dst, lru);
> +		}
> +	}
> +
> +	/*
> +	 * Iterate the folio lists to remove migration pte and restore them
> +	 * as working pte. Unlock the folios, add/remove them to LRU lists (if
> +	 * applicable) and release the src folios.
> +	 */
> +	dst = list_first_entry(dst_folios, struct folio, lru);
> +	dst2 = list_next_entry(dst, lru);
> +	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
> +		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
> +		nr_pages = folio_nr_pages(folio);
> +		/*
> +		 * dst->private is moved to src->private in __migrate_folio(),
> +		 * so page state and anon_vma values can be extracted from
> +		 * (src) folio.
> +		 */
> +		__migrate_folio_extract(folio, &old_page_state, &anon_vma);
> +		list_del(&dst->lru);
> +
> +		_move_to_new_folio_finalize(dst, folio, MIGRATEPAGE_SUCCESS);
> +
> +		/*
> +		 * Below few steps are only applicable for lru pages which is
> +		 * ensured as we have removed the non-lru pages from our list.
> +		 */
> +		_migrate_folio_move_finalize1(folio, dst, old_page_state);
> +
> +		_migrate_folio_move_finalize2(folio, dst, reason, anon_vma);
> +
> +		/* Page migration successful, increase stat counter */
> +		stats->nr_succeeded += nr_pages;
> +		stats->nr_thp_succeeded += is_thp;
> +
> +		dst = dst2;
> +		dst2 = list_next_entry(dst, lru);
> +	}
> +out:
> +	/* Add tmp folios back to the list to let CPU re-attempt migration. */
> +	list_splice(&err_src, src_folios);
> +	list_splice(&err_dst, dst_folios);
> +}
> +
>  static void migrate_folios_undo(struct list_head *src_folios,
>  		struct list_head *dst_folios,
>  		free_folio_t put_new_folio, unsigned long private,
> @@ -1981,13 +2167,18 @@ static int migrate_pages_batch(struct list_head *from,
>  	/* Flush TLBs for all unmapped folios */
>  	try_to_unmap_flush();
>  
> -	retry = 1;
> +	retry = 0;
> +	/* Batch move the unmapped folios */
> +	migrate_folios_batch_move(&unmap_folios, &dst_folios, put_new_folio,
> +			private, mode, reason, ret_folios, stats, &retry,
> +			&thp_retry, &nr_failed, &nr_retry_pages);
> +
>  	for (pass = 0; pass < nr_pass && retry; pass++) {
>  		retry = 0;
>  		thp_retry = 0;
>  		nr_retry_pages = 0;
>  
> -		/* Move the unmapped folios */
> +		/* Move the remaining unmapped folios */
>  		migrate_folios_move(&unmap_folios, &dst_folios,
>  				put_new_folio, private, mode, reason,
>  				ret_folios, stats, &retry, &thp_retry,