From: "Yin, Fengwei" <fengwei.yin@intel.com>
To: Ryan Roberts, Zi Yan, Matthew Wilcox, David Hildenbrand, Yu Zhao
Cc: Linux-MM <linux-mm@kvack.org>
Subject: Re: Prerequisites for Large Anon Folios
Date: Thu, 31 Aug 2023 08:08:12 +0800
In-Reply-To: <7f66344b-bf63-41e0-ae79-0a0a1d4f2afd@arm.com>
References: <7f66344b-bf63-41e0-ae79-0a0a1d4f2afd@arm.com>
On 8/30/2023 6:44 PM, Ryan Roberts wrote:
> Hi All,
>
> I want to get serious about getting large anon folios merged. To do
> that, there are a number of outstanding prerequisites. I'm hoping the
> respective owners may be able to provide an update on progress?
>
> I appreciate everyone is busy and likely juggling multiple things, so I
> understand if no progress has been made or is likely to be made - it
> would be good to know that though, so I can attempt to make alternative
> plans.
>
> See questions/comments below.
>
> Thanks!
>
>
> On 20/07/2023 10:41, Ryan Roberts wrote:
>> Hi All,
>>
>> As discussed at Matthew's call yesterday evening, I've put together a
>> list of items that need to be done as prerequisites for merging large
>> anonymous folios support.
>>
>> It would be great to get some review and confirmation as to whether
>> anything is missing or incorrect. Most items have an assignee - in
>> that case it would be good to check that my understanding that you are
>> working on the item is correct.
>>
>> I think most things are independent, with the exception of "shared vs
>> exclusive mappings", which I think becomes a dependency for a couple
>> of things (marked in the depender's description); again it would be
>> good to confirm.
>>
>> Finally, although I'm concentrating on the prerequisites to clear the
>> path for merging an MVP Large Anon Folios implementation, I've
>> included one "enhancement" item ("large folios in swap cache"), solely
>> because we explicitly discussed it last night. My view is that
>> enhancements can come after the initial large anon folios merge. Over
>> time, I plan to add other enhancements (e.g. retain large folios over
>> COW, etc).
>>
>> I'm posting the table as yaml as that seemed easiest for email. You
>> can convert it to csv with something like this in Python:
>>
>>   import yaml
>>   import pandas as pd
>>   pd.DataFrame(yaml.safe_load(open('work-items.yml'))).to_csv('work-items.csv')
>>
>> Thanks,
>> Ryan
>>
>> -----
>>
>> - item:
>>     shared vs exclusive mappings
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     New mechanism to allow us to easily determine precisely whether a
>>     given folio is mapped exclusively or shared between multiple
>>     processes. Required for (from David H):
>>
>>     (1) Detecting shared folios, to not mess with them while they are
>>     shared. MADV_PAGEOUT, user-triggered page migration, NUMA hinting,
>>     khugepaged ... replace cases where folio_estimated_sharers() == 1
>>     would currently be the best we can do (and in some cases,
>>     page_mapcount() == 1).
>>
>>     (2) COW improvements for PTE-mapped large anon folios after
>>     fork(). Before fork(), PageAnonExclusive would have been reliable;
>>     after fork() it's not.
>>
>>     For (1), "MADV_PAGEOUT" maps to the "madvise" item captured in
>>     this list. I *think* "NUMA hinting" maps to "numa balancing" (but
>>     need confirmation!). "user-triggered page migration" and
>>     "khugepaged" are not yet captured (I would appreciate someone
>>     fleshing them out). I previously understood migration to be
>>     working for large folios - is "user-triggered page migration" some
>>     specific aspect that does not work?
>>
>>     For (2), this relates to Large Anon Folio enhancements which I
>>     plan to tackle after we get the basic series merged.
>>
>>   links:
>>     - 'email thread: Mapcount games: "exclusive mapped" vs. "mapped shared"'
>>
>>   location:
>>     - shrink_folio_list()
>>
>>   assignee:
>>     David Hildenbrand
>
> Any comment on this David? I think the last comment I saw was that you
> were planning to start an implementation a couple of weeks back? Did
> that get anywhere?
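To make the gap concrete, here is a minimal sketch of the heuristic
everything currently falls back on (folio_maybe_exclusive() is a made-up
wrapper name for illustration, not an existing kernel helper):

/*
 * Sketch only: folio_estimated_sharers() samples the mapcount of a
 * single page.  That is precise for order-0 folios, but for a large
 * folio it can report "exclusive" for a folio that is in fact partially
 * shared - which is exactly why a precise shared-vs-exclusive
 * mechanism is listed as a prerequisite.
 */
static inline bool folio_maybe_exclusive(struct folio *folio)
{
	return folio_estimated_sharers(folio) == 1; /* estimate, not exact */
}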
>
>>
>>
>> - item:
>>     compaction
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     Raised at LSFMM: Compaction skips non-order-0 pages. This is
>>     already a problem for page-cache pages today.
>>
>>   links:
>>     - https://lore.kernel.org/linux-mm/ZKgPIXSrxqymWrsv@casper.infradead.org/
>>     - https://lore.kernel.org/linux-mm/C56EA745-E112-4887-8C22-B74FCB6A14EB@nvidia.com/
>>
>>   location:
>>     - compaction_alloc()
>>
>>   assignee:
>>     Zi Yan
>
> Are you still planning to work on this, Zi? The last email I have is
> [1], where you agreed to take a look.
>
> [1] https://lore.kernel.org/linux-mm/4DD00BE6-4141-4887-B5E5-0B7E8D1E2086@nvidia.com/
>
>>
>> - item:
>>     mlock
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     Large, pte-mapped folios are ignored when mlock is requested. The
>>     code comment for mlock_vma_folio() says "...filter out pte
>>     mappings of THPs, which cannot be consistently counted: a pte
>>     mapping of the THP head cannot be distinguished by the page
>>     alone."
>>
>>   location:
>>     - mlock_pte_range()
>>     - mlock_vma_folio()
>>
>>   links:
>>     - https://lore.kernel.org/linux-mm/20230712060144.3006358-1-fengwei.yin@intel.com/
>>
>>   assignee:
>>     Yin, Fengwei
>
> series on list at [2]. Does this series cover everything?
Yes, I believe so. I have already collected your comments and am now
waiting for review comments from Yu, who is on vacation. Then I will
work on v3.

>
> [2] https://lore.kernel.org/linux-mm/20230809061105.3369958-1-fengwei.yin@intel.com/
>
>>
>> - item:
>>     madvise
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     MADV_COLD, MADV_PAGEOUT, MADV_FREE: For large folios, the code
>>     assumes the folio is exclusive only if mapcount == 1, else it
>>     skips the remainder of the operation. But for large, pte-mapped
>>     folios, exclusive folios can have a mapcount up to nr_pages and
>>     still be exclusive. Even better: don't split the folio at all if
>>     it fits entirely within the range. Likely depends on "shared vs
>>     exclusive mappings".
>>
>>   links:
>>     - https://lore.kernel.org/linux-mm/20230713150558.200545-1-fengwei.yin@intel.com/
>>
>>   location:
>>     - madvise_cold_or_pageout_pte_range()
>>     - madvise_free_pte_range()
>>
>>   assignee:
>>     Yin, Fengwei
>
> As I understand it: the initial solution based on
> folio_estimated_sharers() has gone into v6.5, with a dependency on
> David's precise shared-vs-exclusive work for an improved solution. And
> I think you mentioned you are planning to do a change that avoids
> splitting a large folio if it is entirely covered by the range?
The changes based on folio_estimated_sharers() are in. Once David's
solution is ready, I will switch to the new solution.

As for avoiding the split of a large folio: that was in the patchset I
posted (before the folio_estimated_sharers() part was split out).
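To illustrate the current limitation in
madvise_cold_or_pageout_pte_range() (a simplified sketch, not the
verbatim kernel code):

	if (folio_test_large(folio)) {
		/*
		 * Today: a sampled mapcount != 1 is treated as "shared"
		 * and the rest of the range is skipped.  An exclusive
		 * PTE-mapped large folio can legitimately have a
		 * mapcount up to nr_pages, so this is conservative.
		 */
		if (folio_estimated_sharers(folio) != 1)
			break;
		/*
		 * Desired (once a precise shared-vs-exclusive test
		 * exists): if [start, end) covers the whole folio,
		 * operate on it as one unit rather than splitting it.
		 */
	}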
Regards
Yin, Fengwei

>
>>
>>
>> - item:
>>     deferred_split_folio
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     zap_pte_range() will remove each page of a large folio from the
>>     rmap, one at a time, causing the rmap code to see the folio as
>>     partially mapped and call deferred_split_folio() for it. The folio
>>     then subsequently becomes fully unmapped and is removed from the
>>     queue. This can cause some lock contention. The proposed fix is to
>>     modify zap_pte_range() to "batch zap" a whole pte range that
>>     corresponds to a folio, to avoid the unnecessary
>>     deferred_split_folio() call.
>>
>>   links:
>>     - https://lore.kernel.org/linux-mm/20230719135450.545227-1-ryan.roberts@arm.com/
>>
>>   location:
>>     - zap_pte_range()
>>
>>   assignee:
>>     Ryan Roberts
>
> I have a series at [3] to solve this (a different approach than
> described above), although Yu has suggested this is not a prerequisite
> after all [4].
>
> [3] https://lore.kernel.org/linux-mm/20230830095011.1228673-1-ryan.roberts@arm.com/
> [4] https://lore.kernel.org/linux-mm/CAOUHufZr8ym0kzoa99=k3Gquc4AdoYXMaj-kv99u5FPv1KkezA@mail.gmail.com/
>
>>
>>
>> - item:
>>     numa balancing
>>
>>   priority:
>>     prerequisite
>>
>>   description: >-
>>     Large, pte-mapped folios are ignored by the numa-balancing code.
>>     The commit comment (e81c480) says: "We're going to have THP mapped
>>     with PTEs. It will confuse numabalancing. Let's skip them for
>>     now." Likely depends on "shared vs exclusive mappings".
>>
>>   links: []
>>
>>   location:
>>     - do_numa_page()
>>
>>   assignee:
>>
>
> It vaguely sounded like David might be planning to tackle this as part
> of his work on "shared vs exclusive mappings" ("NUMA hinting"??).
> David?
>
>>
>>
>> - item:
>>     large folios in swap cache
>>
>>   priority:
>>     enhancement
>>
>>   description: >-
>>     shrink_folio_list() currently splits large folios to single pages
>>     before adding them to the swap cache. It would be preferable to
>>     add the large folio to the swap cache as an atomic unit. It is
>>     still expected that each page would use a separate swap entry when
>>     swapped out. This represents an efficiency improvement. There is a
>>     risk that this change will expose places where the swap cache
>>     wrongly assumes any large folio is pmd-mappable.
>>
>>   links:
>>     - https://lore.kernel.org/linux-mm/CAOUHufbC76OdP16mRsY3i920qB7khcu8FM+nUOG0kx5BMRdKXw@mail.gmail.com/
>>
>>   location:
>>     - shrink_folio_list()
>>
>>   assignee:
>>
>
> Not a prerequisite so not worrying about it for now.
>
>>
>> -----
>
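As a footnote on the "large folios in swap cache" item above, a rough
sketch of the current shrink_folio_list() behaviour (simplified and
abbreviated, not the verbatim kernel code):

	if (!folio_test_swapcache(folio)) {
		/*
		 * Today: a large anon folio is split to order-0 pages
		 * before each page is added to the swap cache.
		 */
		if (folio_test_large(folio) &&
		    split_folio_to_list(folio, folio_list))
			goto activate_locked;
		/*
		 * Proposed enhancement: add the whole folio as one
		 * unit, still backed by one swap entry per subpage.
		 */
		if (!add_to_swap(folio))
			goto activate_locked;
	}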