Message-ID: <649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>
Date: Wed, 21 Jan 2026 10:34:21 +1100
From: Jordan Niethe <jniethe@nvidia.com>
Subject: Re: [PATCH v2 11/11] mm: Remove device private pages from the physical address space
To: Zi Yan
Cc: linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com, akpm@linux-foundation.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, david@redhat.com, apopple@nvidia.com, lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org, airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com, mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org, linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca, Felix.Kuehling@amd.com
In-Reply-To: <16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>
References: <20260107091823.68974-1-jniethe@nvidia.com> <20260107091823.68974-12-jniethe@nvidia.com> <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com> <6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com> <16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>

Hi,

On 21/1/26 10:06, Zi Yan wrote:
> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:
>
>> Hi,
>>
>> On 21/1/26 09:53, Zi Yan wrote:
>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:
>>>
>>>> On 14/1/26 07:04, Zi Yan wrote:
>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:
>>>>>
>>>>>> Currently when creating these device private struct pages, the first
>>>>>> step is to use request_free_mem_region() to get a range of physical
>>>>>> address space large enough to represent the device's memory. This
>>>>>> allocated physical address range is then remapped as device private
>>>>>> memory using memremap_pages().
>>>>>>
>>>>>> Needing to allocate physical address space has some problems:
>>>>>>
>>>>>> 1) There may be insufficient physical address space to represent the
>>>>>>    device memory. KASLR reducing the physical address space and VM
>>>>>>    configurations with limited physical address space increase the
>>>>>>    likelihood of hitting this, especially as device memory sizes
>>>>>>    increase. This has been observed to prevent device private memory
>>>>>>    from being initialized.
>>>>>>
>>>>>> 2) Attempting to add the device private pages to the linear map at
>>>>>>    addresses beyond the actual physical memory causes issues on
>>>>>>    architectures like aarch64, meaning the feature does not work
>>>>>>    there.
>>>>>>
>>>>>> Instead of using the physical address space, introduce a device
>>>>>> private address space and allocate device regions from there to
>>>>>> represent the device private pages.
>>>>>>
>>>>>> Introduce a new interface memremap_device_private_pagemap() that
>>>>>> allocates a requested amount of device private address space and
>>>>>> creates the necessary device private pages.
>>>>>>
>>>>>> To support this new interface, struct dev_pagemap needs some changes:
>>>>>>
>>>>>> - Add a new dev_pagemap::nr_pages field as an input parameter.
>>>>>> - Add a new dev_pagemap::pages array to store the device private
>>>>>>   pages.
>>>>>>
>>>>>> When using memremap_device_private_pagemap(), rather than passing in
>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space
>>>>>> to be remapped, dev_pagemap::nr_ranges will always be 1, and the
>>>>>> device private range that is reserved is returned in
>>>>>> dev_pagemap::range.
>>>>>>
>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =
>>>>>> MEMORY_DEVICE_PRIVATE.
>>>>>>
>>>>>> Represent this device private address space using a new
>>>>>> device_private_pgmap_tree maple tree. This tree maps a given device
>>>>>> private address to a struct dev_pagemap, where a specific device
>>>>>> private page may then be looked up in that dev_pagemap::pages array.
>>>>>>
>>>>>> Device private address space can be reclaimed and the associated
>>>>>> device private pages freed using the corresponding new
>>>>>> memunmap_device_private_pagemap() interface.
>>>>>>
>>>>>> Because the device private pages now live outside the physical
>>>>>> address space, they no longer have a normal PFN. This means that
>>>>>> page_to_pfn(), et al. are no longer meaningful.
>>>>>>
>>>>>> Introduce helpers:
>>>>>>
>>>>>> - device_private_page_to_offset()
>>>>>> - device_private_folio_to_offset()
>>>>>>
>>>>>> to take a given device private page / folio and return its offset
>>>>>> within the device private address space.
>>>>>>
>>>>>> Update the places where we previously converted a device private page
>>>>>> to a PFN to use these new helpers. When we encounter a device private
>>>>>> offset, instead of looking up its page within the pagemap, use
>>>>>> device_private_offset_to_page().
>>>>>>
>>>>>> Update the existing users:
>>>>>>
>>>>>> - lib/test_hmm.c
>>>>>> - ppc ultravisor
>>>>>> - drm/amd/amdkfd
>>>>>> - gpu/drm/xe
>>>>>> - gpu/drm/nouveau
>>>>>>
>>>>>> to use the new memremap_device_private_pagemap() interface.
>>>>>>
>>>>>> Signed-off-by: Jordan Niethe
>>>>>> Signed-off-by: Alistair Popple
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> NOTE: The updates to the existing drivers have only been compile
>>>>>> tested. I'll need some help in testing these drivers.
>>>>>>
>>>>>> v1:
>>>>>> - Include NUMA node parameter for memremap_device_private_pagemap()
>>>>>> - Add devm_memremap_device_private_pagemap() and friends
>>>>>> - Update existing users of memremap_pages():
>>>>>>   - ppc ultravisor
>>>>>>   - drm/amd/amdkfd
>>>>>>   - gpu/drm/xe
>>>>>>   - gpu/drm/nouveau
>>>>>> - Update for HMM huge page support
>>>>>> - Guard device_private_offset_to_page() and friends with CONFIG_ZONE_DEVICE
>>>>>>
>>>>>> v2:
>>>>>> - Make sure the last member of struct dev_pagemap remains
>>>>>>   DECLARE_FLEX_ARRAY(struct range, ranges);
>>>>>> ---
>>>>>>  Documentation/mm/hmm.rst                 |  11 +-
>>>>>>  arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---
>>>>>>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--
>>>>>>  drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--
>>>>>>  drivers/gpu/drm/xe/xe_svm.c              |  28 +---
>>>>>>  include/linux/hmm.h                      |   3 +
>>>>>>  include/linux/leafops.h                  |  16 +-
>>>>>>  include/linux/memremap.h                 |  64 +++++++-
>>>>>>  include/linux/migrate.h                  |   6 +-
>>>>>>  include/linux/mm.h                       |   2 +
>>>>>>  include/linux/rmap.h                     |   5 +-
>>>>>>  include/linux/swapops.h                  |  10 +-
>>>>>>  lib/test_hmm.c                           |  69 ++++----
>>>>>>  mm/debug.c                               |   9 +-
>>>>>>  mm/memremap.c                            | 193 ++++++++++++++++++-----
>>>>>>  mm/mm_init.c                             |   8 +-
>>>>>>  mm/page_vma_mapped.c                     |  19 ++-
>>>>>>  mm/rmap.c                                |  43 +++--
>>>>>>  mm/util.c                                |   5 +-
>>>>>>  19 files changed, 391 insertions(+), 199 deletions(-)
>>>>>>
>>>>>
>>>>>
>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>>> index e65329e1969f..b36599ab41ba 100644
>>>>>> --- a/include/linux/mm.h
>>>>>> +++ b/include/linux/mm.h
>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)
>>>>>>   */
>>>>>>  static inline unsigned long folio_pfn(const struct folio *folio)
>>>>>>  {
>>>>>> +	VM_BUG_ON(folio_is_device_private(folio));
>>>>>
>>>>> Please use VM_WARN_ON instead.
>>>>
>>>> ack.
>>>>
>>>>>
>>>>>> +
>>>>>>  	return page_to_pfn(&folio->page);
>>>>>>  }
>>>>>>
>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>>>>> index 57c63b6a8f65..c1561a92864f 100644
>>>>>> --- a/include/linux/rmap.h
>>>>>> +++ b/include/linux/rmap.h
>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)
>>>>>>  static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)
>>>>>>  {
>>>>>>  	if (folio_is_device_private(folio))
>>>>>> -		return page_vma_walk_pfn(folio_pfn(folio)) |
>>>>>> +		return page_vma_walk_pfn(device_private_folio_to_offset(folio)) |
>>>>>>  			PVMW_PFN_DEVICE_PRIVATE;
>>>>>>
>>>>>>  	return page_vma_walk_pfn(folio_pfn(folio));
>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)
>>>>>>
>>>>>>  static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)
>>>>>>  {
>>>>>> +	if (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)
>>>>>> +		return device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);
>>>>>> +
>>>>>>  	return pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);
>>>>>>  }
>>>>>
>>>>>
>>>>>
>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>>>>>> index 96c525785d78..141fe5abd33f 100644
>>>>>> --- a/mm/page_vma_mapped.c
>>>>>> +++ b/mm/page_vma_mapped.c
>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
>>>>>>  static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>>>>>>  {
>>>>>>  	unsigned long pfn;
>>>>>> +	bool device_private = false;
>>>>>>  	pte_t ptent = ptep_get(pvmw->pte);
>>>>>>
>>>>>>  	if (pvmw->flags & PVMW_MIGRATION) {
>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>>>>>>  		if (!softleaf_is_migration(entry))
>>>>>>  			return false;
>>>>>>
>>>>>> +		if (softleaf_is_migration_device_private(entry))
>>>>>> +			device_private = true;
>>>>>> +
>>>>>>  		pfn = softleaf_to_pfn(entry);
>>>>>>  	} else if (pte_present(ptent)) {
>>>>>>  		pfn = pte_pfn(ptent);
>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>>>>>>  			return false;
>>>>>>
>>>>>>  		pfn = softleaf_to_pfn(entry);
>>>>>> +
>>>>>> +		if (softleaf_is_device_private(entry))
>>>>>> +			device_private = true;
>>>>>>  	}
>>>>>>
>>>>>> +	if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))
>>>>>> +		return false;
>>>>>> +
>>>>>>  	if ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))
>>>>>>  		return false;
>>>>>>  	if (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))
>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>>>>>>  }
>>>>>>
>>>>>>  /* Returns true if the two ranges overlap. Careful to not overflow. */
>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)
>>>>>>  {
>>>>>> +	if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))
>>>>>> +		return false;
>>>>>> +
>>>>>>  	if ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))
>>>>>>  		return false;
>>>>>>  	if (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)
>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>>>>>
>>>>>>  			if (!softleaf_is_migration(entry) ||
>>>>>>  			    !check_pmd(softleaf_to_pfn(entry),
>>>>>> +				       softleaf_is_device_private(entry) ||
>>>>>> +				       softleaf_is_migration_device_private(entry),
>>>>>>  				       pvmw))
>>>>>>  				return not_found(pvmw);
>>>>>>  			return true;
>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>>>>>  		if (likely(pmd_trans_huge(pmde))) {
>>>>>>  			if (pvmw->flags & PVMW_MIGRATION)
>>>>>>  				return not_found(pvmw);
>>>>>> -			if (!check_pmd(pmd_pfn(pmde), pvmw))
>>>>>> +			if (!check_pmd(pmd_pfn(pmde), false, pvmw))
>>>>>>  				return not_found(pvmw);
>>>>>>  			return true;
>>>>>>  		}
>>>>>
>>>>> It seems to me that you can add a new flag like “bool is_device_private”
>>>>> to indicate whether pfn is a device private index instead of a pfn,
>>>>> without manipulating pvmw->pfn itself.
>>>>
>>>> We could do it like that, however my concern with using a new param was
>>>> that storing this info separately might make it easier to misuse a
>>>> device private index as a regular pfn.
>>>>
>>>> It seemed like it could be easy to overlook both when creating the pvmw
>>>> and then when accessing the pfn.
>>>
>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw)
>>> to return the converted pfn instead of pvmw->pfn directly. You can add
>>> a comment asking people to use the helper function and even mark
>>> pvmw->pfn /* do not use directly */.
>>
>> Yeah, I agree that is a good idea.
>>
>>> In addition, your patch manipulates pfn by left shifting it by 1. Are
>>> you sure there is no weird arch having pfns with bit 63 being 1? Your
>>> change could break it, right?
>>
>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6),
>> so I thought doing something similar here should be safe.
>
> Yeah, but that is limited to archs supporting HMM. page_vma_mapped_walk()
> is used by almost every arch, so it has a broader impact.

That is a good point. I see a few options:

- Can we assume SWP_PFN_BITS on every arch? I could add a sanity check
  that an unsigned long has an extra bit on top of SWP_PFN_BITS (rough
  sketch below).

- We could define PVMW_PFN_SHIFT as 0 if !CONFIG_MIGRATION, as the flag
  is not required in that case.

- Instead of modifying pvmw->pfn we could use pvmw->flags, but that has
  the issue of separating the offset type from the offset itself.

Thanks,
Jordan.

>
> Best Regards,
> Yan, Zi
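
For the first two options, a rough combined sketch of what I mean
(untested; the PVMW_PFN_* values here are my assumption from this
series, not final):

/* Option 2: only reserve the flag bit when migration entries exist. */
#ifdef CONFIG_MIGRATION
#define PVMW_PFN_SHIFT			1
#define PVMW_PFN_DEVICE_PRIVATE		BIT(0)
/*
 * Option 1: sanity check that an unsigned long still has a spare bit
 * on top of SWP_PFN_BITS once the pfn is shifted.
 */
static_assert(SWP_PFN_BITS + PVMW_PFN_SHIFT <= BITS_PER_LONG);
#else
#define PVMW_PFN_SHIFT			0
#define PVMW_PFN_DEVICE_PRIVATE		0
#endif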
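
For the accessor helper idea agreed above, something along these lines
is what I would try (sketch only; the helper names are placeholders,
and pvmw->pfn itself would be marked "do not use directly"):

static inline unsigned long pvmw_raw_pfn(const struct page_vma_mapped_walk *pvmw)
{
	/* Either a real pfn or a device private offset. */
	return pvmw->pfn >> PVMW_PFN_SHIFT;
}

static inline bool pvmw_is_device_private(const struct page_vma_mapped_walk *pvmw)
{
	return pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE;
}

static inline struct page *pvmw_page(const struct page_vma_mapped_walk *pvmw)
{
	if (pvmw_is_device_private(pvmw))
		return device_private_offset_to_page(pvmw_raw_pfn(pvmw));
	return pfn_to_page(pvmw_raw_pfn(pvmw));
}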
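
For reference, intended driver-side usage of the new interface looks
roughly like the below. This is a sketch only: the exact signature and
return convention of memremap_device_private_pagemap() may still
change, and the foo_* names are made up.

static int foo_devmem_init(struct foo_device *fdev, unsigned long nr_pages)
{
	struct dev_pagemap *pgmap = &fdev->pgmap;
	int ret;

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->nr_pages = nr_pages;	/* input: device memory size in pages */
	pgmap->ops = &foo_devmem_ops;	/* hypothetical dev_pagemap_ops */
	pgmap->owner = fdev;

	/*
	 * No request_free_mem_region(), no physical address space: the
	 * reserved device private range comes back in pgmap->range and
	 * the device private pages live in pgmap->pages.
	 */
	ret = memremap_device_private_pagemap(pgmap, NUMA_NO_NODE);
	if (ret)
		return ret;

	/* A device private offset, not a physical address or PFN. */
	fdev->devmem_base = pgmap->range.start;
	return 0;
}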