From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zi Yan <ziy@nvidia.com>
To: David Hildenbrand, Johannes Weiner, Vlastimil Babka, linux-mm@kvack.org
Cc: Andrew Morton, Oscar Salvador, Baolin Wang, "Kirill A. Shutemov",
	Mel Gorman, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Richard Chang, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v8 6/6] mm/page_isolation: remove migratetype parameter from more functions.
Date: Mon, 2 Jun 2025 19:52:47 -0400
Message-ID: <20250602235247.1219983-7-ziy@nvidia.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250602235247.1219983-1-ziy@nvidia.com>
References: <20250602235247.1219983-1-ziy@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

migratetype is no longer overwritten during pageblock isolation, so
start_isolate_page_range(), has_unmovable_pages(), and
set_migratetype_isolate() no longer need to know which migratetype to
restore when isolation fails. has_unmovable_pages() still needs to know
whether an isolation is for a CMA allocation, so add
PB_ISOLATE_MODE_CMA_ALLOC to provide that information. At the same
time, change the isolation flags into enum pb_isolate_mode
(PB_ISOLATE_MODE_MEM_OFFLINE, PB_ISOLATE_MODE_CMA_ALLOC,
PB_ISOLATE_MODE_OTHER). Remove REPORT_FAILURE and check for
PB_ISOLATE_MODE_MEM_OFFLINE instead, since only
PB_ISOLATE_MODE_MEM_OFFLINE reports isolation failures.

alloc_contig_range() no longer needs migratetype either. Replace it
with a newly defined acr_flags_t that tells whether the allocation is
for CMA, and do the same for __alloc_contig_migrate_range(). Add
ACR_NONE (set to 0) to indicate an ordinary allocation.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Vlastimil Babka
---
 drivers/virtio/virtio_mem.c    |  2 +-
 include/linux/gfp.h            |  7 +++-
 include/linux/page-isolation.h | 20 ++++++++--
 include/trace/events/kmem.h    | 14 ++++---
 mm/cma.c                       |  2 +-
 mm/memory_hotplug.c            |  6 +--
 mm/page_alloc.c                | 27 ++++++-------
 mm/page_isolation.c            | 70 +++++++++++++++-------------------
 8 files changed, 80 insertions(+), 68 deletions(-)
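A note for reviewers, not part of the commit message: the snippet below
is only a sketch of how a caller is expected to use the reworked
interface after this patch. It assumes CONFIG_CONTIG_ALLOC;
demo_grab_range() is a made-up helper with no in-tree counterpart, and
the virtio-mem and CMA hunks below remain the authoritative users.

	/* Illustration only -- not part of this patch. */
	static int demo_grab_range(unsigned long start_pfn,
				   unsigned long nr_pages, bool for_cma)
	{
		/* acr_flags_t replaces the old migratetype argument. */
		acr_flags_t flags = for_cma ? ACR_CMA : ACR_NONE;
		int ret;

		ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
					 flags, GFP_KERNEL);
		if (ret)
			return ret;

		/* ... use the allocated range ... */

		free_contig_range(start_pfn, nr_pages);
		return 0;
	}

Internally, ACR_CMA is translated to PB_ISOLATE_MODE_CMA_ALLOC and
everything else to PB_ISOLATE_MODE_OTHER, while memory offlining passes
PB_ISOLATE_MODE_MEM_OFFLINE to start_isolate_page_range() directly.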
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 56d0dbe62163..42ebaafb9591 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1243,7 +1243,7 @@ static int virtio_mem_fake_offline(struct virtio_mem *vm, unsigned long pfn,
 		if (atomic_read(&vm->config_changed))
 			return -EAGAIN;
 
-		rc = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE,
+		rc = alloc_contig_range(pfn, pfn + nr_pages, ACR_NONE,
 					GFP_KERNEL);
 		if (rc == -ENOMEM)
 			/* whoops, out of memory */
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index be160e8d8bcb..ccf35cc351ff 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -423,9 +423,14 @@ static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
 extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
+
+typedef unsigned int __bitwise acr_flags_t;
+#define ACR_NONE	((__force acr_flags_t)0)	// ordinary allocation request
+#define ACR_CMA		((__force acr_flags_t)BIT(0))	// allocate for CMA
+
 /* The below functions must be run on a range from a single zone. */
 extern int alloc_contig_range_noprof(unsigned long start, unsigned long end,
-			      unsigned migratetype, gfp_t gfp_mask);
+			      acr_flags_t alloc_flags, gfp_t gfp_mask);
 #define alloc_contig_range(...)			alloc_hooks(alloc_contig_range_noprof(__VA_ARGS__))
 
 extern struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 7a681a49e73c..3e2f960e166c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -38,8 +38,20 @@ static inline void set_pageblock_isolate(struct page *page)
 }
 #endif
 
-#define MEMORY_OFFLINE	0x1
-#define REPORT_FAILURE	0x2
+/*
+ * Pageblock isolation modes:
+ * PB_ISOLATE_MODE_MEM_OFFLINE - isolate to offline (!allocate) memory
+ *				 e.g., skip over PageHWPoison() pages and
+ *				 PageOffline() pages. Unmovable pages will be
+ *				 reported in this mode.
+ * PB_ISOLATE_MODE_CMA_ALLOC   - isolate for CMA allocations
+ * PB_ISOLATE_MODE_OTHER       - isolate for other purposes
+ */
+enum pb_isolate_mode {
+	PB_ISOLATE_MODE_MEM_OFFLINE,
+	PB_ISOLATE_MODE_CMA_ALLOC,
+	PB_ISOLATE_MODE_OTHER,
+};
 
 void __meminit init_pageblock_migratetype(struct page *page,
 					  enum migratetype migratetype,
@@ -49,10 +61,10 @@ bool pageblock_isolate_and_move_free_pages(struct zone *zone, struct page *page)
 bool pageblock_unisolate_and_move_free_pages(struct zone *zone, struct page *page);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags);
+			     enum pb_isolate_mode mode);
 
 void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
 
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			int isol_flags);
+			enum pb_isolate_mode mode);
 #endif
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index f74925a6cf69..efffcf578217 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -304,6 +304,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
 		__entry->change_ownership)
 );
 
+#ifdef CONFIG_CONTIG_ALLOC
 TRACE_EVENT(mm_alloc_contig_migrate_range_info,
 
 	TP_PROTO(unsigned long start,
@@ -311,9 +312,9 @@ TRACE_EVENT(mm_alloc_contig_migrate_range_info,
 		 unsigned long nr_migrated,
 		 unsigned long nr_reclaimed,
 		 unsigned long nr_mapped,
-		 int migratetype),
+		 acr_flags_t alloc_flags),
 
-	TP_ARGS(start, end, nr_migrated, nr_reclaimed, nr_mapped, migratetype),
+	TP_ARGS(start, end, nr_migrated, nr_reclaimed, nr_mapped, alloc_flags),
 
 	TP_STRUCT__entry(
 		__field(unsigned long, start)
@@ -321,7 +322,7 @@ TRACE_EVENT(mm_alloc_contig_migrate_range_info,
 		__field(unsigned long, nr_migrated)
 		__field(unsigned long, nr_reclaimed)
 		__field(unsigned long, nr_mapped)
-		__field(int, migratetype)
+		__field(acr_flags_t, alloc_flags)
 	),
 
 	TP_fast_assign(
@@ -330,17 +331,18 @@ TRACE_EVENT(mm_alloc_contig_migrate_range_info,
 		__entry->nr_migrated = nr_migrated;
 		__entry->nr_reclaimed = nr_reclaimed;
 		__entry->nr_mapped = nr_mapped;
-		__entry->migratetype = migratetype;
+		__entry->alloc_flags = alloc_flags;
 	),
 
-	TP_printk("start=0x%lx end=0x%lx migratetype=%d nr_migrated=%lu nr_reclaimed=%lu nr_mapped=%lu",
+	TP_printk("start=0x%lx end=0x%lx alloc_flags=%d nr_migrated=%lu nr_reclaimed=%lu nr_mapped=%lu",
 		  __entry->start,
 		  __entry->end,
-		  __entry->migratetype,
+		  __entry->alloc_flags,
 		  __entry->nr_migrated,
 		  __entry->nr_reclaimed,
 		  __entry->nr_mapped)
 );
+#endif
 
 TRACE_EVENT(mm_setup_per_zone_wmarks,
 
diff --git a/mm/cma.c b/mm/cma.c
index 397567883a10..9ee8fad797bc 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -822,7 +822,7 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 
 		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
 		mutex_lock(&cma->alloc_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp);
+		ret = alloc_contig_range(pfn, pfn + count, ACR_CMA, gfp);
 		mutex_unlock(&cma->alloc_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4626064705ac..3eea3008727f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -2009,8 +2009,7 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
-				       MIGRATE_MOVABLE,
-				       MEMORY_OFFLINE | REPORT_FAILURE);
+				       PB_ISOLATE_MODE_MEM_OFFLINE);
 	if (ret) {
 		reason = "failure to isolate range";
 		goto failed_removal_pcplists_disabled;
@@ -2069,7 +2068,8 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 			goto failed_removal_isolated;
 		}
 
-		ret = test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE);
+		ret = test_pages_isolated(start_pfn, end_pfn,
+					  PB_ISOLATE_MODE_MEM_OFFLINE);
 
 	} while (ret);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ccb21af002b0..0867e2b2e187 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6695,11 +6695,12 @@ static void alloc_contig_dump_pages(struct list_head *page_list)
 
 /*
  * [start, end) must belong to a single zone.
- * @migratetype: using migratetype to filter the type of migration in
+ * @alloc_flags: using acr_flags_t to filter the type of migration in
  *	trace_mm_alloc_contig_migrate_range_info.
  */
 static int __alloc_contig_migrate_range(struct compact_control *cc,
-		unsigned long start, unsigned long end, int migratetype)
+		unsigned long start, unsigned long end,
+		acr_flags_t alloc_flags)
 {
 	/* This function is based on compact_zone() from compaction.c. */
 	unsigned int nr_reclaimed;
@@ -6771,7 +6772,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		putback_movable_pages(&cc->migratepages);
 	}
 
-	trace_mm_alloc_contig_migrate_range_info(start, end, migratetype,
+	trace_mm_alloc_contig_migrate_range_info(start, end, alloc_flags,
 						 total_migrated,
 						 total_reclaimed,
 						 total_mapped);
@@ -6842,10 +6843,7 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
  * alloc_contig_range() -- tries to allocate given range of pages
  * @start:	start PFN to allocate
  * @end:	one-past-the-last PFN to allocate
- * @migratetype:	migratetype of the underlying pageblocks (either
- *			#MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks
- *			in range must have the same migratetype and it must
- *			be either of the two.
+ * @alloc_flags:	allocation information
  * @gfp_mask:	GFP mask. Node/zone/placement hints are ignored; only some
  *		action and reclaim modifiers are supported. Reclaim modifiers
  *		control allocation behavior during compaction/migration/reclaim.
@@ -6862,7 +6860,7 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
  * need to be freed with free_contig_range().
  */
 int alloc_contig_range_noprof(unsigned long start, unsigned long end,
-		       unsigned migratetype, gfp_t gfp_mask)
+		       acr_flags_t alloc_flags, gfp_t gfp_mask)
 {
 	unsigned long outer_start, outer_end;
 	int ret = 0;
@@ -6877,6 +6875,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 		.alloc_contig = true,
 	};
 	INIT_LIST_HEAD(&cc.migratepages);
+	enum pb_isolate_mode mode = (alloc_flags & ACR_CMA) ?
+					PB_ISOLATE_MODE_CMA_ALLOC :
+					PB_ISOLATE_MODE_OTHER;
 
 	gfp_mask = current_gfp_context(gfp_mask);
 	if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
@@ -6903,7 +6904,7 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 	 * put back to page allocator so that buddy can use them.
 	 */
 
-	ret = start_isolate_page_range(start, end, migratetype, 0);
+	ret = start_isolate_page_range(start, end, mode);
 	if (ret)
 		goto done;
 
@@ -6919,7 +6920,7 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 	 * allocated. So, if we fall through be sure to clear ret so that
 	 * -EBUSY is not accidentally used or returned to caller.
 	 */
-	ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
+	ret = __alloc_contig_migrate_range(&cc, start, end, alloc_flags);
 	if (ret && ret != -EBUSY)
 		goto done;
 
@@ -6953,7 +6954,7 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 	outer_start = find_large_buddy(start);
 
 	/* Make sure the range is really isolated. */
-	if (test_pages_isolated(outer_start, end, 0)) {
+	if (test_pages_isolated(outer_start, end, mode)) {
 		ret = -EBUSY;
 		goto done;
 	}
@@ -6996,8 +6997,8 @@ static int __alloc_contig_pages(unsigned long start_pfn,
 {
 	unsigned long end_pfn = start_pfn + nr_pages;
 
-	return alloc_contig_range_noprof(start_pfn, end_pfn, MIGRATE_MOVABLE,
-					 gfp_mask);
+	return alloc_contig_range_noprof(start_pfn, end_pfn, ACR_NONE,
+					 gfp_mask);
 }
 
 static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 1edfef408faf..ece3bfc56bcd 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -31,7 +31,7 @@
  *
  */
 static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long end_pfn,
-				int migratetype, int flags)
+				enum pb_isolate_mode mode)
 {
 	struct page *page = pfn_to_page(start_pfn);
 	struct zone *zone = page_zone(page);
@@ -46,7 +46,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
 		 * isolate CMA pageblocks even when they are not movable in fact
 		 * so consider them movable here.
 		 */
-		if (is_migrate_cma(migratetype))
+		if (mode == PB_ISOLATE_MODE_CMA_ALLOC)
 			return NULL;
 
 		return page;
@@ -117,7 +117,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
 		 * The HWPoisoned page may be not in buddy system, and
 		 * page_count() is not 0.
 		 */
-		if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
+		if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) && PageHWPoison(page))
 			continue;
 
 		/*
@@ -130,7 +130,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
 		 * move these pages that still have a reference count > 0.
 		 * (false negatives in this function only)
 		 */
-		if ((flags & MEMORY_OFFLINE) && PageOffline(page))
+		if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) && PageOffline(page))
 			continue;
 
 		if (__PageMovable(page) || PageLRU(page))
@@ -151,7 +151,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
  * present in [start_pfn, end_pfn). The pageblock must intersect with
  * [start_pfn, end_pfn).
  */
-static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags,
+static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 			unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct zone *zone = page_zone(page);
@@ -186,7 +186,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 				  end_pfn);
 
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			migratetype, isol_flags);
+			mode);
 	if (!unmovable) {
 		if (!pageblock_isolate_and_move_free_pages(zone, page)) {
 			spin_unlock_irqrestore(&zone->lock, flags);
@@ -198,7 +198,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	}
 
 	spin_unlock_irqrestore(&zone->lock, flags);
-	if (isol_flags & REPORT_FAILURE) {
+	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
 		 * printk() with zone->lock held will likely trigger a
 		 * lockdep splat, so defer it here.
@@ -292,11 +292,10 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * isolate_single_pageblock() -- tries to isolate a pageblock that might be
  * within a free or in-use page.
  * @boundary_pfn:		pageblock-aligned pfn that a page might cross
- * @flags:			isolation flags
+ * @mode:			isolation mode
  * @isolate_before:	isolate the pageblock before the boundary_pfn
 * @skip_isolation:	the flag to skip the pageblock isolation in second
  *			isolate_single_pageblock()
- * @migratetype:	migrate type to set in error recovery.
  *
  * Free and in-use pages can be as big as MAX_PAGE_ORDER and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
@@ -311,8 +310,9 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * either. The function handles this by splitting the free page or migrating
  * the in-use page then splitting the free page.
  */
-static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
-		bool isolate_before, bool skip_isolation, int migratetype)
+static int isolate_single_pageblock(unsigned long boundary_pfn,
+		enum pb_isolate_mode mode, bool isolate_before,
+		bool skip_isolation)
 {
 	unsigned long start_pfn;
 	unsigned long isolate_pageblock;
@@ -338,12 +338,11 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 					zone->zone_start_pfn);
 
 	if (skip_isolation) {
-		int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
-
-		VM_BUG_ON(!is_migrate_isolate(mt));
+		VM_BUG_ON(!get_pageblock_isolate(pfn_to_page(isolate_pageblock)));
 	} else {
-		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype,
-				flags, isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
+		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock),
+				mode, isolate_pageblock,
+				isolate_pageblock + pageblock_nr_pages);
 
 		if (ret)
 			return ret;
@@ -441,14 +440,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
  * start_isolate_page_range() - mark page range MIGRATE_ISOLATE
  * @start_pfn:		The first PFN of the range to be isolated.
  * @end_pfn:		The last PFN of the range to be isolated.
- * @migratetype:	Migrate type to set in error recovery.
- * @flags:		The following flags are allowed (they can be combined in
- *			a bit mask)
- *			MEMORY_OFFLINE - isolate to offline (!allocate) memory
- *					 e.g., skip over PageHWPoison() pages
- *					 and PageOffline() pages.
- *			REPORT_FAILURE - report details about the failure to
- *					 isolate the range
+ * @mode:		isolation mode
 *
 * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
 * the range will never be allocated. Any free pages and pages freed in the
@@ -481,7 +473,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 * Return: 0 on success and -EBUSY if any part of range cannot be isolated.
 */
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags)
+			     enum pb_isolate_mode mode)
 {
 	unsigned long pfn;
 	struct page *page;
@@ -492,8 +484,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	bool skip_isolation = false;
 
 	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
-	ret = isolate_single_pageblock(isolate_start, flags, false,
-			skip_isolation, migratetype);
+	ret = isolate_single_pageblock(isolate_start, mode, false,
+			skip_isolation);
 	if (ret)
 		return ret;
 
@@ -501,8 +493,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 		skip_isolation = true;
 
 	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
-	ret = isolate_single_pageblock(isolate_end, flags, true,
-			skip_isolation, migratetype);
+	ret = isolate_single_pageblock(isolate_end, mode, true, skip_isolation);
 	if (ret) {
 		unset_migratetype_isolate(pfn_to_page(isolate_start));
 		return ret;
@@ -513,8 +504,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	     pfn < isolate_end - pageblock_nr_pages;
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
-		if (page && set_migratetype_isolate(page, migratetype, flags,
-					start_pfn, end_pfn)) {
+		if (page && set_migratetype_isolate(page, mode, start_pfn,
+					end_pfn)) {
 			undo_isolate_page_range(isolate_start, pfn);
 			unset_migratetype_isolate(
 				pfn_to_page(isolate_end - pageblock_nr_pages));
@@ -556,7 +547,7 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
 */
 static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
-				  int flags)
+				  enum pb_isolate_mode mode)
 {
 	struct page *page;
 
@@ -569,11 +560,12 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 			 * simple way to verify that as VM_BUG_ON(), though.
 			 */
 			pfn += 1 << buddy_order(page);
-		else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
+		else if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) &&
+			 PageHWPoison(page))
 			/* A HWPoisoned page cannot be also PageBuddy */
 			pfn++;
-		else if ((flags & MEMORY_OFFLINE) && PageOffline(page) &&
-			 !page_count(page))
+		else if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) &&
+			 PageOffline(page) && !page_count(page))
 			/*
 			 * The responsible driver agreed to skip PageOffline()
 			 * pages when offlining memory by dropping its
@@ -591,11 +583,11 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 * test_pages_isolated - check if pageblocks in range are isolated
 * @start_pfn:		The first PFN of the isolated range
 * @end_pfn:		The first PFN *after* the isolated range
-* @isol_flags:		Testing mode flags
+* @mode:		Testing mode
 *
 * This tests if all in the specified range are free.
 *
-* If %MEMORY_OFFLINE is specified in @flags, it will consider
+* If %PB_ISOLATE_MODE_MEM_OFFLINE specified in @mode, it will consider
 * poisoned and offlined pages free as well.
 *
 * Caller must ensure the requested range doesn't span zones.
@@ -603,7 +595,7 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 * Returns 0 if true, -EBUSY if one or more pages are in use.
 */
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			int isol_flags)
+			enum pb_isolate_mode mode)
 {
 	unsigned long pfn, flags;
 	struct page *page;
@@ -639,7 +631,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn, isol_flags);
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn, mode);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	ret = pfn < end_pfn ? -EBUSY : 0;
-- 
2.47.2