From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zi Yan
To: Andrew Morton, linux-mm@kvack.org
Cc: David Hildenbrand, Johannes Weiner, Vlastimil Babka, Oscar Salvador,
	Baolin Wang, "Kirill A . Shutemov", Mel Gorman, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Richard Chang,
	linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v9 1/6] mm/page_alloc: pageblock flags functions clean up.
Date: Mon, 16 Jun 2025 08:10:14 -0400
Message-ID: <20250616121019.1925851-2-ziy@nvidia.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250616121019.1925851-1-ziy@nvidia.com>
References: <20250616121019.1925851-1-ziy@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
No functional change is intended.

1. Add __NR_PAGEBLOCK_BITS for the number of pageblock flag bits and use
   roundup_pow_of_two(__NR_PAGEBLOCK_BITS) as NR_PAGEBLOCK_BITS to take the
   right amount of bits for pageblock flags.
2. Rename PB_migrate_skip to PB_compact_skip.
3. Add {get,set,clear}_pfnblock_bit() to operate on a standalone bit, like
   PB_compact_skip.
4. Make {get,set}_pfnblock_flags_mask() internal functions and use
   {get,set}_pfnblock_migratetype() for pageblock migratetype operations.
5. Move pageblock flags common code to get_pfnblock_bitmap_bitidx().
6. Use MIGRATETYPE_MASK to get the migratetype of a pageblock from its flags.
7. Use PB_migrate_end in the definition of MIGRATETYPE_MASK instead of
   PB_migrate_bits.
8. Add a comment on is_migrate_cma_folio() to prevent one from changing it to
   use get_pageblock_migratetype() and causing issues.

Signed-off-by: Zi Yan
Reviewed-by: Vlastimil Babka
Acked-by: David Hildenbrand
---
 Documentation/mm/physical_memory.rst |   2 +-
 include/linux/mmzone.h               |  18 +--
 include/linux/page-isolation.h       |   2 +-
 include/linux/pageblock-flags.h      |  34 +++---
 mm/memory_hotplug.c                  |   2 +-
 mm/page_alloc.c                      | 171 +++++++++++++++++++++------
 6 files changed, 162 insertions(+), 67 deletions(-)

diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index d3ac106e6b14..9af11b5bd145 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -584,7 +584,7 @@ Compaction control
 ``compact_blockskip_flush``
   Set to true when compaction migration scanner and free scanner meet, which
-  means the ``PB_migrate_skip`` bits should be cleared.
+  means the ``PB_compact_skip`` bits should be cleared.
 
 ``contiguous``
   Set to true when the zone is contiguous (in other words, no hole).
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5bec8b1d0e66..76d66c07b673 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -92,8 +92,12 @@ extern const char * const migratetype_names[MIGRATE_TYPES];
 #ifdef CONFIG_CMA
 #  define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
 #  define is_migrate_cma_page(_page) (get_pageblock_migratetype(_page) == MIGRATE_CMA)
-#  define is_migrate_cma_folio(folio, pfn)	(MIGRATE_CMA ==		\
-	get_pfnblock_flags_mask(&folio->page, pfn, MIGRATETYPE_MASK))
+/*
+ * __dump_folio() in mm/debug.c passes a folio pointer to on-stack struct folio,
+ * so folio_pfn() cannot be used and pfn is needed.
+ */
+#  define is_migrate_cma_folio(folio, pfn) \
+	(get_pfnblock_migratetype(&folio->page, pfn) == MIGRATE_CMA)
 #else
 #  define is_migrate_cma(migratetype) false
 #  define is_migrate_cma_page(_page) false
@@ -122,14 +126,12 @@ static inline bool migratetype_is_mergeable(int mt)
 
 extern int page_group_by_mobility_disabled;
 
-#define MIGRATETYPE_MASK ((1UL << PB_migratetype_bits) - 1)
+#define get_pageblock_migratetype(page) \
+	get_pfnblock_migratetype(page, page_to_pfn(page))
 
-#define get_pageblock_migratetype(page)					\
-	get_pfnblock_flags_mask(page, page_to_pfn(page), MIGRATETYPE_MASK)
+#define folio_migratetype(folio) \
+	get_pageblock_migratetype(&folio->page)
 
-#define folio_migratetype(folio)				\
-	get_pfnblock_flags_mask(&folio->page, folio_pfn(folio),	\
-			MIGRATETYPE_MASK)
 struct free_area {
 	struct list_head	free_list[MIGRATE_TYPES];
 	unsigned long		nr_free;
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 898bb788243b..277d8d92980c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -25,7 +25,7 @@ static inline bool is_migrate_isolate(int migratetype)
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
 
-void set_pageblock_migratetype(struct page *page, int migratetype);
+void set_pageblock_migratetype(struct page *page, enum migratetype migratetype);
 
 bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 				  int migratetype);
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index 6297c6343c55..c240c7a1fb03 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -19,15 +19,19 @@ enum pageblock_bits {
 	PB_migrate,
 	PB_migrate_end = PB_migrate + PB_migratetype_bits - 1,
 			/* 3 bits required for migrate types */
-	PB_migrate_skip,/* If set the block is skipped by compaction */
+	PB_compact_skip,/* If set the block is skipped by compaction */
 
 	/*
 	 * Assume the bits will always align on a word. If this assumption
 	 * changes then get/set pageblock needs updating.
 	 */
-	NR_PAGEBLOCK_BITS
+	__NR_PAGEBLOCK_BITS
 };
 
+#define NR_PAGEBLOCK_BITS (roundup_pow_of_two(__NR_PAGEBLOCK_BITS))
+
+#define MIGRATETYPE_MASK ((1UL << (PB_migrate_end + 1)) - 1)
+
 #if defined(CONFIG_HUGETLB_PAGE)
 
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
@@ -65,27 +69,23 @@ extern unsigned int pageblock_order;
 /* Forward declaration */
 struct page;
 
-unsigned long get_pfnblock_flags_mask(const struct page *page,
-				unsigned long pfn,
-				unsigned long mask);
-
-void set_pfnblock_flags_mask(struct page *page,
-				unsigned long flags,
-				unsigned long pfn,
-				unsigned long mask);
+enum migratetype get_pfnblock_migratetype(const struct page *page,
+					  unsigned long pfn);
+bool get_pfnblock_bit(const struct page *page, unsigned long pfn,
+		      enum pageblock_bits pb_bit);
+void set_pfnblock_bit(const struct page *page, unsigned long pfn,
+		      enum pageblock_bits pb_bit);
+void clear_pfnblock_bit(const struct page *page, unsigned long pfn,
+			enum pageblock_bits pb_bit);
 
 /* Declarations for getting and setting flags. See mm/page_alloc.c */
 #ifdef CONFIG_COMPACTION
 #define get_pageblock_skip(page) \
-	get_pfnblock_flags_mask(page, page_to_pfn(page),	\
-			(1 << (PB_migrate_skip)))
+	get_pfnblock_bit(page, page_to_pfn(page), PB_compact_skip)
 #define clear_pageblock_skip(page) \
-	set_pfnblock_flags_mask(page, 0, page_to_pfn(page),	\
-			(1 << PB_migrate_skip))
+	clear_pfnblock_bit(page, page_to_pfn(page), PB_compact_skip)
 #define set_pageblock_skip(page) \
-	set_pfnblock_flags_mask(page, (1 << PB_migrate_skip),	\
-			page_to_pfn(page),			\
-			(1 << PB_migrate_skip))
+	set_pfnblock_bit(page, page_to_pfn(page), PB_compact_skip)
 #else
 static inline bool get_pageblock_skip(struct page *page)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index dd1c4332347c..ddc6c6c63a30 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -774,7 +774,7 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 
 	/*
 	 * TODO now we have a visible range of pages which are not associated
-	 * with their zone properly. Not nice but set_pfnblock_flags_mask
+	 * with their zone properly. Not nice but set_pfnblock_migratetype()
 	 * expects the zone spans the pfn range. All the pages in the range
 	 * are reserved so nobody should be touching them so we should be safe
 	 */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1d46d0fb1f61..b303f60b6ed1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -353,81 +353,174 @@ static inline int pfn_to_bitidx(const struct page *page, unsigned long pfn)
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 }
 
+static __always_inline bool is_standalone_pb_bit(enum pageblock_bits pb_bit)
+{
+	return pb_bit > PB_migrate_end && pb_bit < __NR_PAGEBLOCK_BITS;
+}
+
+static __always_inline void
+get_pfnblock_bitmap_bitidx(const struct page *page, unsigned long pfn,
+			   unsigned long **bitmap_word, unsigned long *bitidx)
+{
+	unsigned long *bitmap;
+	unsigned long word_bitidx;
+
+	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
+	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
+	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
+
+	bitmap = get_pageblock_bitmap(page, pfn);
+	*bitidx = pfn_to_bitidx(page, pfn);
+	word_bitidx = *bitidx / BITS_PER_LONG;
+	*bitidx &= (BITS_PER_LONG - 1);
+	*bitmap_word = &bitmap[word_bitidx];
+}
+
+
 /**
- * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
+ * __get_pfnblock_flags_mask - Return the requested group of flags for
+ * a pageblock_nr_pages block of pages
  * @page: The page within the block of interest
  * @pfn: The target page frame number
  * @mask: mask of bits that the caller is interested in
  *
  * Return: pageblock_bits flags
  */
-unsigned long get_pfnblock_flags_mask(const struct page *page,
-					unsigned long pfn, unsigned long mask)
+static unsigned long __get_pfnblock_flags_mask(const struct page *page,
+					       unsigned long pfn,
+					       unsigned long mask)
 {
-	unsigned long *bitmap;
-	unsigned long bitidx, word_bitidx;
+	unsigned long *bitmap_word;
+	unsigned long bitidx;
 	unsigned long word;
 
-	bitmap = get_pageblock_bitmap(page, pfn);
-	bitidx = pfn_to_bitidx(page, pfn);
-	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
+	get_pfnblock_bitmap_bitidx(page, pfn, &bitmap_word, &bitidx);
 	/*
-	 * This races, without locks, with set_pfnblock_flags_mask(). Ensure
+	 * This races, without locks, with set_pfnblock_migratetype(). Ensure
 	 * a consistent read of the memory array, so that results, even though
 	 * racy, are not corrupted.
	 */
-	word = READ_ONCE(bitmap[word_bitidx]);
+	word = READ_ONCE(*bitmap_word);
 	return (word >> bitidx) & mask;
 }
 
-static __always_inline int get_pfnblock_migratetype(const struct page *page,
-						    unsigned long pfn)
+/**
+ * get_pfnblock_bit - Check if a standalone bit of a pageblock is set
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @pb_bit: pageblock bit to check
+ *
+ * Return: true if the bit is set, otherwise false
+ */
+bool get_pfnblock_bit(const struct page *page, unsigned long pfn,
+		      enum pageblock_bits pb_bit)
 {
-	return get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
+	unsigned long *bitmap_word;
+	unsigned long bitidx;
+
+	if (WARN_ON_ONCE(!is_standalone_pb_bit(pb_bit)))
+		return false;
+
+	get_pfnblock_bitmap_bitidx(page, pfn, &bitmap_word, &bitidx);
+
+	return test_bit(bitidx + pb_bit, bitmap_word);
 }
 
 /**
- * set_pfnblock_flags_mask - Set the requested group of flags for a pageblock_nr_pages block of pages
+ * get_pfnblock_migratetype - Return the migratetype of a pageblock
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ *
+ * Return: The migratetype of the pageblock
+ *
+ * Use get_pfnblock_migratetype() if caller already has both @page and @pfn
+ * to save a call to page_to_pfn().
+ */
+__always_inline enum migratetype
+get_pfnblock_migratetype(const struct page *page, unsigned long pfn)
+{
+	return __get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
+}
+
+/**
+ * __set_pfnblock_flags_mask - Set the requested group of flags for
+ * a pageblock_nr_pages block of pages
  * @page: The page within the block of interest
- * @flags: The flags to set
  * @pfn: The target page frame number
+ * @flags: The flags to set
  * @mask: mask of bits that the caller is interested in
  */
-void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
-					unsigned long pfn,
-					unsigned long mask)
+static void __set_pfnblock_flags_mask(struct page *page, unsigned long pfn,
+				      unsigned long flags, unsigned long mask)
 {
-	unsigned long *bitmap;
-	unsigned long bitidx, word_bitidx;
+	unsigned long *bitmap_word;
+	unsigned long bitidx;
 	unsigned long word;
 
-	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
-	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
-
-	bitmap = get_pageblock_bitmap(page, pfn);
-	bitidx = pfn_to_bitidx(page, pfn);
-	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
-
-	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
+	get_pfnblock_bitmap_bitidx(page, pfn, &bitmap_word, &bitidx);
 
 	mask <<= bitidx;
 	flags <<= bitidx;
 
-	word = READ_ONCE(bitmap[word_bitidx]);
+	word = READ_ONCE(*bitmap_word);
 	do {
-	} while (!try_cmpxchg(&bitmap[word_bitidx], &word, (word & ~mask) | flags));
+	} while (!try_cmpxchg(bitmap_word, &word, (word & ~mask) | flags));
+}
+
+/**
+ * set_pfnblock_bit - Set a standalone bit of a pageblock
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @pb_bit: pageblock bit to set
+ */
+void set_pfnblock_bit(const struct page *page, unsigned long pfn,
+		      enum pageblock_bits pb_bit)
+{
+	unsigned long *bitmap_word;
+	unsigned long bitidx;
+
+	if (WARN_ON_ONCE(!is_standalone_pb_bit(pb_bit)))
+		return;
+
+	get_pfnblock_bitmap_bitidx(page, pfn, &bitmap_word, &bitidx);
+
+	set_bit(bitidx + pb_bit, bitmap_word);
 }
 
-void set_pageblock_migratetype(struct page *page, int migratetype)
+/**
+ * clear_pfnblock_bit - Clear a standalone bit of a pageblock
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @pb_bit: pageblock bit to clear
+ */
+void clear_pfnblock_bit(const struct page *page, unsigned long pfn,
+			enum pageblock_bits pb_bit)
+{
+	unsigned long *bitmap_word;
+	unsigned long bitidx;
+
+	if (WARN_ON_ONCE(!is_standalone_pb_bit(pb_bit)))
+		return;
+
+	get_pfnblock_bitmap_bitidx(page, pfn, &bitmap_word, &bitidx);
+
+	clear_bit(bitidx + pb_bit, bitmap_word);
+}
+
+/**
+ * set_pageblock_migratetype - Set the migratetype of a pageblock
+ * @page: The page within the block of interest
+ * @migratetype: migratetype to set
+ */
+__always_inline void set_pageblock_migratetype(struct page *page,
+					       enum migratetype migratetype)
 {
 	if (unlikely(page_group_by_mobility_disabled &&
 		     migratetype < MIGRATE_PCPTYPES))
 		migratetype = MIGRATE_UNMOVABLE;
 
-	set_pfnblock_flags_mask(page, (unsigned long)migratetype,
-				page_to_pfn(page), MIGRATETYPE_MASK);
+	__set_pfnblock_flags_mask(page, page_to_pfn(page),
+				  (unsigned long)migratetype, MIGRATETYPE_MASK);
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -667,7 +760,7 @@ static inline void __add_to_free_list(struct page *page, struct zone *zone,
 	int nr_pages = 1 << order;
 
 	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
-		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     "page type is %d, passed migratetype is %d (nr=%d)\n",
 		     get_pageblock_migratetype(page), migratetype, nr_pages);
 
 	if (tail)
@@ -693,7 +786,7 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 
 	/* Free page moving can fail, so it happens before the type update */
 	VM_WARN_ONCE(get_pageblock_migratetype(page) != old_mt,
-		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     "page type is %d, passed migratetype is %d (nr=%d)\n",
 		     get_pageblock_migratetype(page), old_mt, nr_pages);
 
 	list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
@@ -715,7 +808,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
 	int nr_pages = 1 << order;
 
 	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
-		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     "page type is %d, passed migratetype is %d (nr=%d)\n",
 		     get_pageblock_migratetype(page), migratetype, nr_pages);
 
 	/* clear reported state and update reported page count */
@@ -3123,7 +3216,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 
 /*
  * Do not instrument rmqueue() with KMSAN. This function may call
- * __msan_poison_alloca() through a call to set_pfnblock_flags_mask().
+ * __msan_poison_alloca() through a call to set_pfnblock_migratetype().
  * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it
 * may call rmqueue() again, which will result in a deadlock.
 */
-- 
2.47.2