Date: Thu, 23 Mar 2023 19:42:55 +0800
From: kernel test robot <lkp@intel.com>
To: Zi Yan
Cc: oe-kbuild-all@lists.linux.dev, linux-kernel@vger.kernel.org,
	Andrew Morton, Linux Memory Management List
Subject: arch/sparc/mm/tsb.c:405:39: error: left shift count is negative
Message-ID: <202303231905.iHyozZ4Z-lkp@intel.com>

Hi Zi,

FYI, the error/warning was bisected to this commit; please ignore it if it's irrelevant.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   fff5a5e7f528b2ed2c335991399a766c2cf01103
commit: 0192445cb2f7ed1cd7a95a0fc8c7645480baba25 arch: mm: rename FORCE_MAX_ZONEORDER to ARCH_FORCE_MAX_ORDER
date:   6 months ago
config: sparc64-randconfig-r014-20230322 (https://download.01.org/0day-ci/archive/20230323/202303231905.iHyozZ4Z-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0192445cb2f7ed1cd7a95a0fc8c7645480baba25
        git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
        git fetch --no-tags linus master
        git checkout 0192445cb2f7ed1cd7a95a0fc8c7645480baba25
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sparc64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sparc64 SHELL=/bin/bash arch/sparc/mm/ drivers/gpu/drm/ mm/

If you fix the issue, kindly add the following tags where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202303231905.iHyozZ4Z-lkp@intel.com/

All error/warnings (new ones prefixed by >>):

   In file included from include/linux/gfp.h:7,
                    from include/linux/slab.h:15,
                    from arch/sparc/mm/tsb.c:9:
   include/linux/mmzone.h:636:33: error: size of array 'free_area' is negative
     636 |         struct free_area        free_area[MAX_ORDER];
         |                                 ^~~~~~~~~
   arch/sparc/mm/tsb.c: In function 'tsb_grow':
>> arch/sparc/mm/tsb.c:405:39: error: left shift count is negative [-Werror=shift-count-negative]
     405 |         if (max_tsb_size > (PAGE_SIZE << MAX_ORDER))
         |                                       ^~
   arch/sparc/mm/tsb.c:406:43: error: left shift count is negative [-Werror=shift-count-negative]
     406 |                 max_tsb_size = (PAGE_SIZE << MAX_ORDER);
         |                                           ^~
   cc1: all warnings being treated as errors
--
   In file included from include/linux/gfp.h:7,
                    from include/linux/mm.h:7,
                    from mm/shuffle.c:4:
   include/linux/mmzone.h:636:33: error: size of array 'free_area' is negative
     636 |         struct free_area        free_area[MAX_ORDER];
         |                                 ^~~~~~~~~
   In file included from arch/sparc/include/asm/bug.h:6,
                    from include/linux/bug.h:5,
                    from include/linux/mmdebug.h:5,
                    from include/linux/mm.h:6:
   mm/internal.h: In function 'mem_map_offset':
   include/linux/mmzone.h:32:31: warning: left shift count is negative [-Wshift-count-negative]
      32 | #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
         |                               ^~
   include/linux/compiler.h:33:55: note: in definition of macro '__branch_check__'
      33 |                         ______r = __builtin_expect(!!(x), expect);      \
         |                                                       ^
   mm/internal.h:649:13: note: in expansion of macro 'unlikely'
     649 |         if (unlikely(offset >= MAX_ORDER_NR_PAGES))
         |             ^~~~~~~~
   mm/internal.h:649:32: note: in expansion of macro 'MAX_ORDER_NR_PAGES'
     649 |         if (unlikely(offset >= MAX_ORDER_NR_PAGES))
         |                                ^~~~~~~~~~~~~~~~~~
   include/linux/mmzone.h:32:31: warning: left shift count is negative [-Wshift-count-negative]
      32 | #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
         |                               ^~
   include/linux/compiler.h:35:54: note: in definition of macro '__branch_check__'
      35 |                                                 expect, is_constant);   \
         |                                                      ^~~~~~~~~~~
   mm/internal.h:649:13: note: in expansion of macro 'unlikely'
     649 |         if (unlikely(offset >= MAX_ORDER_NR_PAGES))
         |             ^~~~~~~~
   mm/internal.h:649:32: note: in expansion of macro 'MAX_ORDER_NR_PAGES'
     649 |         if (unlikely(offset >= MAX_ORDER_NR_PAGES))
         |                                ^~~~~~~~~~~~~~~~~~
   mm/internal.h: In function 'mem_map_next':
   include/linux/mmzone.h:32:31: warning: left shift count is negative [-Wshift-count-negative]
      32 | #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
         |                               ^~
   include/linux/compiler.h:33:55: note: in definition of macro '__branch_check__'
      33 |                         ______r = __builtin_expect(!!(x), expect);      \
         |                                                       ^
   mm/internal.h:661:13: note: in expansion of macro 'unlikely'
     661 |         if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
         |             ^~~~~~~~
   mm/internal.h:661:33: note: in expansion of macro 'MAX_ORDER_NR_PAGES'
     661 |         if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
         |                                 ^~~~~~~~~~~~~~~~~~
   include/linux/mmzone.h:32:31: warning: left shift count is negative [-Wshift-count-negative]
      32 | #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
         |                               ^~
   include/linux/compiler.h:35:54: note: in definition of macro '__branch_check__'
      35 |                                                 expect, is_constant);   \
         |                                                      ^~~~~~~~~~~
   mm/internal.h:661:13: note: in expansion of macro 'unlikely'
     661 |         if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
         |             ^~~~~~~~
   mm/internal.h:661:33: note: in expansion of macro 'MAX_ORDER_NR_PAGES'
     661 |         if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
         |                                 ^~~~~~~~~~~~~~~~~~
   mm/shuffle.c: In function '__shuffle_zone':
>> mm/shuffle.c:87:35: warning: left shift count is negative [-Wshift-count-negative]
      87 |         const int order_pages = 1 << order;
         |                                 ~~^~~~~~~~
--
   In file included from include/linux/gfp.h:7,
                    from include/linux/umh.h:4,
                    from include/linux/kmod.h:9,
                    from include/linux/module.h:17,
                    from drivers/gpu/drm/drm_gem_vram_helper.c:4:
   include/linux/mmzone.h:636:33: error: size of array 'free_area' is negative
     636 |         struct free_area        free_area[MAX_ORDER];
         |                                 ^~~~~~~~~
   In file included from include/drm/ttm/ttm_device.h:31,
                    from include/drm/ttm/ttm_bo_driver.h:40,
                    from include/drm/drm_gem_ttm_helper.h:11,
                    from drivers/gpu/drm/drm_gem_vram_helper.c:13:
>> include/drm/ttm/ttm_pool.h:75:38: error: size of array 'orders' is negative
      75 |         struct ttm_pool_type orders[MAX_ORDER];
         |                                     ^~~~~~
--
   In file included from include/linux/gfp.h:7,
                    from include/linux/umh.h:4,
                    from include/linux/kmod.h:9,
                    from include/linux/module.h:17,
                    from drivers/gpu/drm/ttm/ttm_pool.c:34:
   include/linux/mmzone.h:636:33: error: size of array 'free_area' is negative
     636 |         struct free_area        free_area[MAX_ORDER];
         |                                 ^~~~~~~~~
   In file included from drivers/gpu/drm/ttm/ttm_pool.c:43:
>> include/drm/ttm/ttm_pool.h:75:38: error: size of array 'orders' is negative
      75 |         struct ttm_pool_type orders[MAX_ORDER];
         |                                     ^~~~~~
>> drivers/gpu/drm/ttm/ttm_pool.c:67:29: error: size of array 'global_write_combined' is negative
      67 | static struct ttm_pool_type global_write_combined[MAX_ORDER];
         |                             ^~~~~~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/ttm/ttm_pool.c:68:29: error: size of array 'global_uncached' is negative
      68 | static struct ttm_pool_type global_uncached[MAX_ORDER];
         |                             ^~~~~~~~~~~~~~~
>> drivers/gpu/drm/ttm/ttm_pool.c:70:29: error: size of array 'global_dma32_write_combined' is negative
      70 | static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
         |                             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/ttm/ttm_pool.c:71:29: error: size of array 'global_dma32_uncached' is negative
      71 | static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
         |                             ^~~~~~~~~~~~~~~~~~~~~

vim +405 arch/sparc/mm/tsb.c

0871420fad5844 arch/sparc/mm/tsb.c   David S. Miller 2008-11-16  379  
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  380  /* When the RSS of an address space exceeds tsb_rss_limit for a TSB,
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  381   * do_sparc64_fault() invokes this routine to try and grow it.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  382   *
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  383   * When we reach the maximum TSB size supported, we stick ~0UL into
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  384   * tsb_rss_limit for that TSB so the grow checks in do_sparc64_fault()
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  385   * will not trigger any longer.
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  386   *
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  387   * The TSB can be anywhere from 8K to 1MB in size, in increasing powers
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  388   * of two. The TSB must be aligned to it's size, so f.e. a 512K TSB
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  389   * must be 512K aligned. It also must be physically contiguous, so we
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  390   * cannot use vmalloc().
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  391   *
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  392   * The idea here is to grow the TSB when the RSS of the process approaches
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  393   * the number of entries that the current TSB can hold at once. Currently,
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  394   * we trigger when the RSS hits 3/4 of the TSB capacity.
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  395   */
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  396  void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss)
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  397  {
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  398  	unsigned long max_tsb_size = 1 * 1024 * 1024;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  399  	unsigned long new_size, old_size, flags;
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  400  	struct tsb *old_tsb, *new_tsb;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  401  	unsigned long new_cache_index, old_cache_index;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  402  	unsigned long new_rss_limit;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  403  	gfp_t gfp_flags;
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  404  
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31 @405  	if (max_tsb_size > (PAGE_SIZE << MAX_ORDER))
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  406  		max_tsb_size = (PAGE_SIZE << MAX_ORDER);
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  407  
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  408  	new_cache_index = 0;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  409  	for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
0871420fad5844 arch/sparc/mm/tsb.c   David S. Miller 2008-11-16  410  		new_rss_limit = tsb_size_to_rss_limit(new_size);
0871420fad5844 arch/sparc/mm/tsb.c   David S. Miller 2008-11-16  411  		if (new_rss_limit > rss)
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  412  			break;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  413  		new_cache_index++;
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  414  	}
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  415  
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  416  	if (new_size == max_tsb_size)
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  417  		new_rss_limit = ~0UL;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  418  
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  419  retry_tsb_alloc:
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  420  	gfp_flags = GFP_KERNEL;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  421  	if (new_size > (PAGE_SIZE * 2))
a55ee1ff751f88 arch/sparc/mm/tsb.c   David S. Miller 2013-02-19  422  		gfp_flags |= __GFP_NOWARN | __GFP_NORETRY;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  423  
1f261ef53ba066 arch/sparc64/mm/tsb.c David S. Miller 2008-03-19  424  	new_tsb = kmem_cache_alloc_node(tsb_caches[new_cache_index],
1f261ef53ba066 arch/sparc64/mm/tsb.c David S. Miller 2008-03-19  425  					gfp_flags, numa_node_id());
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  426  	if (unlikely(!new_tsb)) {
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  427  		/* Not being able to fork due to a high-order TSB
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  428  		 * allocation failure is very bad behavior. Just back
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  429  		 * down to a 0-order allocation and force no TSB
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  430  		 * growing for this address space.
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  431  		 */
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  432  		if (mm->context.tsb_block[tsb_index].tsb == NULL &&
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  433  		    new_cache_index > 0) {
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  434  			new_cache_index = 0;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  435  			new_size = 8192;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  436  			new_rss_limit = ~0UL;
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  437  			goto retry_tsb_alloc;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  438  		}
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  439  
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  440  		/* If we failed on a TSB grow, we are under serious
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  441  		 * memory pressure so don't try to grow any more.
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  442  		 */
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  443  		if (mm->context.tsb_block[tsb_index].tsb != NULL)
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  444  			mm->context.tsb_block[tsb_index].tsb_rss_limit = ~0UL;
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  445  		return;
b52439c22c63db arch/sparc64/mm/tsb.c David S. Miller 2006-03-17  446  	}
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  447  
8b234274418d6d arch/sparc64/mm/tsb.c David S. Miller 2006-02-17  448  	/* Mark all tags as invalid. */
bb8646d8340fa7 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  449  	tsb_init(new_tsb, new_size);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  450  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  451  	/* Ok, we are about to commit the changes.  If we are
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  452  	 * growing an existing TSB the locking is very tricky,
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  453  	 * so WATCH OUT!
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  454  	 *
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  455  	 * We have to hold mm->context.lock while committing to the
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  456  	 * new TSB, this synchronizes us with processors in
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  457  	 * flush_tsb_user() and switch_mm() for this address space.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  458  	 *
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  459  	 * But even with that lock held, processors run asynchronously
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  460  	 * accessing the old TSB via TLB miss handling.  This is OK
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  461  	 * because those actions are just propagating state from the
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  462  	 * Linux page tables into the TSB, page table mappings are not
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  463  	 * being changed.  If a real fault occurs, the processor will
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  464  	 * synchronize with us when it hits flush_tsb_user(), this is
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  465  	 * also true for the case where vmscan is modifying the page
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  466  	 * tables.  The only thing we need to be careful with is to
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  467  	 * skip any locked TSB entries during copy_tsb().
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  468  	 *
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  469  	 * When we finish committing to the new TSB, we have to drop
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  470  	 * the lock and ask all other cpus running this address space
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  471  	 * to run tsb_context_switch() to see the new TSB table.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  472  	 */
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  473  	spin_lock_irqsave(&mm->context.lock, flags);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  474  
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  475  	old_tsb = mm->context.tsb_block[tsb_index].tsb;
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  476  	old_cache_index =
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  477  		(mm->context.tsb_block[tsb_index].tsb_reg_val & 0x7UL);
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  478  	old_size = (mm->context.tsb_block[tsb_index].tsb_nentries *
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  479  		    sizeof(struct tsb));
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  480  
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  481  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  482  	/* Handle multiple threads trying to grow the TSB at the same time.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  483  	 * One will get in here first, and bump the size and the RSS limit.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  484  	 * The others will get in here next and hit this check.
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  485  	 */
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  486  	if (unlikely(old_tsb &&
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  487  		     (rss < mm->context.tsb_block[tsb_index].tsb_rss_limit))) {
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  488  		spin_unlock_irqrestore(&mm->context.lock, flags);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  489  
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  490  		kmem_cache_free(tsb_caches[new_cache_index], new_tsb);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  491  		return;
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  492  	}
8b234274418d6d arch/sparc64/mm/tsb.c David S. Miller 2006-02-17  493  
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  494  	mm->context.tsb_block[tsb_index].tsb_rss_limit = new_rss_limit;
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  495  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  496  	if (old_tsb) {
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  497  		extern void copy_tsb(unsigned long old_tsb_base,
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  498  				     unsigned long old_tsb_size,
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  499  				     unsigned long new_tsb_base,
654f4807624a65 arch/sparc/mm/tsb.c   Mike Kravetz    2017-06-02  500  				     unsigned long new_tsb_size,
654f4807624a65 arch/sparc/mm/tsb.c   Mike Kravetz    2017-06-02  501  				     unsigned long page_size_shift);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  502  		unsigned long old_tsb_base = (unsigned long) old_tsb;
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  503  		unsigned long new_tsb_base = (unsigned long) new_tsb;
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  504  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  505  		if (tlb_type == cheetah_plus || tlb_type == hypervisor) {
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  506  			old_tsb_base = __pa(old_tsb_base);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  507  			new_tsb_base = __pa(new_tsb_base);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  508  		}
654f4807624a65 arch/sparc/mm/tsb.c   Mike Kravetz    2017-06-02  509  		copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size,
654f4807624a65 arch/sparc/mm/tsb.c   Mike Kravetz    2017-06-02  510  			 tsb_index == MM_TSB_BASE ?
654f4807624a65 arch/sparc/mm/tsb.c   Mike Kravetz    2017-06-02  511  			 PAGE_SHIFT : REAL_HPAGE_SHIFT);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  512  	}
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  513  
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  514  	mm->context.tsb_block[tsb_index].tsb = new_tsb;
dcc1e8dd88d4bc arch/sparc64/mm/tsb.c David S. Miller 2006-03-22  515  	setup_tsb_params(mm, tsb_index, new_size);
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  516  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  517  	spin_unlock_irqrestore(&mm->context.lock, flags);
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  518  
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  519  	/* If old_tsb is NULL, we're being invoked for the first time
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  520  	 * from init_new_context().
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  521  	 */
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  522  	if (old_tsb) {
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  523  		/* Reload it on the local cpu. */
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  524  		tsb_context_switch(mm);
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  525  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  526  		/* Now force other processors to do the same. */
a3cf5e6b6f2548 arch/sparc64/mm/tsb.c David S. Miller 2008-08-03  527  		preempt_disable();
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  528  		smp_tsb_sync(mm);
a3cf5e6b6f2548 arch/sparc64/mm/tsb.c David S. Miller 2008-08-03  529  		preempt_enable();
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  530  
7a1ac5264108fc arch/sparc64/mm/tsb.c David S. Miller 2006-03-16  531  		/* Now it is safe to free the old tsb. */
9b4006dcf6a8c4 arch/sparc64/mm/tsb.c David S. Miller 2006-03-18  532  		kmem_cache_free(tsb_caches[old_cache_index], old_tsb);
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  533  	}
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  534  }
bd40791e1d289d arch/sparc64/mm/tsb.c David S. Miller 2006-01-31  535  

:::::: The code at line 405 was first introduced by commit
:::::: bd40791e1d289d807b8580abe1f117e9c62894e4 [SPARC64]: Dynamically grow TSB in response to RSS growth.

:::::: TO: David S. Miller
:::::: CC: David S. Miller

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests