Date: Mon, 19 Jun 2023 15:50:29 +0800
From: kernel test robot
To: Jan Glauber, akpm@linux-foundation.org
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Jan Glauber
Subject: Re: [PATCH] mm: Fix shmem THP counters on migration
Message-ID: <202306191542.ru4fKk5y-lkp@intel.com>
References: <20230619055735.141740-1-jglauber@digitalocean.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230619055735.141740-1-jglauber@digitalocean.com>

Hi Jan,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Jan-Glauber/mm-Fix-shmem-THP-counters-on-migration/20230619-135947
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230619055735.141740-1-jglauber%40digitalocean.com
patch subject: [PATCH] mm: Fix shmem THP counters on migration
config: powerpc-randconfig-r021-20230619 (https://download.01.org/0day-ci/archive/20230619/202306191542.ru4fKk5y-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: (https://download.01.org/0day-ci/archive/20230619/202306191542.ru4fKk5y-lkp@intel.com/reproduce)

If you fix the issue in a separate
patch/commit (i.e. not just a new version of the same patch/commit), kindly
add the following tags
| Reported-by: kernel test robot
| Closes: https://lore.kernel.org/oe-kbuild-all/202306191542.ru4fKk5y-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/migrate.c:491:36: error: use of undeclared identifier 'NR_SHMEM_THP'; did you mean 'NR_SHMEM_THPS'?
     491 |                 __mod_lruvec_state(old_lruvec, NR_SHMEM_THP, -nr);
         |                                                ^~~~~~~~~~~~
         |                                                NR_SHMEM_THPS
   include/linux/mmzone.h:181:2: note: 'NR_SHMEM_THPS' declared here
     181 |         NR_SHMEM_THPS,
         |         ^
   mm/migrate.c:492:36: error: use of undeclared identifier 'NR_SHMEM_THP'; did you mean 'NR_SHMEM_THPS'?
     492 |                 __mod_lruvec_state(new_lruvec, NR_SHMEM_THP, nr);
         |                                                ^~~~~~~~~~~~
         |                                                NR_SHMEM_THPS
   include/linux/mmzone.h:181:2: note: 'NR_SHMEM_THPS' declared here
     181 |         NR_SHMEM_THPS,
         |         ^
   2 errors generated.


vim +491 mm/migrate.c

   389	
   390	/*
   391	 * Replace the page in the mapping.
   392	 *
   393	 * The number of remaining references must be:
   394	 * 1 for anonymous pages without a mapping
   395	 * 2 for pages with a mapping
   396	 * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
   397	 */
   398	int folio_migrate_mapping(struct address_space *mapping,
   399			struct folio *newfolio, struct folio *folio, int extra_count)
   400	{
   401		XA_STATE(xas, &mapping->i_pages, folio_index(folio));
   402		struct zone *oldzone, *newzone;
   403		int dirty;
   404		int expected_count = folio_expected_refs(mapping, folio) + extra_count;
   405		long nr = folio_nr_pages(folio);
   406	
   407		if (!mapping) {
   408			/* Anonymous page without mapping */
   409			if (folio_ref_count(folio) != expected_count)
   410				return -EAGAIN;
   411	
   412			/* No turning back from here */
   413			newfolio->index = folio->index;
   414			newfolio->mapping = folio->mapping;
   415			if (folio_test_swapbacked(folio))
   416				__folio_set_swapbacked(newfolio);
   417	
   418			return MIGRATEPAGE_SUCCESS;
   419		}
   420	
   421		oldzone = folio_zone(folio);
   422		newzone = folio_zone(newfolio);
   423	
   424		xas_lock_irq(&xas);
   425		if (!folio_ref_freeze(folio, expected_count)) {
   426			xas_unlock_irq(&xas);
   427			return -EAGAIN;
   428		}
   429	
   430		/*
   431		 * Now we know that no one else is looking at the folio:
   432		 * no turning back from here.
   433		 */
   434		newfolio->index = folio->index;
   435		newfolio->mapping = folio->mapping;
   436		folio_ref_add(newfolio, nr); /* add cache reference */
   437		if (folio_test_swapbacked(folio)) {
   438			__folio_set_swapbacked(newfolio);
   439			if (folio_test_swapcache(folio)) {
   440				folio_set_swapcache(newfolio);
   441				newfolio->private = folio_get_private(folio);
   442			}
   443		} else {
   444			VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
   445		}
   446	
   447		/* Move dirty while page refs frozen and newpage not yet exposed */
   448		dirty = folio_test_dirty(folio);
   449		if (dirty) {
   450			folio_clear_dirty(folio);
   451			folio_set_dirty(newfolio);
   452		}
   453	
   454		xas_store(&xas, newfolio);
   455	
   456		/*
   457		 * Drop cache reference from old page by unfreezing
   458		 * to one less reference.
   459		 * We know this isn't the last reference.
   460		 */
   461		folio_ref_unfreeze(folio, expected_count - nr);
   462	
   463		xas_unlock(&xas);
   464		/* Leave irq disabled to prevent preemption while updating stats */
   465	
   466		/*
   467		 * If moved to a different zone then also account
   468		 * the page for that zone. Other VM counters will be
   469		 * taken care of when we establish references to the
   470		 * new page and drop references to the old page.
   471		 *
   472		 * Note that anonymous pages are accounted for
   473		 * via NR_FILE_PAGES and NR_ANON_MAPPED if they
   474		 * are mapped to swap space.
   475		 */
   476		if (newzone != oldzone) {
   477			struct lruvec *old_lruvec, *new_lruvec;
   478			struct mem_cgroup *memcg;
   479	
   480			memcg = folio_memcg(folio);
   481			old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
   482			new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
   483	
   484			__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
   485			__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
   486			if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
   487				__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
   488				__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
   489	
   490				if (folio_test_transhuge(folio)) {
 > 491					__mod_lruvec_state(old_lruvec, NR_SHMEM_THP, -nr);
   492					__mod_lruvec_state(new_lruvec, NR_SHMEM_THP, nr);
   493				}
   494			}
   495	#ifdef CONFIG_SWAP
   496			if (folio_test_swapcache(folio)) {
   497				__mod_lruvec_state(old_lruvec, NR_SWAPCACHE, -nr);
   498				__mod_lruvec_state(new_lruvec, NR_SWAPCACHE, nr);
   499			}
   500	#endif
   501			if (dirty && mapping_can_writeback(mapping)) {
   502				__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
   503				__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
   504				__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
   505				__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
   506			}
   507		}
   508		local_irq_enable();
   509	
   510		return MIGRATEPAGE_SUCCESS;
   511	}
   512	EXPORT_SYMBOL(folio_migrate_mapping);
   513	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
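For reference, a minimal sketch of what the failing hunk might look like with
the identifier the compiler suggests, NR_SHMEM_THPS (the counter actually
declared in include/linux/mmzone.h), substituted for the undeclared
NR_SHMEM_THP. This is only an illustration of the suggested rename, not a
tested or submitted patch:

		if (folio_test_transhuge(folio)) {
			/* use NR_SHMEM_THPS, the enum declared in mmzone.h,
			 * rather than the undeclared NR_SHMEM_THP */
			__mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
			__mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
		}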