Date: Sun, 16 Feb 2025 03:56:06 +0800
From: kernel test robot <lkp@intel.com>
To: Shakeel Butt
Cc: oe-kbuild-all@lists.linux.dev, linux-kernel@vger.kernel.org,
	Andrew Morton, Linux Memory Management List,
	Roman Gushchin, "T.J. Mercier"
Subject: mm/workingset.c:621 workingset_update_node() warn: unsigned '_x' is never less than zero.
Message-ID: <202502160323.ZLUfooA0-lkp@intel.com>

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   7ff71e6d923969d933e1ba7e0db857782d36cd19
commit: 4715c6a753dccd15fd3a8928168f57e349205bd4 mm: cleanup WORKINGSET_NODES in workingset
date:   9 months ago
config: riscv-randconfig-r073-20250213 (https://download.01.org/0day-ci/archive/20250216/202502160323.ZLUfooA0-lkp@intel.com/config)
compiler: riscv32-linux-gcc (GCC) 14.2.0

If you fix the issue in a separate patch/commit (i.e. not just a new version
of the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202502160323.ZLUfooA0-lkp@intel.com/

New smatch warnings:
mm/workingset.c:621 workingset_update_node() warn: unsigned '_x' is never less than zero.
mm/workingset.c:746 shadow_lru_isolate() warn: unsigned '_x' is never less than zero.

Old smatch warnings:
include/linux/mm.h:1306 virt_to_head_page() warn: unsigned '_x' is never less than zero.
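A note on the warning class: the '_x' temporary is not declared anywhere in
mm/workingset.c, so the always-false comparison most likely lives in a macro
that virt_to_page() expands through on riscv (the riscv __va_to_pa_nodebug()
helper, for instance, snapshots its argument into a local "unsigned long _x"
before range-checking it). Below is a minimal userspace sketch of the
pattern, with a hypothetical CHECK_RANGE() macro standing in for the real
arch helper:

#include <stdio.h>

/*
 * Snapshot the argument into _x, then range-check it. When the argument
 * has an unsigned type, "_x < 0" can never be true, which is exactly
 * what smatch reports at the macro's expansion sites.
 */
#define CHECK_RANGE(v) ({			\
	typeof(v) _x = (v);			\
	(_x < 0 || _x > 0xffffff) ? -1 : 0;	\
})

int main(void)
{
	unsigned long addr = 0x1234;

	/* smatch: unsigned '_x' is never less than zero. */
	printf("%d\n", CHECK_RANGE(addr));
	return 0;
}

The warnings above therefore flag the virt_to_page() call sites in the
workingset code, not a defect in its own logic.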
vim +/_x +621 mm/workingset.c

   617	
   618	void workingset_update_node(struct xa_node *node)
   619	{
   620		struct address_space *mapping;
 > 621		struct page *page = virt_to_page(node);
   622	
   623		/*
   624		 * Track non-empty nodes that contain only shadow entries;
   625		 * unlink those that contain pages or are being freed.
   626		 *
   627		 * Avoid acquiring the list_lru lock when the nodes are
   628		 * already where they should be. The list_empty() test is safe
   629		 * as node->private_list is protected by the i_pages lock.
   630		 */
   631		mapping = container_of(node->array, struct address_space, i_pages);
   632		lockdep_assert_held(&mapping->i_pages.xa_lock);
   633	
   634		if (node->count && node->count == node->nr_values) {
   635			if (list_empty(&node->private_list)) {
   636				list_lru_add_obj(&shadow_nodes, &node->private_list);
   637				__inc_node_page_state(page, WORKINGSET_NODES);
   638			}
   639		} else {
   640			if (!list_empty(&node->private_list)) {
   641				list_lru_del_obj(&shadow_nodes, &node->private_list);
   642				__dec_node_page_state(page, WORKINGSET_NODES);
   643			}
   644		}
   645	}
   646	
   647	static unsigned long count_shadow_nodes(struct shrinker *shrinker,
   648						struct shrink_control *sc)
   649	{
   650		unsigned long max_nodes;
   651		unsigned long nodes;
   652		unsigned long pages;
   653	
   654		nodes = list_lru_shrink_count(&shadow_nodes, sc);
   655		if (!nodes)
   656			return SHRINK_EMPTY;
   657	
   658		/*
   659		 * Approximate a reasonable limit for the nodes
   660		 * containing shadow entries. We don't need to keep more
   661		 * shadow entries than possible pages on the active list,
   662		 * since refault distances bigger than that are dismissed.
   663		 *
   664		 * The size of the active list converges toward 100% of
   665		 * overall page cache as memory grows, with only a tiny
   666		 * inactive list. Assume the total cache size for that.
   667		 *
   668		 * Nodes might be sparsely populated, with only one shadow
   669		 * entry in the extreme case. Obviously, we cannot keep one
   670		 * node for every eligible shadow entry, so compromise on a
   671		 * worst-case density of 1/8th. Below that, not all eligible
   672		 * refaults can be detected anymore.
   673		 *
   674		 * On 64-bit with 7 xa_nodes per page and 64 slots
   675		 * each, this will reclaim shadow entries when they consume
   676		 * ~1.8% of available memory:
   677		 *
   678		 * PAGE_SIZE / xa_nodes / node_entries * 8 / PAGE_SIZE
   679		 */
   680	#ifdef CONFIG_MEMCG
   681		if (sc->memcg) {
   682			struct lruvec *lruvec;
   683			int i;
   684	
   685			mem_cgroup_flush_stats_ratelimited(sc->memcg);
   686			lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
   687			for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
   688				pages += lruvec_page_state_local(lruvec,
   689								 NR_LRU_BASE + i);
   690			pages += lruvec_page_state_local(
   691				lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
   692			pages += lruvec_page_state_local(
   693				lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;
   694		} else
   695	#endif
   696			pages = node_present_pages(sc->nid);
   697	
   698		max_nodes = pages >> (XA_CHUNK_SHIFT - 3);
   699	
   700		if (nodes <= max_nodes)
   701			return 0;
   702		return nodes - max_nodes;
   703	}
   704	
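To put numbers on the comment's estimate (taking XA_CHUNK_SHIFT = 6, i.e. 64
slots per node, and the 64-bit case of 7 xa_nodes per page from the comment
above):

	max_nodes = pages >> (XA_CHUNK_SHIFT - 3) = pages / 8

so one shadow node is tolerated for every eight pages of memory. In the worst
case those nodes occupy

	(pages / 8) * (PAGE_SIZE / 7) = pages * PAGE_SIZE / 56 ~= 1.8%

of available memory; beyond that, count_shadow_nodes() reports the excess and
the shrinker starts invoking shadow_lru_isolate() below.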
   705	static enum lru_status shadow_lru_isolate(struct list_head *item,
   706						  struct list_lru_one *lru,
   707						  spinlock_t *lru_lock,
   708						  void *arg) __must_hold(lru_lock)
   709	{
   710		struct xa_node *node = container_of(item, struct xa_node, private_list);
   711		struct address_space *mapping;
   712		int ret;
   713	
   714		/*
   715		 * Page cache insertions and deletions synchronously maintain
   716		 * the shadow node LRU under the i_pages lock and the
   717		 * lru_lock. Because the page cache tree is emptied before
   718		 * the inode can be destroyed, holding the lru_lock pins any
   719		 * address_space that has nodes on the LRU.
   720		 *
   721		 * We can then safely transition to the i_pages lock to
   722		 * pin only the address_space of the particular node we want
   723		 * to reclaim, take the node off-LRU, and drop the lru_lock.
   724		 */
   725	
   726		mapping = container_of(node->array, struct address_space, i_pages);
   727	
   728		/* Coming from the list, invert the lock order */
   729		if (!xa_trylock(&mapping->i_pages)) {
   730			spin_unlock_irq(lru_lock);
   731			ret = LRU_RETRY;
   732			goto out;
   733		}
   734	
   735		/* For page cache we need to hold i_lock */
   736		if (mapping->host != NULL) {
   737			if (!spin_trylock(&mapping->host->i_lock)) {
   738				xa_unlock(&mapping->i_pages);
   739				spin_unlock_irq(lru_lock);
   740				ret = LRU_RETRY;
   741				goto out;
   742			}
   743		}
   744	
   745		list_lru_isolate(lru, item);
 > 746		__dec_node_page_state(virt_to_page(node), WORKINGSET_NODES);
   747	
   748		spin_unlock(lru_lock);
   749	
   750		/*
   751		 * The nodes should only contain one or more shadow entries,
   752		 * no pages, so we expect to be able to remove them all and
   753		 * delete and free the empty node afterwards.
   754		 */
   755		if (WARN_ON_ONCE(!node->nr_values))
   756			goto out_invalid;
   757		if (WARN_ON_ONCE(node->count != node->nr_values))
   758			goto out_invalid;
   759		xa_delete_node(node, workingset_update_node);
   760		__inc_lruvec_kmem_state(node, WORKINGSET_NODERECLAIM);
   761	
   762	out_invalid:
   763		xa_unlock_irq(&mapping->i_pages);
   764		if (mapping->host != NULL) {
   765			if (mapping_shrinkable(mapping))
   766				inode_add_lru(mapping->host);
   767			spin_unlock(&mapping->host->i_lock);
   768		}
   769		ret = LRU_REMOVED_RETRY;
   770	out:
   771		cond_resched();
   772		spin_lock_irq(lru_lock);
   773		return ret;
   774	}
   775	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki