From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Jan 2025 04:13:52 +0800
From: kernel test robot <lkp@intel.com>
To: Lorenzo Stoakes, Andrew Morton
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev, Linux Memory Management List, "Liam R. Howlett", Vlastimil Babka, Jann Horn, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/5] mm: completely abstract unnecessary adj_start calculation
Message-ID: <202501280337.7bKYRAYQ-lkp@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Hi Lorenzo,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Lorenzo-Stoakes/mm-simplify-vma-merge-structure-and-expand-comments/20250127-235322
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/ef00aec42a892fe6ac9557b3a11f18f30a2e51b3.1737929364.git.lorenzo.stoakes%40oracle.com
patch subject: [PATCH 5/5] mm: completely abstract unnecessary adj_start calculation
config: hexagon-randconfig-001-20250128 (https://download.01.org/0day-ci/archive/20250128/202501280337.7bKYRAYQ-lkp@intel.com/config)
compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 19306351a2c45e266fa11b41eb1362b20b6ca56d)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250128/202501280337.7bKYRAYQ-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e.
not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501280337.7bKYRAYQ-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from mm/vma.c:7:
   In file included from mm/vma_internal.h:29:
   include/linux/mm_inline.h:47:41: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      47 |                 __mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
         |                                            ~~~~~~~~~~~ ^ ~~~
   include/linux/mm_inline.h:49:22: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      49 |                            NR_ZONE_LRU_BASE + lru, nr_pages);
         |                            ~~~~~~~~~~~~~~~~ ^ ~~~
>> mm/vma.c:518:50: error: incompatible pointer to integer conversion passing 'void *' to parameter of type 'long' [-Wint-conversion]
     518 |         vma_adjust_trans_huge(vma, vma->vm_start, addr, NULL);
         |                                                         ^~~~
   include/linux/stddef.h:8:14: note: expanded from macro 'NULL'
       8 | #define NULL ((void *)0)
         |              ^~~~~~~~~~~
   include/linux/huge_mm.h:574:12: note: passing argument to parameter 'adjust_next' here
     574 |                                           long adjust_next)
         |                                                ^
>> mm/vma.c:704:10: error: incompatible pointer to integer conversion passing 'struct vm_area_struct *' to parameter of type 'long' [-Wint-conversion]
     704 |                               adj_middle ? vmg->middle : NULL);
         |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/huge_mm.h:574:12: note: passing argument to parameter 'adjust_next' here
     574 |                                           long adjust_next)
         |                                                ^
   mm/vma.c:1141:41: error: incompatible pointer to integer conversion passing 'void *' to parameter of type 'long' [-Wint-conversion]
    1141 |         vma_adjust_trans_huge(vma, start, end, NULL);
         |                                                ^~~~
   include/linux/stddef.h:8:14: note: expanded from macro 'NULL'
       8 | #define NULL ((void *)0)
         |              ^~~~~~~~~~~
   include/linux/huge_mm.h:574:12: note: passing argument to parameter 'adjust_next' here
     574 |                                           long adjust_next)
         |                                                ^
   2 warnings and 3 errors generated.
vim +518 mm/vma.c

   459	
   460	/*
   461	 * __split_vma() bypasses sysctl_max_map_count checking. We use this where it
   462	 * has already been checked or doesn't make sense to fail.
   463	 * VMA Iterator will point to the original VMA.
   464	 */
   465	static __must_check int
   466	__split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
   467			unsigned long addr, int new_below)
   468	{
   469		struct vma_prepare vp;
   470		struct vm_area_struct *new;
   471		int err;
   472	
   473		WARN_ON(vma->vm_start >= addr);
   474		WARN_ON(vma->vm_end <= addr);
   475	
   476		if (vma->vm_ops && vma->vm_ops->may_split) {
   477			err = vma->vm_ops->may_split(vma, addr);
   478			if (err)
   479				return err;
   480		}
   481	
   482		new = vm_area_dup(vma);
   483		if (!new)
   484			return -ENOMEM;
   485	
   486		if (new_below) {
   487			new->vm_end = addr;
   488		} else {
   489			new->vm_start = addr;
   490			new->vm_pgoff += ((addr - vma->vm_start) >> PAGE_SHIFT);
   491		}
   492	
   493		err = -ENOMEM;
   494		vma_iter_config(vmi, new->vm_start, new->vm_end);
   495		if (vma_iter_prealloc(vmi, new))
   496			goto out_free_vma;
   497	
   498		err = vma_dup_policy(vma, new);
   499		if (err)
   500			goto out_free_vmi;
   501	
   502		err = anon_vma_clone(new, vma);
   503		if (err)
   504			goto out_free_mpol;
   505	
   506		if (new->vm_file)
   507			get_file(new->vm_file);
   508	
   509		if (new->vm_ops && new->vm_ops->open)
   510			new->vm_ops->open(new);
   511	
   512		vma_start_write(vma);
   513		vma_start_write(new);
   514	
   515		init_vma_prep(&vp, vma);
   516		vp.insert = new;
   517		vma_prepare(&vp);
 > 518		vma_adjust_trans_huge(vma, vma->vm_start, addr, NULL);
   519	
   520		if (new_below) {
   521			vma->vm_start = addr;
   522			vma->vm_pgoff += (addr - new->vm_start) >> PAGE_SHIFT;
   523		} else {
   524			vma->vm_end = addr;
   525		}
   526	
   527		/* vma_complete stores the new vma */
   528		vma_complete(&vp, vmi, vma->vm_mm);
   529		validate_mm(vma->vm_mm);
   530	
   531		/* Success. */
   532		if (new_below)
   533			vma_next(vmi);
   534		else
   535			vma_prev(vmi);
   536	
   537		return 0;
   538	
   539	out_free_mpol:
   540		mpol_put(vma_policy(new));
   541	out_free_vmi:
   542		vma_iter_free(vmi);
   543	out_free_vma:
   544		vm_area_free(new);
   545		return err;
   546	}
   547	
   548	/*
   549	 * Split a vma into two pieces at address 'addr', a new vma is allocated
   550	 * either for the first part or the tail.
   551	 */
   552	static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
   553			unsigned long addr, int new_below)
   554	{
   555		if (vma->vm_mm->map_count >= sysctl_max_map_count)
   556			return -ENOMEM;
   557	
   558		return __split_vma(vmi, vma, addr, new_below);
   559	}
   560	
   561	/*
   562	 * dup_anon_vma() - Helper function to duplicate anon_vma
   563	 * @dst: The destination VMA
   564	 * @src: The source VMA
   565	 * @dup: Pointer to the destination VMA when successful.
   566	 *
   567	 * Returns: 0 on success.
   568	 */
   569	static int dup_anon_vma(struct vm_area_struct *dst,
   570			struct vm_area_struct *src, struct vm_area_struct **dup)
   571	{
   572		/*
   573		 * Easily overlooked: when mprotect shifts the boundary, make sure the
   574		 * expanding vma has anon_vma set if the shrinking vma had, to cover any
   575		 * anon pages imported.
   576		 */
   577		if (src->anon_vma && !dst->anon_vma) {
   578			int ret;
   579	
   580			vma_assert_write_locked(dst);
   581			dst->anon_vma = src->anon_vma;
   582			ret = anon_vma_clone(dst, src);
   583			if (ret)
   584				return ret;
   585	
   586			*dup = dst;
   587		}
   588	
   589		return 0;
   590	}
   591	
   592	#ifdef CONFIG_DEBUG_VM_MAPLE_TREE
   593	void validate_mm(struct mm_struct *mm)
   594	{
   595		int bug = 0;
   596		int i = 0;
   597		struct vm_area_struct *vma;
   598		VMA_ITERATOR(vmi, mm, 0);
   599	
   600		mt_validate(&mm->mm_mt);
   601		for_each_vma(vmi, vma) {
   602	#ifdef CONFIG_DEBUG_VM_RB
   603			struct anon_vma *anon_vma = vma->anon_vma;
   604			struct anon_vma_chain *avc;
   605	#endif
   606			unsigned long vmi_start, vmi_end;
   607			bool warn = 0;
   608	
   609			vmi_start = vma_iter_addr(&vmi);
   610			vmi_end = vma_iter_end(&vmi);
   611			if (VM_WARN_ON_ONCE_MM(vma->vm_end != vmi_end, mm))
   612				warn = 1;
   613	
   614			if (VM_WARN_ON_ONCE_MM(vma->vm_start != vmi_start, mm))
   615				warn = 1;
   616	
   617			if (warn) {
   618				pr_emerg("issue in %s\n", current->comm);
   619				dump_stack();
   620				dump_vma(vma);
   621				pr_emerg("tree range: %px start %lx end %lx\n", vma,
   622					 vmi_start, vmi_end - 1);
   623				vma_iter_dump_tree(&vmi);
   624			}
   625	
   626	#ifdef CONFIG_DEBUG_VM_RB
   627			if (anon_vma) {
   628				anon_vma_lock_read(anon_vma);
   629				list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
   630					anon_vma_interval_tree_verify(avc);
   631				anon_vma_unlock_read(anon_vma);
   632			}
   633	#endif
   634			/* Check for a infinite loop */
   635			if (++i > mm->map_count + 10) {
   636				i = -1;
   637				break;
   638			}
   639		}
   640		if (i != mm->map_count) {
   641			pr_emerg("map_count %d vma iterator %d\n", mm->map_count, i);
   642			bug = 1;
   643		}
   644		VM_BUG_ON_MM(bug, mm);
   645	}
   646	#endif /* CONFIG_DEBUG_VM_MAPLE_TREE */
   647	
   648	/*
   649	 * Based on the vmg flag indicating whether we need to adjust the vm_start field
   650	 * for the middle or next VMA, we calculate what the range of the newly adjusted
   651	 * VMA ought to be, and set the VMA's range accordingly.
   652	 */
   653	static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
   654	{
   655		unsigned long flags = vmg->merge_flags;
   656		struct vm_area_struct *adjust;
   657		pgoff_t pgoff;
   658	
   659		if (flags & __VMG_FLAG_ADJUST_MIDDLE_START) {
   660			adjust = vmg->middle;
   661			pgoff = adjust->vm_pgoff + PHYS_PFN(vmg->end - adjust->vm_start);
   662		} else if (flags & __VMG_FLAG_ADJUST_NEXT_START) {
   663			adjust = vmg->next;
   664			pgoff = adjust->vm_pgoff - PHYS_PFN(adjust->vm_start - vmg->end);
   665		} else {
   666			return;
   667		}
   668	
   669		vma_set_range(adjust, vmg->end, adjust->vm_end, pgoff);
   670	}
   671	
   672	/*
   673	 * Actually perform the VMA merge operation.
   674	 *
   675	 * On success, returns the merged VMA. Otherwise returns NULL.
   676	 */
   677	static int commit_merge(struct vma_merge_struct *vmg)
   678	{
   679		struct vm_area_struct *vma;
   680		struct vma_prepare vp;
   681		bool adj_middle = vmg->merge_flags & __VMG_FLAG_ADJUST_MIDDLE_START;
   682	
   683		if (vmg->merge_flags & __VMG_FLAG_ADJUST_NEXT_START) {
   684			/* In this case we manipulate middle and return next. */
   685			vma = vmg->middle;
   686			vma_iter_config(vmg->vmi, vmg->end, vmg->next->vm_end);
   687		} else {
   688			vma = vmg->target;
   689			/* Note: vma iterator must be pointing to 'start'. */
   690			vma_iter_config(vmg->vmi, vmg->start, vmg->end);
   691		}
   692	
   693		init_multi_vma_prep(&vp, vma, vmg);
   694	
   695		if (vma_iter_prealloc(vmg->vmi, vma))
   696			return -ENOMEM;
   697	
   698		vma_prepare(&vp);
   699		/*
   700		 * THP pages may need to do additional splits if we increase
   701		 * middle->vm_start.
   702		 */
   703		vma_adjust_trans_huge(vma, vmg->start, vmg->end,
 > 704				adj_middle ? vmg->middle : NULL);
   705		vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff);
   706		vmg_adjust_set_range(vmg);
   707		vma_iter_store(vmg->vmi, vmg->target);
   708	
   709		vma_complete(&vp, vmg->vmi, vma->vm_mm);
   710	
   711		return 0;
   712	}
   713	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki