Date: Wed, 25 Oct 2023 21:50:15 +0800
From: kernel test robot <lkp@intel.com>
To: Mel Gorman, Matthew Wilcox
Cc: oe-kbuild-all@lists.linux.dev, Kefeng Wang, Andrew Morton,
	Linux Memory Management List <linux-mm@kvack.org>, Mike Kravetz,
	Muchun Song, Yuan Can
Subject: Re: Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
Message-ID: <202310252149.qzjGG49d-lkp@intel.com>
References: <20231025093254.xvomlctwhcuerzky@techsingularity.net>
In-Reply-To: <20231025093254.xvomlctwhcuerzky@techsingularity.net>

Hi Mel,

kernel test robot noticed the following build errors:

[auto build test ERROR on trondmy-nfs/linux-next]
[also build test ERROR on linus/master v6.6-rc7]
[cannot apply to akpm-mm/mm-everything next-20231025]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Mel-Gorman/Re-PATCH-resend-mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list/20231025-173425
base:   git://git.linux-nfs.org/projects/trondmy/linux-nfs.git linux-next
patch link:    https://lore.kernel.org/r/20231025093254.xvomlctwhcuerzky%40techsingularity.net
patch subject: Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
config: loongarch-randconfig-002-20231025 (https://download.01.org/0day-ci/archive/20231025/202310252149.qzjGG49d-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): https://download.01.org/0day-ci/archive/20231025/202310252149.qzjGG49d-lkp@intel.com/reproduce

If you fix the issue in a separate patch/commit (i.e.
not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot
| Closes: https://lore.kernel.org/oe-kbuild-all/202310252149.qzjGG49d-lkp@intel.com/

All errors (new ones prefixed by >>):

   fs/xfs/xfs_buf.c: In function 'xfs_buf_alloc_pages':
>> fs/xfs/xfs_buf.c:391:26: error: implicit declaration of function 'alloc_pages_bulk_array'; did you mean 'alloc_pages_bulk_node'? [-Werror=implicit-function-declaration]
     391 |         filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
         |                  ^~~~~~~~~~~~~~~~~~~~~~
         |                  alloc_pages_bulk_node
   cc1: some warnings being treated as errors
--
   fs/btrfs/extent_io.c: In function 'btrfs_alloc_page_array':
>> fs/btrfs/extent_io.c:688:29: error: implicit declaration of function 'alloc_pages_bulk_array'; did you mean 'alloc_pages_bulk_node'? [-Werror=implicit-function-declaration]
     688 |         allocated = alloc_pages_bulk_array(GFP_NOFS, nr_pages, page_array);
         |                     ^~~~~~~~~~~~~~~~~~~~~~
         |                     alloc_pages_bulk_node
   cc1: some warnings being treated as errors


vim +391 fs/xfs/xfs_buf.c

0e6e847ffe3743 fs/xfs/linux-2.6/xfs_buf.c Dave Chinner      2011-03-26  353  
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  354  static int
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  355  xfs_buf_alloc_pages(
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  356  	struct xfs_buf	*bp,
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  357  	xfs_buf_flags_t	flags)
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  358  {
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  359  	gfp_t		gfp_mask = __GFP_NOWARN;
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  360  	long		filled = 0;
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  361  
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  362  	if (flags & XBF_READ_AHEAD)
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  363  		gfp_mask |= __GFP_NORETRY;
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  364  	else
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  365  		gfp_mask |= GFP_NOFS;
289ae7b48c2c4d fs/xfs/xfs_buf.c           Dave Chinner      2021-06-07  366  
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  367  	/* Make sure that we have a page list */
934d1076bb2c5b fs/xfs/xfs_buf.c           Christoph Hellwig 2021-06-07  368  	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  369  	if (bp->b_page_count <= XB_PAGES) {
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  370  		bp->b_pages = bp->b_page_array;
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  371  	} else {
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  372  		bp->b_pages = kzalloc(sizeof(struct page *) * bp->b_page_count,
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  373  					gfp_mask);
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  374  		if (!bp->b_pages)
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  375  			return -ENOMEM;
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  376  	}
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  377  	bp->b_flags |= _XBF_PAGES;
02c5117386884e fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  378  
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  379  	/* Assure zeroed buffer for non-read cases. */
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  380  	if (!(flags & XBF_READ))
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  381  		gfp_mask |= __GFP_ZERO;
0a683794ace283 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  382  
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  383  	/*
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  384  	 * Bulk filling of pages can take multiple calls. Not filling the entire
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  385  	 * array is not an allocation failure, so don't back off if we get at
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  386  	 * least one extra page.
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  387  	 */
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  388  	for (;;) {
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  389  		long	last = filled;
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  390  
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01 @391  		filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  392  						bp->b_pages);
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  393  		if (filled == bp->b_page_count) {
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  394  			XFS_STATS_INC(bp->b_mount, xb_page_found);
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  395  			break;
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  396  		}
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  397  
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  398  		if (filled != last)
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  399  			continue;
c9fa563072e133 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  400  
ce8e922c0e79c8 fs/xfs/linux-2.6/xfs_buf.c Nathan Scott      2006-01-11  401  		if (flags & XBF_READ_AHEAD) {
e7d236a6fe5102 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  402  			xfs_buf_free_pages(bp);
e7d236a6fe5102 fs/xfs/xfs_buf.c           Dave Chinner      2021-06-01  403  			return -ENOMEM;
^1da177e4c3f41 fs/xfs/linux-2.6/xfs_buf.c Linus Torvalds    2005-04-16  404  		}
^1da177e4c3f41 fs/xfs/linux-2.6/xfs_buf.c Linus Torvalds    2005-04-16  405  
dbd329f1e44ed4 fs/xfs/xfs_buf.c           Christoph Hellwig 2019-06-28  406  		XFS_STATS_INC(bp->b_mount, xb_page_retries);
4034247a0d6ab2 fs/xfs/xfs_buf.c           NeilBrown         2022-01-14  407  		memalloc_retry_wait(gfp_mask);
^1da177e4c3f41 fs/xfs/linux-2.6/xfs_buf.c Linus Torvalds    2005-04-16  408  	}
0e6e847ffe3743 fs/xfs/linux-2.6/xfs_buf.c Dave Chinner      2011-03-26  409  	return 0;
^1da177e4c3f41 fs/xfs/linux-2.6/xfs_buf.c Linus Torvalds    2005-04-16  410  }
^1da177e4c3f41 fs/xfs/linux-2.6/xfs_buf.c Linus Torvalds    2005-04-16  411  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki