Date: Fri, 20 Dec 2024 18:05:34 +0800
From: kernel test robot <lkp@intel.com>
To: Donet Tom <donettom@linux.ibm.com>, Andrew Morton, linux-kernel@vger.kernel.org
Cc: oe-kbuild-all@lists.linux.dev, Linux Memory Management List <linux-mm@kvack.org>,
	Ritesh Harjani, Baolin Wang, "Aneesh Kumar K.V", Zi Yan, David Hildenbrand,
	Shuah Khan, Dev Jain
Subject: Re: [PATCH] mm: migration :shared anonymous migration test is failing
Message-ID: <202412201738.gUNPwJ1x-lkp@intel.com>
References: <20241219124717.4907-1-donettom@linux.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20241219124717.4907-1-donettom@linux.ibm.com>

Hi Donet,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Donet-Tom/mm-migration-shared-anonymous-migration-test-is-failing/20241219-204920
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20241219124717.4907-1-donettom%40linux.ibm.com
patch subject: [PATCH] mm: migration :shared anonymous migration test is failing
config: arc-randconfig-002-20241220 (https://download.01.org/0day-ci/archive/20241220/202412201738.gUNPwJ1x-lkp@intel.com/config)
compiler: arc-elf-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241220/202412201738.gUNPwJ1x-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the
same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202412201738.gUNPwJ1x-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/migrate.h:8,
                    from mm/rmap.c:69:
   include/linux/hugetlb.h:1063:5: warning: no previous prototype for 'replace_free_hugepage_folios' [-Wmissing-prototypes]
    1063 | int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/rmap.c: In function 'try_to_migrate_one':
>> mm/rmap.c:2157:34: error: implicit declaration of function 'huge_ptep_get_and_clear'; did you mean 'ptep_get_and_clear'? [-Werror=implicit-function-declaration]
    2157 |                         pteval = huge_ptep_get_and_clear(mm, address, pvmw.pte);
         |                                  ^~~~~~~~~~~~~~~~~~~~~~~
         |                                  ptep_get_and_clear
>> mm/rmap.c:2157:34: error: incompatible types when assigning to type 'pte_t' from type 'int'
>> mm/rmap.c:2326:33: error: implicit declaration of function 'flush_hugetlb_page'; did you mean 'flush_tlb_page'? [-Werror=implicit-function-declaration]
    2326 |                                 flush_hugetlb_page(vma, address);
         |                                 ^~~~~~~~~~~~~~~~~~
         |                                 flush_tlb_page
   cc1: some warnings being treated as errors
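
Both errors share one cause on this config: huge_ptep_get_and_clear() and
flush_hugetlb_page() are hugetlb-only helpers, and on a configuration or
architecture without hugetlb support they are never declared. Even though
folio_test_hugetlb() is compile-time false there and the calls are dead code,
the compiler still needs a visible prototype; the follow-on "incompatible
types" error is just the implicit declaration defaulting the return type to
int. As a rough illustration only (the stubs below are hypothetical and not
the fix proposed in this patch), the usual pattern is to provide
!CONFIG_HUGETLB_PAGE stubs, or to hide the call site behind the config option:

        #ifndef CONFIG_HUGETLB_PAGE
        /*
         * Hypothetical stubs for illustration: never reached, because
         * folio_test_hugetlb() is constant false without CONFIG_HUGETLB_PAGE;
         * they exist only so the calls in common code have declarations.
         */
        static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
                                                    unsigned long addr,
                                                    pte_t *ptep)
        {
                return ptep_get(ptep);
        }

        static inline void flush_hugetlb_page(struct vm_area_struct *vma,
                                              unsigned long addr)
        {
        }
        #endif /* CONFIG_HUGETLB_PAGE */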

vim +2157 mm/rmap.c

  2081
  2082                  /* Unexpected PMD-mapped THP? */
  2083                  VM_BUG_ON_FOLIO(!pvmw.pte, folio);
  2084
  2085                  pfn = pte_pfn(ptep_get(pvmw.pte));
  2086
  2087                  if (folio_is_zone_device(folio)) {
  2088                          /*
  2089                           * Our PTE is a non-present device exclusive entry and
  2090                           * calculating the subpage as for the common case would
  2091                           * result in an invalid pointer.
  2092                           *
  2093                           * Since only PAGE_SIZE pages can currently be
  2094                           * migrated, just set it to page. This will need to be
  2095                           * changed when hugepage migrations to device private
  2096                           * memory are supported.
  2097                           */
  2098                          VM_BUG_ON_FOLIO(folio_nr_pages(folio) > 1, folio);
  2099                          subpage = &folio->page;
  2100                  } else {
  2101                          subpage = folio_page(folio, pfn - folio_pfn(folio));
  2102                  }
  2103                  address = pvmw.address;
  2104                  anon_exclusive = folio_test_anon(folio) &&
  2105                                   PageAnonExclusive(subpage);
  2106
  2107                  if (folio_test_hugetlb(folio)) {
  2108                          bool anon = folio_test_anon(folio);
  2109
  2110                          /*
  2111                           * huge_pmd_unshare may unmap an entire PMD page.
  2112                           * There is no way of knowing exactly which PMDs may
  2113                           * be cached for this mm, so we must flush them all.
  2114                           * start/end were already adjusted above to cover this
  2115                           * range.
  2116                           */
  2117                          flush_cache_range(vma, range.start, range.end);
  2118
  2119                          /*
  2120                           * To call huge_pmd_unshare, i_mmap_rwsem must be
  2121                           * held in write mode.  Caller needs to explicitly
  2122                           * do this outside rmap routines.
  2123                           *
  2124                           * We also must hold hugetlb vma_lock in write mode.
  2125                           * Lock order dictates acquiring vma_lock BEFORE
  2126                           * i_mmap_rwsem.  We can only try lock here and
  2127                           * fail if unsuccessful.
  2128                           */
  2129                          if (!anon) {
  2130                                  VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
  2131                                  if (!hugetlb_vma_trylock_write(vma)) {
  2132                                          page_vma_mapped_walk_done(&pvmw);
  2133                                          ret = false;
  2134                                          break;
  2135                                  }
  2136                                  if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
  2137                                          hugetlb_vma_unlock_write(vma);
  2138                                          flush_tlb_range(vma,
  2139                                                  range.start, range.end);
  2140
  2141                                          /*
  2142                                           * The ref count of the PMD page was
  2143                                           * dropped which is part of the way map
  2144                                           * counting is done for shared PMDs.
  2145                                           * Return 'true' here.  When there is
  2146                                           * no other sharing, huge_pmd_unshare
  2147                                           * returns false and we will unmap the
  2148                                           * actual page and drop map count
  2149                                           * to zero.
  2150                                           */
  2151                                          page_vma_mapped_walk_done(&pvmw);
  2152                                          break;
  2153                                  }
  2154                                  hugetlb_vma_unlock_write(vma);
  2155                          }
  2156                          /* Nuke the hugetlb page table entry */
> 2157                          pteval = huge_ptep_get_and_clear(mm, address, pvmw.pte);
  2158                  } else {
  2159                          flush_cache_page(vma, address, pfn);
  2160                          /* Nuke the page table entry. */
  2161                          if (should_defer_flush(mm, flags)) {
  2162                                  /*
  2163                                   * We clear the PTE but do not flush so potentially
  2164                                   * a remote CPU could still be writing to the folio.
  2165                                   * If the entry was previously clean then the
  2166                                   * architecture must guarantee that a clear->dirty
  2167                                   * transition on a cached TLB entry is written through
  2168                                   * and traps if the PTE is unmapped.
  2169                                   */
  2170                                  pteval = ptep_get_and_clear(mm, address, pvmw.pte);
  2171
  2172                                  set_tlb_ubc_flush_pending(mm, pteval, address);
  2173                          } else {
  2174                                  pteval = ptep_get_and_clear(mm, address, pvmw.pte);
  2175                          }
  2176                  }
  2177
  2178                  /* Set the dirty flag on the folio now the pte is gone. */
  2179                  if (pte_dirty(pteval))
  2180                          folio_mark_dirty(folio);
  2181
  2182                  /* Update high watermark before we lower rss */
  2183                  update_hiwater_rss(mm);
  2184
  2185                  if (folio_is_device_private(folio)) {
  2186                          unsigned long pfn = folio_pfn(folio);
  2187                          swp_entry_t entry;
  2188                          pte_t swp_pte;
  2189
  2190                          if (anon_exclusive)
  2191                                  WARN_ON_ONCE(folio_try_share_anon_rmap_pte(folio,
  2192                                                                             subpage));
  2193
  2194                          /*
  2195                           * Store the pfn of the page in a special migration
  2196                           * pte. do_swap_page() will wait until the migration
  2197                           * pte is removed and then restart fault handling.
  2198                           */
  2199                          entry = pte_to_swp_entry(pteval);
  2200                          if (is_writable_device_private_entry(entry))
  2201                                  entry = make_writable_migration_entry(pfn);
  2202                          else if (anon_exclusive)
  2203                                  entry = make_readable_exclusive_migration_entry(pfn);
  2204                          else
  2205                                  entry = make_readable_migration_entry(pfn);
  2206                          swp_pte = swp_entry_to_pte(entry);
  2207
  2208                          /*
  2209                           * pteval maps a zone device page and is therefore
  2210                           * a swap pte.
  2211                           */
  2212                          if (pte_swp_soft_dirty(pteval))
  2213                                  swp_pte = pte_swp_mksoft_dirty(swp_pte);
  2214                          if (pte_swp_uffd_wp(pteval))
  2215                                  swp_pte = pte_swp_mkuffd_wp(swp_pte);
  2216                          set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
  2217                          trace_set_migration_pte(pvmw.address, pte_val(swp_pte),
  2218                                                  folio_order(folio));
  2219                          /*
  2220                           * No need to invalidate here it will synchronize on
  2221                           * against the special swap migration pte.
  2222                           */
  2223                  } else if (PageHWPoison(subpage)) {
  2224                          pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
  2225                          if (folio_test_hugetlb(folio)) {
  2226                                  hugetlb_count_sub(folio_nr_pages(folio), mm);
  2227                                  set_huge_pte_at(mm, address, pvmw.pte, pteval,
  2228                                                  hsz);
  2229                          } else {
  2230                                  dec_mm_counter(mm, mm_counter(folio));
  2231                                  set_pte_at(mm, address, pvmw.pte, pteval);
  2232                          }
  2233
  2234                  } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
  2235                          /*
  2236                           * The guest indicated that the page content is of no
  2237                           * interest anymore. Simply discard the pte, vmscan
  2238                           * will take care of the rest.
  2239                           * A future reference will then fault in a new zero
  2240                           * page. When userfaultfd is active, we must not drop
  2241                           * this page though, as its main user (postcopy
  2242                           * migration) will not expect userfaults on already
  2243                           * copied pages.
  2244                           */
  2245                          dec_mm_counter(mm, mm_counter(folio));
  2246                  } else {
  2247                          swp_entry_t entry;
  2248                          pte_t swp_pte;
  2249
  2250                          if (arch_unmap_one(mm, vma, address, pteval) < 0) {
  2251                                  if (folio_test_hugetlb(folio))
  2252                                          set_huge_pte_at(mm, address, pvmw.pte,
  2253                                                          pteval, hsz);
  2254                                  else
  2255                                          set_pte_at(mm, address, pvmw.pte, pteval);
  2256                                  ret = false;
  2257                                  page_vma_mapped_walk_done(&pvmw);
  2258                                  break;
  2259                          }
  2260                          VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) &&
  2261                                         !anon_exclusive, subpage);
  2262
  2263                          /* See folio_try_share_anon_rmap_pte(): clear PTE first. */
  2264                          if (folio_test_hugetlb(folio)) {
  2265                                  if (anon_exclusive &&
  2266                                      hugetlb_try_share_anon_rmap(folio)) {
  2267                                          set_huge_pte_at(mm, address, pvmw.pte,
  2268                                                          pteval, hsz);
  2269                                          ret = false;
  2270                                          page_vma_mapped_walk_done(&pvmw);
  2271                                          break;
  2272                                  }
  2273                          } else if (anon_exclusive &&
  2274                                     folio_try_share_anon_rmap_pte(folio, subpage)) {
  2275                                  set_pte_at(mm, address, pvmw.pte, pteval);
  2276                                  ret = false;
  2277                                  page_vma_mapped_walk_done(&pvmw);
  2278                                  break;
  2279                          }
  2280
  2281                          /*
  2282                           * Store the pfn of the page in a special migration
  2283                           * pte. do_swap_page() will wait until the migration
  2284                           * pte is removed and then restart fault handling.
  2285                           */
  2286                          if (pte_write(pteval))
  2287                                  entry = make_writable_migration_entry(
  2288                                                          page_to_pfn(subpage));
  2289                          else if (anon_exclusive)
  2290                                  entry = make_readable_exclusive_migration_entry(
  2291                                                          page_to_pfn(subpage));
  2292                          else
  2293                                  entry = make_readable_migration_entry(
  2294                                                          page_to_pfn(subpage));
  2295                          if (pte_young(pteval))
  2296                                  entry = make_migration_entry_young(entry);
  2297                          if (pte_dirty(pteval))
  2298                                  entry = make_migration_entry_dirty(entry);
  2299                          swp_pte = swp_entry_to_pte(entry);
  2300                          if (pte_soft_dirty(pteval))
  2301                                  swp_pte = pte_swp_mksoft_dirty(swp_pte);
  2302                          if (pte_uffd_wp(pteval))
  2303                                  swp_pte = pte_swp_mkuffd_wp(swp_pte);
  2304                          if (folio_test_hugetlb(folio))
  2305                                  set_huge_pte_at(mm, address, pvmw.pte, swp_pte,
  2306                                                  hsz);
  2307                          else
  2308                                  set_pte_at(mm, address, pvmw.pte, swp_pte);
  2309                          trace_set_migration_pte(address, pte_val(swp_pte),
  2310                                                  folio_order(folio));
  2311                          /*
  2312                           * No need to invalidate here it will synchronize on
  2313                           * against the special swap migration pte.
  2314                           */
  2315                  }
  2316
  2317                  if (unlikely(folio_test_hugetlb(folio)))
  2318                          hugetlb_remove_rmap(folio);
  2319                  else
  2320                          folio_remove_rmap_pte(folio, subpage, vma);
  2321                  if (vma->vm_flags & VM_LOCKED)
  2322                          mlock_drain_local();
  2323
  2324                  if (!should_defer_flush(mm, flags)) {
  2325                          if (folio_test_hugetlb(folio))
> 2326                                  flush_hugetlb_page(vma, address);
  2327                          else
  2328                                  flush_tlb_page(vma, address);
  2329                  }
  2330
  2331                  folio_put(folio);
  2332          }
  2333
  2334          mmu_notifier_invalidate_range_end(&range);
  2335
  2336          return ret;
  2337  }
  2338

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki