Date: Sun, 13 Mar 2022 11:01:09 +0800
From: kernel test robot <lkp@intel.com>
To: "Matthew Wilcox (Oracle)"
Cc: kbuild-all@lists.01.org, Linux Memory Management List <linux-mm@kvack.org>
Subject: [linux-next:master 9762/11953] mm/page_vma_mapped.c:246 page_vma_mapped_walk() warn: always true condition '(pvmw->nr_pages >= (1 << ( - (12)))) => (0-u64max >= 0)'
Message-ID: <202203131056.WINF40Gt-lkp@intel.com>

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
head:   71941773e143369a73c9c4a3b62fbb60736a1182
commit: b786e44a4dbfe64476e7120ec7990b89a37be37d [9762/11953] mm: Convert page_vma_mapped_walk to work on PFNs
config: riscv-randconfig-m031-20220312 (https://download.01.org/0day-ci/archive/20220313/202203131056.WINF40Gt-lkp@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 11.2.0

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

smatch warnings:
mm/page_vma_mapped.c:246 page_vma_mapped_walk() warn: always true condition '(pvmw->nr_pages >= (1 << ( - (12)))) => (0-u64max >= 0)'
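
This looks like a config-dependent false positive. The randconfig has
CONFIG_TRANSPARENT_HUGEPAGE disabled, and in that case the kernel defines
HPAGE_PMD_SHIFT as ({ BUILD_BUG(); 0; }), so HPAGE_PMD_NR expands to
effectively (1 << (0 - 12)), which smatch folds to 0; pvmw->nr_pages is
unsigned, and an unsigned value compared >= 0 can never be false, hence
"(0-u64max >= 0)". A minimal userspace sketch of the same pattern (the
macro values below are stand-ins modelling what the analyzer sees, not
the kernel's real definitions):

#include <stdio.h>

#define PAGE_SHIFT      12
/* Stand-in: with CONFIG_TRANSPARENT_HUGEPAGE=n the kernel defines
 * HPAGE_PMD_SHIFT as ({ BUILD_BUG(); 0; }), whose value is 0. */
#define HPAGE_PMD_SHIFT 0
#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT - PAGE_SHIFT)  /* -12 */
/* HPAGE_PMD_NR would be (1 << -12); smatch folds that shift to 0. */

int main(void)
{
        unsigned long nr_pages = 1;  /* pvmw->nr_pages is unsigned long */

        /* An unsigned value compared >= 0 can never be false; this is
         * the core of the "always true condition" report. */
        if (nr_pages >= 0)
                printf("always true: 0-u64max >= 0\n");
        return 0;
}

Building this with "gcc -Wall -Wextra" reproduces the analogous compiler
diagnostic ("comparison of unsigned expression in '>= 0' is always true").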
vim +246 mm/page_vma_mapped.c

   126
   127  /**
   128   * page_vma_mapped_walk - check if @pvmw->pfn is mapped in @pvmw->vma at
   129   * @pvmw->address
   130   * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
   131   * must be set. pmd, pte and ptl must be NULL.
   132   *
   133   * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
   134   * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
   135   * adjusted if needed (for PTE-mapped THPs).
   136   *
   137   * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
   138   * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
   139   * a loop to find all PTEs that map the THP.
   140   *
   141   * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
   142   * regardless of which page table level the page is mapped at. @pvmw->pmd is
   143   * NULL.
   144   *
   145   * Returns false if there are no more page table entries for the page in
   146   * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
   147   *
   148   * If you need to stop the walk before page_vma_mapped_walk() returned false,
   149   * use page_vma_mapped_walk_done(). It will do the housekeeping.
   150   */
   151  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
   152  {
   153          struct vm_area_struct *vma = pvmw->vma;
   154          struct mm_struct *mm = vma->vm_mm;
   155          unsigned long end;
   156          pgd_t *pgd;
   157          p4d_t *p4d;
   158          pud_t *pud;
   159          pmd_t pmde;
   160
   161          /* The only possible pmd mapping has been handled on last iteration */
   162          if (pvmw->pmd && !pvmw->pte)
   163                  return not_found(pvmw);
   164
   165          if (unlikely(is_vm_hugetlb_page(vma))) {
   166                  unsigned long size = pvmw->nr_pages * PAGE_SIZE;
   167                  /* The only possible mapping was handled on last iteration */
   168                  if (pvmw->pte)
   169                          return not_found(pvmw);
   170
   171                  /* when pud is not present, pte will be NULL */
   172                  pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
   173                  if (!pvmw->pte)
   174                          return false;
   175
   176                  pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
   177                                               pvmw->pte);
   178                  spin_lock(pvmw->ptl);
   179                  if (!check_pte(pvmw))
   180                          return not_found(pvmw);
   181                  return true;
   182          }
   183
   184          end = vma_address_end(pvmw);
   185          if (pvmw->pte)
   186                  goto next_pte;
   187  restart:
   188          do {
   189                  pgd = pgd_offset(mm, pvmw->address);
   190                  if (!pgd_present(*pgd)) {
   191                          step_forward(pvmw, PGDIR_SIZE);
   192                          continue;
   193                  }
   194                  p4d = p4d_offset(pgd, pvmw->address);
   195                  if (!p4d_present(*p4d)) {
   196                          step_forward(pvmw, P4D_SIZE);
   197                          continue;
   198                  }
   199                  pud = pud_offset(p4d, pvmw->address);
   200                  if (!pud_present(*pud)) {
   201                          step_forward(pvmw, PUD_SIZE);
   202                          continue;
   203                  }
   204
   205                  pvmw->pmd = pmd_offset(pud, pvmw->address);
   206                  /*
   207                   * Make sure the pmd value isn't cached in a register by the
   208                   * compiler and used as a stale value after we've observed a
   209                   * subsequent update.
   210                   */
   211                  pmde = READ_ONCE(*pvmw->pmd);
   212
   213                  if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
   214                          pvmw->ptl = pmd_lock(mm, pvmw->pmd);
   215                          pmde = *pvmw->pmd;
   216                          if (likely(pmd_trans_huge(pmde))) {
   217                                  if (pvmw->flags & PVMW_MIGRATION)
   218                                          return not_found(pvmw);
   219                                  if (!check_pmd(pmd_pfn(pmde), pvmw))
   220                                          return not_found(pvmw);
   221                                  return true;
   222                          }
   223                          if (!pmd_present(pmde)) {
   224                                  swp_entry_t entry;
   225
   226                                  if (!thp_migration_supported() ||
   227                                      !(pvmw->flags & PVMW_MIGRATION))
   228                                          return not_found(pvmw);
   229                                  entry = pmd_to_swp_entry(pmde);
   230                                  if (!is_migration_entry(entry) ||
   231                                      !check_pmd(swp_offset(entry), pvmw))
   232                                          return not_found(pvmw);
   233                                  return true;
   234                          }
   235                          /* THP pmd was split under us: handle on pte level */
   236                          spin_unlock(pvmw->ptl);
   237                          pvmw->ptl = NULL;
   238                  } else if (!pmd_present(pmde)) {
   239                          /*
   240                           * If PVMW_SYNC, take and drop THP pmd lock so that we
   241                           * cannot return prematurely, while zap_huge_pmd() has
   242                           * cleared *pmd but not decremented compound_mapcount().
   243                           */
   244                          if ((pvmw->flags & PVMW_SYNC) &&
   245                              transparent_hugepage_active(vma) &&
 > 246                              (pvmw->nr_pages >= HPAGE_PMD_NR)) {
   247                                  spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   248
   249                                  spin_unlock(ptl);
   250                          }
   251                          step_forward(pvmw, PMD_SIZE);
   252                          continue;
   253                  }
   254                  if (!map_pte(pvmw))
   255                          goto next_pte;
   256  this_pte:
   257                  if (check_pte(pvmw))
   258                          return true;
   259  next_pte:
   260                  do {
   261                          pvmw->address += PAGE_SIZE;
   262                          if (pvmw->address >= end)
   263                                  return not_found(pvmw);
   264                          /* Did we cross page table boundary? */
   265                          if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   266                                  if (pvmw->ptl) {
   267                                          spin_unlock(pvmw->ptl);
   268                                          pvmw->ptl = NULL;
   269                                  }
   270                                  pte_unmap(pvmw->pte);
   271                                  pvmw->pte = NULL;
   272                                  goto restart;
   273                          }
   274                          pvmw->pte++;
   275                          if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   276                                  pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   277                                  spin_lock(pvmw->ptl);
   278                          }
   279                  } while (pte_none(*pvmw->pte));
   280
   281                  if (!pvmw->ptl) {
   282                          pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   283                          spin_lock(pvmw->ptl);
   284                  }
   285                  goto this_pte;
   286          } while (pvmw->address < end);
   287
   288          return false;
   289  }
   290

---
0-DAY CI Kernel Test Service
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
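
For reference, the flagged branch appears to be dead code in this
configuration anyway: transparent_hugepage_active() is a stub that
returns false when THP is disabled, so the HPAGE_PMD_NR comparison is
never reached at runtime. One conceivable way to make that visible to
the analyzer, sketched here as a hypothetical rearrangement (not a
patch from this report, and untested against smatch):

                        if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
                            (pvmw->flags & PVMW_SYNC) &&
                            transparent_hugepage_active(vma) &&
                            (pvmw->nr_pages >= HPAGE_PMD_NR)) {
                                spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);

                                spin_unlock(ptl);
                        }

IS_ENABLED() is the standard <linux/kconfig.h> helper; a leading
compile-time-false operand lets dead-code elimination drop the whole
condition on !THP builds, which is also what keeps the BUILD_BUG()
hidden inside HPAGE_PMD_SHIFT from firing.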