From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 30 Mar 2025 22:09:52 +0800
From: kernel test robot <lkp@intel.com>
To: Baoquan He, akpm@linux-foundation.org
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Baoquan He
Subject: Re: [PATCH 2/7] mm/gup: check if both GUP_GET and GUP_PIN are set in __get_user_pages() earlier
Message-ID: <202503302116.cBgHEPWk-lkp@intel.com>
References: <20250330121718.175815-3-bhe@redhat.com>
In-Reply-To: <20250330121718.175815-3-bhe@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Hi Baoquan,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-gup-fix-wrongly-calculated-returned-value-in-fault_in_safe_writeable/20250330-201949
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250330121718.175815-3-bhe%40redhat.com
patch subject: [PATCH 2/7] mm/gup: check if both GUP_GET and GUP_PIN are set in __get_user_pages() earlier
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20250330/202503302116.cBgHEPWk-lkp@intel.com/config)
compiler: clang version 20.1.1 (https://github.com/llvm/llvm-project 424c2d9b7e4de40d0804dd374721e6411c27d1d1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250330/202503302116.cBgHEPWk-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e.
not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202503302116.cBgHEPWk-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/gup.c:1433:20: error: use of undeclared identifier 'flags'
    1433 |         if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
         |                           ^
>> mm/gup.c:1435:10: error: incompatible pointer to integer conversion returning 'void *' from a function with result type 'long' [-Wint-conversion]
    1435 |                 return ERR_PTR(-EINVAL);
         |                        ^~~~~~~~~~~~~~~~
   2 errors generated.


vim +/flags +1433 mm/gup.c

  1361	
  1362	/**
  1363	 * __get_user_pages() - pin user pages in memory
  1364	 * @mm:		mm_struct of target mm
  1365	 * @start:	starting user address
  1366	 * @nr_pages:	number of pages from start to pin
  1367	 * @gup_flags:	flags modifying pin behaviour
  1368	 * @pages:	array that receives pointers to the pages pinned.
  1369	 *		Should be at least nr_pages long. Or NULL, if caller
  1370	 *		only intends to ensure the pages are faulted in.
  1371	 * @locked:	whether we're still with the mmap_lock held
  1372	 *
  1373	 * Returns either number of pages pinned (which may be less than the
  1374	 * number requested), or an error. Details about the return value:
  1375	 *
  1376	 * -- If nr_pages is 0, returns 0.
  1377	 * -- If nr_pages is >0, but no pages were pinned, returns -errno.
  1378	 * -- If nr_pages is >0, and some pages were pinned, returns the number of
  1379	 *    pages pinned. Again, this may be less than nr_pages.
  1380	 * -- 0 return value is possible when the fault would need to be retried.
  1381	 *
  1382	 * The caller is responsible for releasing returned @pages, via put_page().
  1383	 *
  1384	 * Must be called with mmap_lock held.  It may be released.  See below.
  1385	 *
  1386	 * __get_user_pages walks a process's page tables and takes a reference
  1387	 * to each struct page that each user address corresponds to at a given
  1388	 * instant. That is, it takes the page that would be accessed if a user
  1389	 * thread accesses the given user virtual address at that instant.
  1390	 *
  1391	 * This does not guarantee that the page exists in the user mappings when
  1392	 * __get_user_pages returns, and there may even be a completely different
  1393	 * page there in some cases (eg. if mmapped pagecache has been invalidated
  1394	 * and subsequently re-faulted). However it does guarantee that the page
  1395	 * won't be freed completely. And mostly callers simply care that the page
  1396	 * contains data that was valid *at some point in time*. Typically, an IO
  1397	 * or similar operation cannot guarantee anything stronger anyway because
  1398	 * locks can't be held over the syscall boundary.
  1399	 *
  1400	 * If @gup_flags & FOLL_WRITE == 0, the page must not be written to. If
  1401	 * the page is written to, set_page_dirty (or set_page_dirty_lock, as
  1402	 * appropriate) must be called after the page is finished with, and
  1403	 * before put_page is called.
  1404	 *
  1405	 * If FOLL_UNLOCKABLE is set without FOLL_NOWAIT then the mmap_lock may
  1406	 * be released. If this happens *@locked will be set to 0 on return.
  1407	 *
  1408	 * A caller using such a combination of @gup_flags must therefore hold the
  1409	 * mmap_lock for reading only, and recognize when it's been released. Otherwise,
  1410	 * it must be held for either reading or writing and will not be released.
  1411	 *
  1412	 * In most cases, get_user_pages or get_user_pages_fast should be used
  1413	 * instead of __get_user_pages. __get_user_pages should be used only if
  1414	 * you need some special @gup_flags.
  1415	 */
  1416	static long __get_user_pages(struct mm_struct *mm,
  1417			unsigned long start, unsigned long nr_pages,
  1418			unsigned int gup_flags, struct page **pages,
  1419			int *locked)
  1420	{
  1421		long ret = 0, i = 0;
  1422		struct vm_area_struct *vma = NULL;
  1423		struct follow_page_context ctx = { NULL };
  1424	
  1425		if (!nr_pages)
  1426			return 0;
  1427	
  1428		start = untagged_addr_remote(mm, start);
  1429	
  1430		VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
  1431	
  1432		/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> 1433		if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
  1434				 (FOLL_PIN | FOLL_GET)))
> 1435			return ERR_PTR(-EINVAL);
  1436	
  1437		do {
  1438			struct page *page;
  1439			unsigned int page_increm;
  1440	
  1441			/* first iteration or cross vma bound */
  1442			if (!vma || start >= vma->vm_end) {
  1443				/*
  1444				 * MADV_POPULATE_(READ|WRITE) wants to handle VMA
  1445				 * lookups+error reporting differently.
  1446				 */
  1447				if (gup_flags & FOLL_MADV_POPULATE) {
  1448					vma = vma_lookup(mm, start);
  1449					if (!vma) {
  1450						ret = -ENOMEM;
  1451						goto out;
  1452					}
  1453					if (check_vma_flags(vma, gup_flags)) {
  1454						ret = -EINVAL;
  1455						goto out;
  1456					}
  1457					goto retry;
  1458				}
  1459				vma = gup_vma_lookup(mm, start);
  1460				if (!vma && in_gate_area(mm, start)) {
  1461					ret = get_gate_page(mm, start & PAGE_MASK,
  1462							gup_flags, &vma,
  1463							pages ? &page : NULL);
  1464					if (ret)
  1465						goto out;
  1466					ctx.page_mask = 0;
  1467					goto next_page;
  1468				}
  1469	
  1470				if (!vma) {
  1471					ret = -EFAULT;
  1472					goto out;
  1473				}
  1474				ret = check_vma_flags(vma, gup_flags);
  1475				if (ret)
  1476					goto out;
  1477			}
  1478	retry:
  1479		/*
  1480		 * If we have a pending SIGKILL, don't keep faulting pages and
  1481		 * potentially allocating memory.
  1482		 */
  1483		if (fatal_signal_pending(current)) {
  1484			ret = -EINTR;
  1485			goto out;
  1486		}
  1487		cond_resched();
  1488	
  1489		page = follow_page_mask(vma, start, gup_flags, &ctx);
  1490		if (!page || PTR_ERR(page) == -EMLINK) {
  1491			ret = faultin_page(vma, start, gup_flags,
  1492					PTR_ERR(page) == -EMLINK, locked);
  1493			switch (ret) {
  1494			case 0:
  1495				goto retry;
  1496			case -EBUSY:
  1497			case -EAGAIN:
  1498				ret = 0;
  1499				fallthrough;
  1500			case -EFAULT:
  1501			case -ENOMEM:
  1502			case -EHWPOISON:
  1503				goto out;
  1504			}
  1505			BUG();
  1506		} else if (PTR_ERR(page) == -EEXIST) {
  1507			/*
  1508			 * Proper page table entry exists, but no corresponding
  1509			 * struct page. If the caller expects **pages to be
  1510			 * filled in, bail out now, because that can't be done
  1511			 * for this page.
  1512			 */
  1513			if (pages) {
  1514				ret = PTR_ERR(page);
  1515				goto out;
  1516			}
  1517		} else if (IS_ERR(page)) {
  1518			ret = PTR_ERR(page);
  1519			goto out;
  1520		}
  1521	next_page:
  1522		page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
  1523		if (page_increm > nr_pages)
  1524			page_increm = nr_pages;
  1525	
  1526		if (pages) {
  1527			struct page *subpage;
  1528			unsigned int j;
  1529	
  1530			/*
  1531			 * This must be a large folio (and doesn't need to
  1532			 * be the whole folio; it can be part of it), do
  1533			 * the refcount work for all the subpages too.
  1534			 *
  1535			 * NOTE: here the page may not be the head page
  1536			 * e.g. when start addr is not thp-size aligned.
  1537			 * try_grab_folio() should have taken care of tail
  1538			 * pages.
  1539			 */
  1540			if (page_increm > 1) {
  1541				struct folio *folio = page_folio(page);
  1542	
  1543				/*
  1544				 * Since we already hold refcount on the
  1545				 * large folio, this should never fail.
  1546				 */
  1547				if (try_grab_folio(folio, page_increm - 1,
  1548						   gup_flags)) {
  1549					/*
  1550					 * Release the 1st page ref if the
  1551					 * folio is problematic, fail hard.
  1552					 */
  1553					gup_put_folio(folio, 1, gup_flags);
  1554					ret = -EFAULT;
  1555					goto out;
  1556				}
  1557			}
  1558	
  1559			for (j = 0; j < page_increm; j++) {
  1560				subpage = nth_page(page, j);
  1561				pages[i + j] = subpage;
  1562				flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
  1563				flush_dcache_page(subpage);
  1564			}
  1565		}
  1566	
  1567		i += page_increm;
  1568		start += page_increm * PAGE_SIZE;
  1569		nr_pages -= page_increm;
  1570	} while (nr_pages);
  1571	out:
  1572		if (ctx.pgmap)
  1573			put_dev_pagemap(ctx.pgmap);
  1574		return i ? i : ret;
  1575	}
  1576	

-- 
0-DAY CI Kernel Test Service, Intel Corporation
https://github.com/intel/lkp-tests/wiki
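[Editorial note, not part of the robot's report.] Both diagnostics point at the same hunk: inside __get_user_pages() the flags variable is spelled gup_flags, not flags, and the function returns long, so ERR_PTR(-EINVAL) (which yields a void *) cannot be its error return. A minimal userspace sketch of a check consistent with those two constraints follows; the FOLL_* values are illustrative stand-ins (the real bits are in the kernel headers), and WARN_ON_ONCE() is omitted since it has no userspace equivalent here.

```c
#include <errno.h>

/* Illustrative stand-ins for the kernel's FOLL_* flag bits. */
#define FOLL_GET 0x00000001u
#define FOLL_PIN 0x00040000u

/*
 * Sketch of the mutual-exclusion check: the parameter is gup_flags
 * (matching error one), and the error path returns a plain long
 * -EINVAL rather than ERR_PTR(-EINVAL) (matching error two).
 */
static long check_get_pin_exclusive(unsigned int gup_flags)
{
	if ((gup_flags & (FOLL_PIN | FOLL_GET)) == (FOLL_PIN | FOLL_GET))
		return -EINVAL;	/* both set: invalid combination */
	return 0;		/* at most one of GET/PIN is set */
}
```

With that shape the check compiles in a function returning long; whether to also keep the WARN_ON_ONCE() wrapper is the patch author's call.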