Date: Sun, 30 Mar 2025 21:49:33 +0800
From: kernel test robot
To: Baoquan He, akpm@linux-foundation.org
Cc: oe-kbuild-all@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Baoquan He
Subject: Re: [PATCH 2/7] mm/gup: check if both GUP_GET and GUP_PIN are set in __get_user_pages() earlier
Message-ID: <202503302151.MdrisJhx-lkp@intel.com>
References: <20250330121718.175815-3-bhe@redhat.com>
In-Reply-To: <20250330121718.175815-3-bhe@redhat.com>

Hi Baoquan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-gup-fix-wrongly-calculated-returned-value-in-fault_in_safe_writeable/20250330-201949
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250330121718.175815-3-bhe%40redhat.com
patch subject: [PATCH 2/7] mm/gup: check if both GUP_GET and GUP_PIN are set in __get_user_pages() earlier
config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20250330/202503302151.MdrisJhx-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250330/202503302151.MdrisJhx-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the
same patch/commit), kindly add following tags
| Reported-by: kernel test robot
| Closes: https://lore.kernel.org/oe-kbuild-all/202503302151.MdrisJhx-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/x86/include/asm/bug.h:110,
                    from include/linux/bug.h:5,
                    from include/linux/thread_info.h:13,
                    from include/linux/spinlock.h:60,
                    from mm/gup.c:5:
   mm/gup.c: In function '__get_user_pages':
   mm/gup.c:1433:27: error: 'flags' undeclared (first use in this function)
    1433 |         if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
         |                           ^~~~~
   include/asm-generic/bug.h:121:32: note: in definition of macro 'WARN_ON_ONCE'
     121 |         int __ret_warn_on = !!(condition);                              \
         |                                ^~~~~~~~~
   mm/gup.c:1433:27: note: each undeclared identifier is reported only once for each function it appears in
    1433 |         if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
         |                           ^~~~~
   include/asm-generic/bug.h:121:32: note: in definition of macro 'WARN_ON_ONCE'
     121 |         int __ret_warn_on = !!(condition);                              \
         |                                ^~~~~~~~~
>> mm/gup.c:1435:24: warning: returning 'void *' from a function with return type 'long int' makes integer from pointer without a cast [-Wint-conversion]
    1435 |                 return ERR_PTR(-EINVAL);
         |                        ^~~~~~~~~~~~~~~~


vim +1435 mm/gup.c

  1361	
  1362	/**
  1363	 * __get_user_pages() - pin user pages in memory
  1364	 * @mm:		mm_struct of target mm
  1365	 * @start:	starting user address
  1366	 * @nr_pages:	number of pages from start to pin
  1367	 * @gup_flags:	flags modifying pin behaviour
  1368	 * @pages:	array that receives pointers to the pages pinned.
  1369	 *		Should be at least nr_pages long. Or NULL, if caller
  1370	 *		only intends to ensure the pages are faulted in.
  1371	 * @locked:	whether we're still with the mmap_lock held
  1372	 *
  1373	 * Returns either number of pages pinned (which may be less than the
  1374	 * number requested), or an error. Details about the return value:
  1375	 *
  1376	 * -- If nr_pages is 0, returns 0.
  1377	 * -- If nr_pages is >0, but no pages were pinned, returns -errno.
  1378	 * -- If nr_pages is >0, and some pages were pinned, returns the number of
  1379	 *    pages pinned. Again, this may be less than nr_pages.
  1380	 * -- 0 return value is possible when the fault would need to be retried.
  1381	 *
  1382	 * The caller is responsible for releasing returned @pages, via put_page().
  1383	 *
  1384	 * Must be called with mmap_lock held. It may be released. See below.
  1385	 *
  1386	 * __get_user_pages walks a process's page tables and takes a reference to
  1387	 * each struct page that each user address corresponds to at a given
  1388	 * instant. That is, it takes the page that would be accessed if a user
  1389	 * thread accesses the given user virtual address at that instant.
  1390	 *
  1391	 * This does not guarantee that the page exists in the user mappings when
  1392	 * __get_user_pages returns, and there may even be a completely different
  1393	 * page there in some cases (eg. if mmapped pagecache has been invalidated
  1394	 * and subsequently re-faulted). However it does guarantee that the page
  1395	 * won't be freed completely. And mostly callers simply care that the page
  1396	 * contains data that was valid *at some point in time*. Typically, an IO
  1397	 * or similar operation cannot guarantee anything stronger anyway because
  1398	 * locks can't be held over the syscall boundary.
  1399	 *
  1400	 * If @gup_flags & FOLL_WRITE == 0, the page must not be written to. If
  1401	 * the page is written to, set_page_dirty (or set_page_dirty_lock, as
  1402	 * appropriate) must be called after the page is finished with, and
  1403	 * before put_page is called.
  1404	 *
  1405	 * If FOLL_UNLOCKABLE is set without FOLL_NOWAIT then the mmap_lock may
  1406	 * be released. If this happens *@locked will be set to 0 on return.
  1407	 *
  1408	 * A caller using such a combination of @gup_flags must therefore hold the
  1409	 * mmap_lock for reading only, and recognize when it's been released. Otherwise,
  1410	 * it must be held for either reading or writing and will not be released.
  1411	 *
  1412	 * In most cases, get_user_pages or get_user_pages_fast should be used
  1413	 * instead of __get_user_pages. __get_user_pages should be used only if
  1414	 * you need some special @gup_flags.
  1415	 */
  1416	static long __get_user_pages(struct mm_struct *mm,
  1417			unsigned long start, unsigned long nr_pages,
  1418			unsigned int gup_flags, struct page **pages,
  1419			int *locked)
  1420	{
  1421		long ret = 0, i = 0;
  1422		struct vm_area_struct *vma = NULL;
  1423		struct follow_page_context ctx = { NULL };
  1424	
  1425		if (!nr_pages)
  1426			return 0;
  1427	
  1428		start = untagged_addr_remote(mm, start);
  1429	
  1430		VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
  1431	
  1432		/* FOLL_GET and FOLL_PIN are mutually exclusive. */
  1433		if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
  1434				 (FOLL_PIN | FOLL_GET)))
> 1435			return ERR_PTR(-EINVAL);
  1436	
  1437		do {
  1438			struct page *page;
  1439			unsigned int page_increm;
  1440	
  1441			/* first iteration or cross vma bound */
  1442			if (!vma || start >= vma->vm_end) {
  1443				/*
  1444				 * MADV_POPULATE_(READ|WRITE) wants to handle VMA
  1445				 * lookups+error reporting differently.
  1446				 */
  1447				if (gup_flags & FOLL_MADV_POPULATE) {
  1448					vma = vma_lookup(mm, start);
  1449					if (!vma) {
  1450						ret = -ENOMEM;
  1451						goto out;
  1452					}
  1453					if (check_vma_flags(vma, gup_flags)) {
  1454						ret = -EINVAL;
  1455						goto out;
  1456					}
  1457					goto retry;
  1458				}
  1459				vma = gup_vma_lookup(mm, start);
  1460				if (!vma && in_gate_area(mm, start)) {
  1461					ret = get_gate_page(mm, start & PAGE_MASK,
  1462							gup_flags, &vma,
  1463							pages ? &page : NULL);
  1464					if (ret)
  1465						goto out;
  1466					ctx.page_mask = 0;
  1467					goto next_page;
  1468				}
  1469	
  1470				if (!vma) {
  1471					ret = -EFAULT;
  1472					goto out;
  1473				}
  1474				ret = check_vma_flags(vma, gup_flags);
  1475				if (ret)
  1476					goto out;
  1477			}
  1478	retry:
  1479			/*
  1480			 * If we have a pending SIGKILL, don't keep faulting pages and
  1481			 * potentially allocating memory.
  1482			 */
  1483			if (fatal_signal_pending(current)) {
  1484				ret = -EINTR;
  1485				goto out;
  1486			}
  1487			cond_resched();
  1488	
  1489			page = follow_page_mask(vma, start, gup_flags, &ctx);
  1490			if (!page || PTR_ERR(page) == -EMLINK) {
  1491				ret = faultin_page(vma, start, gup_flags,
  1492						   PTR_ERR(page) == -EMLINK, locked);
  1493				switch (ret) {
  1494				case 0:
  1495					goto retry;
  1496				case -EBUSY:
  1497				case -EAGAIN:
  1498					ret = 0;
  1499					fallthrough;
  1500				case -EFAULT:
  1501				case -ENOMEM:
  1502				case -EHWPOISON:
  1503					goto out;
  1504				}
  1505				BUG();
  1506			} else if (PTR_ERR(page) == -EEXIST) {
  1507				/*
  1508				 * Proper page table entry exists, but no corresponding
  1509				 * struct page. If the caller expects **pages to be
  1510				 * filled in, bail out now, because that can't be done
  1511				 * for this page.
  1512				 */
  1513				if (pages) {
  1514					ret = PTR_ERR(page);
  1515					goto out;
  1516				}
  1517			} else if (IS_ERR(page)) {
  1518				ret = PTR_ERR(page);
  1519				goto out;
  1520			}
  1521	next_page:
  1522			page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
  1523			if (page_increm > nr_pages)
  1524				page_increm = nr_pages;
  1525	
  1526			if (pages) {
  1527				struct page *subpage;
  1528				unsigned int j;
  1529	
  1530				/*
  1531				 * This must be a large folio (and doesn't need to
  1532				 * be the whole folio; it can be part of it), do
  1533				 * the refcount work for all the subpages too.
  1534				 *
  1535				 * NOTE: here the page may not be the head page
  1536				 * e.g. when start addr is not thp-size aligned.
  1537				 * try_grab_folio() should have taken care of tail
  1538				 * pages.
  1539				 */
  1540				if (page_increm > 1) {
  1541					struct folio *folio = page_folio(page);
  1542	
  1543					/*
  1544					 * Since we already hold refcount on the
  1545					 * large folio, this should never fail.
  1546					 */
  1547					if (try_grab_folio(folio, page_increm - 1,
  1548							   gup_flags)) {
  1549						/*
  1550						 * Release the 1st page ref if the
  1551						 * folio is problematic, fail hard.
  1552						 */
  1553						gup_put_folio(folio, 1, gup_flags);
  1554						ret = -EFAULT;
  1555						goto out;
  1556					}
  1557				}
  1558	
  1559				for (j = 0; j < page_increm; j++) {
  1560					subpage = nth_page(page, j);
  1561					pages[i + j] = subpage;
  1562					flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
  1563					flush_dcache_page(subpage);
  1564				}
  1565			}
  1566	
  1567			i += page_increm;
  1568			start += page_increm * PAGE_SIZE;
  1569			nr_pages -= page_increm;
  1570		} while (nr_pages);
  1571	out:
  1572		if (ctx.pgmap)
  1573			put_dev_pagemap(ctx.pgmap);
  1574		return i ? i : ret;
  1575	}
  1576	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
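
For reference, a minimal sketch (not part of the robot's report) of how the flagged
check could be written so it builds, assuming the intent is only to reject callers
that set both FOLL_GET and FOLL_PIN: it uses the function's existing gup_flags
parameter and returns a plain negative errno, since __get_user_pages() returns long
rather than a page pointer.

	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
	if (WARN_ON_ONCE((gup_flags & (FOLL_PIN | FOLL_GET)) ==
			 (FOLL_PIN | FOLL_GET)))
		return -EINVAL;	/* long return convention: -errno, not ERR_PTR() */

Returning -EINVAL directly matches how the rest of the quoted function reports
errors as negative long values.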