From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lance Yang <ioworker0@gmail.com>
To: kernel test robot
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List, Andrew Morton
Subject: Re: [linux-next:master 9311/9709] mm/rmap.c:1651:27: error: call to '__compiletime_assert_333' declared with 'error' attribute: BUILD_BUG failed
Date: Wed, 1 May 2024 16:55:38 +0800
In-Reply-To: <202405011624.KzqucHwp-lkp@intel.com>
References: <202405011624.KzqucHwp-lkp@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

Hi all,

This bug was introduced in v3[1] and has been fixed in v4[2].
Sorry for any trouble this may have caused :(

[1] https://lore.kernel.org/linux-mm/20240429132308.38794-1-ioworker0@gmail.com
[2] https://lore.kernel.org/linux-mm/20240501042700.83974-1-ioworker0@gmail.com

Thanks,
Lance Yang

On Wed, May 1, 2024 at 4:38 PM kernel test robot wrote:
>
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> head:   d04466706db5e241ee026f17b5f920e50dee26b5
> commit: 34d66beb14bdedb5c12733f2fd2498634dd1fd91 [9311/9709] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
> config: s390-allnoconfig (https://download.01.org/0day-ci/archive/20240501/202405011624.KzqucHwp-lkp@intel.com/config)
> compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project 37ae4ad0eef338776c7e2cffb3896153d43dcd90)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240501/202405011624.KzqucHwp-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot
> | Closes: https://lore.kernel.org/oe-kbuild-all/202405011624.KzqucHwp-lkp@intel.com/
>
> Note: the linux-next/master HEAD d04466706db5e241ee026f17b5f920e50dee26b5 builds fine.
>       It may have been fixed somewhere.
>
> All errors (new ones prefixed by >>):
>
>    In file included from mm/rmap.c:56:
>    In file included from include/linux/mm.h:2253:
>    include/linux/vmstat.h:514:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
>      514 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
>          |                               ~~~~~~~~~~~ ^ ~~~
>    In file included from mm/rmap.c:77:
>    include/linux/mm_inline.h:47:41: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
>       47 |         __mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
>          |                                    ~~~~~~~~~~~ ^ ~~~
>    include/linux/mm_inline.h:49:22: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
>       49 |                         NR_ZONE_LRU_BASE + lru, nr_pages);
>          |                         ~~~~~~~~~~~~~~~~ ^ ~~~
> >> mm/rmap.c:1651:27: error: call to '__compiletime_assert_333' declared with 'error' attribute: BUILD_BUG failed
>     1651 |                 range.start = address & HPAGE_PMD_MASK;
>          |                                         ^
>    include/linux/huge_mm.h:103:27: note: expanded from macro 'HPAGE_PMD_MASK'
>      103 | #define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))
>          |                           ^
>    include/linux/huge_mm.h:104:34: note: expanded from macro 'HPAGE_PMD_SIZE'
>      104 | #define HPAGE_PMD_SIZE  ((1UL) << HPAGE_PMD_SHIFT)
>          |                                   ^
>    include/linux/huge_mm.h:97:28: note: expanded from macro 'HPAGE_PMD_SHIFT'
>       97 | #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
>          |                            ^
>    note: (skipping 3 expansions in backtrace; use -fmacro-backtrace-limit=0 to see all)
>    include/linux/compiler_types.h:448:2: note: expanded from macro '_compiletime_assert'
>      448 |         __compiletime_assert(condition, msg, prefix, suffix)
>          |         ^
>    include/linux/compiler_types.h:441:4: note: expanded from macro '__compiletime_assert'
>      441 |                         prefix ## suffix();                     \
>          |                         ^
>    <scratch space>:140:1: note: expanded from here
>      140 | __compiletime_assert_333
>          | ^
>    3 warnings and 1 error generated.
>
>
> vim +1651 mm/rmap.c
>
>   1613
>   1614  /*
>   1615   * @arg: enum ttu_flags will be passed to this argument
>   1616   */
>   1617  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>   1618                               unsigned long address, void *arg)
>   1619  {
>   1620          struct mm_struct *mm = vma->vm_mm;
>   1621          DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
>   1622          pte_t pteval;
>   1623          struct page *subpage;
>   1624          bool anon_exclusive, ret = true;
>   1625          struct mmu_notifier_range range;
>   1626          enum ttu_flags flags = (enum ttu_flags)(long)arg;
>   1627          unsigned long pfn;
>   1628          unsigned long hsz = 0;
>   1629
>   1630          /*
>   1631           * When racing against e.g. zap_pte_range() on another cpu,
>   1632           * in between its ptep_get_and_clear_full() and folio_remove_rmap_*(),
>   1633           * try_to_unmap() may return before page_mapped() has become false,
>   1634           * if page table locking is skipped: use TTU_SYNC to wait for that.
>   1635           */
>   1636          if (flags & TTU_SYNC)
>   1637                  pvmw.flags = PVMW_SYNC;
>   1638
>   1639          /*
>   1640           * For THP, we have to assume the worse case ie pmd for invalidation.
>   1641           * For hugetlb, it could be much worse if we need to do pud
>   1642           * invalidation in the case of pmd sharing.
>   1643           *
>   1644           * Note that the folio can not be freed in this function as call of
>   1645           * try_to_unmap() must hold a reference on the folio.
>   1646           */
>   1647          range.end = vma_address_end(&pvmw);
>   1648          mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
>   1649                                  address, range.end);
>   1650          if (flags & TTU_SPLIT_HUGE_PMD) {
> > 1651                  range.start = address & HPAGE_PMD_MASK;
>   1652                  range.end = (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
>   1653          }
>   1654          if (folio_test_hugetlb(folio)) {
>   1655                  /*
>   1656                   * If sharing is possible, start and end will be adjusted
>   1657                   * accordingly.
>   1658                   */
>   1659                  adjust_range_if_pmd_sharing_possible(vma, &range.start,
>   1660                                                       &range.end);
>   1661
>   1662                  /* We need the huge page size for set_huge_pte_at() */
>   1663                  hsz = huge_page_size(hstate_vma(vma));
>   1664          }
>   1665          mmu_notifier_invalidate_range_start(&range);
>   1666
>   1667          while (page_vma_mapped_walk(&pvmw)) {
>   1668                  /*
>   1669                   * If the folio is in an mlock()d vma, we must not swap it out.
>   1670                   */
>   1671                  if (!(flags & TTU_IGNORE_MLOCK) &&
>   1672                      (vma->vm_flags & VM_LOCKED)) {
>   1673                          /* Restore the mlock which got missed */
>   1674                          if (!folio_test_large(folio))
>   1675                                  mlock_vma_folio(folio, vma);
>   1676                          goto walk_done_err;
>   1677                  }
>   1678
>   1679                  if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
>   1680                          /*
>   1681                           * We temporarily have to drop the PTL and start once
>   1682                           * again from that now-PTE-mapped page table.
>   1683                           */
>   1684                          split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
>   1685                                                folio);
>   1686                          pvmw.pmd = NULL;
>   1687                          spin_unlock(pvmw.ptl);
>   1688                          flags &= ~TTU_SPLIT_HUGE_PMD;
>   1689                          continue;
>   1690                  }
>   1691
>   1692                  /* Unexpected PMD-mapped THP? */
>   1693                  VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>   1694
>   1695                  pfn = pte_pfn(ptep_get(pvmw.pte));
>   1696                  subpage = folio_page(folio, pfn - folio_pfn(folio));
>   1697                  address = pvmw.address;
>   1698                  anon_exclusive = folio_test_anon(folio) &&
>   1699                                   PageAnonExclusive(subpage);
>   1700
>   1701                  if (folio_test_hugetlb(folio)) {
>   1702                          bool anon = folio_test_anon(folio);
>   1703
>   1704                          /*
>   1705                           * The try_to_unmap() is only passed a hugetlb page
>   1706                           * in the case where the hugetlb page is poisoned.
>   1707                           */
>   1708                          VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
>   1709                          /*
>   1710                           * huge_pmd_unshare may unmap an entire PMD page.
>   1711                           * There is no way of knowing exactly which PMDs may
>   1712                           * be cached for this mm, so we must flush them all.
>   1713                           * start/end were already adjusted above to cover this
>   1714                           * range.
>   1715                           */
>   1716                          flush_cache_range(vma, range.start, range.end);
>   1717
>   1718                          /*
>   1719                           * To call huge_pmd_unshare, i_mmap_rwsem must be
>   1720                           * held in write mode.  Caller needs to explicitly
>   1721                           * do this outside rmap routines.
>   1722                           *
>   1723                           * We also must hold hugetlb vma_lock in write mode.
>   1724                           * Lock order dictates acquiring vma_lock BEFORE
>   1725                           * i_mmap_rwsem.  We can only try lock here and fail
>   1726                           * if unsuccessful.
>   1727                           */
>   1728                          if (!anon) {
>   1729                                  VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
>   1730                                  if (!hugetlb_vma_trylock_write(vma))
>   1731                                          goto walk_done_err;
>   1732                                  if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
>   1733                                          hugetlb_vma_unlock_write(vma);
>   1734                                          flush_tlb_range(vma,
>   1735                                                  range.start, range.end);
>   1736                                          /*
>   1737                                           * The ref count of the PMD page was
>   1738                                           * dropped which is part of the way map
>   1739                                           * counting is done for shared PMDs.
>   1740                                           * Return 'true' here.  When there is
>   1741                                           * no other sharing, huge_pmd_unshare
>   1742                                           * returns false and we will unmap the
>   1743                                           * actual page and drop map count
>   1744                                           * to zero.
>   1745                                           */
>   1746                                          goto walk_done;
>   1747                                  }
>   1748                                  hugetlb_vma_unlock_write(vma);
>   1749                          }
>   1750                          pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>   1751                  } else {
>   1752                          flush_cache_page(vma, address, pfn);
>   1753                          /* Nuke the page table entry. */
>   1754                          if (should_defer_flush(mm, flags)) {
>   1755                                  /*
>   1756                                   * We clear the PTE but do not flush so potentially
>   1757                                   * a remote CPU could still be writing to the folio.
>   1758                                   * If the entry was previously clean then the
>   1759                                   * architecture must guarantee that a clear->dirty
>   1760                                   * transition on a cached TLB entry is written through
>   1761                                   * and traps if the PTE is unmapped.
>   1762                                   */
>   1763                                  pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>   1764
>   1765                                  set_tlb_ubc_flush_pending(mm, pteval, address);
>   1766                          } else {
>   1767                                  pteval = ptep_clear_flush(vma, address, pvmw.pte);
>   1768                          }
>   1769                  }
>   1770
>   1771                  /*
>   1772                   * Now the pte is cleared. If this pte was uffd-wp armed,
>   1773                   * we may want to replace a none pte with a marker pte if
>   1774                   * it's file-backed, so we don't lose the tracking info.
>   1775                   */
>   1776                  pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
>   1777
>   1778                  /* Set the dirty flag on the folio now the pte is gone. */
>   1779                  if (pte_dirty(pteval))
>   1780                          folio_mark_dirty(folio);
>   1781
>   1782                  /* Update high watermark before we lower rss */
>   1783                  update_hiwater_rss(mm);
>   1784
>   1785                  if (PageHWPoison(subpage) && (flags & TTU_HWPOISON)) {
>   1786                          pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
>   1787                          if (folio_test_hugetlb(folio)) {
>   1788                                  hugetlb_count_sub(folio_nr_pages(folio), mm);
>   1789                                  set_huge_pte_at(mm, address, pvmw.pte, pteval,
>   1790                                                  hsz);
>   1791                          } else {
>   1792                                  dec_mm_counter(mm, mm_counter(folio));
>   1793                                  set_pte_at(mm, address, pvmw.pte, pteval);
>   1794                          }
>   1795
>   1796                  } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
>   1797                          /*
>   1798                           * The guest indicated that the page content is of no
>   1799                           * interest anymore. Simply discard the pte, vmscan
>   1800                           * will take care of the rest.
>   1801                           * A future reference will then fault in a new zero
>   1802                           * page. When userfaultfd is active, we must not drop
>   1803                           * this page though, as its main user (postcopy
>   1804                           * migration) will not expect userfaults on already
>   1805                           * copied pages.
>   1806                           */
>   1807                          dec_mm_counter(mm, mm_counter(folio));
>   1808                  } else if (folio_test_anon(folio)) {
>   1809                          swp_entry_t entry = page_swap_entry(subpage);
>   1810                          pte_t swp_pte;
>   1811                          /*
>   1812                           * Store the swap location in the pte.
>   1813                           * See handle_pte_fault() ...
>   1814                           */
>   1815                          if (unlikely(folio_test_swapbacked(folio) !=
>   1816                                       folio_test_swapcache(folio))) {
>   1817                                  WARN_ON_ONCE(1);
>   1818                                  goto walk_done_err;
>   1819                          }
>   1820
>   1821                          /* MADV_FREE page check */
>   1822                          if (!folio_test_swapbacked(folio)) {
>   1823                                  int ref_count, map_count;
>   1824
>   1825                                  /*
>   1826                                   * Synchronize with gup_pte_range():
>   1827                                   * - clear PTE; barrier; read refcount
>   1828                                   * - inc refcount; barrier; read PTE
>   1829                                   */
>   1830                                  smp_mb();
>   1831
>   1832                                  ref_count = folio_ref_count(folio);
>   1833                                  map_count = folio_mapcount(folio);
>   1834
>   1835                                  /*
>   1836                                   * Order reads for page refcount and dirty flag
>   1837                                   * (see comments in __remove_mapping()).
>   1838                                   */
>   1839                                  smp_rmb();
>   1840
>   1841                                  /*
>   1842                                   * The only page refs must be one from isolation
>   1843                                   * plus the rmap(s) (dropped by discard:).
>   1844                                   */
>   1845                                  if (ref_count == 1 + map_count &&
>   1846                                      !folio_test_dirty(folio)) {
>   1847                                          dec_mm_counter(mm, MM_ANONPAGES);
>   1848                                          goto discard;
>   1849                                  }
>   1850
>   1851                                  /*
>   1852                                   * If the folio was redirtied, it cannot be
>   1853                                   * discarded. Remap the page to page table.
>   1854                                   */
>   1855                                  set_pte_at(mm, address, pvmw.pte, pteval);
>   1856                                  folio_set_swapbacked(folio);
>   1857                                  goto walk_done_err;
>   1858                          }
>   1859
>   1860                          if (swap_duplicate(entry) < 0) {
>   1861                                  set_pte_at(mm, address, pvmw.pte, pteval);
>   1862                                  goto walk_done_err;
>   1863                          }
>   1864                          if (arch_unmap_one(mm, vma, address, pteval) < 0) {
>   1865                                  swap_free(entry);
>   1866                                  set_pte_at(mm, address, pvmw.pte, pteval);
>   1867                                  goto walk_done_err;
>   1868                          }
>   1869
>   1870                          /* See folio_try_share_anon_rmap(): clear PTE first. */
>   1871                          if (anon_exclusive &&
>   1872                              folio_try_share_anon_rmap_pte(folio, subpage)) {
>   1873                                  swap_free(entry);
>   1874                                  set_pte_at(mm, address, pvmw.pte, pteval);
>   1875                                  goto walk_done_err;
>   1876                          }
>   1877                          if (list_empty(&mm->mmlist)) {
>   1878                                  spin_lock(&mmlist_lock);
>   1879                                  if (list_empty(&mm->mmlist))
>   1880                                          list_add(&mm->mmlist, &init_mm.mmlist);
>   1881                                  spin_unlock(&mmlist_lock);
>   1882                          }
>   1883                          dec_mm_counter(mm, MM_ANONPAGES);
>   1884                          inc_mm_counter(mm, MM_SWAPENTS);
>   1885                          swp_pte = swp_entry_to_pte(entry);
>   1886                          if (anon_exclusive)
>   1887                                  swp_pte = pte_swp_mkexclusive(swp_pte);
>   1888                          if (pte_soft_dirty(pteval))
>   1889                                  swp_pte = pte_swp_mksoft_dirty(swp_pte);
>   1890                          if (pte_uffd_wp(pteval))
>   1891                                  swp_pte = pte_swp_mkuffd_wp(swp_pte);
>   1892                          set_pte_at(mm, address, pvmw.pte, swp_pte);
>   1893                  } else {
>   1894                          /*
>   1895                           * This is a locked file-backed folio,
>   1896                           * so it cannot be removed from the page
>   1897                           * cache and replaced by a new folio before
>   1898                           * mmu_notifier_invalidate_range_end, so no
>   1899                           * concurrent thread might update its page table
>   1900                           * to point at a new folio while a device is
>   1901                           * still using this folio.
>   1902                           *
>   1903                           * See Documentation/mm/mmu_notifier.rst
>   1904                           */
>   1905                          dec_mm_counter(mm, mm_counter_file(folio));
>   1906                  }
>   1907  discard:
>   1908                  if (unlikely(folio_test_hugetlb(folio)))
>   1909                          hugetlb_remove_rmap(folio);
>   1910                  else
>   1911                          folio_remove_rmap_pte(folio, subpage, vma);
>   1912                  if (vma->vm_flags & VM_LOCKED)
>   1913                          mlock_drain_local();
>   1914                  folio_put(folio);
>   1915                  continue;
>   1916  walk_done_err:
>   1917                  ret = false;
>   1918  walk_done:
>   1919                  page_vma_mapped_walk_done(&pvmw);
>   1920                  break;
>   1921          }
>   1922
>   1923          mmu_notifier_invalidate_range_end(&range);
>   1924
>   1925          return ret;
>   1926  }
>   1927
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
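
As background on the error above: with CONFIG_TRANSPARENT_HUGEPAGE disabled
(as in s390-allnoconfig), include/linux/huge_mm.h defines HPAGE_PMD_SHIFT as
({ BUILD_BUG(); 0; }), and BUILD_BUG() only compiles away when the optimizer
can prove the call is dead. The use at mm/rmap.c:1651 sits behind a
runtime-only test (flags & TTU_SPLIT_HUGE_PMD), which the compiler cannot
fold, so the call survives and the build fails. The following standalone
sketch of that mechanism is not kernel code; it assumes GCC or a recent
Clang at -O2, and the names build_bug and THP_ENABLED are illustrative
stand-ins:

/* build_bug_sketch.c -- illustrative only; build with: cc -O2 build_bug_sketch.c */
#include <stdio.h>

/* Kernel-style BUILD_BUG(): the callee carries the "error" attribute, so
 * compilation fails only if the call survives dead-code elimination. */
extern void build_bug(void) __attribute__((error("BUILD_BUG failed")));
#define BUILD_BUG() build_bug()

#define THP_ENABLED 0 /* stand-in for CONFIG_TRANSPARENT_HUGEPAGE=n */

#if THP_ENABLED
#define HPAGE_PMD_SHIFT 21
#else
/* Mirrors include/linux/huge_mm.h: evaluating this without THP is a bug. */
#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
#endif
#define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))

int main(void)
{
        unsigned long address = 0x1234567UL;
        unsigned long start = 0;

        /* OK: a compile-time-constant guard lets the optimizer delete the
         * BUILD_BUG() call, so this builds even with THP disabled. */
        if (THP_ENABLED)
                start = address & HPAGE_PMD_MASK;

        /* Not OK: behind a runtime-only condition (like the
         * TTU_SPLIT_HUGE_PMD test at mm/rmap.c:1650) the call survives;
         * uncomment to reproduce an error of the same shape as above. */
        /* start = address & HPAGE_PMD_MASK; */

        printf("start=%#lx\n", start);
        return 0;
}

This is also why guards like if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
work in the kernel: the condition folds to a constant, the guarded block is
discarded, and the BUILD_BUG() reference never reaches code generation.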
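
The MADV_FREE check quoted at lines 1821-1849 depends on a store-buffering
handshake with GUP, as the in-code comment describes: the unmap side clears
the PTE, issues a full barrier, then reads the refcount; the GUP side takes
a reference, issues a barrier, then re-reads the PTE. The barriers guarantee
that at least one side observes the other's write. Below is a userspace
analogue using C11 atomics and threads, illustrative only: seq_cst fences
stand in for smp_mb(), a bare counter stands in for the folio refcount, and
the discard test is simplified from ref_count == 1 + map_count:

/* madv_free_handshake.c -- illustrative only; build with: cc -O2 -pthread madv_free_handshake.c */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_long pte = 0x42; /* nonzero: the page is still mapped */
static atomic_int refs = 1;    /* the reclaim path's isolation reference */

/* Unmap side: clear PTE; full barrier; read refcount. */
static int unmap_side(void *arg)
{
        (void)arg;
        atomic_store_explicit(&pte, 0, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
        int r = atomic_load_explicit(&refs, memory_order_relaxed);
        printf("unmap: refcount %d -> %s\n", r,
               r == 1 ? "no pinner seen, safe to discard" : "pinned, keep page");
        return 0;
}

/* GUP side: take a ref; full barrier; re-read the PTE. */
static int gup_side(void *arg)
{
        (void)arg;
        atomic_fetch_add_explicit(&refs, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);      /* barrier */
        long p = atomic_load_explicit(&pte, memory_order_relaxed);
        if (p == 0) {
                /* Raced with unmap: drop the speculative ref and back off. */
                atomic_fetch_sub_explicit(&refs, 1, memory_order_relaxed);
                printf("gup: PTE already cleared, backing off\n");
        } else {
                printf("gup: pinned page mapped at pte=%#lx\n", p);
        }
        return 0;
}

int main(void)
{
        thrd_t a, b;
        thrd_create(&a, unmap_side, NULL);
        thrd_create(&b, gup_side, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        return 0;
}

The outcome the fences forbid is both sides reading stale values at once:
the unmap side seeing no extra reference while the GUP side still sees a
live PTE. That is what makes the quoted "ref_count == 1 + map_count &&
!folio_test_dirty(folio)" test a safe point to discard a clean MADV_FREE page.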