From: Zi Yan
To: linux-mm@kvack.org, Roman Gushchin
Cc: Rik van Riel, "Kirill A . Shutemov", Matthew Wilcox, Shakeel Butt,
	Yang Shi, David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 09/16] mm: thp: 1GB THP support in try_to_unmap().
Date: Wed, 2 Sep 2020 14:06:21 -0400
Message-Id: <20200902180628.4052244-10-zi.yan@sent.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200902180628.4052244-1-zi.yan@sent.com>
References: <20200902180628.4052244-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

Unmap different subpages in different sized THPs properly in the
try_to_unmap() function.
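
Not part of the change itself, but as a rough illustration of what the new
code computes: try_to_unmap_one() now picks the mapped subpage and the
mapping order from whichever page table level page_vma_mapped_walk()
stopped at (PTE, PMD, or PUD), and the rmap/TLB work that follows then
covers 1 << order base pages. A minimal user-space C sketch of that
arithmetic is below; the pfn values and the 4KB-base-page orders (9 for
PMD, 18 for PUD, as on x86_64) are assumptions made for the example, not
values taken from this patch.

#include <stdio.h>

/* Assumed for the example: with 4KB base pages (x86_64), a PMD entry
 * maps 2^9 pages (2MB) and a PUD entry maps 2^18 pages (1GB). */
#define EXAMPLE_PMD_ORDER 9
#define EXAMPLE_PUD_ORDER 18

int main(void)
{
	/* Hypothetical pfns: the compound head and the pfn found in a PMD entry. */
	unsigned long head_pfn = 0x100000;
	unsigned long pmd_entry_pfn = 0x100200;

	/* try_to_unmap_one() locates the mapped subpage the same way at every
	 * level: head page + (pfn in the entry - pfn of the head page). */
	unsigned long subpage_idx = pmd_entry_pfn - head_pfn;

	/* The later accounting then covers 1 << order base pages:
	 * order 0 for a PTE, PMD order for a PMD, PUD order for a PUD. */
	printf("subpage index: %lu\n", subpage_idx);
	printf("pages per PMD mapping: %d\n", 1 << EXAMPLE_PMD_ORDER);
	printf("pages per PUD mapping: %d\n", 1 << EXAMPLE_PUD_ORDER);
	return 0;
}

Compiled and run as-is, it prints the subpage index and the number of base
pages covered by a PMD-sized and a PUD-sized mapping.
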
Signed-off-by: Zi Yan
---
 mm/migrate.c |   2 +-
 mm/rmap.c    | 159 +++++++++++++++++++++++++++++++++++++--------------
 2 files changed, 116 insertions(+), 45 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index be0e80b32686..df069a55722e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -225,7 +225,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	/* PMD-mapped THP migration entry */
-	if (!pvmw.pte) {
+	if (!pvmw.pte && pvmw.pmd) {
 		VM_BUG_ON_PAGE(PageHuge(page) || !PageTransCompound(page), page);
 		remove_migration_pmd(&pvmw, new);
 		continue;
diff --git a/mm/rmap.c b/mm/rmap.c
index 0bbaaa891b3c..6c788abdb0b9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1123,6 +1123,7 @@ void do_page_add_anon_rmap(struct page *page,
 {
 	bool compound = flags & RMAP_COMPOUND;
 	bool first;
+	struct page *head = compound_head(page);
 
 	if (unlikely(PageKsm(page)))
 		lock_page_memcg(page);
@@ -1132,8 +1133,8 @@ void do_page_add_anon_rmap(struct page *page,
 	if (compound) {
 		atomic_t *mapcount = NULL;
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
-		if (compound_order(page) == HPAGE_PUD_ORDER) {
+		VM_BUG_ON_PAGE(!PMDPageInPUD(page) && !PageTransHuge(page), page);
+		if (compound_order(head) == HPAGE_PUD_ORDER) {
 			if (order == HPAGE_PUD_ORDER) {
 				mapcount = compound_mapcount_ptr(page);
 			} else if (order == HPAGE_PMD_ORDER) {
@@ -1141,7 +1142,7 @@ void do_page_add_anon_rmap(struct page *page,
 				mapcount = sub_compound_mapcount_ptr(page, 1);
 			} else
 				VM_BUG_ON(1);
-		} else if (compound_order(page) == HPAGE_PMD_ORDER) {
+		} else if (compound_order(head) == HPAGE_PMD_ORDER) {
 			mapcount = compound_mapcount_ptr(page);
 		} else
 			VM_BUG_ON(1);
@@ -1151,7 +1152,8 @@ void do_page_add_anon_rmap(struct page *page,
 	}
 
 	if (first) {
-		int nr = compound ? thp_nr_pages(page) : 1;
+		/* int nr = compound ? thp_nr_pages(page) : 1; */
+		int nr = 1<vm_flags & VM_LOCKED))
@@ -1473,6 +1478,11 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	    is_zone_device_page(page) && !is_device_private_page(page))
 		return true;
 
+	if (flags & TTU_SPLIT_HUGE_PUD) {
+		split_huge_pud_address(vma, address,
+				flags & TTU_SPLIT_FREEZE, page);
+	}
+
 	if (flags & TTU_SPLIT_HUGE_PMD) {
 		split_huge_pmd_address(vma, address,
 				flags & TTU_SPLIT_FREEZE, page);
@@ -1505,7 +1515,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	while (page_vma_mapped_walk(&pvmw)) {
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 		/* PMD-mapped THP migration entry */
-		if (!pvmw.pte && (flags & TTU_MIGRATION)) {
+		if (!pvmw.pte && pvmw.pmd && (flags & TTU_MIGRATION)) {
 			VM_BUG_ON_PAGE(PageHuge(page) || !PageTransCompound(page), page);
 
 			set_pmd_migration_entry(&pvmw, page);
@@ -1537,9 +1547,18 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		}
 
 		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_PAGE(!pvmw.pte, page);
 
-		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+		if (pvmw.pte) {
+			subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+			order = 0;
+		} else if (!pvmw.pte && pvmw.pmd) {
+			subpage = page - page_to_pfn(page) + pmd_pfn(*pvmw.pmd);
+			order = HPAGE_PMD_ORDER;
+		} else if (!pvmw.pte && !pvmw.pmd && pvmw.pud) {
+			subpage = page - page_to_pfn(page) + pud_pfn(*pvmw.pud);
+			order = HPAGE_PUD_ORDER;
+		}
+		VM_BUG_ON(!subpage);
 		address = pvmw.address;
 
 		if (PageHuge(page)) {
@@ -1617,16 +1636,26 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		}
 
 		if (!(flags & TTU_IGNORE_ACCESS)) {
-			if (ptep_clear_flush_young_notify(vma, address,
-						pvmw.pte)) {
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+			if ((pvmw.pte &&
+			     ptep_clear_flush_young_notify(vma, address, pvmw.pte)) ||
+			    ((!pvmw.pte && pvmw.pmd) &&
+			     pmdp_clear_flush_young_notify(vma, address, pvmw.pmd)) ||
+			    ((!pvmw.pte && !pvmw.pmd && pvmw.pud) &&
+			     pudp_clear_flush_young_notify(vma, address, pvmw.pud))
+			   ) {
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
 			}
 		}
 
 		/* Nuke the page table entry. */
-		flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
+		if (pvmw.pte)
+			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
+		else if (!pvmw.pte && pvmw.pmd)
+			flush_cache_page(vma, address, pmd_pfn(*pvmw.pmd));
+		else if (!pvmw.pte && !pvmw.pmd && pvmw.pud)
+			flush_cache_page(vma, address, pud_pfn(*pvmw.pud));
 		if (should_defer_flush(mm, flags)) {
 			/*
 			 * We clear the PTE but do not flush so potentially
@@ -1636,16 +1665,34 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 * transition on a cached TLB entry is written through
 			 * and traps if the PTE is unmapped.
 			 */
-			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+			if (pvmw.pte) {
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else if (!pvmw.pte && pvmw.pmd) {
+				pmdval = pmdp_huge_get_and_clear(mm, address, pvmw.pmd);
 
-			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+				set_tlb_ubc_flush_pending(mm, pmd_dirty(pmdval));
+			} else if (!pvmw.pte && !pvmw.pmd && pvmw.pud) {
+				pudval = pudp_huge_get_and_clear(mm, address, pvmw.pud);
+
+				set_tlb_ubc_flush_pending(mm, pud_dirty(pudval));
+			}
 		} else {
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			if (pvmw.pte)
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			else if (!pvmw.pte && pvmw.pmd)
+				pmdval = pmdp_huge_clear_flush(vma, address, pvmw.pmd);
+			else if (!pvmw.pte && !pvmw.pmd && pvmw.pud)
+				pudval = pudp_huge_clear_flush(vma, address, pvmw.pud);
 		}
 
 		/* Move the dirty bit to the page. Now the pte is gone. */
-		if (pte_dirty(pteval))
-			set_page_dirty(page);
+		if ((pvmw.pte && pte_dirty(pteval)) ||
+		    ((!pvmw.pte && pvmw.pmd) && pmd_dirty(pmdval)) ||
+		    ((!pvmw.pte && !pvmw.pmd && pvmw.pud) && pud_dirty(pudval))
+		   )
+			set_page_dirty(page);
 
 		/* Update high watermark before we lower rss */
 		update_hiwater_rss(mm);
@@ -1680,35 +1727,59 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		} else if (IS_ENABLED(CONFIG_MIGRATION) &&
 				(flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE))) {
 			swp_entry_t entry;
-			pte_t swp_pte;
 
-			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
-				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
-			}
+			if (pvmw.pte) {
+				pte_t swp_pte;
 
-			/*
-			 * Store the pfn of the page in a special migration
-			 * pte. do_swap_page() will wait until the migration
-			 * pte is removed and then restart fault handling.
-			 */
-			entry = make_migration_entry(subpage,
-					pte_write(pteval));
-			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
-			set_pte_at(mm, address, pvmw.pte, swp_pte);
-			/*
-			 * No need to invalidate here it will synchronize on
-			 * against the special swap migration pte.
-			 */
+				if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+					set_pte_at(mm, address, pvmw.pte, pteval);
+					ret = false;
+					page_vma_mapped_walk_done(&pvmw);
+					break;
+				}
+
+				/*
+				 * Store the pfn of the page in a special migration
+				 * pte. do_swap_page() will wait until the migration
+				 * pte is removed and then restart fault handling.
+				 */
+				entry = make_migration_entry(subpage,
+						pte_write(pteval));
+				swp_pte = swp_entry_to_pte(entry);
+				if (pte_soft_dirty(pteval))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pteval))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+				set_pte_at(mm, address, pvmw.pte, swp_pte);
+				/*
+				 * No need to invalidate here it will synchronize on
+				 * against the special swap migration pte.
+				 */
+			} else if (!pvmw.pte && pvmw.pmd) {
+				pmd_t swp_pmd;
+				/*
+				 * Store the pfn of the page in a special migration
+				 * pte. do_swap_page() will wait until the migration
+				 * pte is removed and then restart fault handling.
+				 */
+				entry = make_migration_entry(subpage,
+						pmd_write(pmdval));
+				swp_pmd = swp_entry_to_pmd(entry);
+				if (pmd_soft_dirty(pmdval))
+					swp_pmd = pmd_swp_mksoft_dirty(swp_pmd);
+				set_pmd_at(mm, address, pvmw.pmd, swp_pmd);
+				/*
+				 * No need to invalidate here it will synchronize on
+				 * against the special swap migration pte.
+				 */
+			} else if (!pvmw.pte && !pvmw.pmd && pvmw.pud) {
+				VM_BUG_ON(1);
+			}
 		} else if (PageAnon(page)) {
 			swp_entry_t entry = { .val = page_private(subpage) };
 			pte_t swp_pte;
+
+			VM_BUG_ON(!pvmw.pte);
 			/*
 			 * Store the swap location in the pte.
 			 * See handle_pte_fault() ...
@@ -1794,7 +1865,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		 *
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
-		page_remove_rmap(subpage, PageHuge(page), 0);
+		page_remove_rmap(subpage, PageHuge(page) || order >= HPAGE_PMD_ORDER, order);
 		put_page(page);
 	}
 
-- 
2.28.0