From: Nico Pache <npache@redhat.com>
To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org
Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com,
	dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org,
	mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org,
	willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com,
	usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com,
	thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com,
	anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de,
	will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
	jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org,
	rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org
Subject: [PATCH v7 07/12] khugepaged: add mTHP support
Date: Wed, 14 May 2025 21:22:21 -0600
Message-ID: <20250515032226.128900-8-npache@redhat.com>
In-Reply-To: <20250515032226.128900-1-npache@redhat.com>
References: <20250515032226.128900-1-npache@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Introduce the ability for khugepaged to collapse to different mTHP sizes.
While scanning PMD ranges for potential collapse candidates, keep track
of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
mTHPs are enabled we remove the restriction of max_ptes_none during the
scan phase so we don't bail out early and miss potential mTHP candidates.

After the scan is complete we will perform binary recursion on the bitmap
to determine which mTHP size would be most efficient to collapse to.
max_ptes_none will be scaled by the attempted collapse order to determine
how full a THP must be to be eligible.

If an mTHP collapse is attempted, but contains swapped-out or shared
pages, we don't perform the collapse.

For non-PMD collapse we must leave the anon VMA write-locked until after
we collapse the mTHP.

Signed-off-by: Nico Pache <npache@redhat.com>
---
 mm/khugepaged.c | 136 +++++++++++++++++++++++++++++++++---------------
 1 file changed, 95 insertions(+), 41 deletions(-)
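
Reviewer note (not part of the patch): the following standalone sketch
illustrates the max_ptes_none scaling and the bitmap chunking described
in the commit message. The constants are illustrative assumptions (4K
pages, hence HPAGE_PMD_ORDER = 9, and KHUGEPAGED_MIN_MTHP_ORDER = 2 as
introduced earlier in this series), not kernel API:

#include <stdio.h>

/* Assumed constants: 4K pages => HPAGE_PMD_ORDER 9; min mTHP order 2. */
#define HPAGE_PMD_ORDER			9
#define KHUGEPAGED_MIN_MTHP_ORDER	2
#define KHUGEPAGED_MIN_MTHP_NR		(1 << KHUGEPAGED_MIN_MTHP_ORDER)

/*
 * max_ptes_none is defined against a full 512-pte PMD; to judge a
 * smaller collapse candidate, scale it down by the order difference.
 */
static int scaled_max_ptes_none(int max_ptes_none, int order)
{
	return max_ptes_none >> (HPAGE_PMD_ORDER - order);
}

int main(void)
{
	int max_ptes_none = 511;	/* the current kernel default */
	int order;

	/* One bitmap bit covers each KHUGEPAGED_MIN_MTHP_NR-pte chunk. */
	printf("one PMD = %d ptes -> %d bitmap bits, one per %d-pte chunk\n",
	       1 << HPAGE_PMD_ORDER,
	       (1 << HPAGE_PMD_ORDER) / KHUGEPAGED_MIN_MTHP_NR,
	       KHUGEPAGED_MIN_MTHP_NR);

	for (order = KHUGEPAGED_MIN_MTHP_ORDER; order <= HPAGE_PMD_ORDER; order++)
		printf("order %d (%4d ptes): collapse allowed with up to %d empty ptes\n",
		       order, 1 << order,
		       scaled_max_ptes_none(max_ptes_none, order));
	return 0;
}

With the default max_ptes_none of 511, an order-4 candidate (16 ptes)
tolerates up to 15 empty ptes, mirroring the 511-of-512 density the
default permits for a full PMD collapse.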
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 044fec869b50..afad75fc01b7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1138,13 +1138,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
-	pte_t *pte;
+	pte_t *pte = NULL, mthp_pte;
 	pgtable_t pgtable;
 	struct folio *folio;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int result = SCAN_FAIL;
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
+	unsigned long _address = address + offset * PAGE_SIZE;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
@@ -1160,12 +1161,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		*mmap_locked = false;
 	}
 
-	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
+	result = alloc_charge_folio(&folio, mm, cc, order);
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
 	mmap_read_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
+	*mmap_locked = true;
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
 	if (result != SCAN_SUCCEED) {
 		mmap_read_unlock(mm);
 		goto out_nolock;
@@ -1183,13 +1185,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * released when it fails. So we jump out_nolock directly in
 	 * that case. Continuing to collapse causes inconsistency.
 	 */
-		result = __collapse_huge_page_swapin(mm, vma, address, pmd,
-						     referenced, HPAGE_PMD_ORDER);
+		result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
+						     referenced, order);
 		if (result != SCAN_SUCCEED)
 			goto out_nolock;
 	}
 
 	mmap_read_unlock(mm);
+	*mmap_locked = false;
 	/*
 	 * Prevent all access to pagetables with the exception of
 	 * gup_fast later handled by the ptep_clear_flush and the VM
@@ -1199,7 +1202,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * mmap_lock.
 	 */
 	mmap_write_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 	/* check if the pmd is still valid */
@@ -1210,11 +1213,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	vma_start_write(vma);
 	anon_vma_lock_write(vma->anon_vma);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
-				address + HPAGE_PMD_SIZE);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
+				_address + (PAGE_SIZE << order));
 	mmu_notifier_invalidate_range_start(&range);
 
 	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
+
 	/*
 	 * This removes any huge TLB entry from the CPU so we won't allow
 	 * huge and small TLB entries for the same virtual address to
@@ -1228,18 +1232,16 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_remove_table_sync_one();
 
-	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
+	pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
 	if (pte) {
-		result = __collapse_huge_page_isolate(vma, address, pte, cc,
-						      &compound_pagelist, HPAGE_PMD_ORDER);
+		result = __collapse_huge_page_isolate(vma, _address, pte, cc,
+						      &compound_pagelist, order);
 		spin_unlock(pte_ptl);
 	} else {
 		result = SCAN_PMD_NULL;
 	}
 
 	if (unlikely(result != SCAN_SUCCEED)) {
-		if (pte)
-			pte_unmap(pte);
 		spin_lock(pmd_ptl);
 		BUG_ON(!pmd_none(*pmd));
 		/*
@@ -1254,17 +1256,17 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	}
 
 	/*
-	 * All pages are isolated and locked so anon_vma rmap
-	 * can't run anymore.
+	 * For PMD collapse all pages are isolated and locked so anon_vma
+	 * rmap can't run anymore
 	 */
-	anon_vma_unlock_write(vma->anon_vma);
+	if (order == HPAGE_PMD_ORDER)
+		anon_vma_unlock_write(vma->anon_vma);
 
 	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
-					   vma, address, pte_ptl,
-					   &compound_pagelist, HPAGE_PMD_ORDER);
-	pte_unmap(pte);
+					   vma, _address, pte_ptl,
+					   &compound_pagelist, order);
 	if (unlikely(result != SCAN_SUCCEED))
-		goto out_up_write;
+		goto out_unlock_anon_vma;
 
 	/*
 	 * The smp_wmb() inside __folio_mark_uptodate() ensures the
@@ -1272,25 +1274,45 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
-	pgtable = pmd_pgtable(_pmd);
-
-	_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
-	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
-
-	spin_lock(pmd_ptl);
-	BUG_ON(!pmd_none(*pmd));
-	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
-	pgtable_trans_huge_deposit(mm, pmd, pgtable);
-	set_pmd_at(mm, address, pmd, _pmd);
-	update_mmu_cache_pmd(vma, address, pmd);
-	deferred_split_folio(folio, false);
-	spin_unlock(pmd_ptl);
+	if (order == HPAGE_PMD_ORDER) {
+		pgtable = pmd_pgtable(_pmd);
+		_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
+		_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
+
+		spin_lock(pmd_ptl);
+		BUG_ON(!pmd_none(*pmd));
+		folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
+		folio_add_lru_vma(folio, vma);
+		pgtable_trans_huge_deposit(mm, pmd, pgtable);
+		set_pmd_at(mm, address, pmd, _pmd);
+		update_mmu_cache_pmd(vma, address, pmd);
+		deferred_split_folio(folio, false);
+		spin_unlock(pmd_ptl);
+	} else { /* mTHP collapse */
+		mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
+		mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
+
+		spin_lock(pmd_ptl);
+		folio_ref_add(folio, (1 << order) - 1);
+		folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
+		folio_add_lru_vma(folio, vma);
+		set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
+		update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
+
+		smp_wmb(); /* make pte visible before pmd */
+		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
+		spin_unlock(pmd_ptl);
+	}
 
 	folio = NULL;
 
 	result = SCAN_SUCCEED;
+out_unlock_anon_vma:
+	if (order != HPAGE_PMD_ORDER)
+		anon_vma_unlock_write(vma->anon_vma);
 out_up_write:
+	if (pte)
+		pte_unmap(pte);
 	mmap_write_unlock(mm);
 out_nolock:
 	*mmap_locked = false;
@@ -1366,31 +1388,57 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
+	int i;
 	int result = SCAN_FAIL, referenced = 0;
 	int none_or_zero = 0, shared = 0;
 	struct page *page = NULL;
 	struct folio *folio = NULL;
 	unsigned long _address;
+	unsigned long enabled_orders;
 	spinlock_t *ptl;
 	int node = NUMA_NO_NODE, unmapped = 0;
+	bool is_pmd_only;
 	bool writable = false;
-
+	int chunk_none_count = 0;
+	int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
+	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
 	result = find_pmd_or_thp_or_none(mm, address, &pmd);
 	if (result != SCAN_SUCCEED)
 		goto out;
 
+	bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
+	bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
+
+	enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
+						  tva_flags, THP_ORDERS_ALL_ANON);
+
+	is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
+
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!pte) {
 		result = SCAN_PMD_NULL;
 		goto out;
 	}
 
-	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
-	     _pte++, _address += PAGE_SIZE) {
+	for (i = 0; i < HPAGE_PMD_NR; i++) {
+		/*
+		 * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if
+		 * there are pages in this chunk keep track of it in the bitmap
+		 * for mTHP collapsing.
+		 */
+		if (i % KHUGEPAGED_MIN_MTHP_NR == 0) {
+			if (chunk_none_count <= scaled_none)
+				bitmap_set(cc->mthp_bitmap,
+					   i / KHUGEPAGED_MIN_MTHP_NR, 1);
+			chunk_none_count = 0;
+		}
+
+		_pte = pte + i;
+		_address = address + i * PAGE_SIZE;
 		pte_t pteval = ptep_get(_pte);
 		if (is_swap_pte(pteval)) {
 			++unmapped;
@@ -1413,10 +1461,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			}
 		}
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+			++chunk_none_count;
 			++none_or_zero;
 			if (!userfaultfd_armed(vma) &&
-			    (!cc->is_khugepaged ||
-			     none_or_zero <= khugepaged_max_ptes_none)) {
+			    (!cc->is_khugepaged || !is_pmd_only ||
+			     none_or_zero <= khugepaged_max_ptes_none)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -1512,6 +1561,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 				       address)))
 			referenced++;
 	}
+
 	if (!writable) {
 		result = SCAN_PAGE_RO;
 	} else if (cc->is_khugepaged &&
@@ -1524,8 +1574,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (result == SCAN_SUCCEED) {
-		result = collapse_huge_page(mm, address, referenced,
-					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
+		result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
+						mmap_locked, enabled_orders);
+		if (result > 0)
+			result = SCAN_SUCCEED;
+		else
+			result = SCAN_FAIL;
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, folio, writable, referenced,
-- 
2.49.0